Search Results

Search found 2390 results on 96 pages for 'concrete inheritance'.


  • EF Doesn't Like Same Named Tables

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2013/07/02/153327.aspx

    It's another week and another restriction imposed by the Entity Framework (EF). Don't get me wrong. I like EF, but I don't like how it restricts you in different ways. At this point you may be asking yourself: how can you have more than one table with the same name?

    The answer is to have tables in different schemas. I do this to partition the data based on the area of concern, and it allows security to be assigned conveniently. A lot of people don't use schemas. I love them. But this article isn't about schemas.

    In this situation I have two tables:

        Contact.Person
        Employee.Person

    The first contains the basic, more public information such as the name. The second contains mostly HR-specific information. I then mapped these tables to two classes. I stuck to a Table per Concrete class (TPC) mapping, because of problems I've had in the past implementing inheritance with EF. The following code gives you the basic contents of the classes:

        [Table("Person", Schema = "Employee")]
        public class Employee
        {
            ...
            public int PersonId { get; set; }

            [ForeignKey("PersonId")]
            public virtual Person Person { get; set; }
        }

        [Table("Person", Schema = "Contact")]
        public class Person
        {
            [Key]
            public int Id { get; set; }
            ...
        }

    This seemingly simple scenario just doesn't work. The problem occurs when you try to add a Person to the DbContext. You get an InvalidOperationException with the following text:

        The entity types 'Employee' and 'Person' cannot share table 'People' because
        they are not in the same type hierarchy or do not have a valid one to one
        foreign key relationship with matching primary keys between them.

    This is interesting for a couple of reasons. First, there is no People table in my database. Second, I have used the SetInitializer method to stop a database from being created, so it shouldn't be thinking about new tables.

    The solution to my problem was to change the name of my Employee.Person table. I decided to name it Employee.Employee. It's not ideal, but it gets me past the EF limitation. I hope that this article will help someone else that has the same problem.
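    For reference, a minimal sketch of the workaround as described, with the HR table renamed to Employee.Employee so the two entities no longer share a table name. Namespaces are as in EF 5; the key placement and the Name property are illustrative, since the post elides those members:

        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;

        [Table("Employee", Schema = "Employee")]  // renamed from "Person" to avoid the collision
        public class Employee
        {
            [Key]
            public int Id { get; set; }

            public int PersonId { get; set; }

            [ForeignKey("PersonId")]
            public virtual Person Person { get; set; }
        }

        [Table("Person", Schema = "Contact")]
        public class Person
        {
            [Key]
            public int Id { get; set; }

            public string Name { get; set; }  // illustrative; the post elides the other members
        }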


  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product, and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, it is a fairly typical setup.

    Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file, one that has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical in the occurrence of this bug. All we got from the customer was their log file.

    The approach I proposed to my team was to attempt to reproduce the bug in a configuration similar to the customer's and get a comparable log. However, they disagree with my approach, saying that I should not need to reproduce the bug (as that is overly time-consuming and will require simulating a server cluster on VMs), and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is, and make the change working off a simple local development setup, which is not a cluster implementation like the environment where the bug originated.

    To me, working from an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (a runtime reproduction) seems like a difficult working environment (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: is it reasonable to insist on reproducing every defect and debugging it before diagnosing and fixing it?

    Or: if I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use-case scenarios, rather than needing to run the application, test different use cases hands-on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies?

    In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash-testing it with a dummy to demonstrate that the air bags indeed work.

    Last but not least, if you agree with me: how should I talk with my team to convince them that my approach is reasonable, conservative, and more bulletproof?


  • Getting my younger brother started on programming

    - by SmartLemon
    My younger brother is 13 years old. I started programming when I began developing Android applications at 15. Last year my brother gained an interest in it and was always pestering me about letting him make something himself, so I wrote him a few tutorials and he built a small application with a few buttons that did something; I think you put in your date of birth and it would tell you what day of the week you were born on. He took a couple of days building up to that final application, maybe even a week, learning everything he needed.

    Since then he hasn't really done much more, because I have been engulfed in work where I have my own programming problems to sort out. I told him that by the time he was my age (I am 17) he should be better than me; he was a bit sceptical about this, however. I don't think he has as much logical reasoning as I would think he needs to solve more complex problems, but shouldn't that just develop over time, as it did with me?

    He has been pestering me for the past week or so to write him more tutorials, but I didn't have time. All I had with me was a playlist I had downloaded from thenewboston on YouTube for C++; it's about 73 videos. He is currently about 20-30 videos in; he has come to ask me a few questions about it, and that's it.

    Should I have really started him with C++? Should I stop him now and start him again on Python or Ruby? I know that C++ shouldn't really be a beginner's language, especially for someone who is only 13; by the time this question is answered he will probably be up to learning about inheritance or something.

    Some people may see this as not a real question, but it is, and it should be usable as a reference for others. I want to know: should I start him on a different language which is easier? What language, then? And would it be better for me to teach him myself (I would make time) or just let him continue with thenewboston? There are a few more questions throughout this post, but these are the main ones.

    Part of the question people seem to be neglecting is me asking whether I should change the language he is learning, or, since he is already pretty far through the tutorials, should I just leave him with C++ and let him learn the other languages freely by himself?


  • Appcrash and possible malware

    - by Chris Lively
    First off, I'm running MS Intune Endpoint Protection. It is completely up to date. On 10/25 @ 11:53PM I came across a site that caused Intune to freak out:

        Microsoft Antimalware has detected malware or other potentially unwanted software.
        For more information please see the following:
        http://go.microsoft.com/fwlink/?linkid=37020&name=Trojan:Win64/Sirefef.B&threatid=2147646729
        Name: Trojan:Win64/Sirefef.B
        ID: 2147646729
        Severity: Severe
        Category: Trojan
        Path: file:_C:\Windows\System32\consrv.dll
        Detection Origin: Local machine
        Detection Type: Concrete
        Detection Source: Real-Time Protection
        User: NT AUTHORITY\SYSTEM
        Process Name: C:\Windows\explorer.exe
        Signature Version: AV: 1.115.526.0, AS: 1.115.526.0, NIS: 10.7.0.0
        Engine Version: AM: 1.1.7801.0, NIS: 2.0.7707.0

    I, of course, elected to simply delete the file. Since then my machine has been randomly giving an error that "Host Process for Windows Services" stopped working. There are generally two different pieces of info:

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe

        Problem signature
        Problem Event Name: BEX64
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: StackHash_52d4
        Fault Module Version: 0.0.0.0
        Fault Module Timestamp: 00000000
        Exception Offset: 000062bdabe00000
        Exception Code: c0000005
        Exception Data: 0000000000000008
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 52d4
        Additional Information 2: 52d47b8b925663f9d6437d7892cdf21b
        Additional Information 3: ed24
        Additional Information 4: ed24528f3b69e8539b5c5c2158896d3e

    and

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe

        Problem signature
        Problem Event Name: APPCRASH
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: mshtml.dll
        Fault Module Version: 9.0.8112.16437
        Fault Module Timestamp: 4e5f1784
        Exception Code: c0000005
        Exception Offset: 00000000002ed3c2
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 3e9e
        Additional Information 2: 3e9e8b83f6a5f2a25451516023078a83
        Additional Information 3: 432a
        Additional Information 4: 432a0284c502cce3bbb92a3bd555fe65

    Intune claims the machine is clean. I've also tried some of the online scanners like Trend Micro, all of which claimed the system is clean. Finally, I tried "sfc /scannow" and it said all was good. I left my machine on after I left last night and there were about 50 of those messages. Ideas on how to proceed?


  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors:

        Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
        Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume

    So I backed up all the registry entries with {f612849e-7125-11e0-8772-806e6f6e6963} in them, and deleted them based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS backup, and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured that if MS backup can't back something up, any other program is likely to have problems. Also, now that I realized the servers had FAT32 partitions on them, the error referencing NTFS takes on more weight.

    The partitions on both servers have the label "OS", but on one of the computers the partition is given a letter, and on the other not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem.

    So the question is this: can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.
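    For what it's worth, Windows ships an in-place converter for exactly this; it preserves the FAT32 data, though a backup first is still prudent. A sketch, assuming the unlettered "OS" partition is first given a letter via diskpart (the volume number and the letter E: are illustrative; convert will offer to schedule the conversion for the next boot if the volume is in use):

        C:\> diskpart
        DISKPART> select volume 3
        DISKPART> assign letter=E
        DISKPART> exit

        C:\> convert E: /FS:NTFS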


  • How to give a user NTFS rights to a folder, via Powershell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want there to be a static user each time; I want to be able to run this script, have it ask me for a username, and have it then create the folder and give that same user full rights to it based on the username I've supplied. I can use Smithd as a user, like this:

        New-Item \\fileserver\home$\Smithd -Type Directory

    But I can't get it to reference the user like this:

        New-Item \\fileserver\home$\$username -Type Directory

    Here's what I have:

        # Creating a new folder and setting NTFS permissions.
        $username = Read-Host -Prompt "Enter User Name"
        New-Item \\fileserver\home$\$username -Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl

    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.
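    One hedged variation that sometimes sidesteps quoting and expansion surprises with UNC paths (the "home$" share contains a literal dollar sign) is to build the path once with Join-Path and pass it explicitly everywhere; fileserver and DOMAIN are placeholders:

        $username = Read-Host -Prompt "Enter User Name"
        $path = Join-Path -Path '\\fileserver\home$' -ChildPath $username  # single quotes keep the $ in home$ literal

        New-Item -Path $path -ItemType Directory | Out-Null

        $acl = Get-Acl -Path $path
        $acl.SetAccessRuleProtection($true, $false)  # break inheritance, discard inherited ACEs
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
            "DOMAIN\$username", "FullControl",
            "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl -Path $path -AclObject $acl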


  • How can I bridge a VM to a remote network?

    - by asciiphil
    I have a system running QEMU/KVM (via libvirt). One of its VMs needs to have a presence on a subnet that is not local to the VM host. I have a Linux system on the remote subnet. Is there a way to set up some sort of tunnelled bridge so that the VM appears present on the remote subnet? This will be a temporary situation (hopefully just until the VM owner can configure their system), and network performance and long-term maintainability aren't really issues.

    To give some more concrete information:

      • My VM host has IP address 192.168.54.155/24.
      • The VM has IP address 192.168.65.71/24.
      • I have a remote system at 192.168.65.254/24.
      • Both the VM host and the remote system are running Scientific Linux 6.5.
      • I do not control the network or routing between the VM host and the remote system.
      • I do not have access to the guest OS on the VM.

    I would like traffic to the VM's IP address to end up at the VM even though its host isn't directly connected to the appropriate network. I've tried using iproute2's tunnelling, but Linux won't let me add a (layer-3) tunnel to a bridge. I've considered some sort of iptables mangling to route traffic over the tunnel and make the VM think it's on the right network, but I'm not sure whether there are better approaches. What's the best way to accomplish this hack?
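    One sketch of the "tunnelled bridge" idea: a plain layer-3 tunnel can't join a bridge, but a layer-2 GRE tap (gretap) interface can. This assumes a gretap-capable kernel/iproute2, that the intermediate network passes GRE (IP protocol 47), and that <remote-ip> stands in for the remote system's address as reachable from the VM host; br0 and eth0 are assumed names:

        # On the VM host: create an L2 GRE tunnel and enslave it to the VM's bridge
        ip link add gretap1 type gretap local 192.168.54.155 remote <remote-ip>
        ip link set gretap1 up
        brctl addif br0 gretap1

        # On the remote system: mirror the tunnel and bridge it onto the 192.168.65.0/24 NIC
        # (moving eth0's IP address onto the new bridge is left out here)
        ip link add gretap1 type gretap local <remote-ip> remote 192.168.54.155
        ip link set gretap1 up
        brctl addbr br1 && brctl addif br1 eth0 && brctl addif br1 gretap1
        ip link set br1 up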


  • How can I erase the traces of Folder Redirection from the Default Domain Policy

    - by bruor
    I've taken over from an IT outsourcer and have hit a snag now that we're starting a migration to Windows 7. Someone decided to set up folder redirection in the Default Domain Policy. I've since configured redirection in another policy at an OU level, but no matter what I do, the Windows 7 systems pick up only the Default Domain Policy's folder redirection settings. I keep getting entries in the event log showing that the previously redirected folders "need to be redirected" with a status of 0x80000004. From what I can tell, this just means that it's redirecting them locally.

    Is there a way I can wipe that section of the GPO clean so it's no longer there? I'm hesitant to try resetting the Default Domain Policy to complete defaults.

    UPDATE 6-26: I found that the following condition was causing the grief here. I had already implemented the new policies for clients, and for some reason XP was working great while 7 was refusing to process them. The DDP was enforced. Because of this, and because the folder redirection policies were set to redirect back to the local profile upon removal, clients were forced to pick up its "redirect to local" settings.

    Steps to recreate the issue:

      • Create a new test OU and policy.
      • Create some folder redirection settings; set them to redirect back to the local profile upon removal.
      • Remove the settings on that GPO.
      • Refresh your view of the GPO and check the settings: you'll notice that the settings show "Not configured" entries for folder redirection.
      • Enforce this GPO.
      • Create another sub-OU.
      • Create a GPO linked to this sub-OU and configure some folder redirection settings.
      • Watch as the enforced GPO's "Not configured" settings override the policy you just defined.

    I've had to relink the DDP to all OUs that have "Block Inheritance" enabled, and disable the "Enforced" option on the DDP as a workaround. I'd love to re-enable enforcement of the DDP, but until I can erase the traces of the folder redirection settings from it, I think I'm stuck.


  • IIS Messing on Wordpress Permalinks or WP's fault?

    - by Jesus Rodriguez
    Hello, I had a problem, and after some research I discovered the exact point where it is failing:

      • blog.domain.com: not working; it says the page cannot be found (404)
      • blog.domain.com/index.php: working as expected

    If you click on Home, it says the page cannot be found; if you try to preview a new post, it says the page cannot be found... I can see every post, btw.

    I run my blog on Windows hosting using IIS. My permalink structure is this: /index.php/%postname% . IIRC I had to use index.php because my IIS doesn't have URL rewriting. I have no problem with the index.php part of the URL; I have good SEO now and I don't want to change my permalinks. But I don't know why it is not working now... just from one day to the next...

    Is it a problem with WP, or is it just my host messing up? If it is my blog, do you know what is causing this? (Just so I can create a concrete ticket about the exact problem.) Thank you.


  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time; specifically, the audio. I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around.

    I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well. Same question there: what's the difference?

    To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are:

      • Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server?
      • Where can I find out about Linux support for the components on this board: "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"?

    I'm buying this computer to work as a Linux desktop for a lot of compiling, coding, and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this?

    The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.


  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL Servers, or at least testing your specific workload before enabling it. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom? Does the advice change at all with SQL 2005 64-bit, SQL 2008, or Windows Server 2008?

    Ideally, this should be tested in advance in a staging environment, but what about servers that have already made it into production with HT enabled? How can I tell whether performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance?

    Edit: This is especially attractive because of the potential for an across-the-board improvement on some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.
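    Not an HT-specific test, but one cheap, concrete number to collect when hunting CPU pressure (the symptom hyperthreading trouble tends to present as) is the signal-wait ratio: time spent waiting for a scheduler rather than for a resource. A sketch against the standard DMV, available in SQL 2005 and later; the 20-25% rule of thumb is only a starting point, not proof of an HT issue:

        -- High signal waits (roughly above 20-25%) suggest CPU pressure.
        SELECT
            SUM(signal_wait_time_ms)                        AS signal_waits_ms,
            SUM(wait_time_ms)                               AS total_waits_ms,
            CAST(100.0 * SUM(signal_wait_time_ms)
                 / NULLIF(SUM(wait_time_ms), 0)
                 AS DECIMAL(5, 2))                          AS signal_wait_pct
        FROM sys.dm_os_wait_stats;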


  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works "out of the box" on an Apache server. However, I am having a lot of trouble running it under nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage, but am having problems with the inner pages, which display an "Access denied". Possibly the rewrite is not working; in effect, I think nginx is querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
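    One suspect worth checking in the config above: the last argument of try_files is treated as an internal-redirect URI, so it normally needs a leading slash, and pretty URLs also need the query string passed through. A hedged variant of the location block, untested against this particular site:

        location / {
            # last parameter is a URI, hence the leading slash; some concrete5 setups
            # use /index.php$uri instead, so that PATH_INFO gets populated
            try_files $uri $uri/ /index.php?$query_string;
        }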


  • Why are group policy preference drive mappings not applied to the domain administrator account?

    - by Saariko
    I have a working policy on my entire domain. I just found out, when logging in with the domain administrator account, that this policy is not applied. (EDIT: running gpresult shows that the GPOs are applied, but this GPO is for drive mappings, and the actual mapped drives are NOT shown.)

    The administrator account does not have any login script on its profile tab. To note: the mappings were previously applied with a login script using the net use ... command, and all was working perfectly and correctly for the domain administrator user as well. That rules out a sharing and security problem (IMO).

    My GPOs are mainly small/atomic: a single GPO to handle each setting (UAC, firewall, printers). GPO status for the object is "enabled". Reading the MS support site, I checked the Delegation tab, and the policy is marked as applying to domain and enterprise admins. Every user gets these policies correctly. The OU that is targeted is the root of the domain (I did that for testing purposes, to eliminate hierarchy issues; it did not help). Block Inheritance is disabled (never used it anyway). (Screenshots of the drive-maps overview, the GPO link, and the GPO security filtering are omitted here.)
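    One quick check that separates "preference not applied" from "drive mapped but not visible" is to generate the resultant-set report from inside the administrator's own session and look for the drive-map preference items; a sketch (the output path is arbitrary):

        gpresult /h C:\temp\gp-report.html /scope user

    Also worth ruling out on Vista/2008-era systems: drives mapped in the standard-user token are not visible to elevated processes under UAC, so an applied mapping can look missing when checked from an elevated prompt or Explorer running elevated.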


  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path-inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user and wish to restrict some /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw

        [somerepo2:/projectKettle]
        @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories.

    What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing.

    If I flip the access ordering, giving everyone access at the root and restricting it in each repository, it promptly ignores the invalidation rule (the stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions, even fully defined such as:

        [/]
        a = rw
        b =
        c =
        d =
        e =
        f =
        g = rw

        [somerepo1:/projectPot]
        a = rw
        b = rw
        c = rw
        d =
        e = rw
        f =
        g = rw

        [somerepo2:/projectKettle]
        a = rw
        b =
        c =
        d = rw
        e = rw
        f = rw
        g =

    which yields the exact same result. According to the documentation I'm following all the protocols, so this is insane. Running on Apache2 with dav_svn.


  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:

      • Keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on.
      • In iCal, write down events for each day, or for a concrete time slot (e.g. 13:00 to 14:00). Each day I set up the tasks I want to accomplish and allocate them hours.
      • Use Things (Cultured Code) to keep info about tasks and projects that are not very important and not yet time-allocated (GTD name = someday).
      • Use Mail on Mac and create folders, named after the different projects, for the emails I want to process.
      • Save the main info for each project in FreeMind maps.

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

      • must be 100% offline accessible
      • should use as few programs/resources as possible, ideally just one program able to manage all my info
      • lets me use the GTD methodology mixed with priorities, and allocate each task, converted to an event, on my calendar
      • gives me different daily/weekly, etc. views on a calendar to see the "big picture"
      • must run on Mac OS X Leopard
      • price does not matter; I will pay for this

    So, according to your experience, can you recommend something like this? Thanks


  • VPN - local and remote networks IP collision

    - by Guido García
    I have created a VPN connection in Windows using the New Network Connection wizard that comes with Windows. It works without problems in most places, but there is one concrete place where, despite the connection to the remote public IP working fine, it is not able to validate the login/password and establish the VPN connection.

    In this place the network is 10.0.0.x (the same range I use in other places, where I am able to connect). The remote network is 192.168.x.x, so I suspect there is some kind of IP collision, because before connecting, a traceroute to e.g. 192.168.0.40 does not fail:

        1     4 ms    1 ms    1 ms   LINKSYS [10.0.0.1]
        2     5 ms    1 ms    1 ms   172.26.27.1
        3     4 ms    5 ms    3 ms   192.168.1.100
        ... (more)

    I can't modify the local network beyond the first router (10.0.0.1). That is the only difference I've found so far. Any idea how to solve it? Thank you.


  • fedora, dhcpd fails to start

    - by soxs060389
    History: I got a tiny shiny plug server which I want to connect to my ADSL router (or however you want to call it) on one end (eth0); on the other end (eth1) I want to run a DHCP server for my LAN. At the moment I am stuck getting the LAN side to work. The OS is Fedora 12. I configured my /etc/dhcp/dhcpd.conf like this:

        #
        # DHCP Server Configuration file.
        #   see /usr/share/doc/dhcp*/dhcpd.conf.sample
        #   see 'man 5 dhcpd.conf'
        #
        option domain-name "unknown.org";
        option domain-name-servers 192.168.44.1;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.44.255;
        default-lease-time 86400;
        max-lease-time 172800;

        subnet 192.168.44.0 netmask 255.255.255.0 {
            host fedorabigbox {
                hardware ethernet 00:19:66:8E:61:74;
                fixed-address 192.168.44.21;
            }
            #host mobile
            #{
            #    hardware ethernet ***;
            #    fixed-address 192.168.44.22;
            #}
            range 192.168.44.100 192.168.44.110;
            option routers 192.168.44.1;
        }

        # this is just a dummy; many howtos suggest adding a
        # "subnet ... netmask ..." block for each interface
        subnet 192.168.33.0 netmask 255.255.255.0 {
            range 192.168.33.100 192.168.33.110;
            option routers 192.168.33.1;
        }

    But the server fails to start when I try to start it via /etc/init.d/dhcpd start.

    In general it would be nice if someone could point me to an in-depth explanation of how this networking works; I am pretty new to this stuff. The more concrete question: how do I point one subnet to eth1 and the other to eth0? Does someone see any errors or flaws? The syntax should be correct; I already checked it with the dhcpd syntax check. Thanks for any help.
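    A hedged pointer on the interface question: on Fedora, which interfaces dhcpd listens on is usually set outside dhcpd.conf, in /etc/sysconfig/dhcpd; dhcpd then serves whichever declared subnet matches the address configured on that interface, and it commonly refuses to start when no interface address falls inside any declared subnet (check /var/log/messages for the exact complaint). A sketch, assuming eth1 is configured as 192.168.44.1:

        # /etc/sysconfig/dhcpd
        DHCPDARGS="eth1"

    With eth1 at 192.168.44.1/24, the 192.168.44.0 block above is the one that gets served, and the dummy 192.168.33.0 block can likely be dropped.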




  • Windows 7 & Photoshop CS5.1 - "Fonts missing" issue - I have the font!! (sort of)

    - by Tigue Von Bond
    I've noticed a really aggravating issue with Adobe Photoshop CS5.1 on at least two occasions. I downloaded a layered PSD file to work with; the release notes directed me to a download page for the font used, Futura Medium Condensed. I checked and did not have any Futura fonts at all, so I downloaded and installed the font from the source the provider of the PSD pointed to. I closed and reopened Photoshop, and when I open the PSD file I get an error saying:

        Some text layers contain fonts that are missing. These layers will need to have
        the missing fonts replaced before they can be used for vector based output.

    I then go to edit the text layer and receive:

        The following fonts are missing for text layer "discount":
            Future CondensedExtraBold
        Font substitution will occur. Continue?

    If I click OK, it substitutes Myriad Pro for this layer. Didn't I download the right font? I go into the font dropdown and see I have a font with a slightly different name: "Futura-CondensedExtraBold-Th Regular".

    I have also seen this issue with Helvetica. I received a PSD file, got the same "some text layers contain fonts that are missing..." dialog when opening the file, and when I go to edit a layer with text I get:

        The following fonts are missing for text layer "Home":
            Helvetica
        Font substitution will occur. Continue?

    I click Continue, it substitutes Myriad Pro, and when I check my font list, sure enough I have a bunch of Helvetica fonts, none named exactly "Helvetica".

    Is this a common issue? Googling yielded a few people with similar problems (I think all on Macs) but either no concrete help or no response. Is it that the two font names aren't EXACT matches? If that is the case, is there any way to set up Photoshop to substitute more intelligently, or even to set up some sort of mapping (if "Helvetica", then substitute "Helvetica Lt Std")? Is there anything else, maybe something I am not thinking of?


  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID 1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via the network. When powering it on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now; the only way I can affect the device at all is to plug in or pull out the power cord!

    (To be clear, I've come to regard this piece of garbage, which cost about 460 €, as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet whether I'd need some adapters for that).

    (NB: for the purposes of this question, never mind any warranty or replacement issues; those are secondary to recovering the data.)


  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:

      • Single m1.small instance (web server)
      • Single RDS m1.small DB
      • Joomla 1.5

    Generally, the site is performant, but it is fairly low-traffic: say around 50-100 visits/hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:

      • CPU usage on the web server slowly climbs to 100%
      • CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
      • Database connections shoot up to about 140, from a normal average of about 2 or 3

    The site is then occasionally unreachable, certainly according to Pingdom monitoring.

    Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow-query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.

    UPDATE: At least, can someone confirm that I'm definitely right in saying that this level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
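    On the slow-query point: RDS can still capture slow queries without file access by logging to a table, set through the DB parameter group rather than SET GLOBAL (which RDS restricts). A sketch of the settings and a query to read the results back afterwards; parameter names are the standard MySQL 5.x ones:

        -- In the RDS DB parameter group:
        --   slow_query_log  = 1
        --   long_query_time = 2
        --   log_output      = TABLE
        -- Then, after a peak period:
        SELECT start_time, query_time, lock_time, rows_examined, sql_text
        FROM mysql.slow_log
        ORDER BY query_time DESC
        LIMIT 20;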


  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon.

    This is something of a silly question, and I've already done the same thing several times for other services (mapping from Dreamhost to WebFaction). But when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using?

    P.S. Meta-question: why are things so much more difficult with AWS? When I search Google for "move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
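    In DNS terms, the whole mapping is a single A record pointing at the Elastic IP, added wherever the domain's DNS is hosted (Dreamhost's panel here). A zone-style sketch with placeholder name and IP:

        example.com.      300  IN  A      203.0.113.10    ; the Elastic IP
        www.example.com.  300  IN  CNAME  example.com.

    The A-vs-CNAME debate in AWS documentation mostly exists because, without an Elastic IP, an EC2 instance's public IP changes on stop/start, so guides push CNAMEs to the instance's public DNS name instead; with an Elastic IP attached, a plain A record is stable.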


  • How to connect AD Explorer from Sysinternals to Global Catalog

    - by Oliver
    I'm using the Sysinternals AD Explorer quite frequently to search and inspect an Active Directory without any big problems. But now I'd like to connect not just to a single AD server; instead, I'd like to inspect the global catalog.

    If I enter in the AD Explorer connect dialog only the DNS name of the machine that serves the global catalog (e.g. dns.to.domain.controller), I only see the concrete domain for which it is responsible, not the whole forest (that's normal behaviour and expected by me). If I add the default port number (3268) for the global catalog, in the form dns.to.domain.controller:3268, AD Explorer simply crashes without any further message.

    The global catalog itself works as expected under the given name and port number, because our Apache server uses exactly this address and port to authenticate some users. So, any hints or tips for accessing the global catalog from AD Explorer? Or are there other tools like AD Explorer out there that don't have problems accessing the global catalog?
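    As a possible workaround while AD Explorer misbehaves, the global catalog can be searched with stock .NET from PowerShell via the GC: ADSI moniker; a sketch with the host name from the post and a placeholder account name (jdoe):

        $root = [ADSI]"GC://dns.to.domain.controller"
        $searcher = New-Object System.DirectoryServices.DirectorySearcher($root, "(sAMAccountName=jdoe)")
        $searcher.FindAll() | ForEach-Object { $_.Path }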


  • Determine from where is "sh" being run under apache www-data user using using PF or NETSTAT

    - by Eugene van der Merwe
    I am working with a compromised Ubuntu 8.04 Plesk 9.5.4 server. It seems that a script on the server is continuously doing reverse lookups of random IPs on the Internet. I first spotted it while using top, and then noticed flashes of this coming up continuously:

        sh -c host -W 1 '198.204.241.10'

    I wrote this script to interrogate ps every second to see how frequently it happens:

        #!/bin/bash
        while :
        do
            ps -ef | egrep -i "sh -c host"
            sleep 1
        done

    The results show that it runs often, every few seconds:

        www-data 17762  8332  1 10:07 ?      00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17772  8332  1 10:07 ?      00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17879 17869  0 10:07 ?      00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869  1 10:07 ?      00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869  0 10:07 ?      00:00:00 sh -c host -W 1 '198.204.241.10'
        root     18031 17756  0 10:07 pts/2  00:00:00 egrep -i sh -c host
        www-data 18078 16704  0 10:07 ?      00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 18125 17996  0 10:07 ?      00:00:00 sh -c host -W 1 '91.124.51.65'
        root     18131 17756  0 10:07 pts/2  00:00:00 egrep -i sh -c host
        www-data 18137 17869  0 10:07 ?      00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 18137 17869  1 10:07 ?      00:00:00 sh -c host -W 1 '198.204.241.10'

    My theory is that if I can see who is launching the sh process, or from where it is launched, I can isolate the problem further. Can somebody please guide me in using netstat or ps to identify from where sh is being run?

    I might get many suggestions that the OS and Plesk are out of date, but please bear in mind there are some very concrete reasons why this server is running legacy software. My question is aimed at advanced Linux systems administrators who have in-depth experience with security compromises and with using netstat and ps to get to the bottom of them.
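    Since the parent PID is already in the ps -ef output (third column), the watcher script above can be extended to resolve each transient sh back to the long-lived process that keeps spawning it; a sketch:

        #!/bin/bash
        # Collect the PPIDs of the transient 'sh -c host' processes and show who they are.
        while :
        do
            ps -ef | awk '/sh -c host/ && !/awk/ { print $3 }' | sort -u |
                while read ppid; do
                    ps -fp "$ppid"    # likely an apache/php worker owned by www-data
                done
            sleep 1
        done

    If the parent turns out to be an Apache worker, running lsof -p on that PID (or enabling Apache's mod_status) helps narrow down which vhost or script is firing the lookups.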

