Search Results

Search found 7651 results on 307 pages for 'execution plan'.

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed the problem and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error, and the pool Storage no longer automatically mounts:

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says it's corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:
                Storage                 UNAVAIL  insufficient replicas
                  raidz1                UNAVAIL  corrupted data
                    805066522130738790  ONLINE
                    sdd3                ONLINE
                    sda3                ONLINE
                    sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on. For reference, this was set up this way because, at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for swap. Any clue on how to recover this ZFS pool?
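
    A starting point for the zdb investigation the poster mentions - a minimal sketch, assuming the standard ZFS userland tools, and assuming /dev/sdb is the device hiding behind the numeric GUID (that is a guess based on the 4-device list above):

        # Dump the ZFS labels straight off the suspect device; if sdb belongs
        # to the pool, the GUID 805066522130738790 should appear in the output.
        sudo zdb -l /dev/sdb

        # Retry the import scanning stable device paths instead of the cached
        # names, which went stale when the device names shifted.
        sudo zpool import -d /dev/disk/by-id Storage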

  • RHEL 5.3 Kickstart - How specify location of individual package in Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disc: Client and Workstation. The packages that are physically located under the Client folder install fine. The installer cannot find those under the Workstation folder, such as doxygen and subversion, and complains that the packages do not exist. Is there a way to specify the individual package location?

        # -----------------------------------------------------------------------------
        # P A C K A G E S
        # -----------------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        # Packages located in the Workstation folder *** install cannot find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel
        freetype-devel
        libxml2-devel

    Thanks in advance, -Ed
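
    One avenue worth testing - a sketch, not verified against RHEL 5.3 media: anaconda's kickstart syntax has a repo directive for declaring extra package sources, so pointing one at the Workstation directory (the baseurl below is a placeholder for wherever the install tree is served) may let the installer resolve those packages:

        # In ks.cfg, alongside the other commands before %packages.
        # Hypothetical URL - point it at the Workstation dir of the install tree.
        repo --name=Workstation --baseurl=http://yourmirror.example.com/rhel53/Workstation/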

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a newbie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands. I've recently had a problem with all of the sites on our server going down at once, after which I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically saying that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

        - I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
        - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
        - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
        - I have some WordPress sites with plugins getting errors like:
            PHP Warning: pack(): Type H: illegal hex digit G in...
            PHP Fatal error: Cannot use object of type stdClass as array in...
            PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
            PHP Fatal error: Call to undefined function file_exists() in...
            PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction, I would greatly appreciate it! Thanks! Oh, and here are the specs for my server: RAM: 2048MB, CPU shares: 40, primary disk: 50GB, data transfer: 75GB, port speed: 5Mbps, type: Linux.
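
    A first diagnostic pass for a lockup pattern like this - a minimal sketch, assuming shell access and a stock Apache layout (the log path is a common default and may differ on this host):

        # Watch for MaxClients warnings and PHP fatals as they happen.
        sudo tail -f /var/log/httpd/error_log | grep -Ei 'maxclients|fatal'

        # Count live Apache workers; if this sits at the MaxClients ceiling
        # while the sites hang, the server has run out of workers.
        ps aux | grep -c '[h]ttpd'

        # Memory headroom - heavy swapping under load is a classic cause of
        # "everything stops until a reboot".
        free -m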

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out:

        - It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers.
        - It's not the memory, because that's fairly low too, less than 50%.
        - It's not the I/O anymore, because the graphs that Joyent sent back when I requested them show less than 3MB of I/O (almost all reads).
        - It's not the SQL performance - I've profiled every last SQL command that runs, and they're all (99.9% of them, anyway) running in less than 30ms; most run in less than 5ms.

    I've also been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50ms or less (that's 1/20th of a second). Now, I do run a lot of AJAX calls: 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second at most. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent: 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad that I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256MB of RAM with bursting. Here is the MySQL status info:

        Traffic          Total     ø per hour
        Received         18 MiB    29 MiB
        Sent             134 MiB   221 MiB
        Total            151 MiB   251 MiB

        Connections      Total     ø per hour   %
        max. concurrent  5         ---          ---
        Failed attempts  0         0.00         0.00%
        Aborted          0         0.00         0.00%
        Total            9,418     15.59 k      100.00%
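
    One layer the ruled-out list doesn't cover is socket-level starvation, which would fit the "fine again after a restart" pattern - a quick check, assuming standard tooling is available on the VPS:

        # Count TCP sockets per state; thousands stuck in TIME_WAIT or
        # CLOSE_WAIT can stall new requests even while CPU, RAM and SQL
        # all look healthy.
        netstat -ant | awk '{print $6}' | sort | uniq -c | sort -rn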

  • Site name questions [closed]

    - by avenas8808
    I'm creating a website on German cars. No issues with it, since it's basically a PHP site, and let's say for the sake of argument that there are no server issues. However, I plan to call it "autohaus" (well, for the sake of this scenario anyway; it's not its actual name). The site is not for commercial gain; it's a fansite. Legally, can I use a name for a website if it's a commonly used word, as "autohaus" appears to be? There are other sites called "Autohaus", but could someone use a similar name for non-commercial, informational/educational use? Basically, what's the trademark law for such a situation? Currently my site is only on localhost, but what if I did want to release it to the web? Obviously the legal issues have to be taken into account before the site goes live, but there are no Data Protection Act concerns etc., since it's not selling anything and no one's buying anything, and the pictures in it are being used educationally (but that's for another question entirely). [NOTE: This question is on trademarks/names etc., not domains.]

  • Same native and tagged VLAN possible on Red Hat?

    - by Chris Phillips
    Hi guys and gals. I'm looking at implementing a system using a number of tagged VLANs and a native VLAN connected to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully this plan is fine, but I'm trying to implement a crippled version of it in a dev environment, due to a lack of underlying network config in this location, where I just have the same single VLAN delivered to the machine tagged AND plain. I'm not clear on whether this is going to work (and whether I should just be confident that it will work using different VLANs), as I'm seeing odd things: a VM is ARPing out over the VLAN to the core switch, but the ARP reply is coming back on the untagged interface. Now, an ARP reply is unicast, right? So it's a deliberate thing to send the ARP response on the untagged interface, and not a case of a broadcast response not being passed on the tagged side... i.e. there's some underlying logic pushing it that way. Something about the MACs, somehow? This is on a CentOS 5.5 machine, with VLANs from vconfig. (I've seen reference to the Linux macvlan work, but that's not available here by default.) So:

        1) Should having the SAME VLAN tagged and untagged work?
        2) Will different tagged VLANs alongside the untagged interface work nice and easily?
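
    For reference, the shape of the intended production config on CentOS 5 - a sketch with hypothetical interface names and VLAN IDs; note it exercises the different-VLANs case from question 2, not the same-VLAN-tagged-and-untagged case:

        # /etc/sysconfig/network-scripts/ifcfg-bond0.100
        # Tagged VLAN 100 on the active/passive bond; VLAN=yes makes the
        # initscripts create the 802.1q subinterface, BRIDGE hands it to the VMs.
        DEVICE=bond0.100
        VLAN=yes
        ONBOOT=yes
        BRIDGE=br100

        # /etc/sysconfig/network-scripts/ifcfg-br100
        DEVICE=br100
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=none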

  • Need help trying to allow my remote PowerShell script to run on my Windows 2008 R2 server

    - by Pure.Krome
    I've got a Windows 2008 R2 server with SQL Server 2008 R2 installed. I've got a SQL Server Agent job which tries to run a PowerShell step, but it fails:

        Message
        Executed as user: FooServer\SqlServerUser. A job step received an error at line 1 in a PowerShell
        script. The corresponding line is '& '\\polanski\Backups\Database\7ZipFooDatabases.ps1''. Correct
        the script and reschedule the job. The error information returned by PowerShell is: 'File
        \\polanski\Backups\Database\7ZipFooDatabases.ps1 cannot be loaded. The file
        \\polanski\Backups\Database\7ZipFooDatabases.ps1 is not digitally signed. The script will not
        execute on the system. Please see "get-help about_signing" for more details.'. Process Exit
        Code -1. The step failed.

    OK, so I run PowerShell on that server and set the execution policy to unrestricted. To check:

        PS C:\Users\theUser> Get-ExecutionPolicy
        Unrestricted

    Kewl :) but it still doesn't work :( OK... what happens when I try to run the script from the command line?

        PS C:\Users\justin.adler> . '\\polanski\Backups\Database\7ZipMotorshoutDatabases.ps1'

        Security Warning
        Run only scripts that you trust. While scripts from the Internet can be useful, this script can
        potentially harm your computer. Do you want to run
        \\polanski\Backups\Database\7ZipFooDatabases.ps1?
        [D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"):

    Er... didn't I already tell the server that ANY file can be run? Notice the file is located at \\polanski\Backups\Database\. Can someone make any suggestions?
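
    Two details commonly bite here, and both fit the symptoms: Unrestricted still prompts for scripts coming off a UNC path (they count as remote, which is exactly the prompt shown above), and a SQL Agent job step doesn't necessarily run under the same PowerShell host whose policy was set. A sketch of two workarounds, reusing the path from the question:

        # Option 1: Bypass suppresses the remote-script prompt entirely
        # (run once from an elevated console).
        Set-ExecutionPolicy Bypass -Scope LocalMachine

        # Option 2: make the job step a CmdExec step that forces the policy
        # for just this invocation.
        powershell.exe -ExecutionPolicy Bypass -File "\\polanski\Backups\Database\7ZipFooDatabases.ps1"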

  • What switch should we use for PCoIP?

    - by Jay R.
    We have a small lab space that seats 10 people and has 20 machines. Each machine is set to 1920x1200 resolution because the user apps are best used at that resolution. Currently the machines are all located close enough to monitors that a DisplayPort cable will reach, but the pending lab remodel positions them around 80 feet or more away, in racks. Our proposed solution is to use PCoIP. We purchased 10 PCoIP portals and 20 PCoIP host cards, and we plan to set up a dedicated network to handle just the PCoIP traffic. After testing just one portal and one host card with a cheap 1G switch from a local office supply store, we were left with a less than good impression of the usefulness in our lab: the frame rates were not spectacular and the mouse seemed jerky. Our concern is that we can't get away with the cheap 1G stuff from the store, because adding more machines to the switch will just make the user experience worse. What switch would be recommended to best support our PCoIP situation? We will need to plug in at least 30 cables based on just those machines. Is there a particular feature to search for that makes a difference? Is there a switch that works best with PCoIP? Added info: the reporting webapp for the host card shows maximum bandwidth usage of 220,000 kbps; the average appears to be around 180,000 kbps. The reverse direction is much lower, around 15,000 kbps.
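
    Rough capacity math from the host card's own numbers - a back-of-the-envelope sketch treating the reported peak as the worst case:

        220,000 kbps ~= 220 Mbps peak per host
        20 hosts x 220 Mbps ~= 4.4 Gbps aggregate worst case
        20 hosts x 180 Mbps ~= 3.6 Gbps aggregate typical

    So the feature to check is less PCoIP-specific than fabric capacity: a non-blocking switch whose backplane lets all 30 ports run at line rate simultaneously, with decent per-port buffering, should absorb those peaks.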

  • data transfer rate anomaly (internet)

    - by Nrew
    I tested my internet speed at speedtest.net and got the result: 0.42 Mbps download and 0.21 Mbps upload. My classmate got the same download speed of 0.42 Mbps but has a 0.87 Mbps upload rate. Does upload rate affect the transfer rate? Because even though we got the same download speed, his transfer rate is about 100 kbps downloading a movie from a torrent, and mine is only about 47 kbps - on the same torrent. Even on direct downloads it's always 47 kbps. Is it possible to tweak something in order to get higher transfer rates? Other details: we're both using the same ISP - the same slow ISP - and it seems he's getting the most out of his connection even though his plan is lower than mine. I just don't know why I'm losing out here. And when I complain to the ISP, they say that I'm getting the minimum speed and it's okay. That really sucks. Neither of us is using a router; the computer is directly connected to the internet using the modem provided by the ISP.
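
    One unit wrinkle is worth checking before blaming the line - a quick sanity calculation, assuming the torrent client is actually reporting kilobytes per second (KB/s) while speedtest.net reports megabits per second (Mbps):

        0.42 Mbps / 8 bits per byte ~= 0.0525 MB/s ~= 52 KB/s

    If so, a steady 47 KB/s download is already close to saturating a 0.42 Mbps line, and the classmate sustaining ~100 KB/s would mean his line really syncs faster than that one test run showed - not that his higher upload rate drives his download rate.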

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:

        - Windows 7 Ultimate
        - 1TB boot drive (my smallest drive)
        - Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB

    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be something like 127GB, so I obviously need to directly connect the disks to my Windows 7 machine. Here is my plan:

        - Server Core 2008 R2 (Hyper-V)
        - 1TB boot drive (storing VHDs for the boot drives of the VMs) - possibly in a RAID 1 with my other 1TB drive
        - 5x 2TB drives (1x 2TB drive hot spare), totalling 10TB, directly attached to a Windows 7 VM, so the homegroup can use this array

    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 PC2-6400 RAM. (Yes, I know it's a bit out of date.)

  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the actions available is to send an email through an SMTP server. This is working great; however, it would be ideal if the event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those are just shots in the dark. Any amount of googling produces no results. Does anyone know if this is possible?

    Update: Sparks' suggestion below is a step in the right direction, I believe; however, that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID or, most importantly, the description. Here's the raw XML from one event:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="DFSR" />
            <EventID Qualifiers="16384">4412</EventID>
            <Level>4</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2009-05-14T18:18:09.000Z" />
            <EventRecordID>45692</EventRecordID>
            <Channel>DFS Replication</Channel>
            <Computer>servername.domain.com</Computer>
            <Security />
          </System>
          <EventData>
            <Data>9046C3F4-843E-4A53-B941-4B20764072E5</Data>
            <Data>D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED</Data>
            <Data>D:\departments</Data>
            <Data>{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
            <Data>Departments</Data>
            <Data>domain.ca\files\departments</Data>
            <Data>B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9</Data>
            <Data>DFAA7A54-66CB-4C31-81A0-0F861382C32C</Data>
            <Data>CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
          </EventData>
        </Event>

    I have tried using a ValueQuery for EventData, but it returns no data.
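
    For what it's worth, the pattern that usually unlocks this is an XPath ValueQueries block added to the exported task definition (the GUI does not expose it) - a sketch, untested against this particular DFSR event, with arbitrary variable names:

        <!-- Inside the <EventTrigger> element of the task's XML. -->
        <ValueQueries>
          <Value name="eventData">Event/EventData/Data</Value>
          <Value name="eventID">Event/System/EventID</Value>
        </ValueQueries>
        <!-- The email action's body can then reference $(eventData) and $(eventID). -->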

  • Physical Debian to VMWare: vmware-converter, dd-image or otherwise?

    - by Dabu
    We have two Debian Lenny production machines, both running larger commercial websites. Now these machines need to be moved, and in the process they need to be virtualized to VMware ESX. If you believe the information on the internet, there are several ways to accomplish this. The easiest for us would be to use our weekly dd backup, where the whole disk is imaged; however, I have no experience with this kind of technology or whether it is really possible. The second best way would be via an application on the source machine, virtualizing it and generating an ESX-compatible VM. However, the software is beta and unsupported, and after installation nothing really works (the /etc/init.d/vmware-converter script doesn't actually do anything; start and stop reply with success messages, yet ps shows that there are no new processes). The worst way, with the most work, would be to install a new machine and set it up manually, copying files and databases as needed. This part is clear in its execution, and my question does not touch it. Is my first way possible? Has anyone done this yet or, better, has a page with instructions? Or is there a help page that explains how to correctly install, run and use the vmware-converter tool on a Debian installation (it's possible that I did something wrong during installation already)? Thank you.
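
    On the first option: a raw dd image can generally be converted offline into a VMware-attachable disk - a minimal sketch, assuming qemu-img is available on a staging machine; the file names are placeholders:

        # Convert the raw dd image into a VMDK.
        qemu-img convert -f raw -O vmdk lenny-disk.img lenny-disk.vmdk

        # Sanity-check the result before building a VM around it.
        qemu-img info lenny-disk.vmdk

    The resulting VMDK may still need to be imported through vCenter Converter or vmkfstools to match the exact on-disk format ESX expects.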

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (may be relevant, I'm not sure), my computer began to stop allowing the execution of any service or application, as well as discontinuing the update of the clock, until a hard shutdown was performed. This seems to occur randomly after periods of inactivity, and I've no idea of the cause. These are measures I have already taken to attempt to stop this:

        - Obviously, Googling potential answers to this problem
        - Updating all drivers
        - Researching all events that have occurred around the time of the failure to respond (with no results)
        - Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error, but was not

    Here is some more, potentially related, information about the error:

        - No BSOD (actually, I haven't experienced a BSOD with Windows 8 at all)
        - The computer seems to have a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off)
        - New sound instances are not able to play, but previously loaded containers function properly
        - As mentioned before, the clock freezes at the time of the error
        - USB devices function properly
        - Servers that I was running fail to respond on my end, but stay online

    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

  • Install Debian stable Linux ISO from USB to dual boot with Windows

    - by tgkprog
    I want Debian as a dual boot with my Windows Vista. I freed up 50GB on my D drive, and plan to use 40GB for the Debian install and 6GB for swap space. I have a 16GB USB drive. I downloaded UNetbootin (http://unetbootin.sourceforge.net/) and the DVD files of stable Debian: debian-7.0.0-amd64-DVD-1.iso, debian-7.0.0-amd64-DVD-2.iso and debian-7.0.0-amd64-DVD-3.iso. After I choose HD install, UNetbootin says to place the ISO in the same place, but I have 3. Do I need to merge them? If so, is there any freeware to do that? Can I do it with 7-Zip? When I extract them with 7-Zip there are clashes between the 3 ISO files. Just overwrite? Any options to merge (format etc. for 7-Zip)? Or what must I use? I tried to keep the 3 files alongside the other UNetbootin files, but I get an error message. Files I have on my USB:

        06/30/2013 11:44 PM      2,835,648 ubnkern
        06/05/2013 12:14 AM  3,998,007,296 debian-7.0.0-amd64-DVD-1.iso
        06/04/2013 03:30 PM  4,696,872,960 debian-7.0.0-amd64-DVD-2.iso
        06/05/2013 01:25 AM  4,698,955,776 debian-7.0.0-amd64-DVD-3.iso
        06/30/2013 11:45 PM      6,530,278 ubninit
        06/30/2013 11:46 PM            155 syslinux.cfg
        06/30/2013 11:46 PM         60,928 menu.c32

    Also, I can only copy the above files if I format my USB as NTFS; on FAT32 it says the .iso is too large to copy. How do I get around that? My internet connection needs a login, so I cannot do a net install.
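
    The directory listing above already explains the FAT32 failure - a quick check against the FAT32 per-file ceiling:

        FAT32 maximum file size: 2^32 - 1 = 4,294,967,295 bytes
        DVD-1: 3,998,007,296 bytes (fits)
        DVD-2: 4,696,872,960 bytes (too large)
        DVD-3: 4,698,955,776 bytes (too large)

    Only DVD-1 fits on FAT32 - and it is also the only disc a base install needs; DVD-2 and DVD-3 just carry additional packages, so there is nothing to merge.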

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800MHz) on an Asus P5K-E motherboard, which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066MHz, the max the P5K-E allows) and to the Core 2 Quad Q9550, to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or should I use that money on an SSD or 10k rpm drive for the existing system's OS/apps/virtual-machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!

  • Diagnosing RAM issues

    - by TaylorND
    I have an old Acer Aspire T180 desktop. The specs are as follows: AMD Athlon 64 3800+ 2.4GHz, 1GB DDR2 SDRAM, 160GB hard drive, DVD-writer (DVD±R/±RW), gigabit Ethernet, 17" active-matrix TFT color LCD, Windows Vista Home Basic, mini-tower AST180-UA381B. According to the computer's documentation, it comes with 1 GB of RAM, in two DDR2 SDRAM sticks. I used to have Windows Vista installed; then I removed it and installed Windows 7, and I have since removed Windows 7 and installed Windows XP. According to Windows XP, with both RAM sticks in, the computer has 768 MB. Isn't this supposed to be 1 GB of RAM, i.e. 1024 MB? Is the amount of RAM installed only partly usable by the operating system? Is there something I'm missing? If I remove either one of the RAM sticks, I'm left with 448 MB of RAM. These numbers don't seem to add up: if each of the RAM sticks contains at least 448 MB, shouldn't both together provide 896 MB? Even then, isn't that less than a GB of RAM? I'm not too experienced with hardware, so I thought this would be the best place to ask. As a follow-up question: is the RAM I have enough to run/multitask with Windows XP efficiently? I plan to do a lot of computing with the system (although not gaming); should I invest in more RAM?
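
    Before suspecting a failed module, it is worth checking how much memory the integrated graphics reserves; this is a sketch of the arithmetic, and the aperture sizes are BIOS-dependent guesses rather than known values for this board:

        2 sticks: 1024 MB installed - 256 MB reserved for video = 768 MB visible
        1 stick:   512 MB installed -  64 MB reserved for video = 448 MB visible

    If the BIOS shrinks the shared-video allocation when less RAM is installed, both readings are consistent with healthy sticks.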

  • Blue screen of Death on Install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive. I want to reinstall with XP (I plan to format and overwrite Vista). When I boot up using the Dell XP CD, it loads the initial drivers and then I get a blue screen:

        *** STOP: 0x0000007B (0xF78D2524, 0x00000034, 0x00000000, 0x00000000)

    This is quite concerning. The installed OS works OK, but it's giving problems, so I want to remove it. Should I just format the SSD and try again? Will this make any difference? Can I do something to avoid hitting the blue screen? It's possible I had corrupt sectors on one of the other disks; will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive when installing? I never did find the answer; however:

        - I removed the SSD and tried to install on the other disk - CRASH
        - I disconnected the other disk and tried to install with only the SSD plugged in - CRASH
        - I removed 1 stick of RAM - CRASH
        - I used a Windows 7 CD - NO CRASH

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run daily via a cron job to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may produce. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.

        #!/bin/sh
        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`
        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
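
    One way to bolt logging and a fail-safe onto a script of this shape - a sketch, assuming /bin/bash (as on CentOS) and a working local mail command; the log path and recipient address are placeholders:

        #!/bin/bash
        LOG=/root/backups/backup_$(date +%m%d%y%H%M).log

        # From here on, everything written to stdout/stderr lands in the log.
        exec >>"$LOG" 2>&1

        # Abort at the first failing command, and mail the log when that happens.
        set -e
        trap 'mail -s "backup FAILED on $(hostname)" admin@example.com < "$LOG"' ERR

        # ... the tar / gzip / scp / rm steps go here unchanged ...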

  • i7-980X at 70% speed

    - by Buxley
    Hi, we bought a nice computer to use to solve optimization problems: an Intel i7-980X @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum at the beginning of the solve. However, after a while (about 8 hrs) all cores jump between 65 and 85%. When I solve the same models on an i7 930, all cores are at a near-100% level even after longer solution times. We first thought that the hard disk was the bottleneck, since Gurobi writes out node files after the memory limit is used. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only uses RAM - and still see the same processor behavior. Any ideas about the bottleneck? As I said earlier, it works at 100% for the first hours or so. Thanks very much for any answers! Our plan was to overclock it, but we can't even get it to work at normal speed yet!

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

        Directories
        Permission: 771. Owner: user:uploaders
        Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w.

        Files within the web root
        Permission: 664. Owner: user:uploaders
        Explanation: they will be uploaded and changed by different users, so both owner and group need w+r. The webserver only needs to read files, so r only.

        Upload directories
        Permission: 771. Owner: user:webserver
        Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while preventing "others" from writing to this folder.

        Uploaded files
        Permission: 664. Owner: user:webserver
        Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make these files any more accessible than r for the group.

    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
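
    A sketch of how the policy above might be applied to an existing tree - the path, user and group names are hypothetical; the setgid bit (2771 instead of plain 771) is an addition to the plan that keeps new files in the directory's group automatically:

        # Web root owned by a deploy user, group uploaders.
        chown -R deploy:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 2771 {} +
        find /var/www/site -type f -exec chmod 664 {} +

        # The upload directory flips the group so Apache (group webserver) can write.
        chown deploy:webserver /var/www/site/uploads
        chmod 2771 /var/www/site/uploads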

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air, both behind an IPCop firewall. The PowerMac is connected through a gigabit switch; the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box; both machines are able to read and write files to it at close to their pipe speeds. The main problem is when I am trying to upload files to the internet at large: the G5 is only hitting 0.1-0.25 Mbps, while the MacBook is able to hit 2-3 Mbps. The setup (G5 / IPCop / network) has been the same for 5 years. The issues with the internet speed started about 3 months ago; I hadn't tested on the MacBook at that point. I complained to the ISP; they said their modem needed a firmware update, did that, and nothing changed. Reset IPCop, turned off Squid, etc. No changes. The ISP switched the office over to a better plan with a theoretical 6 Mbps up; still no change. At this point I tried testing the MacBook, and lo and behold, there's the speed. But why? I have tried changing out everything: cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there were something horribly wrong with the network, I would find a similar issue internally, but that side is perfect: iperf, ping, etc. show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated! (The speedtest.net screenshots for the G5 and the MacBook Air are not reproduced here.)

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID 0 or RAID 10, to improve performance). However, EBS I/O is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system, such that I could have:

        - a filesystem on top of a RAID 0 of ephemeral volumes, to maximise performance
        - all changes from the above immediately replicated to another RAID 1 volume backed by multiple EBS volumes, to ensure no data loss

    The advantages of this would be:

        - best possible I/O performance for the DB server; no network delay in I/O
        - decreased I/O on EBS volumes (as all read I/O will be done on the ephemeral volumes), so decreased cost
        - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other. Is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough that this whole idea is redundant)? Is there some flaw in the plan?
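
    One existing mechanism that matches this single-host, asymmetric mirror is Linux md RAID 1 with a write-mostly member - a sketch, with hypothetical device names standing in for the ephemeral RAID 0 and the EBS-backed array:

        # /dev/md0: RAID 0 over ephemeral disks (fast, volatile)
        # /dev/md1: RAID over EBS volumes (slower, persistent)
        # Reads are served from md0; md1 only absorbs writes. The write-intent
        # bitmap is required for write-behind to buffer those writes asynchronously.
        mdadm --create /dev/md2 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/md1
        mkfs.ext4 /dev/md2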
