Search Results

Search found 18151 results on 727 pages for 'upside down'.

Page 534 of 727

  • Server 2008 RAID 5 Write Speeds

    - by Solipsism
     I recently configured a RAID 5 partition in Server 2008 with four disks. These disks are connected through a SATA expansion card that uses PCIe. This morning, I checked and they had finally finished synchronizing, and so I tried to do some speed tests. Copying off the disks started pretty much fine - speeds began at 125MB/s, then trailed down to about 70MB/s, which I found odd but not worrying. Writing TO the disks, however, is a completely different story. I attempted to copy some of my VM host ISOs onto the disks (~2-4 GB apiece) and this resulted in speeds of approximately 10MB/s. I tried copying both from a local disk (connected directly to the motherboard) and from another server over the gigabit network, and the results were the same. I checked the performance monitor while transferring the files and the only thing that stuck out was that my memory hard faults shot up to 6,000 per minute (spiking around 200/s) by explorer.exe. The system is running 2GB of DDR667 ECC RAM and a quad-core 2.3GHz Opteron. Is there anything I can do to fix this performance issue (buy more RAM? move the drives to a faster box? etc.) or am I just screwed so long as I stick to Windows?
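
     One way to take Explorer (and its hard-fault storm) out of the measurement is to time a raw command-line copy onto the array. A rough sketch, where E: stands in for the RAID 5 volume and the paths are placeholders:

         robocopy C:\ISOs E:\test some-image.iso /NP

     robocopy prints a Speed line in its summary; if that also sits around 10MB/s, the bottleneck is the array or controller (software RAID 5 parity writes with no write-back cache are a common culprit on this kind of setup) rather than Explorer or a memory shortage.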

    Read the article

  • Windows 7 logon script net use fails

    - by Bryan
     Our network PCs currently consist of Windows XP Professional on a mixed 2008/2003 domain, with the exception of one machine, a new Windows 7 PC we have bought for testing before we deploy the operating system. But we have discovered a problem with our logon script which automatically maps network drives for our users. The logon scripts are done via User GPOs, but the script itself is just a .cmd file using net use. The permissions are perfectly fine, as the same user can log on to a Windows XP machine and get their drives mapped without problem, but this one drive mapping constantly fails. This is repeatable using the net use command, and fails every time - it actually prompts the user for a username and password when executed interactively, yet if we enter \\server\share from a run dialog, the contents of the network share appear and are accessible without any further authentication. The Windows 7 PC (just like the XP systems) is a domain member and the account being used is a domain account, which does have access to the share (as stated, it works fine on XP). I fail to understand what is happening here, as other shares on the server get mapped on the Windows 7 system. More info: the effective permissions of the share in question only grant the user 'list' permission on the root directory; the share permissions are 'Everyone, Full Control'. I've created a new share with the same permissions just to test if it was down to the 'list' permissions on the root directory, but the Windows 7 machine maps this one fine.
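
     When the interactive prompt appears, it can help to try the mapping with explicit domain credentials and to translate the exact error code that comes back, since that separates an authentication problem from a share-permission problem. A sketch with placeholder names:

         net use Z: \\server\share /user:DOMAIN\username *
         net helpmsg 5

     The * prompts for the password, and net helpmsg translates whatever numeric error code net use reported (5 is access denied, 1326 is a bad username/password). If the explicit-credential mapping works while the logon-script mapping does not, the difference is in how the script's security context authenticates rather than in the share permissions.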

    Read the article

  • In Windows 7 power management, is it possible to set different sleep settings for different SATA disks?

    - by Ben Voigt
    I'm having an issue with Windows 7 either freezing up or generating a BSOD coming out of sleep. I suspect that it is related to my boot/OS drive, an OCZ Vertex SE SSD, because numerous other Vertex users have reported sleep problems. Notably, if I put the computer to sleep, it almost always wakes correctly. If it goes to sleep after a timeout, it almost always BSODs. I disabled timed sleep and now it freezes when left unattended. My next step is to disable "Put hard disks to sleep after X minutes", but I'd like to change this setting only for the SSD and not for the rotating data disks, which I would like to spin down normally. Does anyone know a place to configure sleep on a per-disk basis? I don't need to set different timeouts on different disks (although that would be nice), simply setting "this disk sleeps" and "sleep is disabled for this disk" would be great. Additional system information: Windows 7 Ultimate x64, Core i5 - P55 chipset, Intel RST drivers are installed. One SSD, two rotating HDD, and a DVD-RW drive are all connected to the Intel SATA ports. I could potentially move some of these to my motherboard's other SATA controller if that would help.
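
     Windows 7 only exposes the "turn off hard disk after" timeout per power plan, not per physical disk, so a common compromise is to disable the timeout globally (so Windows never puts the SSD to sleep) and rely on the rotating disks' own power management for spin-down. A sketch using the legacy powercfg syntax, where 0 means never:

         powercfg -change -disk-timeout-ac 0
         powercfg -change -disk-timeout-dc 0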

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
     The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working except for some weird redirects. If you have two sites where one is a substring of the other then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

         RewriteCond %{REQUEST_URI} ^/drupal
         RewriteCond %{REQUEST_FILENAME} !-f
         RewriteCond %{REQUEST_FILENAME} !-d
         RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

         RewriteCond %{REQUEST_URI} ^/drupaltest
         RewriteCond %{REQUEST_FILENAME} !-f
         RewriteCond %{REQUEST_FILENAME} !-d
         RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

     /drupaltest will match the /drupal line and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition then it will always match on the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in though, because the query string is appended to the URL so just the base URL will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting but that also seems suboptimal since it will generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
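
     One possible tweak, if the goal is just to stop /drupaltest matching the /drupal rules, is to anchor each prefix so it must be followed by a slash or the end of the URI; in Apache, %{REQUEST_URI} normally excludes the query string, so this should keep matching after login, though that is worth verifying. A sketch only, not tested against this exact setup:

         RewriteCond %{REQUEST_URI} ^/drupal(/|$)
         RewriteCond %{REQUEST_FILENAME} !-f
         RewriteCond %{REQUEST_FILENAME} !-d
         RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]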

    Read the article

  • Apps won't start after vanilla reboot

    - by Daniel R Hicks
    I had Adobe and Norton nagging me to reboot, so I did that -- clicked Reboot from the Start button. Everything seemed pretty normal as it shut down and came back up, but once up a bunch of apps won't start. The first one I noticed was Firefox. It would flash the disk light normally, but never appear on the screen. Then I tried to bring up an OpenOffice Calc window and same thing. I tried to bring up MS Word, and the splash screen appeared, but never the main screen, and the splash screen just sat there, with a swirly over it. But I tried Solitaire, Notepad++, Paint, and several others, and they popped up just fine. And I'm typing this from IE 8, which, if anything, came up faster than usual. When I try to open up "Network and Sharing Center" the window appears, but nothing appears in it, and eventually it's tagged "not responding". When I kill that window I get (after a delay) "Windows Explorer is not responding", and when I say "OK" the screen resets. I tried rebooting again, and no joy -- same as before. Have done nothing particularly strange on this box, and it's not generally at significant risk for malware. I haven't installed anything new other than the afore-mentioned updates. One other thing: Several minutes after rebooting I get the message "Error: Unable to start Bluetooth Stack Service." The Bluetooth radio is turned on, and I rarely have anything Bluetooth attached, and I don't recall that I've ever seen this message before. Added: Looking at Event Viewer, I'm getting a lot of "The description for Event ID 1 from source xxx cannot be found." Is there any significance to this? Added: I'm looking at restoring from backup, but the procedure is, at best, unclear. Is it sufficient to restore from "Backup and Restore Center", or must I restore from the restore DVD first?
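
     Before reaching for the restore DVD, it may be worth ruling out damaged system files; the "description for Event ID 1 ... cannot be found" entries usually just mean an event provider's registration is broken, not that the events themselves are fatal. A hedged first step, from an elevated command prompt:

         sfc /scannow
         findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfc-summary.txt"

     Whether the Backup and Restore Center is enough depends on what was backed up: a file backup only brings files back, while a system image (or the restore DVD) is what rolls back the OS itself.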

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
     We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight and we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, I have the mentioned working-set job running for the last 3 1/2 hours and it's backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.
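
     Since other job types run at 1-3GB/min over the same link, one way to separate raw network/agent throughput from the cost of enumerating changed files is to benchmark the path with iperf while no job is running. A rough sketch with placeholder hostnames:

         # on the Backup Exec server (or a box next to it)
         iperf -s
         # on the RHEL5 client
         iperf -c backup-server -t 30

     If the raw link is fine, the slowdown is more likely RALUS walking a very large file tree to build the working set than anything on the wire, which would also explain why the job degrades as it progresses.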

    Read the article

  • Nginx settings are screwing up my Drupal form submissions, how do I fix?

    - by bflora
     How do I tell Nginx to "ignore" specific URLs or pages on my web site? I run a Drupal site where anonymous visitors get served via NGINX while logged-in users get served via Apache. We do this to keep the load down and scale better. It works great, except that since we set up nginx, a good number of Drupal forms no longer work. For example, before installing Nginx, if you created a new article, then clicked "edit" and edited the article, you could click "save" and your changes to the article would be saved. After setting up nginx, when you make edits and then click "save," the page simply refreshes, but now with "nginx-index.php" inserted into the URL, and your changes to the form are not actually saved to the database. So if you go to edit an article, you'll be on domain.com/node/##/edit or something like that. When you try to save your changes to the form, you'll wind up at domain.com/nginx-index.php?q=node/##/edit, and your changes will not be saved. There is a way around this, but only for administrative users. If you go to a form where this problem is happening, then comment or uncomment three lines in our settings.php file, the form will save properly. Those three lines are:

         // 'cache_form' => array(
         //   'engine' => 'db',
         // ),

     If they're commented out, you uncomment them, then save the form. If they're uncommented, you comment them out and save the form. Obviously, this sucks. My friend who set up our server (and then left the country) told me that there are some Nginx settings that can tell it to "ignore" certain URLs or pages, which could work here. How do I do this and where do I do it?
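
     If the working split is "anonymous traffic on nginx, logged-in traffic on Apache", one approach is to have nginx pass the form-handling paths straight through to Apache instead of rewriting them to nginx-index.php. A sketch only; the upstream address, port, and the exact set of paths are assumptions, not the actual config:

         location ~ ^/(node/.*/edit|comment|user|admin) {
             proxy_pass http://127.0.0.1:8080;          # Apache backend (placeholder address)
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $remote_addr;
         }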

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
     We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability):

         2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too
         while sending request to upstream, client: XXXXXXXXX, server: test.local,
         request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
         upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

     When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

         # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
         # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
         list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

     The file name requested is a URL to the same server that is running the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif When the above line is executed, any subsequent request to the NGINX server does not respond any more. After waiting for around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
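
     A plausible explanation for the 10-minute hang is a loopback deadlock: getimagesize() opens an HTTP request back to the same server, and if PHP sits behind nginx with only one FastCGI worker, that worker is already busy serving the original request, so the new request queues until it times out. A sketch of the worker-count side, assuming php-fpm is what listens on fastcgi://127.0.0.1:9024 (adjust if spawn-fcgi or another manager is actually in use):

         ; sketch for the php-fpm pool config, e.g. /etc/php-fpm.d/www.conf
         pm = static
         pm.max_children = 8

     The other direction is to avoid the loopback HTTP request entirely by resolving skin URLs to filesystem paths before calling getimagesize().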

    Read the article

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface? [migrated]

    - by JJarava
     This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WIFI/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her wifi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all wifi devices... and if the uninvited guest broke the password once, they can do it again... Her wifi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's wifi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service... So, how could she check if there are guests on the wifi, with these restrictions and in the most "point and click" way? In this case I'd probably use nmap to look for other devices in the network, but I'm not sure if that's the easiest way to do it. I'm not a wifi expert, so I don't know if there are any wifi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, so I guess that'll influence the choice of tools available. Any suggestions are more than welcome. Regards!
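
     Without access to the router, the most point-and-click option on her Windows laptop is probably Zenmap (the nmap GUI); under the hood it is just a ping sweep of the local subnet plus a look at the ARP table, which can also be done by hand. A sketch assuming the common 192.168.1.0/24 home subnet:

         nmap -sn 192.168.1.0/24
         arp -a

     Anything in the list that is not her router, PC, phone, printer, etc. is a candidate guest; the MAC vendor prefixes help identify what each device is.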

    Read the article

  • Cisco 851 (IOS) router: FastEthernet 4 (WAN) got the shutdown flag.

    - by cjavapro
     At a customer location there was a Cisco 851 router (which uses IOS). The PCs on location were all of a sudden unable to connect. We came on site and found that FastEthernet 4 (the WAN port) was "administratively down". We ran these commands to resolve it:

         config t
         interface fa4
         no shutdown
         exit
         exit
         write

     Now the mystery is how the shutdown flag got there in the first place. The router was on battery backup, but during the outage it was power cycled by the customer. It is possible that there was a short outage by the ISP and that the power cycle caused the shutdown flag to come up. There may have been a hack or an attack pattern that caused the shutdown flag to come up, or that caused the router to become unavailable and then caused the shutdown flag to be added on startup. Question: does anybody have any clues, or at least remember a shutdown flag coming up on their WAN port as well?
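
     Two IOS checks that may help reconstruct how the flag appeared: whether "shutdown" was saved in the startup configuration (in which case the power cycle simply booted into a config that already had it), and whether the logging buffer still shows when the interface state changed. A sketch:

         show startup-config | begin interface FastEthernet4
         show logging | include FastEthernet4

     If configuration change logging ("archive" with "log config") had been enabled beforehand, "show archive log config all" would also show which user or process issued the shutdown; without it, the logging buffer and any external syslog/AAA records are the only trail.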

    Read the article

  • Tuning Linux + HAProxy

    - by react
     I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k/sec connections consistently when benchmarking (sometimes I do get 30k/sec though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

         # Max open file descriptors
         fs.file-max = 331287
         # TCP Tuning
         net.ipv4.tcp_tw_reuse = 1
         net.ipv4.ip_local_port_range = 1024 65023
         net.ipv4.tcp_max_syn_backlog = 10240
         net.ipv4.tcp_max_tw_buckets = 400000
         net.ipv4.tcp_max_orphans = 60000
         net.ipv4.tcp_synack_retries = 3
         net.core.somaxconn = 40000
         net.ipv4.tcp_rmem = 4096 8192 16384
         net.ipv4.tcp_wmem = 4096 8192 16384
         net.ipv4.tcp_mem = 65536 98304 131072
         net.core.netdev_max_backlog = 40000
         net.ipv4.tcp_tw_reuse = 1

     If I use ab to hit a webserver directly I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy then I get 30k/s connections, but obviously that's useless. I've also disabled iptables for now since I read that nf_conntrack can slow everything down - no change. I've also disabled the irqbalance service. The fact that I can hit each individual device with 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy config? Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon CPU E5-2620 (6 cores) with 32GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.
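
     Beyond the kernel, HAProxy caps concurrency itself with maxconn at the global and defaults/listen levels plus the process file-descriptor limit; if any of those is still at its default, throughput plateaus no matter what sysctl says. A minimal sketch of the relevant lines (the values are illustrative, not recommendations):

         global
             maxconn  100000
             ulimit-n 200050

         defaults
             maxconn  100000

     Comparing these against the pastebin config, checking haproxy -vv for the build options, and watching ss -s during a run (to see whether connections pile up in SYN-RECV or TIME-WAIT) usually shows which limit is being hit.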

    Read the article

  • Diagnosing Random Network Lag

    - by uesp
     I'm having trouble diagnosing some random lag on a 6-server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec, the servers themselves are running fine with less than 0.5 load, no locked processes, no paging, no errors being logged, etc. Lag is present on all servers and is random: one minute it's fine, the next it's there. DNS lookups on the servers are randomly slow. For example, 'time nslookup google.com' varies randomly from a few milliseconds to several seconds and sometimes times out entirely. While we use IP addresses internally on the cluster, this may be a symptom of the root issue. We are not running our own DNS server. The Apache server-status pages randomly lag or time out. Benchmarking using ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed a lag only once among a few hundred tests). The servers are sitting behind a switch and a firewall which I don't have any access to, so I don't know their setup or status. While we are under heavier than normal load, 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall, should it? My feeling is that it is the switch/firewall, or something above them at the ISP like their DNS, but I can't confirm it. I need some other tests or methods of diagnosing this lag to try and narrow down the ultimate cause.
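
     Since the lag also shows up in DNS lookups even though the cluster talks to itself by IP, one way to separate a slow resolver from a lossy switch/firewall path is to time queries against the configured resolver and against an outside one, and to run mtr toward the gateway while the lag is happening. A sketch (the gateway address is a placeholder):

         for i in $(seq 10); do dig google.com | grep "Query time"; done
         for i in $(seq 10); do dig google.com @8.8.8.8 | grep "Query time"; done
         mtr --report --report-cycles 60 192.0.2.1

     Delays of almost exactly 3000 ms are also a classic signature of a dropped SYN being retransmitted after the initial 3-second retransmission timeout, which would point at packet loss in that path rather than server load.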

    Read the article

  • How do I prevent my computer from freezing when it starts to swap?

    - by cdauth
     I work as a Java programmer, so I often have to run several programs at the same time that consume a lot of memory. When my memory is full and Linux starts swapping, my computer almost completely freezes. I can see that it is heavily writing to the hard disk and everything reacts really slowly, often not at all. Moving the mouse in X sometimes doesn't work at all, sometimes it has a delay of several seconds; clicking usually has a delay of several minutes. Sometimes it is possible to change to the TTY (with a long delay), where I can usually type without delay, but when I try to log in, it takes several minutes after typing in the user name until the password prompt appears, and usually an error message appears telling me that the login timed out. So the only possibility is usually to restart the computer. I noticed that other intensive writing to the hard disk also significantly slows down my computer. Sometimes I have used rsync to limit the bandwidth when copying files around on my own computer, as otherwise the system would be almost unusable. How can this be? At the moment it seems more useful to me to completely turn off swapping. That might crash some processes, which is unfortunate, but the alternative at the moment is to crash all processes by turning off my computer. I am using Gentoo Linux with kernel 3.6.2-gentoo, and I have a 10 GB swap partition on a HDD.
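
     Before turning swap off entirely, two sysctl knobs are worth trying: swappiness (how eagerly the kernel swaps) and the dirty-page thresholds (how much unflushed write data can queue up before everything stalls behind the disk). A sketch; the values are common starting points, not magic numbers:

         sysctl -w vm.swappiness=10
         sysctl -w vm.dirty_background_ratio=5
         sysctl -w vm.dirty_ratio=10
         # persist the values in /etc/sysctl.conf once they prove themselves

     If the freezes are dominated by write-back rather than swap-in (which the rsync observation suggests), lowering the dirty ratios alone can already keep the desktop responsive.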

    Read the article

  • Windows 7 explorer crashing trying to read external hard disk

    - by Mario De Schaepmeester
     I have a 1TB Western Digital hard drive which is almost full, and last time I tried to plug it into my laptop, I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and I know it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the explorer process crashes after a while. I simply close Explorer since it takes ages trying to read the disk and nothing happens. After searching on the internet, the best thing to do seemed to be a chkdsk. I tried it via Properties in Explorer (which also took a good 5 minutes to open up); that locks up as well, and after waiting a couple of minutes it says there's no access to the disk, so a chkdsk is not possible... I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when trying to shut down Windows, the logoff screen just would not disappear (I waited at least 10 minutes or so), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognised immediately after that. I really don't want to format this thing because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action? Update: I got chkdsk working via the command line. I used the /F and /R options. I already got a bunch of lines saying "file record segment X is unreadable", or whatever it is in English; my OS is Dutch. It looks bad... Will chkdsk repair these errors?
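
     "File record segment X is unreadable" usually means bad sectors in the MFT, so it is safer to image the drive before letting chkdsk /R keep hammering it. A sketch using GNU ddrescue from a Linux live environment; the device name and target paths are placeholders, so double-check them first:

         ddrescue -f -n /dev/sdX /mnt/spare/wd1tb.img /mnt/spare/wd1tb.map
         ddrescue -f -d -r3 /dev/sdX /mnt/spare/wd1tb.img /mnt/spare/wd1tb.map

     Once an image exists, chkdsk or file-recovery tools can work against a copy instead of the only remaining copy of the data.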

    Read the article

  • Redis connection issue

    - by mre
     We are currently experiencing a lot of Redis errors with the message "Unable to connect: read error on connection, trying next server". We run Redis on FreeBSD using phpredis, and we have a hard time reproducing the error on Ubuntu, so this might be a hint. There's a long-running issue on that topic on GitHub. Basically we get a socket from the operating system with a call to connect(host, port, timeout) in phpredis, but when we do a select(db_index) afterwards, we get an exception. Could there be an issue with persistence? I assume that connect does nothing in the background and select tries to access the connection, which is actually closed. We don't run into a timeout. We tried tuning TIME_WAIT without success. Any other ideas on where the problem might come from? What is the best way to track the issue down? dtrace maybe? Update: We are currently looking into our BGSAVE settings. Interestingly, it takes half a second and more to create a fork for the process which regularly writes the data to disk (persistence), and maybe Redis can't respond to connect() requests during that timespan.
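
     The fork suspicion is easy to quantify: Redis reports how long the last BGSAVE fork took, and redis-cli can measure baseline latency while a save is running. A rough sketch (exact field names and the --latency option vary a little between Redis versions):

         redis-cli info | grep -i -E "fork|bgsave"
         redis-cli --latency

     If latest_fork_usec is regularly in the hundreds of milliseconds, spreading BGSAVE out, saving from a replica, or adding memory headroom for the copy-on-write overhead are the usual directions; a heavy fork would also explain why the problem is hard to reproduce on a differently loaded Ubuntu box.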

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
     I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32G of memory; Domain-0 has 2048 M assigned to it (scaled down with xm mem-set Domain-0 2048), and top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options:

         memory = '512'
         maxmem = '2048'

     Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512MB of memory as expected. Then 'xm mem-set domU 1024' will not expand the memory to 1024MB. Running 'xm mem-set domU 400' does set the memory to about 400MB, and then 'xm mem-set domU 1024' will expand the memory back to 512MB. Based on this, you would say that xm ignores the maxmem and silently sets maxmem to 512, but in the output of xm top the MAXMEM column reads 2G. The MEM column will not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled myself all the way around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but can't find it anymore. Does anyone see what I'm doing wrong here? Update: I just upgraded my kernel to the one provided by Debian backports. The issue has gone.
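
     For anyone stuck on the older kernel, one thing worth trying is to raise the balloon ceiling explicitly at runtime before growing the guest, since some kernel/toolstack combinations only honour the maximum that was in force when the domain booted. A sketch:

         xm mem-max domU 2048
         xm mem-set domU 1024
         xm list domU        # compare with the MEM column in xm top

     This may or may not help with the particular guest-kernel bug described here; the backports kernel was the actual fix in this case.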

    Read the article

  • Suggestions for Backup solution

    - by jiewmeng
     I am considering a Windows Home Server, a simple NAS, or extra HDDs in my desktop. I will be the main user, and I am looking to fulfil the following needs:
     - Reliability (I am thinking RAID 1 or 5).
     - Not so prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server is still a Windows PC, just separated by the network).
     - Power efficiency (e.g. spin down when not in use).
     - Downloads (e.g. I may want to download big files/torrents overnight, and I may not want to use a full-powered PC for it; does a full PC vs a NAS make enough difference in power usage to justify the cost of a new system, especially since I am the only user?).
     - Performance (I'd like to write and access my files fast; on second thought, maybe for backup I can forgo this? Maybe a WD Green HDD, but how much slower will it be? Plus, since I am the only user, I think the whole HDD will be mine?).

    Read the article

  • Excel equivalent of Java's String.contains(String otherString)

    - by corsiKa
     I have a cell that has a fairly archaic string (it's the mana cost of a Magic: the Gathering spell). Examples are 3g, 2gg, 3ur, and bg. There are 5 possible letters (g w u b r). I have 5 columns and would like to count at the bottom how many of each it contains. So my spreadsheet might look like this:

            A                   B          C  D  E  F  G
           +---------------------------------------------
          1| Name                Cost       G  W  U  B  R
          2| Centaur Healer      1gw        1  1  0  0  0
          3| Sunspire Griffin    1ww        0  1  0  0  0   // just 1, even though 1ww
          4| Rakdos Shred-Freak  {br}{br}   0  0  0  1  1

     Basically, I want something that looks like =if(contains($A2,C$1),1,0) that I can drag across all 5 columns and down all 270-some cards. (Those are actual data, by the way; it's not mocked.) In Java I would do this:

         String[] colors = { "B", "G", "R", "W", "U" };
         for (String color : colors) {
             System.out.print(cost.toUpperCase().contains(color) ? 1 : 0);
             System.out.print("\t");
         }

     Is there something like this in Excel 2010? I tried using find() and search() and they work great if the color exists, but if the color doesn't exist they return #VALUE - so I get 1 1 #VALUE #VALUE #VALUE instead of 1 1 0 0 0 for, for example, Centaur Healer (row 2). The formula used was if(find($A2,C$1) > 0, 1, 0).
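
     For reference, the usual contains-style test in Excel wraps SEARCH (or FIND) in ISNUMBER, so a miss becomes 0 instead of #VALUE; note the argument order is SEARCH(find_text, within_text), and SEARCH is case-insensitive, matching the toUpperCase() in the Java version. Assuming the cost sits in column B as in the sample layout above:

         =IF(ISNUMBER(SEARCH(C$1,$B2)),1,0)

     Dragging that across C:G and down the 270-odd rows gives the 0/1 grid, and a SUM over each column gives the counts at the bottom.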

    Read the article

  • Emails sent from ColdFusion using the same SMTP/Exchange server work from one machine but fail for another

    - by Peter Herdenborg
     First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes: a client I work for has a hosted IT environment based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in exactly the same way, using the same SMTP server (the client's Exchange server, hosted in the same environment and in the same AD as both web servers), sending emails to external recipients is no longer working. It is still working fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain): Emails sent to other recipients on the same domain are delivered without any problem. Emails sent to external recipients on other domains are never delivered. When sending emails to both internal and external recipients, no emails are delivered. When receiving one of these emails at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This hints to me that the Exchange machine "recognizes" the old web server while the new one is a stranger to it. In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server. Any ideas what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.
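
     One plausible cause for "internal mail works, external mail does not" after a migration is that the old web server's IP is whitelisted for relaying on an Exchange receive connector while the new one is not, so Exchange accepts the message but will not relay it outward. A place to start looking from the Exchange Management Shell; the connector name and addresses below are placeholders, not the client's actual values:

         Get-ReceiveConnector | Format-List Name,RemoteIPRanges,PermissionGroups,Bindings
         # if a dedicated relay connector exists, adding the new web server's IP may be enough;
         # note that -RemoteIPRanges replaces the whole list, so include the existing entries too
         Set-ReceiveConnector "Relay Connector" -RemoteIPRanges 10.0.0.5,10.0.0.6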

    Read the article

  • Looking for advice on using dd to back up a dual-boot laptop

    - by AvatarOfChronos
     My question boils down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything, including the MBR, the partition information and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before and can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, will I still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640GB HDD into a 500GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups, however when whatever miasma that killed the laptop struck, it also got the NAS :( Post PostScript: both devices were on a UPS system.
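
     Pointed at /dev/sda (the whole device rather than a partition), dd does copy the MBR, the partition table and all four partitions, but copying a mounted, running system produces an inconsistent image, so booting a live CD/USB first is much safer. On a half-dead drive, GNU ddrescue is usually the better tool because it keeps going past read errors and records what it skipped. A sketch; the device names are placeholders, so triple-check them:

         # whole-disk clone, padding unreadable sectors with zeros instead of aborting
         dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
         # or, more forgiving on a failing disk:
         ddrescue -f -n /dev/sda /dev/sdb /root/rescue.map

     If the USB drive is larger, the extra space simply shows up as unallocated after the clone; nothing is lost, and the last partition can be grown into it later.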

    Read the article

  • Windows Server wbadmin recover with commas

    - by dlp
     I want to do a recovery of files with commas in their names from the command line, like so:

         wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File, With Comma.txt,C:\Path\To\File 2, With Comma.txt"

     So there are two files:

         C:\Path\To\File, With Comma.txt
         C:\Path\To\File 2, With Comma.txt

     The problem is that wbadmin assumes commas separate each file, so it sees 4 files specified instead of 2. I've tried putting a \ in front of the commas that are part of the file names, like so:

         wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File\, With Comma.txt,C:\Path\To\File 2\, With Comma.txt"

     but it doesn't work; it just says there's a syntax error. The documentation on TechNet doesn't seem to mention anything that'll help either. The OS is Windows Server 2008 R2. A clarifying comment: I've changed the file names to be different than the actual names to be less revealing, but I also see I dumbed it down too much. The comma can occur either in the file name itself, like C:\Path\To\File, With Comma.txt, or in the path to the file, like C:\Path, To\Other\File.txt.
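
     There does not appear to be a documented escape for commas inside -items, so a common workaround is to recover at the folder level (with -recursive) to an alternate location and pick the wanted files out afterwards; when the comma is in a directory name, the nearest comma-free ancestor has to be used. A sketch, not verified against this exact backup:

         wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -items:C:\Path\To -recursive -recoveryTarget:D:\Restore -overwrite:Overwrite -quiet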

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
     I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install VMware NICs for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem, however I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom copy of the program folder for each user. Then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, however it's really messy and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.
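
     A lighter variant of the same hack, if each user's config file can point at a different server alias IP, is to skip the per-user virtual NICs and instead pin each alias to a persistent host route, so traffic for that alias always leaves through the interface (and source address) you want. A sketch; the addresses and the interface index (taken from route print) are placeholders:

         route print
         route -p add 10.0.5.21 mask 255.255.255.255 10.0.1.1 metric 1 if 12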

    Read the article

  • Switch between network configurations via command line in fedora 17

    - by Mike Fairhurst
     I have two different setups I use on my work laptop: one enables Synergy over an Ethernet SSH tunnel to my work computer on the local network, and the other opens an HTTP tunnel to my work computer from outside the network. When I have wifi enabled at work, my laptop seems to use it by preference, which makes Synergy run incredibly slowly. At home I must use wifi. I have scripts that start my SSH tunnels, add my SSH keys, and start up other programs like Synergy, and close themselves when I shut my laptop. However, every day I have to start my routine by opening gnome-control-center and turning on my Ethernet. I have tried route add and ifup, and none of it works, so I dove into gnome-control-center's source code and found that it enables the connection via libnm's method nm_client_activate_connection, with some libnm-specific structs that I am having trouble tracking down. I'm not much of a C programmer, and I'm not familiar with either GTK or libnm. Does anybody know what Fedora 17 does with Ethernet connections to fully enable them? Or does anybody know what libnm does to fully enable an Ethernet connection? Do I have to write a C program against libnm to fully emulate whatever gnome-control-center is doing?
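
     It may not be necessary to call libnm from C at all: NetworkManager ships nmcli, which can bring a wired connection up from a script the same way the control-center toggle does. A sketch using the 0.9-era syntax that Fedora 17 shipped; the connection name is whatever nmcli lists for the wired profile:

         nmcli con list
         nmcli con up id "Wired connection 1"
         nmcli dev disconnect iface wlan0    # optionally drop wifi so the ethernet route wins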

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
     For a company whose database I have been helping troubleshoot: in SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB - some hung transaction that didn't commit or roll back for a few days. That has been resolved and the log now stays around 1% full or less, due to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in this post I can't tell if their log was full or empty, and there are no replies to the post in this link. So I am guessing it is not a problem - does anyone know for sure?
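
     A mostly empty log does not slow normal reads and writes by itself, but a 100 GB file that grew in many small increments can contain tens of thousands of virtual log files (VLFs), which does slow log-dependent operations (log backups, recovery, replication) and stretches out restores. SQL Server 2000 can show both and shrink the file; the logical file name below is a placeholder:

         DBCC SQLPERF(LOGSPACE);            -- size and % used for every log
         DBCC LOGINFO;                      -- one row per VLF; thousands of rows means a fragmented log
         DBCC SHRINKFILE (MyDb_Log, 4096);  -- shrink to ~4 GB, then let it regrow in a few large steps if needed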

    Read the article

  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
     Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped, and it doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal:

         Utmp slot not found -> not removed

     and I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, how to fix it, and whether it would be related? Failing that, what sort of problem-determination tools are available to investigate what is slowing down this login process?
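
     Two checks that may narrow this down: timing where the delay actually occurs (PAM and its lookups, shell startup, or something the login path touches such as a dead NFS mount or a slow DNS/LDAP server), and looking at whether /var/run/utmp or the lastlog/btmp files have become huge or corrupt, which would fit both the slow logins and screen's "Utmp slot not found" message. A rough sketch:

         time bash -l -i -c exit                                # slow here points at shell init; fast points earlier (PAM, sshd)
         strace -f -tt -o /tmp/sshd.trace -p $(pgrep -o sshd)   # attach to the master sshd, log in, then look for long gaps in the trace
         ls -l /var/run/utmp /var/log/wtmp /var/log/btmp /var/log/lastlog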

    Read the article
