Search Results

Search found 13275 results on 531 pages for 'deep copy'.


  • Recovering portion(s) of file with CRC (cyclic redundancy check) errors in Robocopy

    - by Mark A
    Is it possible to recover portions of files with CRC errors? If so, how? I have a partially damaged hard drive (2.5" SATA) that I have partially recovered using SpinRite 6.0 (it took 2 weeks to run!). I have been successful in getting many of the files off of the drive using Robocopy:

        robocopy . /V /S /E /COPY:DAT /R:1 /W:0

    However, some of the files get to +/- 90% in Robocopy and then fail with a CRC data error (cyclic redundancy check). I am wondering if it is possible to recover the first 90% of such a file and then try to salvage it in a text editor.

        1.0% ... 91.0% 91.1%
        2010/06/14 18:21:13 ERROR 23 (0x00000017) Copying File
        F:\Documents and Settings\user\Local Settings\Application Data\Identities\{GUID}\Microsoft\Outlook Express\Mailbox Folder.dbx
        Data error (cyclic redundancy check).

    Thanks in advance!
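    One possible way to salvage the readable 90%, assuming the drive can be attached to a Linux box (or read through Cygwin's dd on Windows): dd can be told to keep reading past errors instead of aborting. A minimal sketch; the path below is a placeholder for the real .dbx file:

        # conv=noerror keeps reading past CRC errors; sync pads each bad
        # block with zeros so offsets in the recovered file stay aligned
        dd if="/mnt/f/path/to/Mailbox Folder.dbx" of=recovered.dbx \
           bs=4096 conv=noerror,sync

    The recovered copy will have zero-filled holes where the bad sectors were, but a .dbx parser or a text editor may still get at the intact portion.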

    Read the article

  • Using rsync when files on one end are all lowercase

    - by DormoTheNord
    I want to rsync a lot of files from a Windows box to a Linux server. The problem is, the files on Windows are all mixed case, and the files on the Linux server need to be all lowercase. One solution is to have a script that rsyncs to a staging directory on the server, copies the files into the main directory, and then converts them all to lowercase. I'd rather find a more elegant solution, though. I'd prefer a command-line application, but I'd be willing to go with a GUI application if that's the best option.
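    In case no built-in rsync option turns up, here is a minimal sketch of the staging-plus-rename approach, assuming a POSIX shell on the server (all paths are placeholders):

        # stage with rsync, then lowercase every name, depth-first so
        # directories are renamed only after their contents; note that
        # names differing only by case (A.txt vs a.txt) would collide
        rsync -av /cygdrive/c/data/ user@server:/srv/staging/
        find /srv/staging -depth | while IFS= read -r p; do
          base=$(basename "$p")
          lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
          [ "$base" != "$lower" ] && mv "$p" "$(dirname "$p")/$lower"
        done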

    Read the article

  • Secondary drive corrupted/not reading

    - by Sebastian
    When I connect my Seagate Barracuda to the computer, it shows up in the list of drives in Windows Explorer, but right-clicking on it crashes Windows Explorer, opening it does nothing, and I cannot start Disk Management while it is connected. Trying to search the drive also freezes Windows Explorer. I have tried running CHKDSK on it, but it was "unable to read $J data stream" for the USN journal. It was originally an internal drive, but I pulled it out and hooked it up externally so I could test whether that drive was causing the problems. Is there any way for me to copy the files off of the drive?

    Read the article

  • Why can't I install apps on Windows 8 using specific dial-up modem connections?

    - by Vincent of Earth
    This problem has persisted since I first tried out the Windows 8 Consumer Preview, and it also affects Windows 8, the Windows 8.1 Preview, and Windows 8.1. Specifically, the problem occurs when I try to install apps from the Windows Store over a Globe Tattoo Broadband or Smart Bro dial-up connection (two common ways of connecting to the internet in the Philippines). I can confirm that this isn't a problem with my copy of Windows or my Microsoft account, because apps installed fine on other connections, like public WiFi. The problem has persisted across three different dongles and two different computers. So why can't I install apps on those two specific types of connections?

    Read the article

  • mysql my.cnf ignored

    - by mr12086
    [Issue] I'm trying to modify a my.cnf value on my production server, but the changes aren't taking effect after a sudo service mysql restart. Using an exact copy of the my.cnf (downloaded and replaced the original) on my development server, the changes made are visible in SHOW VARIABLES from the mysql command line. my.cnf is located at /etc/mysql/my.cnf:

        sudo find / -name my.cnf
        /etc/mysql/my.cnf

    So only one file exists on the entire system. Production is Ubuntu 10.04 LTS 64-bit; development is Ubuntu 11.10 32-bit. MySQL versions are 5.1.61 and 5.1.62 respectively. Kind regards.

    [my.cnf] Yes, it seems to have had all the comments removed and replaced with whitespace.

        [client]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        user            = mysql
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        skip-external-locking
        bind-address    = 127.0.0.1
        key_buffer      = 16M
        max_allowed_packet = 16M
        thread_stack    = 192K
        thread_cache_size = 8
        myisam-recover  = BACKUP
        query_cache_limit = 1M
        query_cache_size  = 16M
        log_error       = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_file_per_table = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer      = 16M

        !includedir /etc/mysql/conf.d/
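    If it helps, mysqld itself will report which option files it reads and in what order; a later file (including anything pulled in from /etc/mysql/conf.d/ by that !includedir) can silently override the one being edited:

        # list the config files mysqld consults, in precedence order
        mysqld --verbose --help | grep -A1 'Default options'
        # and confirm what the running server actually ended up with
        mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"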

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things; here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy, repairing whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value: wide open, 777
    - remove any system data (applications, system files, including hidden files, to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder; yet when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity, or a specific combination of data, that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive, to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with "md" in Activity Monitor > All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FW drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are; the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    --- update 2-6-11 ---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop output stops moving.

    I then proceeded to run the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted on the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    --- final edit ---

    I finally resolved the issue, and here is how I did it. I used the terminal command sudo opensnoop -p PID, where PID is the process ID of the processes I was monitoring; I was looking at all instances of mds and mdworker running on the system. After the third time through indexing the same data set (see above), I contacted Apple and got to their highest level of support; they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU.

    But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to reproduce both the issue and the solution on 2 different installs and 2 different copies of the data set.

    Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files: Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
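    For anyone scripting the retries described above, mdutil can inspect, rebuild, or disable the index for a single volume from Terminal instead of going through the privacy-list dance. A minimal sketch, using the volume name from this post:

        sudo mdutil -s "/Volumes/ONM Data"      # show indexing status
        sudo mdutil -E "/Volumes/ONM Data"      # erase and rebuild the index
        sudo mdutil -i off "/Volumes/ONM Data"  # turn indexing off entirely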

    Read the article

  • USB transfer speed "logarithmically" decreases. Why and can it be improved?

    - by starship
    I have an external hard drive. Just today I tried to copy a larger file (a film of ~230 MB), and at first the transfer rushed along until about 70%, then it started decreasing:

    - At first it ran at around 56 MB/s
    - Then it rapidly dropped to 23 MB/s (the transfer was 70% complete)
    - Then it slowly decreased until it was around 2 MB/s (the file was ~90% complete)
    - When the transfer finished, it was slightly above 1.5 MB/s

    To describe it graphically: if you drew a curve of the decrease, it would probably resemble the graph of a logarithmic function. So what I'm really asking is: why does this happen, and is there a way around it? Thank you!

    Read the article

  • Creating a software RAID mirror from an HDD with bad blocks: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 from this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Management is marked as "Failed Redundancy" and "At Risk". I can tell it to "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state: it halts the re-sync on a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors; the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • Allowing users to install fonts in Windows 7 (through GPO)

    - by djk
    Hi, this is somewhat related to my previous question, http://serverfault.com/questions/48155/why-do-installed-fonts-disappear-after-reboot. Having got the font-install issue sorted out under XP just fine, we recently got a Windows 7 workstation and I've created a special GPO for it. Initially it was UAC that was demanding administrative access to C:\Windows\Fonts, despite the fact that the policy dictates that the directory is writable (as are the relevant registry entries, on XP anyway). The issue now, though, is that when I try to copy a font in or hit Install, it claims that the font "does not appear to be a valid font". This happens with every type of font as well. Is there some new and special consideration when allowing these changes on Windows 7? Any input would be appreciated. Many thanks, Doug

    Read the article

  • Win 7: move SSD from SATA 1 to SATA 0, drive letter from G: to C:

    - by GaryH
    I got a new SSD and plugged it into the available SATA 1 connector on my notebook, then installed Win7 (Ultimate) on it as drive G:. It is working great. Now I would like to move the SSD to the SATA 0 connector and change its drive letter to C:. The existing 500 GB HD, which has another copy of Win7 (Home) on it, I will format and connect to the SATA 1 connector as the G: drive (or some other letter). Is this possible? Is there software that will go through the registry, "correct" all of the entries pointing at G: for everything installed, and fix it all up? Or am I better off biting the bullet, setting the hardware up the way I want it, and doing a fresh install of everything? Thanx, G

    Read the article

  • Recovering drivers from previous installation?

    - by Walkerneo
    Yesterday I bought a new computer, and I've been setting it up since this morning. The hard drive came with Windows 7 Home Premium (x64) installed, but I decided to instead install Windows 7 Ultimate (x64), which is what I'm currently using. Unfortunately, some drivers that were installed before are needed in this installation too. I only noticed because the computer doesn't have the Ethernet drivers, so I'm only able to connect to the internet via wireless. Other drivers are missing as well, but I'm not yet seeing the effects. I still have the Windows.old folder with the previous installation, which has the drivers. Is there any way to copy the necessary ones over? I tried the options for updating driver software through Device Manager with the path set to the System32\drivers folder of the old installation, but it didn't find anything.
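    One possibly useful detail: driver packages live as .inf files under DriverStore\FileRepository, not System32\drivers, so Device Manager may find them if pointed there instead. Alternatively, pnputil can bulk-import them; a hedged sketch from an elevated command prompt (the Windows.old path is an assumption, so adjust as needed):

        :: import every driver package found in the old driver store
        :: (Windows 7 pnputil syntax: -i install, -a add package)
        for /r "C:\Windows.old\Windows\System32\DriverStore\FileRepository" %f in (*.inf) do @pnputil -i -a "%f"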

    Read the article

  • Deploying new code live

    - by nicoX
    What's the best practice for deploying new code on a live (e-commerce) site? For now I stop Apache for +/- 10 seconds while renaming the directory public_html_new to public_html and the old one to public_html_old. This creates a short downtime before I start Apache again. The same question applies when using Git to pull the new repo into the live directory: can I pull the repo while the site is active? And what about if I need to copy a DB as well? During the tar compression of the live site (for backup purposes) I noticed that changes occurred in the media directory. That indicated to me that files keep changing periodically, and I wonder whether those changes can interfere if Apache is not stopped during deployment.
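    One common zero-downtime pattern is to make the web root a symlink and swap it with an atomic rename, so Apache never needs to stop. A minimal sketch, assuming DocumentRoot points at /var/www/current (all names below are placeholders):

        # stage the new release next to the old one
        rsync -a --delete /tmp/build/ /var/www/releases/r42/
        # swap the symlink atomically: rename() replaces it in one step
        ln -s /var/www/releases/r42 /var/www/current.tmp
        mv -T /var/www/current.tmp /var/www/current

    In-flight requests keep being served from the old tree (the old files stay on disk until nothing references them), and a rollback is just pointing the symlink back.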

    Read the article

  • Bug: Weird symbols in PDF generated from InDesign CS3

    - by Joe Yau Pong
    I recently encountered a weird bug in Adobe Acrobat. I generated a PDF from InDesign, and some weird symbols appear out of nowhere in the phrase "Build relationships". Here is the image: http://i.stack.imgur.com/FrIII.jpg When I copy the words from the PDF viewer back into Notepad, the words are correct: "Build relationships". Here is my configuration: Mac OS 10.4, InDesign CS3, Acrobat 9 Pro. I'm going to update my software to CS6 soon, but what seems to be the problem here? Any suggestions? Thanks very much in advance.

    Read the article

  • Installing Ruby on Rails on Ubuntu 10.04: A Living Nightmare

    - by emptyset
    Update #3: Starting over from scratch, shortened this post, decided to re-install a clean copy of Ubuntu 10.04 on a VM and go through the walk-through again. So, all the steps go without a hitch. As root:

        root@ubuntu:~/rubygems-1.3.7# ruby -v
        ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux]
        root@ubuntu:~/rubygems-1.3.7# gem -v
        1.3.7
        root@ubuntu:~/rubygems-1.3.7# rails -v
        Rails 2.3.8

    Now, as myself (in a separate term):

        emptyset@ubuntu:~$ ruby -v
        ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux]
        emptyset@ubuntu:~$ gem -v
        /usr/local/lib/site_ruby/1.8/rubygems.rb:10:in `require': no such file to load -- rubygems/defaults (LoadError)
                from /usr/local/lib/site_ruby/1.8/rubygems.rb:10
                from /usr/local/bin/gem:8:in `require'
                from /usr/local/bin/gem:8
        emptyset@ubuntu:~$ rails -v
        bash: /usr/bin/rails: Permission denied

    So, this appears to be a permissions issue, but I don't understand why. Specifically, if I have to start making things go+rx all over the place, I really need to understand which specific files need the permissions change.
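    A hedged first step toward answering "which files": look at the permissions on the exact paths in the errors, then open read/execute only on the gem tree (the directory names below are the Ubuntu/rubygems defaults and may differ on this install):

        # see whose permissions are actually in the way
        ls -l /usr/bin/rails /usr/local/bin/gem
        ls -ld /usr/local/lib/site_ruby/1.8/rubygems
        # a+rX adds read everywhere but execute only on directories
        # (and on files already executable), so it is relatively safe
        sudo chmod -R a+rX /usr/local/lib/site_ruby
        sudo chmod a+rx /usr/bin/rails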

    Read the article

  • NFS failover WITHOUT DRBD?

    - by user439407
    So I am trying to set up a redundant NFS share in a cloud environment (all links internal, half-gig links), and I am looking into using heartbeat for failover, but all the guides seem to be about combining DRBD and heartbeat to create a robust environment. If need be I can do that, but since my content is almost completely static, I would like to avoid the extra overhead and complexity of DRBD if possible, yet still be able to fail over if one of the NFS servers fails. Is it possible to use heartbeat with NFS to achieve high availability without using DRBD to copy the blocks? I am not married to NFSv4, so if NFSv3 over UDP is necessary, that won't be a problem (only a very small number of clients will be connecting to the share). Any comments are appreciated.
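    For the static-content case this should work: keep the two servers' exports in sync out of band (a periodic rsync, say) and let heartbeat move only a floating IP plus the NFS service between them. A minimal heartbeat v1 sketch; node name, address, and interface are placeholders:

        # /etc/ha.d/haresources (identical on both nodes):
        # nfs1 normally owns the virtual IP and the NFS daemon
        nfs1 IPaddr::10.0.0.50/24/eth0 nfs-kernel-server

    With NFSv3 over UDP, clients that mounted via the floating IP generally just retry and carry on after the IP moves; the usual caveat is pinning the same fsid on both servers' exports so file handles survive the failover.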

    Read the article

  • Office Communicator 2007 (MOC): How to make chat history visible to newcomers

    - by Thomas L Holaday
    How can someone who joins an existing Microsoft Communicator chat see the history of what has gone before? For example:

        Larry: [describes problem]
        Moe: [enhances problem]
        Curly: We should ask Shemp
        [Shemp joins]
        Shemp: What's going on in this thread?

    Is there any way for Shemp to see what Larry and Moe have already typed? I have tried copy-pasting the whole thing, but that invokes an error with no error message (possibly "too much text").

    Update: Is this functionality what Microsoft calls Group Chat, and requires a separate product?

    Read the article

  • Samsung 830 very slow benchmark numbers

    - by alekop
    I just bought a new SSD and installed a fresh copy of Windows on it. I didn't see any noticeable difference in boot times or app start-up times, so I decided to benchmark it.

    - Asus P7P55D-E
    - Intel i5-760
    - Samsung 830 256GB SATA III
    - Windows 7 Ultimate 64-bit

    The Windows Experience Index gave the drive a 7.3 rating, but real-world performance is not particularly impressive. Any ideas why the numbers are so low?

    UPDATE: It turns out that SATA III support is turned off by default on the P7P55D motherboard. After enabling it in the BIOS (Tools > Level Up), the scores went up:

                Read    Write
        Seq     325     183
        4K      16      49
        IOPS    32K     28K

    It's an improvement, but still far below what the numbers should be for this drive.

    Read the article

  • Best way to backup and restore millions of files

    - by bongo
    Hi, I'm facing a rebuild of the volume on which I host the mail storage (Kerio mail server, which uses maildirs). I need to back up and restore, as quickly as possible, the 3.5+ million small files (about 600 GB) of the store directory. It takes more than 12 hours via rsync to an NFS share, but I also have a 1 TB FireWire 800 RAID 1 disk that I can use (from some preliminary tests it's faster). I'm working off an Intel Xserve. What is the fastest way to do it? rsync? Finder copy? tar?
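    For a first full copy of millions of small files, a tar pipe is often noticeably faster than rsync or the Finder, since it streams the data instead of negotiating file by file. A sketch with placeholder paths:

        # pack on the fly and unpack on the FireWire RAID in one stream;
        # -C keeps the paths relative so the maildir layout is preserved
        tar -C /var/mail-store -cf - . | tar -C /Volumes/fw_raid/backup -xf -

    rsync then earns its keep on the incremental passes afterwards, when most files are unchanged.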

    Read the article

  • Duplicate incoming TCP traffic on Debian Squeeze

    - by Erwan Queffélec
    I have to test a homebrew server that accepts a lot of incoming TCP traffic on a single port; the protocol is homebrew as well. For testing purposes, I'd like to send this traffic both:

    - to the production server (say, listening on port 12345)
    - to the test server (say, listening on port 23456)

    My client apps are "dumb": they never read data back, and the server never replies anyway. My server only accepts connections, does statistical computations, and stores/forwards/serves both the raw and the computed data. Actually, the client apps and hardware are so simple that there is no way I can tell the clients to send their stream to both servers... and using "fake" clients is not good enough. What could be the simplest solution? I can of course write an intermediary app that just copies the incoming data and sends it on to the test server, pretending to be the client. I have a single server running Squeeze and have total control over it.
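    If packet-level duplication is acceptable, netfilter's TEE target (shipped in xtables-addons on Squeeze, whose 2.6.32 kernel predates its mainlining) can mirror inbound packets to a second box on the same subnet; the port and address below are placeholders:

        # copy every inbound packet for port 12345 to the test host
        iptables -t mangle -A PREROUTING -p tcp --dport 12345 \
                 -j TEE --gateway 10.0.0.23

    The caveat is that the copies are still addressed to the production IP, so the test server has to be set up to accept them (for example, the production address on a dummy interface). Given that the clients never read replies, the small client-impersonating relay described above may well end up simpler.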

    Read the article

  • rsync to multiple destinations using same filelist?

    - by Dylan B.
    I'm wondering if it's possible for rsync to copy one directory to multiple remote destinations all in one go, or even in parallel (not necessary, but it would be useful). Normally, something like the following would work just fine:

        $ rsync -Pav /junk user@host1:/backup
        $ rsync -Pav /junk user@host2:/backup
        $ rsync -Pav /junk user@host3:/backup

    And if that's the only option, I'll use that. However, /junk is located on a slow drive with quite a few files, and rebuilding the filelist of some ~12,000 files each time is agonizingly slow (~5 minutes) compared to the actual transfer/updating. Is it possible to do something like this, to accomplish the same thing:

        $ rsync -Pav /junk user@host1:/backup user@host2:/backup user@host3:/backup

    Thanks for looking!
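    rsync itself takes only one destination per run, but the expensive file-list scan can be done once and reused via --files-from. A sketch, using the host names from the question:

        # walk the slow drive once, then feed the list to each transfer
        cd /junk && find . -type f > /tmp/junk.list
        for host in host1 host2 host3; do
          rsync -av --files-from=/tmp/junk.list /junk "user@$host:/backup" &
        done
        wait   # the three rsyncs run in parallel

    The -P progress output gets interleaved when they run in parallel, which is why it is dropped here.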

    Read the article

  • VB6 network errors on Windows 2008 running on ESX VMware

    - by hivedome
    We have an application built in VB6. The executables for the application run locally on a Windows 2008 Terminal Server; the DLLs for the application are located on a network share. Intermittently, parts of the application crash with in-page errors. When that happens, we identify the DLL being referenced, copy it locally to the Windows 2008 server, and re-register it, and the application can then run again; alternatively, we reboot the server and all is OK. Ideally we do not want the executables or DLLs on the local server; they should be located on a network share for other Terminal Servers to access. The error values we receive are C0000203 and C00000C4. I have disabled Windows 2008 UAC and DEP. Has anyone experienced this type of behaviour on 2008?

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with Git. I recently installed Git on my computer(s) and set it up on a NAS server. However, I have several client folders with several project folders per client folder, and each one of these client folders is a giant repo encompassing every project inside it. What I'm wondering is: is there a way to break this apart? So, for instance:

    - The NAS is my 'origin' and has gitolite installed.
    - On computer1 I have every project folder ever created, inside its client folder (a clean checkout).
    - On computer2 I do not have a checkout of the client repo (because all the projects in it are completed and I don't need a working copy), but I do have a brand-new project folder for that client, "newproject".

    Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?
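    The usual gitolite answer is one small repo per project, grouped by a naming convention, so computer2 can push "newproject" without ever cloning the rest of the client's history. A sketch of what the config change might look like (repo and user names here are made up):

        # in gitolite-admin/conf/gitolite.conf
        repo clientA/oldproject clientA/newproject
            RW+     =   jay

    and then, on computer2:

        git clone git@nas:clientA/newproject

    Pushes to the new repo touch nothing else under clientA.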

    Read the article

  • Booting off a Windows image over the network

    - by Mr. Sir King Osman
    I have an HP st5742, a tower that does not have a hard drive, and I am trying to boot it off the network, preferably off an image. It was designed to work with the program HP Image Manager; however, this program has been discontinued by HP and I cannot seem to find a way to get a copy. If this helps, I am running my network with Windows Server 2008 R2, and I would like the streaming client to be running Windows. I have spent days searching for a way to deploy this machine, but I cannot seem to find a straightforward program, guide, or way to do this. I am new to this sort of thing but am willing to read up on the subject; all I need is a point in the right direction. Any help would be greatly appreciated.

    Read the article

  • Windows Server 2008 - unable to bind any TCP port

    - by Kalphiter
    OS: Windows Server 2008 R2, Windows Firewall on (no effect when off).

    I have suddenly been plagued by an issue for which I cannot find anything similar in a search. I am running about 20 game servers that each bind a UDP port and then bind the TCP port one above it. Suddenly, a day ago, new TCP binds stopped functioning. I have confirmed that other applications cannot listen on most ports either. For example, I made a copy of a Java program of mine and tried the following ports: 33001, 23789, 89... completely random ports. As for the applications that already have TCP bindings, such as HTTP and MySQL, port 8080 was the only port I discovered that could still work, and only for Apache. If those applications moved off their default ports they could not bind, but they returned to normal on the default port. I've checked for listening applications with netstat and CurrPorts, and also checked for any connections on these ports; they're completely free.
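    One hedged thing to check from an elevated prompt: Windows reserves port ranges (the ephemeral pool plus any explicit exclusions), and a bind into a reserved range fails even though netstat shows nothing listening:

        :: ranges excluded from TCP binding, if your netsh build supports it
        netsh int ipv4 show excludedportrange protocol=tcp
        :: the dynamic/ephemeral range, which can swallow "random" ports
        netsh int ipv4 show dynamicport tcp

    If the failing ports fall inside a listed range, that would explain the pattern of most ports failing while a few, like 8080, still work.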

    Read the article

  • zfs rename/move root filesystem into child

    - by Anton
    A similar question exists, but the solution (using mv) is awful because in that case it works as "copy, then remove" rather than a pure "move". So, I created a pool:

        zpool create tank /dev/loop0

    and rsynced my data from another storage device straight in, so that my data is now in /tank:

        zfs list
        NAME   USED  AVAIL  REFER  MOUNTPOINT
        tank   591G  2.10T   591G  /tank

    Now I've realized that I need my data to be in a child filesystem, not in the /tank filesystem directly. So how do I move or rename the existing root filesystem so that it becomes a child within the pool? A simple rename won't work:

        zfs rename tank tank/mydata
        cannot rename to 'tank/mydata': datasets must be within same pool

    (By the way, why does it complain that the datasets are not within the same pool when in fact I only have one pool?) I know there are solutions that involve copying all the data (mv, or sending the whole dataset to another device and back), but shouldn't there be a simple, elegant way? Just noting that I do not care about snapshots at this stage (there are none yet to care about).
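    If it turns out there is no zero-copy rename for a pool's root dataset, an in-pool send/receive at least avoids the round trip to another device, though it still rewrites the blocks once and briefly needs room for both copies. A sketch using the names from the question:

        # snapshot the root dataset and replicate it into a new child
        zfs snapshot tank@move
        zfs send tank@move | zfs receive tank/mydata
        # clean up the snapshots once the copy is verified
        zfs destroy tank/mydata@move
        zfs destroy tank@move

    The originals then still have to be deleted from /tank itself, carefully skipping the mounted child at /tank/mydata.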

    Read the article
