Search Results

Search found 698 results on 28 pages for 'rsync'.

Page 3 of 28

  • Rsync on Windows - Socket operation on non-socket

    - by TLS
    I get the following error when trying to run the latest Cygwin build of rsync on Windows XP SP2. The error occurs for both local syncs (that is, source and destination on the local hard disk only) and remote syncs (using "-e ssh" from the openssh package). Any advice on how to fix or work around it?

        bash-3.2$ rsync -a dir1 dir2
        rsync: Failed to dup/close: Socket operation on non-socket (108)
        rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/pipe.c(143) [receiver=2.6.9]
        rsync: read error: Connection reset by peer (104)
        rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/io.c(604) [sender=2.6.9]

  • rsync verify a file already exists in dest folder so it will skip the copy on the 1st sync

    - by joel_gil
    I have been looking at different rsync tutorials for a specific situation I have. I have a home server with all my pics; this server is my backup. My PC is the one that receives the new pics, and until now I had been manually copying new photos from the PC to the server. I was trying to set up rsync to do this automatically, and in principle it works without problem. Now the issue: when I fire up rsync, it starts copying all the files, even the ones that were already in the destination (this is because it is the 1st sync). So my question is: is it possible for rsync to verify that a file is the same (name/size/contents) so it will skip the copy on the 1st sync?
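
    For reference, rsync already skips files whose size and modification time match on both sides; when the timestamps differ (for example, because the existing copies were made by hand), a content comparison can be forced instead. A minimal sketch, with placeholder paths:

        rsync -av --checksum /pc/pics/ user@homeserver:/backup/pics/

    The --checksum option makes rsync compare file contents rather than relying on size and modification time, at the cost of reading every file on both ends once per run.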

  • rsync set group owner, group permission

    - by ChrisInEdmonton
    I want to use rsync to transfer files from my computer to a remote Linux system. Regardless of the local files' group ownership, I want to set these values on the remote side. If I were on the remote Linux system, I could create the directory and set the ownership and permissions as:

        mkdir my_directory
        chown :my_group my_directory
        chmod 775 my_directory

    If I create the directory locally and then use rsync (remember, I don't have my_group locally), I do:

        rsync -ae ssh --chmod=ug+rw,Dug+rwx my_directory remoteserver:dest

    That works, but I cannot figure out how to set the group owner through rsync. If I do a chmod g+s dest, my_directory has the correct group owner, but all of the files inside have the incorrect group owner.
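
    One possible approach, assuming rsync 3.1.0 or later on both ends (the question doesn't state versions): the --chown option forces ownership on the receiving side, so local group membership no longer matters. A sketch:

        rsync -ae ssh --chmod=ug+rw,Dug+rwx --chown=:my_group my_directory remoteserver:dest

    Leaving the user part empty changes only the group; --groupmap='*:my_group' is the more general form of the same mapping.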

  • rsync to EC2 using ssh -i

    - by isomorphismes
    I'm able to ssh -i mykey.pem to EC2. I'm able to scp -i mykey.pem to EC2. But when I try rsync -avz -e "ssh -i mykey.pem" I get this error:

        Warning: Identity file mykey.pem not accessible: No such file or directory.
        Permission denied (publickey).
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]

    Any suggestions as to what I've done wrong?
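
    A hedged guess, not confirmed in the question: the relative path mykey.pem is resolved against the current working directory, so the ssh that rsync spawns may simply not find the file. An absolute path sidesteps the problem; host, user, and paths below are placeholders:

        rsync -avz -e "ssh -i /home/user/.ssh/mykey.pem" src/ user@ec2-host:/dest/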

  • rsync doesn’t sync files

    - by modi
    Hi, I'm using rsync (with Cygwin) to sync two local folders. The folders contain binary files. I'm using the following command:

        rsync.exe -av dir1/ dir2/

    but the files in dir2 were only partially updated; a few files still differ. Does anybody know of a problem with rsync on Windows? Should I use some other flags? Thanks.
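
    A hedged suggestion: Windows filesystems can store timestamps at a coarser granularity than rsync expects, which can make changed files look up to date. Forcing a content comparison rules that out:

        rsync.exe -av --checksum dir1/ dir2/

    Alternatively, --modify-window=1 tells rsync to treat timestamps within one second of each other as equal, which is the usual workaround for FAT volumes.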

  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use an rsync command to back up files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that uses rsync. Here is the command I use:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is ssh-able without a password, so this works. Now the problem is with the --delete-after option. After all files are synced, the deletion step is skipped. The log file shows:

        IO error encountered -- skipping file deletion

    Looking further through the log, there were errors during the sync itself:

        rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)

    So my understanding is that since rsync could not read all the files from the source, it skips the file deletion as a safety measure. Is there any way to make --delete-after work even if there is a permission error? I do not want to use forced deletion, as it would be dangerous in some situations.
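
    For reference, rsync does have a flag aimed at exactly this, though it should be used with care since it disables the very safety check described above:

        rsync -rltvh --partial --stats --ignore-errors --exclude=.beagle/ --exclude=.* --delete-after root@live_server:/home/ /home/live_server_backup/home

    Per the rsync man page, --ignore-errors tells --delete to go ahead and delete files even when there are I/O errors.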

  • minimal rsync installation on windows xp?

    - by Aman Jain
    Hi, I want to install rsync on Windows XP. I have searched the web, but most of the solutions suggest using Cygwin; is there any other way to do this? I don't want to install Cygwin because it takes a lot of space. Moreover, I need it to communicate with an rsync daemon on Linux, so alternatives to rsync on Windows won't help. Thanks.

  • rsync creating thousands of ..ds_store files from mounted volume

    - by daniel Crabbe
    I've been using rsync on OS X to sync all our website admins. It was working fine until the OS X 10.6.3 update! Now it creates thousands of empty (0 kB) folders. It only does this when syncing to a mounted network drive (which we need to do); when I sync to my local drive it works as usual! I've tried excludes, which don't seem to be working... I also tried a different version of rsync, so it's an OS X issue.

        echo ""
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        echo " SYNCING up KINEMASTIK"
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/

        echo ""
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        echo " SYNCING up CHRIS BROOKS YOURADMIN"
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/

    Has anyone experienced the same problem?
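
    A hedged aside, since the excludes "don't seem to be working": Finder metadata goes by several names, so an exclude.txt for this situation (a guess at its contents, not taken from the question) usually needs to cover all of them:

        .DS_Store
        ._*
        .Trashes
        .Spotlight-V100

    It may also be worth trying a run without the X in -aNHAXv when targeting the network mount: -X copies extended attributes, and destinations that cannot store them natively are a common source of stray metadata files.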

  • Multi Thread Rsync Transfer

    - by reefine
    For some reason, when running a single rsync command I get 1 MB/sec to 2 MB/sec, even though the two servers are both connected to 1 Gbps ports:

        rsync -v --progress -e ssh /backup/mysqldata/mysql-bin.000199 [email protected]:/secondary/mysqldata/mysqldata/mysql-bin.000199

    I have over 800 GB of data to transfer, split among 500 or so files all starting with mysql-bin.000*. I've found that running 25-30 rsync processes simultaneously from separate SSH windows gets me upwards of 25 MB/sec, but it would take hours to start them all manually. Is there any way to get the 25 MB/sec from a single rsync command?
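
    A single rsync won't parallelize, but the manual fan-out can be automated. A minimal sketch (destination host is a placeholder) that keeps up to 25 transfers running at once via xargs:

        ls /backup/mysqldata/mysql-bin.000* | xargs -P 25 -I {} rsync -v -e ssh {} user@dest-host:/secondary/mysqldata/mysqldata/

    Each invocation handles one file; -P 25 caps the number of concurrent rsync processes.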

  • Rsync: remote source and destination

    - by goncalopp
    If both source and destination are remote, rsync complains:

        The source and destination cannot both be remote.
        rsync error: syntax or usage error (code 1) at main.c(1156) [Receiver=3.0.7]

    Is there an insurmountable technical obstacle to making rsync do this, or is it simply not yet implemented? It seems relatively easy to create a local buffer in memory that mediates the transfer between the two remotes, holding both hashes and data. Alternatively, is there other (unix) software that implements this functionality?
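
    A common workaround (not from the question itself): log into one of the remotes and run rsync there, so that machine sees the other host as its single remote end. A sketch with placeholder hosts:

        ssh user@host1 'rsync -avz /src/ user@host2:/dest/'

    This requires host1 to be able to authenticate to host2, for example via an agent-forwarded key (ssh -A).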

  • Rsync and lazy mode?

    - by fabien-barbier
    Since transferring or copying a file that is being written to sometimes corrupts the transferred copy, can we define a time interval in which rsync checks each file in a given directory for changes? Files that did not change during that interval would be transferred, while those that did change would not. Can I do that with rsync, or with another tool? Is there a script to add this functionality to rsync? Thanks.
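
    Not a built-in rsync feature as far as the question goes, but a hedged sketch of the usual composition: use find to list only files that have been stable for some interval, and feed that list to rsync. Paths and the 10-minute window are placeholders:

        cd /data/src && find . -type f -mmin +10 > /tmp/stable-files.txt
        rsync -av --files-from=/tmp/stable-files.txt /data/src/ remote:/data/dst/

    find's -mmin +10 matches files whose contents were last modified more than ten minutes ago; --files-from restricts rsync to exactly that list.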

  • rsync osx to linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian machine. The final backup is 58 GB smaller than the original, yet rsync says that everything was OK and there is nothing to update.

        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/

        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/

    Can I trust rsync? I'm using:

        rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/

    (/mnt/red is my Samba mountpoint). Some folders differ:

        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac

        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac
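
    A hedged observation, not from the question: the entries that arrive as 0 bytes (IndoSanBol and friends) look like classic Mac fonts, whose data lives in the resource fork; a plain rsync copies only the data fork, which for such files is empty. On Apple's patched rsync, -E copies extended attributes and resource forks (on stock rsync 3.x, -X is the extended-attribute flag and -E means something different):

        rsync -avE --stats /Volumes/Data1/Produccion/ /mnt/red/

    Whether the Samba-mounted destination can actually store that metadata is a separate question.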

  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11 GB) in a Subversion repository that I'm migrating to Alfresco with rsync; Lucene indexes new files as they hit the file system. I'm using a dav mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way to logically separate the rsync into identically-sized batches (say 500 MB each) so I could schedule them in cron. At the moment, I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
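
    A hedged sketch of one way to batch by file count (batching by exact byte size takes a little more scripting); all paths are placeholders:

        # build the full file list once, then cut it into 1000-file chunks
        cd /repo/export && find . -type f > /tmp/all-files.txt
        split -l 1000 /tmp/all-files.txt /tmp/rsync-batch.

        # each cron run then consumes one chunk:
        rsync -av --files-from=/tmp/rsync-batch.aa /repo/export/ /mnt/alfresco-dav/

    --files-from limits each run to the listed files, so the indexer only ever sees one batch of new content at a time.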

  • How to avoid copying corrupted files with rsync

    - by Roberto Aloi
    I have an HDD with plenty of files, some of which are unfortunately corrupted. I'm now trying to copy the good files onto a new HDD. I'm using:

        rsync -azP SRC TGT

    When rsync comes to one of the corrupted files, I see a message in the console:

        rsync: read errors mapping XXX: Input/output error (5)

    In the target folder, I still see the corrupted file, which I'm not able to open and which I have to delete manually. Is there any option to tell rsync not to copy files that hit an I/O error?
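
    I'm not aware of a flag that does this directly, but a hedged post-processing sketch: capture rsync's output and extract the paths that failed, so their broken copies can be removed from the target afterwards. This assumes a version that quotes the path in the message; check the actual log format first.

        rsync -azP SRC TGT 2>&1 | tee /tmp/rsync.log
        # list the source files that had read errors
        grep 'read errors mapping' /tmp/rsync.log | sed 's/.*read errors mapping "\(.*\)":.*/\1/' > /tmp/bad-files.txt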

  • rsync to ONLY keep files in destination that have been removed from source

    - by David Corley
    We use rsync to copy filesystem contents from one machine to another as a backup. We first run a MACHINE-X to MACHINE-Y rsync for a straight backup, with the --delete and --delete-excluded switches. We also run an internal rsync between the MACHINE-Y destination and another folder on MACHINE-Y, without either of the delete flags. This maintains a non-destructive copy in the event someone inadvertently deletes a file on MACHINE-X. However, it also has the overhead of being a complete copy of what has already been synchronized. Ideally, I want to run the non-destructive rsync in such a way that the destination ONLY receives the deleted files, avoiding the unnecessary duplication. Is there any way to do this?
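
    For reference, rsync's backup options come close to this (a sketch, not from the question): during the destructive sync, anything that would be deleted or overwritten is moved into a side directory instead of simply vanishing, so only the removed and changed files accumulate there. Paths are placeholders:

        rsync -av --delete --delete-excluded --backup --backup-dir=/backups/removed-$(date +%F) /machine-x-data/ /machine-y-data/

    The backup directory is usually kept outside the sync destination (or excluded) so it isn't swept up in later runs.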

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives in my Linux server, and I use the second as a backup of the first. I use rsync for that purpose, for example:

        rsync -r -v --delete /media/disk1/ /media/disk2/

    This copies every file/directory from /media/disk1/ to /media/disk2/ but also deletes any difference. For example, say files A and B, but not file C, are on disk1, while disk2 has no A or B but does have C. The result is that after the command, disk2 has files A and B, and file C has been deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system continues to work, since the system files are on my system disk, but when rsync tries to back up my data onto disk2 from the broken disk1, it would delete all the files from disk2 because it can't read anything on disk1. Is this a possible scenario, or is there protection against it built into rsync?
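
    A hedged mitigation (rsync will abort if the source directory is unreadable, but a dead disk can also present as an empty mountpoint, which syncs as "delete everything"): cap the damage any single run can do with --max-delete.

        rsync -r -v --delete --max-delete=100 /media/disk1/ /media/disk2/

    Per the man page, --max-delete=NUM refuses to delete more than NUM files, so a sudden mass deletion fails with an error instead of emptying the backup.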

  • How does rsync do incremental backups

    - by Mirage
    How does rsync know which files have changed and which have not? Does it log its data anywhere in a file? I want to do incremental backups, but the first run will transfer all files. So my main question: if I upload the initial files via FTP rather than rsync, will rsync still skip those existing files, or will it upload everything on the first run?
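
    For reference, rsync keeps no state between runs; on each run it compares the size and modification time of every source file against the destination (or full contents, with --checksum). One hedged caveat about the FTP seeding plan: if the FTP upload did not preserve modification times, the comparison will fail and everything gets re-sent. Matching on size alone avoids that; paths are placeholders:

        rsync -av --size-only /local/files/ user@server:/backup/files/

    After one clean pass, subsequent runs can drop --size-only, since rsync fixes up the timestamps of files it considers already up to date.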

  • rsync without password, none of google (server fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed this with just an RSA key, etc., but now none of the Google (Server Fault) tutorials work at all. It keeps asking me for a password. I have webmin and ssh/root access to both servers. My steps:

        create a key on server 1
        send key.pub to server 2
        add key.pub to .ssh/authorized_keys
        chmod 700 .ssh/authorized_keys
        go back to server 1 and try rsync, and it keeps asking for a password

    The rsync command:

        rsync -avz -e ssh file.txt root@server2:/root

    EDIT: Well, I cleaned up everything, and this time, instead of giving the key a custom name, I used the standard one on server1, sent the .pub to server2, and it worked like a charm... So the answer is that server1's ssh wasn't even using the right key.
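
    For anyone hitting the same wall with a custom-named key: the fix that matches the asker's conclusion is to point ssh at the key explicitly (the key path below is a placeholder), since ssh only tries the default identity files unless told otherwise:

        rsync -avz -e "ssh -i /root/.ssh/custom_key_name" file.txt root@server2:/root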

  • how to use rsync over ftp

    - by bumperbox
    Debian 4 Linux. I have the following command line, which works fine:

        rsync -avr -e ssh /home/dir [email protected]:/home/

    but now I need to set it up to rsync to a remote server that only has FTP on it. How do I go about that? I looked at the rsync help but quickly got lost (I don't do this stuff very often). Thanks, Alex
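
    rsync has no FTP transport, so a direct equivalent isn't possible. A hedged workaround: mount the FTP server as a local filesystem (curlftpfs is one tool for this, assuming it is available) and rsync into the mountpoint. Host and credentials below are placeholders:

        mkdir -p /mnt/remote-ftp
        curlftpfs ftp://user:[email protected] /mnt/remote-ftp
        rsync -avr /home/dir /mnt/remote-ftp/

    Note that FTP-backed mounts generally can't preserve ownership or permissions, so some of rsync's -a behaviour is lost.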

  • Forcing rsync to convert file names to lower case

    - by SvrGuy
    We are using rsync to transfer some (millions of) files from a Windows (NTFS/Cygwin) server to a Linux (RHEL) server. We would like to force all file and directory names on the Linux box to be lower case. Is there a way to make rsync automagically convert all file and directory names to lower case? For example, let's say the source file system has a file named:

        /foo/BAR.gziP

    rsync would then create (on the destination system):

        /foo/bar.gzip

    Obviously, with NTFS being a case-insensitive file system, there cannot be any conflicts... Failing the availability of an rsync option, is there an enhanced build or some other way to achieve this effect? Perhaps a mount option on Cygwin? Perhaps a similar mount option on Linux? It's RHEL, in case that matters.
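
    I'm not aware of a case-folding option in stock rsync, so a hedged post-processing sketch for the Linux side (the /dest path is a placeholder): rename everything to lower case after the transfer, depth-first so children are renamed before their parents.

        find /dest -depth -name '*[A-Z]*' | while IFS= read -r path; do
            dir=$(dirname "$path"); base=$(basename "$path")
            mv "$path" "$dir/$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')"
        done

    The drawback: a subsequent rsync run would treat the renamed files as missing and re-send them, so this only suits a one-shot migration.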

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo, it behaves erratically: rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E, in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is: how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because only checksums would travel. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
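
    A likely explanation, offered with the caveat that it isn't confirmed in the question: when both paths look local (and a mounted s3fs volume does), rsync assumes the delta algorithm isn't worth the extra local reads and defaults to --whole-file. Delta transfer can be forced back on:

        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Alternatively, preserving timestamps with -t would let the second run skip the unchanged file outright, since size and modification time would then match.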

  • rsync stuck with the --checksum option

    - by billc.cn
    I use Back In Time to back up my Linux installation. It serves as an advanced wrapper for the rsync command. Today I tried to add /var/log to the list of folders to be backed up, and it caused some serious performance problems. The job seems to get stuck on a particular file, and the CPU usage of the rsync parent process reaches 100%. I then used lsof to see which file caused the problem, and it seems to be the /var/log directory. I did some googling, experimented with different rsync options, and found --checksum to be the offender. Without the parameter, an incremental backup finishes properly in minutes. With it, the process gets stuck when rsync tries to sync a constantly changing log file. This kind of makes sense, but it still seems like a bug to me. Am I using the option correctly? Is there a workaround for this?

  • Rsync: pure Ruby implementation?

    - by peter
    I have an rsync program, DeltaCopy, with an executable as client and server, but I would like to replace this if possible with a pure Ruby implementation of rsync. I found gems like six-rsync and rsync-update, but they seem not to be general implementations. I'm looking for a pure Ruby solution, so no executables involved, and preferably runnable on multiple OSes. If possible, a simple sample would be great. I'm only looking for rsync; no other transfer or backup solutions, please.
