Search Results

Search found 698 results on 28 pages for 'rsync'.

  • Can't change to Korean-named directory on my Debian server

    - by DaLynX
    I made an rsync backup of some directories from a MacBook laptop to a Debian server. Some of the directories have Korean characters (Hangul) in their names. After fixing my server's locale, the names display correctly when I run ls, for instance:

        $ ls -1 | head
        ???
        dirA
        dirB
        …

    But if I try to browse that directory:

        $ cd ???
        cd: 3: can't cd to ???

    Any idea what's wrong and how to fix it?
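
    One common cause, offered here as an assumption rather than something the poster confirmed: rsync from a Mac copies file names in NFD Unicode normalization, while a Linux terminal types NFC, so the name you type never matches the bytes on disk even though both render identically. A sketch using convmv (the backup path is made up); convmv previews by default and only renames with --notest:

        # preview which names would be renormalized NFD -> NFC
        convmv -r -f utf-8 -t utf-8 --nfc /path/to/backup
        # apply the renames once the preview looks right
        convmv -r -f utf-8 -t utf-8 --nfc --notest /path/to/backup

    Tab completion (cd <Tab>) is also a quick way in, since the shell fills in the on-disk bytes for you.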

  • ZFS replication between 2 ZFS file systems

    - by XO01
    I initially replicated tank/storage1 to usb1/storage1-slave (depicted below), and then (deliberately) destroyed the snapshot I replicated from. By doing this, did I lose the ability to replicate incrementally (zfs send -i) between these two file systems? What's the best way to approach syncing these file systems after destroying this snapshot?

        # zfs list
        NAME                  USED  AVAIL  REFER  MOUNTPOINT
        tank                  128G   100G    23K  /tank
        tank/storage1         128G   100G   128G  /tank/storage1
        usb1                  122G   563G    24K  /usb1
        usb1/storage1-slave   122G   563G   122G  /usb1/storage1-slave
        usb1/storage2          21K   563G    21K  /usb1/storage2

    And what if I had initially rsync'd tank/storage1 to usb1/storage1-slave, then decided to replicate incrementally via zfs send -i?
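
    For reference, zfs send -i needs a snapshot that still exists on both sides, so destroying the common snapshot on the source does lose the incremental link. The usual recovery, sketched with made-up snapshot names, is to re-seed with one more full stream:

        zfs snapshot tank/storage1@seed
        zfs send tank/storage1@seed | zfs receive -F usb1/storage1-slave
        # future increments can then use @seed as the base
        zfs snapshot tank/storage1@next
        zfs send -i tank/storage1@seed tank/storage1@next | zfs receive usb1/storage1-slave

    An rsync'd copy would not have helped: zfs send -i follows snapshot lineage, not file contents, so the target must descend from a received snapshot; the -F above overwrites the now-unrelated copy to guarantee that.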

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 \
                    | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should; it never gets above 5%. I tried making it nice -20, but that made no difference. Is it just the network or disk speeds bottlenecking the process, or am I doing something wrong?
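
    Low CPU usually means zip is waiting on I/O rather than compressing, and nice only affects CPU scheduling, so changing it cannot help either way. A quick test, sketched with a made-up directory name: time the same job against a local target and against the mounted volume, and compare:

        # local target: measures source-disk and compression speed
        time zip -ru9 /tmp/test.zip projects/somedir
        # network target: the slowdown versus the local run is the network's share
        time zip -ru9 /Volumes/backup/test.zip projects/somedir

    If the local run is much faster, the mount is the bottleneck; dropping -9 to -6 is also worth trying, since it trades a little archive size for a large compression speedup.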

  • deploying a Python application from a PHP developer

    - by user1218776
    I'm a little confused by the deployment process for Python. Let's say you create a brand new project with virtualenv: source bin/activate, pip install a few libraries, write a simple hello-world app, pip freeze the dependencies. When I deploy this code to a machine, do I first need to activate the environment before installing the dependencies? I don't mean to sound like a total noob, but in the PHP world I don't have to worry about this, because it's already part of the project: all the dependencies are registered with the autoloader in place. The steps here would be: rsync the files (or any other method), source bin/activate, pip install the dependencies from the pip freeze output file. It feels awkward, or just wrong and very error prone. What are the correct steps to take? I've searched around, but many tutorials/articles seem to assume the reader already has Python experience (imo).
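
    For what it's worth, a common pattern (a sketch; host and paths are made up) is to ship only the code plus the frozen requirements file, and rebuild the virtualenv on the target instead of rsyncing it:

        # on the dev box
        pip freeze > requirements.txt
        rsync -az --exclude 'bin' --exclude 'lib' ./ deploy@example.com:/srv/myapp/
        # on the target: create the env once, then install the pinned dependencies
        ssh deploy@example.com 'cd /srv/myapp &&
            virtualenv env &&
            env/bin/pip install -r requirements.txt'

    Calling env/bin/pip directly sidesteps the sourcing question entirely: activation only prepends that bin directory to PATH.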

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while FTP gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed"; in my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use FTP, unfortunately, as I need something that works with rsync/rsnapshot.) More details:

        - Both computers run Ubuntu 10.10 (I use Samba because I have a Mac as well).
        - The Samba share is on a local home network, mounted as //server.local/share/ on /mnt/share, type cifs (rw,mand).
        - Samba performance was tested by copying (cp) a single ~4 GB file to and from the share, using time for timing and working out the transfer speed by hand; the FTP numbers are from the FTP client for get/put of the same file.
        - iperf gives a network speed of ~900 Mbit/s.
        - bonnie++ gives disk speeds of 200 MB/s on both sides, for block reads as well as block writes.
        - I tried changing the parameters suggested in the performance-tuning HOWTO (read/write raw, read size, socket options); most made little to no difference. (The one that did make a difference cut write speed by 50%.)
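
    One way to split the problem, sketched with made-up names: benchmark smbd directly with smbclient, bypassing the cifs kernel mount. If smbclient approaches FTP speed, the server is fine and the mount options are the suspect:

        # read test straight through smbclient
        smbclient //server.local/share -U user -c 'get bigfile /dev/null'
        # then retry the cifs mount with larger request sizes
        sudo mount -t cifs //server.local/share /mnt/share \
            -o user=user,rsize=130048,wsize=65536

    Older cifs clients defaulted to small rsize/wsize values, which hurts exactly this kind of single-stream copy.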

  • Are there any home/soho NAS devices that will backup/sync to the cloud?

    - by 3rdparty
    Looking for a home-office (SOHO) priced network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault; however, from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50 GB of cloud storage). It's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a LaCie network HDD; conveniently, I suspect this is in the works, as LaCie recently acquired Wuala, though nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks! ps. I had many links posted within this question, but was limited to posting only one due to anti-spam restrictions - gotta get my 'reputation' higher!

  • WinSCP equivalent for Linux/Ubuntu

    - by Shashank
    I'm shifting most of my projects to a Linux machine, and one of the things I miss is WinSCP. I've found other answers saying that Nautilus, FileZilla, etc. can be used for SFTP, but the things I loved about WinSCP were that it has two panes (FileZilla has that) and that I could start synchronization from any directory. Unison or rsync could work, but I'd have to create a folder pair every time I want to sync two folders. Is there an SFTP client for Linux that has a two-paned view and allows ad-hoc synchronization? Thanks!
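
    As a stopgap for the ad-hoc case, rsync over SSH needs no folder pairs; a sketch with made-up paths (preview with -n first, then run it for real):

        rsync -avn ~/projects/site/ user@host:projects/site/
        rsync -av  ~/projects/site/ user@host:projects/site/

    Any two directories can be named on the command line, which is the "start synchronization from any directory" behaviour being asked about.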

  • MySQL & tmpfs: performance

    - by Serty Oan
    I was wondering if, and by how much, using tmpfs could improve MySQL performance, and how it should be done. My guess would be to mount -t tmpfs -o size=256M tmpfs /path/to/mysql/data/DatabaseName and use the database normally, but maybe I'm wrong (I'm using MyISAM tables only). Will an hourly rsync between the tmpfs /path/to/mysql/data/DatabaseName and /path/to/mysql/data/DatabaseName_backup hurt performance? If so, how should I back up the tmpfs database? So: is this a good way to do things, is there a better way, or am I wasting my time?
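
    One caution worth coding around: rsyncing live MyISAM data files is only safe if the tables are flushed and read-locked for the duration of the copy, otherwise the .MYD/.MYI files can be caught mid-write. A sketch (names made up) using mysqlhotcopy, which takes the lock itself for MyISAM tables:

        # copies DatabaseName's table files under a read lock;
        # --allowold renames any previous copy instead of aborting
        mysqlhotcopy --allowold DatabaseName /path/to/mysql/data/DatabaseName_backup

    Also note that tmpfs is volatile: after a crash or reboot the database is gone, so the hourly copy would be the only durable data.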

  • Best Solution for Load Balancing geographically distributed NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America with 30-50 GB of updated data every day, and it's very slow for our Europe and Asia branches to access directly. Therefore, I'm trying to set up two replica servers on those continents. I'm currently using rsync, but wonder if there is a better solution that acts more like a distributed RAID, allowing users to transparently access a file whether it has been synced or not, with the request dispatched to the remote server if the file is not yet synced. I'm now looking into DRBD, but it doesn't seem to have this auto-dispatching functionality. Does anyone know of a better solution?
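
    On the rsync side, WAN-friendly flags matter at 30-50 GB a day; a sketch of a pull run from a European replica (host and paths made up):

        rsync -az --partial --delete --bwlimit=20000 \
            filer.us.example.com:/export/share/ /export/share/

    --partial keeps partially transferred large files, so a dropped transatlantic link resumes instead of restarting from zero, and --bwlimit (in KB/s) stops the sync from starving interactive traffic.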

  • How to merge (and not replace) folders when copying on a Mac?

    - by Cawas
    There's a similar question about Windows. This is the same, but for the Mac. If I try to copy or move a folder to somewhere it already exists, I'm asked whether to replace it, which would delete the target. Instead, I want to merge. There's already an aquataskforce request about this, and there has long been discussion about whether such a feature should even exist on the Mac, given its whole philosophy. Discussions at Apple are outdated and didn't help much either. As usual, there are professional solutions for doing this, such as Changes and Araxis, and there are rsync and other command-line alternatives. But I want a free and simple solution, something like how it is done in Windows or Linux; I won't be doing it much anyway. By the way, Path Finder doesn't have such an option either, and FolderMerge doesn't work on Snow Leopard, as far as my one test went.
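
    For the command-line route mentioned above, OS X ships two tools that merge rather than replace; a sketch with made-up paths:

        # ditto merges the source directory into the destination
        ditto ~/Pictures/Albums /Volumes/External/Albums
        # rsync does the same and can preview first with -n
        rsync -avn ~/Pictures/Albums/ /Volumes/External/Albums/

    Neither deletes files that exist only in the destination, which is exactly the merge behaviour the Finder lacks.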

  • Worth it to move /var to physical disk vs logical?

    - by Tammer Ibrahim
    A brief question about partition layout. I use an SSD for the /, /boot, /usr, and /home partitions, and I'd like to move /var to a mechanical disk to minimize writes to the SSD. I'm mainly concerned with maximizing drive life rather than performance (although I obviously wouldn't want to cripple my server). My mechanical disks consist of two drives sharing LVM and a third used for nightly rsync backups; I also have a bunch of old 2.5-inch hard disks lying around. My question is: should I simply create a new LVM volume for /var on my primary data store, or would it be worth the increased energy consumption (in terms of maximizing the lifetime of the LVM'd drives) to install a low-volume 2.5-inch disk to use just for /var? More generally, the question is about the trade-offs of placing OS mounts on the same physical volumes as my data. Thanks for any help!
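
    If the LVM option wins out, the move itself is short; a sketch (volume group and mount point names are made up), best done while /var is quiet, e.g. from single-user mode:

        lvcreate -L 20G -n var datavg
        mkfs.ext4 /dev/datavg/var
        mount /dev/datavg/var /mnt/newvar
        rsync -aHAX /var/ /mnt/newvar/
        # then add to /etc/fstab:  /dev/datavg/var  /var  ext4  defaults  0 2

    The -H, -A, and -X flags preserve hard links, ACLs, and extended attributes, all of which turn up under /var.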

  • Remote Yum mirror

    - by specto
    I have a bunch of remote computers that must be kept updated with the most recent packages for RedHat 4 and RedHat 5. I am using mrepo to mirror the RHN packages; however, the remote computers do not have an internet connection, so I have to update the mirror server on the remote network from a DVD, which keeps shipping costs down to just a disc. I am attempting to script this so I can fit all of the new packages on a CD or DVD. I send updates about once or twice a month, depending on package requirements. So my question is: is there a good method to do this so that the only things transferred are the new packages? I wish I could just use rsync. Thanks.
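
    One low-tech method, sketched with made-up paths: leave a timestamp marker on the mirror box and burn only the RPMs that arrived after the previous shipment:

        mkdir -p /tmp/delta
        # collect packages newer than the marker left by the last run
        find /var/mrepo -name '*.rpm' -newer /var/mrepo/.last-shipped \
            -exec cp -t /tmp/delta {} +
        genisoimage -r -o /tmp/update.iso /tmp/delta
        touch /var/mrepo/.last-shipped

    On the remote mirror the RPMs are dropped into the existing repository directory and the repo metadata is regenerated (createrepo) so yum sees the new packages.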

  • Client backup solution for small (100-150 user) heterogeneous win/nix/mac office?

    - by Gomibushi
    We currently use Symantec Backup Exec with the Desktop and Laptop Option for our Windows clients, Time Machine for Macs, and offer simple rsync to Linux users, in addition to home folders that are always backed up and available. We are not overly happy with the horrid complexity and multitude of minor bugs in SBE, but "when you don't touch it, it mostly works". Ideally we'd like to offer a real and full backup solution to all clients, but mostly to the Linux users, as they don't have a good alternative. I have barely tested Druva on Windows, and it is promising in its simplicity and "it just works" looks, but does anyone have experience with it? This post lists some that I will look at.

  • Tools to back up an external hard disk

    - by Kaushik Gopal
    Hey people, what's the best method to take an exact copy of my external hard disk? A guru suggested rsync, but I was wondering if there's an easier alternative. I do remember reading somewhere that Acronis also does this. Was looking for your advice on the best option. I'm running Windows. Essentially I have an external HDD which has a lot of stuff synchronized across various PCs. I wish to take a backup of this external hard disk (external HDDs aren't entirely reliable, so I want to keep a backup of mine). Cheers. K
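
    If the rsync suggestion appeals, it runs on Windows via Cygwin or cwRsync; a sketch with made-up drive letters (E: is the external disk, F: holds the mirror):

        # mirror the external disk, deleting files that were removed at the source
        rsync -av --delete /cygdrive/e/ /cygdrive/f/external-mirror/

    For a sector-level clone instead of a file mirror, an imaging tool such as the Acronis product mentioned is the simpler route.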

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: is there any true Time Machine clone for Windows, one that includes as many of the following as possible?

        - Completely transparent, "set-it-and-forget-it" backup
        - Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full
        - Ability to rebuild from this backup disk in case of a main-drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks)
        - Extremely easy-to-use UI (target user == wife). Bonus points for a beautiful UI

  • VMware - I/O error - how to fix?

    - by Maya-G
    I'm running XP Pro on VMware, and it mounts and runs just fine. However, if I power down the VM and try to copy it, or even if I try to do a simple Mac backup (using Carbon Copy Cloner), I get an I/O error on one very specific VMDK file. Here's a sample of the error, this one from CCC:

        12/20 22:49:30 Detected input/output error
        12/20 22:49:33 rsync: read errors mapping "/Users/blahblah/Documents/Virtual Machines.localized/WindowsXP-Professional150G.vmwarevm/WindowsXP-Professional150G-000001-s065.vmdk": Input/output error (5)

    How can I regain the ability to back up my Mac without this I/O error?
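
    An input/output error from rsync at one fixed file usually means the Mac's own disk has an unreadable sector inside that slice file. One way to confirm, as a sketch (run from inside the .vmwarevm bundle):

        # read the whole slice; dd reports the offset where reads fail
        dd if=WindowsXP-Professional150G-000001-s065.vmdk of=/dev/null bs=1m

    If dd errors as well, the problem is below the backup tools: the options are a disk repair pass, or salvaging the readable parts with dd conv=noerror,sync into a fresh file and letting the guest OS repair whatever data sat in the bad blocks.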

  • Keeping my zsh or bash profile synced up on all my machines.

    - by Joseph Silvashy
    I work on several different machines, all of which are *nix. I have a lot of specific things I like my shell to do, ways I want my prompt to look, aliases, etc. I'm sure plenty of you deal with this as well. What do you think is the best way to keep all my machines' shells acting the same? First off, I'm aware that different machines will need different paths to bins and other differences, so my first inclination is to include a file at the end of my profile; that's the one we'll keep in sync. What is the best way to keep files synced up? I could put the file on a remote system and perhaps use git to push, then pull, my changes every once in a while. However, isn't rsync better suited for this?
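
    The include-one-shared-file idea pairs naturally with git; a sketch, with a made-up repo layout:

        # one-time setup on each machine
        git clone user@server:dotfiles.git ~/dotfiles
        echo '[ -f ~/dotfiles/shared.sh ] && . ~/dotfiles/shared.sh' >> ~/.zshrc
        # day to day: pull changes in, push changes out
        cd ~/dotfiles && git pull && git push

    Compared with rsync, git merges edits made on different machines instead of silently overwriting the older copy, which is the usual failure mode of plain file copying.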

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5 TB in a custom enclosure) showed signs of failure (Disk Utility SMART pre-failure status, several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2 TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new one (one partition with rsync, the other with parted cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate: normalized 108, worst 99, threshold 6, raw value 16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got:

        Read Error Rate: normalized 109, worst 99, threshold 6, raw value 24792504

    As you can see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are acceptable values in these fields for Seagate hard disks? And why is the overall assessment still good? While I could get temperature and airflow-temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks with high temperature?... Thanks
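
    For context, a sketch of how these fields are usually read (smartctl shown; the attribute semantics are vendor-specific, and what follows is commonly reported Seagate behaviour rather than official documentation):

        # -A prints the attribute table; health is judged by VALUE vs THRESH, not RAW_VALUE
        sudo smartctl -A /dev/sdb

    On Seagate drives the raw value of Read Error Rate and Seek Error Rate packs total operation counts in with the error counts, so a huge and growing raw number is normal on a healthy disk. The signals that matter are the normalized value (108-109 here, far above the threshold of 6) and attributes such as Reallocated Sector Count.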

  • Move MySQL database while instance is online

    - by Mike Scott
    I have a MySQL instance containing a number of databases, one of which is an archive database (using the InnoDB rather than ARCHIVE storage engine) that is not queried or written to in normal operation. The data filesystem is filling up, and I'd like to move the archive database's data directory to a different filesystem (and then symlink it back, obviously). If no SQL statements attempt to query or update the data during the move, can I safely do this while the MySQL instance and the other databases stay online and in use? I plan to rsync the database directory to the new filesystem, then rename the old one on the original filesystem to something different and create the symlink. lsof reports that MySQL does have the .ibd files open, so presumably it would have to reopen them.
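
    Because lsof shows the .ibd files held open (InnoDB does not release them on FLUSH TABLES), the safe version of this plan takes a brief outage; a sketch with made-up paths:

        # warm copy while mysqld is still running, to shrink the downtime window
        rsync -a /var/lib/mysql/archivedb/ /newfs/archivedb/
        /etc/init.d/mysql stop
        rsync -a /var/lib/mysql/archivedb/ /newfs/archivedb/   # small delta pass
        mv /var/lib/mysql/archivedb /var/lib/mysql/archivedb.old
        ln -s /newfs/archivedb /var/lib/mysql/archivedb
        /etc/init.d/mysql start

    The first rsync does the heavy lifting, so the stop/start bracket covers only the delta copy and the symlink swap.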

  • Back up a Linux webserver to Windows

    - by shaiss
    I have our websites hosted on a third-party webserver, with all the admin access needed. A local Win2K3 machine uses Retrospect to back up all the networked machines and servers, and Navicat to back up the MySQL DBs both locally and on the remote Linux webserver. So the only part that remains is incremental backups of the files on the webserver. Does anyone have any suggestions on how to do this? rsync with DeltaCopy? Any others?
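
    The rsync-with-DeltaCopy idea is workable; a sketch of the pull side, with made-up host and paths, run from the Windows box (DeltaCopy bundles cwRsync, which exposes drives under /cygdrive):

        # incremental pull over SSH; only changed files cross the wire
        rsync -avz --delete -e ssh user@webhost.example.com:/var/www/ \
            /cygdrive/d/backups/www/

    Scheduled as a recurring DeltaCopy task, this covers the incremental file-level piece that Retrospect and Navicat leave out.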

  • Best way to grow Linux software RAID 1 to RAID 10

    - by Hans Malherbe
    mdadm does not seem to support growing an array from level 1 to level 10. I have two disks in RAID 1. I want to add two new disks and convert the array to a four-disk RAID 10 array. My current strategy:

        1. Make a good backup.
        2. Create a degraded 4-disk RAID 10 array with two missing disks.
        3. rsync the RAID 1 array onto the RAID 10 array.
        4. Fail and remove one disk from the RAID 1 array.
        5. Add the freed disk to the RAID 10 array and wait for the resync to complete.
        6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.

    The problem is the lack of redundancy at step 5. Is there a better way?
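
    For step 2, slot order matters with the default near-2 layout: adjacent slots form a mirror pair, so each new disk should land in a different pair. A sketch with made-up device names (sdc and sdd are the new disks, md0 is the existing RAID 1):

        # slots (0,1) mirror each other, as do (2,3)
        mdadm --create /dev/md1 --level=10 --raid-devices=4 \
            missing /dev/sdc1 missing /dev/sdd1
        # steps 4-6 then move the RAID 1 disks into the missing slots:
        mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
        mdadm /dev/md1 --add /dev/sda1

    The backup from step 1 is what actually covers the window at step 5, when neither array is fully redundant.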

  • Renting a Linux server just to make backups of my personal data?

    - by Matthieu
    Hi all, I would like to be able to back up ALL my computers' data to a Linux server. For now I have a home server, but soon I will be travelling, without a home (so no home server). I was thinking of renting a dedicated Linux webserver, but this is expensive, and I don't need a fast, "web-oriented" machine with a MySQL server and all that; I just need full SSH access (full control, so I can install my own programs). Do "backup servers" exist? Am I doing it wrong (maybe this is not a good solution)? Note: I run Mac OS, Windows, and Linux; I back up through rsync; and I want full control over my backup, not an automated "magic" backup like MobileMe or anything like that. Edit: I need around 500 GB of storage.

  • Can I rely on S3 to keep my data secure?

    - by Jamie Hale
    I want to back up sensitive personal data to S3 via an rsync-style interface. I'm currently using s3cmd - a great tool - but it doesn't yet support encrypted syncs. This means that while my data is encrypted (via SSL) during transfer, it's stored on their end unencrypted. I want to know if this is a big deal. The S3 FAQ says "Amazon S3 uses proven cryptographic methods to authenticate users... If you would like extra security, there is no restriction on encrypting your data before storing it in Amazon S3." Why would I like extra security? Is there some way my buckets could be opened to prying eyes without my knowing? Or are they just trying to save you when you accidentally change your ACLs and make your buckets world-readable?
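
    A common workaround, sketched with made-up file and bucket names: encrypt locally before the sync, so Amazon only ever stores ciphertext:

        # symmetric encryption; the passphrase never leaves the local machine
        gpg --batch -c --passphrase-file ~/.backup-pass -o docs.tar.gpg docs.tar
        s3cmd sync docs.tar.gpg s3://mybucket/backups/

    s3cmd's -e option automates the same GPG step for put operations; restoring is s3cmd get followed by gpg -d.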
