Search Results

Search found 6101 results on 245 pages for 'incremental backup'.

Page 3/245

  • Is there a reason to use library backups if I'm backing up full disks?

    - by Ben Brocka
    In Windows Backup I can back up libraries, whole drives, or specific folders. I want a complete backup of all relevant drives. After selecting the drives, there's still the option to back up libraries. Is the backup going to do anything different if I include libraries as well as drives? Should I just back up the whole drive instead? Space used by the backup shouldn't be an issue, since I know the incremental backup is pretty smart.


  • Killing a Plesk 11.5 backup process in Ubuntu

    - by Klaaz
    I want to kill a backup process initiated by Plesk in Ubuntu, but I don't know which processes can safely be killed:

        ps aux | grep backup
        root     20505  0.0  0.0    4392     604 ?      Ss  01:43  0:00 /bin/sh -c [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
        psaadm   20510  0.0  0.0   30884    1816 ?      S   01:43  0:00 /opt/psa/admin/sbin/backupmng
        psaadm   20511  0.0  0.0   30884     644 ?      S   01:43  0:01 /opt/psa/admin/sbin/backupmng
        psaadm   20512  0.0  0.6  270472   49356 ?      S   01:43  0:03 /usr/bin/sw-engine -c /opt/psa/admin/conf/php.ini /opt/psa/admin/plib/backup/scheduled_backup.php --dump 1
        root     20517  0.0 14.9 1400124 1214696 ?      SN  01:43  0:27 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode
        root     27423  0.0  0.0   13652     888 pts/2  S+  10:35  0:00 grep --color=auto backup
        root     29103  1.0 14.8 1400124 1209760 ?      SN  02:16  5:21 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode
        root     29106  0.0 14.8 1400404 1212456 ?      DN  02:16  0:07 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode

    It seems the FTP process is the culprit?
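
    One possible way to stop it (an assumption, not an official Plesk procedure; verify no restore is running first) is to kill the manager and its workers by command-line match:

        pkill -f plesk_agent_manager   # the perl workers streaming the dump to FTP
        pkill -f backupmng             # the Plesk backup scheduler processes

    The sw-engine process running scheduled_backup.php should exit once its children are gone; if not, its PID (20512 above) can be killed as well.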


  • Any software to do minute-by-minute backups?

    - by Ranhiru
    Of all the freeware backup programs I've checked out, the most frequent automatic backup I could configure was every hour :( Is there any freeware out there that can back up my data every minute or so? :) I'm talking about a few megabytes of VERY important data... Preferably with backup over the LAN too :)
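
    For the Linux case, a minimal sketch (an assumption, since the question doesn't name a platform; it also assumes rsync on both ends and SSH keys set up) is a per-minute cron job, one minute being cron's finest granularity:

        # crontab -e
        * * * * * rsync -az /path/to/important-data/ backuphost:/backups/important-data/

    For a few megabytes each run should finish in seconds; a lock (e.g. flock) would be worth adding if a run could ever outlast the minute.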


  • rdiff-backup command to restore

    - by Hulk
    Let's say I have a source directory on a remote system whose contents are /foo/a and /foo/b. Using rdiff-backup I make a backup as:

        rdiff-backup [email protected]::/foo backups

    and a and b are now present in my backups directory. Then I delete file a from the remote system and do the sync again, so my local backups directory has only file b. My question is: how do I restore file a if the deletion and the sync happened on the same day? Thanks.
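
    A sketch of the restore (the 10-hour offset is an assumption; pick any time before the deleting sync ran):

        rdiff-backup --list-increments backups            # see what restore points exist
        rdiff-backup --restore-as-of 10h backups/a /tmp/a-restored

    Because rdiff-backup keeps reverse increments, the same-day deletion doesn't matter as long as at least one backup ran while a still existed.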


  • Execute Backup-SqlDatabase cmdlet remotely

    - by Maxim V. Pavlov
    When I run the following script line locally on the SQL Server machine, it executes perfectly:

        Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"

    $serverName contains a short name of the SQL Server instance. SQL Server is 2012, so these new cmdlets work like a charm. On the other hand, when I am trying to perform a DB backup from a TeamCity agent machine like this (through the Invoke-Command cmdlet):

        function BackupDB([String] $serverName, [String] $sqldbname, [String] $backupFolder, [String] $addinionToName) {
            Import-Module SQLPS -DisableNameChecking
            Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"
        }

        Invoke-Command -ComputerName $SQLComputerName -Credential $credentials -ScriptBlock ${function:BackupDB} -ArgumentList $SQLInstanceName, $DatabaseName, $BackupDirectory, $BakId

    it results in an error:

        Failed to connect to server $serverName.
            + CategoryInfo          : NotSpecified: (:) [Backup-SqlDatabase], ConnectionFailureException
            + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.BackupSqlDatabaseCommand

    What is the correct way to execute the Backup-SqlDatabase cmdlet remotely?


  • How to automatically take daily HDD backups?

    - by user13107
    I think my HDD might have crashed. I don't want to lose data if this happens again. I have a dual-boot Windows/Ubuntu system. What is the best way to back up data and other software settings in Ubuntu? I don't care much about the Windows partition; the important stuff is in Ubuntu. I have a 1 TB external HDD (the laptop HDD is 500 GB total). One way would be to run rsync every day (via a cron job) to back up everything to the external HDD. What might be better ways of achieving this backup? Also, is instant backup software recommended? Are there any disadvantages of instant backup as opposed to a daily rsync?
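
    A minimal sketch of the cron approach (assumptions: the external HDD is mounted at /media/external, and /home plus /etc cover everything worth keeping):

        # crontab -e (as root, so system settings in /etc are readable)
        0 2 * * * rsync -a --delete /home /etc /media/external/backup/

    Note that --delete mirrors deletions too, so an accidental delete propagates to the backup on the next run; rsync's --link-dest option or a tool like rsnapshot gives cheap dated snapshots if that is a concern.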


  • How to safely backup MySQL using VSS-based backup solutions

    - by Rhyven
    One of my clients is running MySQL on a Windows Server 2008 system. Their regular backups are performed using StorageCraft's ShadowCopy, which uses the VSS service to perform backups of open files. Some investigation indicates that MySQL is not entirely VSS-aware, and that the tables need to be locked prior to the shadow operation, then unlocked afterwards. There is a post at http://forum.storagecraft.com/Community/forums/p/548/2702.aspx which indicates the steps that need to be performed; however, the user had some difficulty in performing them and no follow-up solution was ever posted. Specifically, they succeeded in writing a batch file to lock the database, but once the batch file returns from MySQL it drops the connection and thus relinquishes the lock. I'm looking for a method by which I can send the MySQL command FLUSH TABLES WITH READ LOCK, then perform the backup, then send UNLOCK TABLES when the backup is complete. Alternatively, I can exclude the MySQL data storage folder from the backup and schedule a mysqldump backup into a folder that will then be backed up by VSS. Can I have some recommendations please?
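
    For the mysqldump alternative mentioned above, a minimal sketch (assumptions: a dedicated backup account and mostly-InnoDB tables; MyISAM tables would need --lock-all-tables instead of --single-transaction):

        # -p prompts interactively; a scheduled task would read credentials from a my.ini [client] section instead
        mysqldump -u backupuser -p --all-databases --single-transaction --routines > D:\mysqldumps\all-databases.sql

    Scheduled shortly before the ShadowCopy window, this sidesteps the VSS-awareness problem entirely, since the dump is just a closed, ordinary file by the time the snapshot runs.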


  • backup of TFS with DPM 2007 - different backup times

    - by user46516
    I have DPM set up to back up my TFS server every 30 minutes, the reason being it's a way better interface than the quirky SQL backup interface. I also do a full backup nightly using a SQL maintenance job. My thinking is I would use DPM to restore my databases in case of losing them, and the nightly full backup would be a "just in case" the DPM restore doesn't work. I was thinking a little harder about this setup today and started to think about the fact that the DPM backups of the individual databases happen in different 30-minute windows, i.e. one happens at 13h30, another at 13h34, etc. Would this difference in time be a problem when it comes to restoring the TFS server? If I restore the databases and they are from different times, will this create corruption, with pointers in one database pointing to missing items in the other database? Do the databases even rely on each other, or are they completely independent? Lastly, how would a SQL (log) backup cope with this?


  • MySQL – Video Course – MySQL Backup and Recovery Fundamentals

    - by Pinal Dave
    Data is one of the most crucial things for any organization, and keeping it safe is the biggest challenge for any DBA. This is true for any organization. Think about a scenario where you have an extremely important database and you accidentally delete its most important table. I am sure this is a very difficult time. In times like this people often get stressed, or just make a second mistake. In my career of 10 years I have often made this mistake and often got stressed out due to the unavailability of a database backup. In the SQL Server field we have plenty of help on this subject, but in the MySQL domain there is not enough. For the same reason I have built this MySQL course on backup and recovery.

    Course Outline

    Data is very important to any application and business. It is very important that every business plan for data safety. Database backup strategies are often discussed after the disaster has already happened. In this introductory course we will explore a few of the basic backup strategies every business should implement for data safety. We will explore how we can recover our server quickly after any unfriendly incident to our MySQL database. Click to View Course

    Here are various important aspects which we have discussed in this course:

    - How to take a backup of a single database?
    - How to take a backup of multiple databases?
    - How to back up various database objects?
    - How to restore a single database?
    - How to restore multiple databases?
    - How to use MySQL Workbench for backup and restore?
    - How to restore point in time for any database?
    - What is the best time to back up?
    - How to copy a database from one server to another server?

    All of the above concepts and many more subjects are covered in the MySQL Backup and Recovery Fundamentals course. It is available on Pluralsight.

    Scenarios

    As learning about backup and recovery can be rather boring, I decided to create two fictitious characters and demonstrate the entire course based on their conversation. The story is about Mike and Rahul. Mike is a senior database administrator in the USA and Rahul is an intern in India. Rahul aspires to become a senior database administrator, and this is a story about his challenges and how he overcomes them. I had a great time building this course and I have received very good feedback on it. I encourage all of you to attempt to learn the MySQL Backup and Recovery Fundamentals course with this innovative effort. It will be very valuable to know your feedback. You will need a valid Pluralsight subscription to watch this course.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my five 1TB drives for two RAID volumes: first a 1TB RAID 0 striped across all 5 drives, and second a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for OS, applications and games, while the RAID 5 I use as more-permanent storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware realtime backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Realtime backups: as soon as a file is modified in the source folder, it is added to the backup queue, which will be processed with the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
    - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
    - Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be directly read without the use of the application.
    - Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.


  • What are the advantages and disadvantages of git or bzr + rsync vs rdiff-backup?

    - by Azendale
    I used to use rsync to do backups, but then I switched to rdiff-backup for incremental backups. Recently, I discovered git and bzr while working on a coding project. So I was thinking: I could make my backup disk a repository in either git or bzr, then rsync to the repository and commit the changes. Would there be any performance concerns with this? Any other issues that I'm not thinking of? The benefit I see in using rsync is that you can restart an interrupted transfer, while rdiff-backup reverts to the last version and then starts again. Any reason not to do it this way? Anything I'm not thinking of?
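
    A sketch of that workflow (the paths are assumptions; the backup disk's repository is at /backup/repo):

        rsync -a --delete /home/me/ /backup/repo/            # mirror the data into the working tree
        cd /backup/repo && git add -A && git commit -m "backup $(date -I)"

    One design caveat worth knowing: git stores every version of every file in .git forever (large binaries make the repository balloon), and it preserves neither ownership nor empty directories, while rdiff-backup keeps that metadata and lets old increments be purged with --remove-older-than.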


  • Backup software for Ubuntu - which one?

    - by Industrial
    Hi everybody, I have spent some time during the last weeks testing different backup solutions for my small home office, but haven't found anything that has worked out well yet. We can definitely work with a non-GUI script if that's what it takes, as long as these requirements are fulfilled:

    - Upload to Amazon S3 Europe. We get unbelievably slow upload speeds to the US, so uploading 400+ GB of data will not be happening anytime this year...
    - Incremental backups: only changed files shall be uploaded, or we will have a big bill from Amazon at the end of each month.
    - Files should not be uploaded in one big per-folder archive. This is not efficient at all, since if we change one file in a subfolder, a huge two-digit-GB-sized file would have to be uploaded during the next backup. Not good for economy again, or for traffic overhead on our internet connection.

    What options are available to us? Thanks!
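
    One non-GUI possibility (an assumption, not something from the question: the s3cmd tool, with the bucket created in the EU region) is a per-file sync, which satisfies both the incremental and the no-big-archives requirements:

        s3cmd --bucket-location=EU mb s3://my-backup-bucket        # one-time: create the bucket in Europe
        s3cmd sync --delete-removed /srv/data/ s3://my-backup-bucket/data/

    The trade-off versus archive-based tools such as duplicity is that files land on S3 unencrypted unless encryption is added separately.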


  • Exchange backup verification shows no files

    - by Olaf
    [SBS2003SP2] If I read the Exchange log, it shows the backup contains no files, just folders, and the total size seems to be OK. If I try to restore, the folders are empty... But the 14 files that were backed up disappeared in the verification log?! Other backups on the same medium turned out to be fine. Any idea what's wrong here? This is my log:

        Backup Status
        Operation: Backup
        Active backup destination: File
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Volume shadow copy creation: Attempt 1.
        Backup of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Backup Type: Normal
        Backup started on 2-6-2010 at 11:26.
        Backup completed on 2-6-2010 at 12:21.
        Directories: 4
        Files: 14
        Bytes: 26.842.932.104
        Time: 55 minutes and 38 seconds

        Verify Status
        Operation: Verify After Backup
        Active backup destination: File
        Active backup destination: \backup\Server1\Backup Files\testbackup.bkf
        Verify of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Verify started on 2-6-2010 at 12:21.
        Verify completed on 2-6-2010 at 12:47.
        Directories: 4
        Files: 0
        Different: 0
        Bytes: 26.842.932.104
        Time: 25 minutes and 46 seconds


  • Looking for a free/cheap backup solution on OSX for NTFS external HDD [closed]

    - by Steve Middleton
    Hi, I'm looking for an incremental backup system on OSX that plays nice with USB NTFS-formatted external HDDs. I am currently using NTFS-3G (http://macntfs-3g.blogspot.com/) to work with the HDD, but since Time Machine doesn't work with NTFS drives, I have just been manually copying files from my HDD to back it up (ugh). I need to use an NTFS drive because I am going back and forth between Windows machines at my school and a Mac at home. I have no permission to install any Windows software at my school, so I am forced to find an OSX solution. Please, any help you can give me would be greatly appreciated! Thanks!
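
    With Time Machine out, one workaround (a sketch; it assumes the NTFS volume mounts at /Volumes/EXTERNAL and uses the rsync that ships with OS X) is an rsync mirror, where --modify-window absorbs the coarser timestamp granularity of non-native filesystems so unchanged files aren't re-copied:

        rsync -a --modify-window=1 --delete ~/Documents/ /Volumes/EXTERNAL/backup/Documents/

    Run by hand or from a cron/launchd job, this transfers only changed files, incremental in the only-copy-what-changed sense, though not versioned the way Time Machine is.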


  • why is rdiff-backup not compatible with encfs --reverse

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse, which means encfs will create a virtual encrypted view of the filesystem, which I can then back up using rdiff-backup. Except that it doesn't work: rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem. It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue but was just told to use sshfs instead (see below on that); in a question on Server Fault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413; I can't post the link), but it's been open since December 2013 with no response. Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives, sshfs with encfs on top of it, or Duplicity, as both require a much higher-bandwidth connection than I have access to (Duplicity requires regular full backups).
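
    Until the underlying bug is fixed, one workaround (a sketch, with the assumption that a plain mirror is acceptable: rsync keeps only the latest state, not rdiff-backup-style increments) is to sync the encrypted view with rsync instead:

        encfs --reverse /home/me /tmp/cipher-view            # encrypted view of the plaintext tree
        rsync -a --delete /tmp/cipher-view/ remote:/backup/home-encrypted/
        fusermount -u /tmp/cipher-view

    Only changed files cross the wire, so the bandwidth profile stays close to the rdiff-backup setup.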


  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. Initial data is likely to be about 80 MB, and I expect the data to grow at approximately 5 to 10 MB a day. Can anyone recommend:

    - A backup/restore policy (best practices for a small startup)
    - Which tools to use for backup?

    Another thing that is not clear to me is where the files are normally backed up to, in the case of remote servers. If the files are backed up to the same machine, or even to another machine but with the same host, there is potentially a single point of failure. How do people normally back up their server data? And is the probability of machine meltdown, or of the host company's server farm catching fire, so remote as not to be worth worrying about, especially for a small (read: one-man) startup like me?
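
    On tools: a common low-tech policy is a nightly rsync over SSH to storage at a different provider (or at home), using hard-linked dated snapshots so each night costs only the changed files. A sketch, where every path and hostname is an assumption:

        TODAY=$(date +%F)
        rsync -a --link-dest=../latest /var/www/ offsite:/backups/vps/$TODAY/
        ssh offsite "ln -snf /backups/vps/$TODAY /backups/vps/latest"

    Keeping the copies off the VPS host is what answers the single-point-of-failure worry: even if the provider disappears, the offsite snapshots survive.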


  • One-Way Backup Service? [closed]

    - by Jon Rodriguez
    Up until a month ago, my girlfriend used MobileMe to back up all the files on her MacBook. This turned out terribly when a quirk of MobileMe caused it to erase all of her files on MobileMe and then sync the newly-erased MobileMe down to her computer, erasing everything. A week's worth of college essays and CS homework were gone. Now I am terrified of any commercial cloud-backup solution because of the possibility of this happening. Going off the list provided in these answers, could you please help me find a good backup service that is completely one-way? I want a service where there is literally not a single line of code with the possibility of writing to my computer's drive. I want a pure one-way backup service.


  • Backup strategies for linux based file servers

    - by iceman
    I want to know some enterprise-wide backup strategies used for Linux-based file servers: what tools and techniques are used when making a backup. For example, when a backup fails on a machine, it should email the admin about the failure along with a log file. This won't happen in case the HDD fails and the system is completely out of work, but in other cases where a backup didn't take place, the admin should be able to know. What tools/scripts can be used for this particular scenario?
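
    The email-on-failure part is usually a small wrapper around whatever tool does the copying. A sketch (the paths, addresses, and the presence of a configured mail command are all assumptions):

        #!/bin/sh
        # nightly-backup.sh: run the backup, keep a log, mail the admin on failure
        LOG=/var/log/backup-$(date +%F).log
        if ! rsync -a --delete /srv/data/ backuphost:/backups/$(hostname)/ >"$LOG" 2>&1; then
            mail -s "Backup FAILED on $(hostname)" [email protected] < "$LOG"
        fi

    Enterprise tools such as Bacula or Amanda have this kind of notification built in; the script approach is the do-it-yourself equivalent.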


  • Backup restore issue

    - by rrahman_bd
    I have a Windows Server 2008 full backup that includes the Exchange database. For testing purposes I want to restore that full server to a VM. I have copied the whole backup to VM01 and moved it onto a private network, and now I want to take a backup from VM01 and restore it to VM02; both are on the private network. I was able to find the VM02 test backup by searching from VM01, but the copied restore file is not showing in the window!!! Is there any policy that prevents me from restoring except from the original location?


  • rsync & rdiff-backup combination giving errors

    - by Maikel van Leeuwen
    On the server I make a backup every day with rdiff-backup, like:

        rdiff-backup /home/ /backup/home

    Then every week I want to make an offsite rsync backup over sshfs, like:

        rsync -avz /home/server/backup/home /backup/server-home/

    This is giving me the following errors:

        Fatal Error: Previous backup to /backup/server-home/. seems to have failed.
        Rerun rdiff-backup with --check-destination-dir option to revert directory
        to state before unsuccessful session.

    Does anybody have a good solution for dealing with these errors/this situation?
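
    Going by the error text (so this is an inference, and the paths are the question's own), the offsite copy carries the marker of an unfinished rdiff-backup session, which a repair pass should clear before the weekly copy re-runs:

        rdiff-backup --check-destination-dir /backup/server-home/   # revert the half-finished session state
        rsync -avz /home/server/backup/home /backup/server-home/    # then redo the weekly copy

    It is also worth ensuring the weekly rsync can never run while the daily rdiff-backup is still writing, e.g. by putting both commands in one script so they run strictly in sequence.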


  • Increase backup speed, Backup Exec 2010 - QNAP TS419U+ NAS

    - by user99912
    We have a QNAP NAS whose network shares are being backed up by Backup Exec 2010 over SMB. We can't install the remote agent on the NAS, as it has an ARM processor and, as far as I am aware, there is no compatible agent. Do you have any suggestions for a faster method of backing up these shares than the current scenario? Currently the network bandwidth is not the issue; it seems that this access method simply can't go any quicker. We've also added the NAS shares to the start of the selection list, but we're still running into an 18-hour total backup time (the total amount of data on the NAS is roughly 650 GB). Any comments and/or suggestions welcome. EDIT: Data is being pulled from the NAS by Backup Exec to an LTO4 tape drive.


  • automating SQL Express backups via VSS

    - by Ornus
    I need to set up automated daily SQL database backups on my server (SQL Express, so no maintenance plans). To keep things simple I'm going to use a backup solution (JungleDisk) that uses VSS to back up the DB file. SQL Server fully supports VSS and freezes DB I/O on request, so I understand I'm taking snapshots. JungleDisk supports differential backups and compression, which simplifies things and keeps the cost/bandwidth down. Is it enough to just back up the database file (.mdf), or do I need to back up the transaction log (.ldf) file as well? I'm OK with losing a day's worth of work (since the last backup). If I go this route, what's the best way to restore the database? Are there any issues with this approach I'm not aware of?
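
    On the restore question: restoring from a file snapshot means putting the files back and re-attaching, and an attach generally wants the matching .ldf as well unless the database was shut down cleanly. If losing a day is acceptable, a native dump is a simple extra safety net; a sketch (the instance name, database name, and path are assumptions) that Task Scheduler could run nightly:

        sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MyDb] TO DISK='D:\Backups\MyDb.bak' WITH INIT"

    A .bak made this way restores with a standard RESTORE DATABASE, independent of any VSS subtleties.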


  • does Windows incremental backup include system state backup?

    - by Kossel
    I'm managing my very small office server with Windows Server 2008. Since I have only one server and the user group is really small, I split the first HDD into two partitions: one (C:) for Windows and Active Directory, another (D:) for Tomcat and the database. I'm doing incremental backups of C: and D: daily to a second HDD (E:) using Windows Server Backup. Is this enough to let me fully restore my server in case of disaster? I ask this because I have read that there is also a system state backup. Do I also have to do that periodically in order to get AD back, or can I do a full bare-metal recovery with just the incremental/full backups?
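
    If it turns out the scheduled job does not include the system state, it can be backed up separately with the built-in tool; a sketch (assuming E: is the dedicated backup disk):

        wbadmin start systemstatebackup -backupTarget:E:

    On Server 2008 this can be wrapped in a scheduled task to run alongside the existing daily incrementals.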


  • backup without overwriting old backups

    - by AbsentasLT
    I'm using Ubuntu Server 14.04 and back up all data from the /mnt/test/ folder to /home/john/ with tar, archiving it to stuff.tar.gz; to make the backup automatic, I use cron to run it every week. What if I want cron to create an additional backup file instead of overwriting the existing one? Then after a month I'd have 4 backups, each with a unique name. Is there a way, a script, or another backup tool that would do that?
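
    A date-stamped variant of the cron job does exactly that (a sketch; note that % must be escaped as \% inside a crontab line, and the find line prunes archives older than four weeks so roughly four are kept):

        # m h dom mon dow  command  -- every Sunday
        0 3 * * 0 tar -czf /home/john/backup-$(date +\%F).tar.gz -C /mnt/test .
        0 4 * * 0 find /home/john -name 'backup-*.tar.gz' -mtime +28 -delete

    Tools like rsnapshot handle rotation with less hand-rolling, but the two crontab lines above match the tar-based setup already in place.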

