Search Results

Search found 26798 results on 1072 pages for 'difference between detach attach and restore backup a db'.

Page 135/1072 | < Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >

  • Toast vs Disk Util?

    - by Grishanko
    I have several users who are insisting on the purchase of Toast. They will be using it to make backups of disks and possibly re-burn them if needed. I have used Disk Utility for that function. At this point there is no additional functionality needed; however, that can always change in the future. Are there any advantages or disadvantages to either solution?

    Read the article

  • Rsync and wildcards

    - by Jay White
    I am trying to back up both the "Last Session" and "Current Session" files for Google Chrome in one command, but using a wildcard doesn't seem to work. I am trying the following command:
        rsync -e "ssh -i new.key" -r --verbose -tz --stats --progress --delete '/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session' user@host:"/chrome\ sessions/"
    and get the following error:
        rsync: link_stat "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session" failed: No such file or directory (2)
    What am I doing wrong?
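
    One likely cause: rsync relies on the shell to expand wildcards in local source paths, and the single quotes stop the shell from expanding *Session, so rsync looks for a file literally named *Session. A minimal sketch of a fix, keeping the wildcard outside the quotes (same paths and flags as in the question):
        rsync -e "ssh -i new.key" -r --verbose -tz --stats --progress --delete \
            "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/"*Session \
            "user@host:/chrome\ sessions/"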

    Read the article

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database as its backend. Every week a new release of the product is launched, which fires DML and DDL statements against the database. I usually test each release against a dummy database before applying it to the main database: I create a dump with the exp command, import it into the dummy database with imp, then test the product there and check for errors. This exp/imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (independent of the live one)? Or is there a way to keep the dummy database permanently in sync with the original? That could be done by having the product fire its DML and DDL against both databases, but that would be a huge performance problem. How can I overcome this?

    Read the article

  • Nearest PC equivalent to Mac Target Disk Mode?

    - by username
    Mac firmware has a special boot mode that lets you offer a machine's internal hard drive to another computer as an external disk (you just connect the two machines with an IEEE 1394 cable). Only the second machine needs a functioning OS installed. Any good suggestions for something similar on the PC side of things? Block-level access isn't important to me; I'd just like to be able to copy files off the disk. It doesn't matter to me whether it uses Ethernet, IEEE 1394, or wifi - I just want a quick way to access files on a client PC. Is there any single-purpose Linux distro specially designed to do this? It would be nice to have something super simple, quick-booting, and small that I could install on a USB drive. I used to use Knoppix, but it's overkill as a Target Disk Mode replacement.

    Read the article

  • How to copy directories using debugfs?

    - by tjbp
    The debugfs manpage gives the impression that the command 'rdump . .' will recursively copy all files found on the specified filesystem from the debugfs cwd to the native filesystem's cwd. Instead I receive a syntax error and no copy is initiated. These are the commands I run:
        cd /path/to/transfer/destination
        debugfs /dev/sda1 -R rdump . .
    My task is to copy the entire contents of a clean yet unmountable USB storage device to its host machine's HD. The host machine does not support the inode size used by the USB device's filesystem (256) and its software is not upgradeable, so my intention was to use debugfs to transfer the files. If anyone has any other suggestions for this task I'd be grateful.
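
    One likely issue is argument parsing: the whole rdump request has to reach debugfs as a single -R argument, with the options placed before the device. A minimal sketch of the adjusted invocation (same device and destination as in the question, run as root, dumping from the filesystem root):
        cd /path/to/transfer/destination
        debugfs -R "rdump / ." /dev/sda1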

    Read the article

  • Backing up the master boot

    - by petersohn
    I want to back up the master boot record on my hard drive, in case something screws it up. What software do you recommend for this? My first idea is to boot from a Linux CD, dd the first 512 bytes of /dev/sda to a file, and dd it back to recover. Will this solution work, and is it safe?
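
    That approach generally works: the first 512 bytes hold the boot code, the 64-byte partition table, and the boot signature. A rough sketch from a live Linux system (device name /dev/sda assumed, as in the question):
        # back up the full MBR (boot code + partition table + signature)
        dd if=/dev/sda of=mbr.img bs=512 count=1
        # restore only the boot code, leaving the current partition table untouched
        dd if=mbr.img of=/dev/sda bs=446 count=1
        # or restore the full 512 bytes, partition table included
        dd if=mbr.img of=/dev/sda bs=512 count=1
    Note that for bootloaders like GRUB the MBR holds only the first stage; the rest lives on the /boot filesystem, so this protects against boot-sector corruption rather than a full disk failure.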

    Read the article

  • Dell External SAS 5/E HBA and Hyper-V

    - by JohnyD
    I have a Dell R710 running Win2008 R2 + Hyper-V with dual SAS 5/E HBAs. I'm building a Linux VM to install Bacula on, and I need to connect it to my Dell PowerVault 124T via the SAS HBA. I've been doing some looking online but have yet to find a straightforward answer on how to connect a SAS HBA to a VM, let alone a Linux VM. The flavor is 32-bit Ubuntu.

    Read the article

  • Carbonite has taken over my iMac

    - by Larry Rothfork
    I used Carbonite to back up 75GB on my iMac. I also created a folder on my iMac to copy files into from an external hard drive, and then used Carbonite to back that up as well. Then, thinking I had everything safely backed up, I deleted some of those files to make room on my hard drive; instead of my free disk space increasing, it has shrunk to 2GB. I know, I know, you can't use Carbonite like that, but now I have two questions. 1) Why did my disk space decrease even though I deleted about 20GB of those backed-up files from my hard drive? It must have something to do with the way Carbonite references backed-up files. 2) Is there a way to extricate myself from this situation?

    Read the article

  • Backing up Initial and Running configurations for Nortel Baystack 325-24G

    - by i.h4d35
    I recently came across a Nortel Baystack 325-24G switch. This is the first time I've come across a Nortel device of any sort, so I am a little intimidated. My problem is that I have been trying to get the startup and running configurations via both the CLI and the menus, but it's become quite apparent that it isn't like Cisco switches and routers. I've searched online but have only found configuration guides by Avaya. I'd also like to know: is there a way to take backups regularly (something like TFTP)? Pardon me, but I'm a n00b when it comes to routers and switches. Thanks in advance. EDIT: Still haven't found a way to get the running config via the CLI.

    Read the article

  • Automate backing up e-mails in Outlook Express

    - by Michael Itzoe
    My client is a small business (three employees) that uses Outlook Express. They'd like to back up their email. I showed them how to export, but they balked at that. Is there a way I can automate exporting email? They already have a batch file they use that zips a copy of their data and I'd like to be able to add something to that to include email. Is this possible?

    Read the article

  • How to make TimeMachine back up contents of any path or mounted volume

    - by Olfan
    I keep different types of data in different encrypted sparsebundle images (say, one for each client) which automatically mount upon login but can't be opened by anybody other than myself. So, after login I have a number of virtual volumes in /Volumes/, which keeps my client data both secure and organized. How do I include the data inside these virtual volumes in TimeMachine's backups, or data residing in any path on any partition/volume? I found a promising solution description at blog.eurocomp.info involving editing com.apple.TimeMachine.plist, but all I can get TimeMachine to do is back up the sparsebundle files themselves. I want it to back up the files inside the mounted image - something like adding /Volumes/Client_abc/ to TimeMachine's search path. Please do not redirect me to this previous question, as it doesn't solve the problem at all. Please also refrain from telling me why you think I should not want this answer, as that will not solve anything either. Please lastly don't say "it can't be done" unless you can technically prove that claim.

    Read the article

  • Bash script to keep last x number of files and delete the rest

    - by Brady
    I have this bash script which nicely backs up my database on a cron schedule:
        #!/bin/sh
        PT_MYSQLDUMPPATH=/usr/bin
        PT_HOMEPATH=/home/philosop
        PT_TOOLPATH=$PT_HOMEPATH/philosophy-tools
        PT_MYSQLBACKUPPATH=$PT_TOOLPATH/mysql-backups
        PT_MYSQLUSER=*********
        PT_MYSQLPASSWORD="********"
        PT_MYSQLDATABASE=*********
        PT_BACKUPDATETIME=`date +%s`
        PT_BACKUPFILENAME=mysqlbackup_$PT_BACKUPDATETIME.sql.gz
        PT_FILESTOKEEP=14
        $PT_MYSQLDUMPPATH/mysqldump -u$PT_MYSQLUSER -p$PT_MYSQLPASSWORD --opt $PT_MYSQLDATABASE | gzip -c > $PT_MYSQLBACKUPPATH/$PT_BACKUPFILENAME
    The problem is that it keeps dumping backups into the folder and never cleans up old files. This is where the variable PT_FILESTOKEEP comes in: whatever number it is set to is the number of backups I want to keep. All backups are timestamped, so ordering them by name descending gives the latest first. Can anyone please help me with the rest of the bash script to add the clean-up of old files? My knowledge of bash is lacking and I'm unable to piece together the code to do the rest.
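
    A minimal sketch of a clean-up step that could be appended to the script, assuming the backup directory contains only these timestamped dumps: list the backups newest first, skip the first $PT_FILESTOKEEP, and delete the rest.
        # delete all but the newest $PT_FILESTOKEEP backups
        cd "$PT_MYSQLBACKUPPATH" || exit 1
        ls -1t mysqlbackup_*.sql.gz | tail -n +$((PT_FILESTOKEEP + 1)) | xargs -r rm -f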

    Read the article

  • Restoring the exact state of a linux install to a different laptop with different sized drives and other hardware

    - by user259774
    I have an IBM running a Manjaro install that has already been used and settled into, with packages installed, browser profiles, etc. The drive is 60GB, and it has a swap partition and an ext4 root partition. I need to move this install to a Toshiba computer with a 320GB drive. How should I go about this? My inclination would be to boot the IBM from a live Linux system, dd its whole 60GB drive to a file, then boot the Toshiba to a live system and dd the file onto its 320GB drive. Would this work? I know that it wouldn't with Windows, but I believe that is an artificially imposed limitation from Microsoft. Is this correct, or is Linux similarly limited? If not, how could I go about this? Would Clonezilla work, or would the hardware disparities prevent it from working?
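
    Linux is not tied to the original hardware the way Windows is, as long as the kernel has drivers for the new machine, so a raw copy generally boots; the main follow-up is growing the filesystem into the extra space. A rough sketch with both drives attached to a live system (device and partition names are placeholders, assuming the ext4 root is the second partition on the target):
        # copy the whole 60GB disk onto the 320GB disk
        dd if=/dev/sdX of=/dev/sdY bs=4M
        # afterwards, grow the ext4 partition (e.g. with parted or GParted), then:
        resize2fs /dev/sdY2
    Clonezilla should also handle a straight disk-to-disk copy here, since the destination is larger than the source.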

    Read the article

  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst-case scenario and are open to any kind of good ideas. Here is our problem: we are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs: dedicated server machine, Areca 1280ml RAID controller (firmware 1.49), 12x Samsung 1TB HDDs. We configured one RAID6 set with 10 discs that contains one logical volume, and we have two hot spares in the system. Today one HDD failed. This happens from time to time, so we replaced it. Upon rebuilding, a second disc failed. Normally this is no fun. We stopped heavy IO operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding and the whole thing stopped. Now we have the following situation: the controller says that the RAID set is rebuilding, but it also says that the volume failed. It is a RAID6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access it. While searching we found the following leads; I don't know whether they are good or bad: (1) mirror all the discs to a second set of drives, so we could try different things without losing more than we already have; (2) try to rebuild the array in R-Studio, but we have no real experience with the software; (3) pull all drives, reboot the system, go into the Areca controller BIOS, and reinsert the HDDs one by one - some people say they brought the system back online this way, some say the effect is zero, and some say they blew the whole thing; (4) use undocumented Areca commands like "rescue" or "LeVel2ReScUe"; (5) contact a computer forensics service, but whoa - preliminary estimates by phone exceeded 20,000€. That's why we would kindly ask for help. Maybe we are missing the obvious? And yes, of course, we have backups. But some systems lost one week of data; that's why we'd like to get the system up and running again. Any help, suggestions and questions are more than welcome.

    Read the article

  • backing up a virtual machine

    - by ErocM
    I asked justcloud.com support whether a VMware VM I have could be backed up while in use. I can back up the VM once it is shut down, but I was wondering if their "shadow copy" would back it up while running. This was their response: "Thank you for your email. I am really very sorry but virtual machines can't be backed up for a simple reason that they are virtual, they have virtual memory, not physical memory. Please let me know if there is anything else I can help with. Kind Regards, Barry James, User Experience Team, www.justcloud.com" The VM consists of physical files on disk, so I wasn't sure I even understood the response. Am I wrong in thinking that a VM can be backed up while in use? Does this response even make sense? I need a cheap alternative for backing up the VM off the server in case it goes down. Any suggestions?

    Read the article

  • How to copy files from shadow copy with long source path

    - by Jake
    The files and folders in my shared network drive (set up with DFS) were mass-deleted. Currently I am trying to recover the files from the shadow copy "Previous Versions". The problem is that thousands of files are deeply nested, making the file paths too long; when copying, Windows shows the dialog "Source Path Too Long". My guess is that the paths just barely fit within the limit when saved to the network drive, but the shadow copy service appends the date and time to the folders, so the path character limit is exceeded. How else can I copy the files from the shadow copy?

    Read the article

  • stsadm -o. What does the -o mean?

    - by ddono25
    I am working on a large SharePoint farm, mainly with the backend SQL Servers. We have always used stsadm -o for all stsadm functions, but no one seems to know why. I can't seem to find information specific to stsadm - would it be general Windows command-line syntax?

    Read the article

  • How to dump remote database without mysqldump?

    - by deceze
    I want to dump the database for my remotely hosted site at regular intervals using a shell script. Unfortunately the server is locked down pretty tight: it has no mysqldump installed, binary files can't be executed by normal users or in home directories (so I can't "install" it myself), and the database lives on a separate server, so I can't grab the files directly. The only thing I can do is log into the webserver via SSH and establish a connection to the database server using the mysql command-line client. How can I dump the contents to a file a la mysqldump in SQL format? Bonus: if possible, how can I dump the contents directly to my end of the SSH connection?
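
    One workaround that produces real SQL output without installing anything on the web server: run mysqldump on the local machine and reach the database server through an SSH tunnel via the web host. A rough sketch, assuming mysqldump is installed locally, the web host allows TCP forwarding, and dbhost/dbuser/mydb are placeholders:
        # forward local port 3307 through the web server to the database server
        ssh -N -L 3307:dbhost:3306 user@webserver &
        # dump over the tunnel in normal SQL format, landing on the local end
        mysqldump -h 127.0.0.1 -P 3307 -u dbuser -p mydb > mydb.sql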

    Read the article

  • Cloning hard drive -- data, operating system settings, everything

    - by Salman A
    I am using Windows XP. My hard drive (Seagate 160gig Barracuda) is about to fail. It has already developed bad sectors and seems to get worse every day: the data transfer mode is down to PIO mode 2, chkdsk runs every now and then, the registry and important Windows files get corrupted, and I spend 30-60 minutes running chkdsk /f /r from the recovery console. I've got a replacement (Seagate 5000gig Barracuda) and now I want to transfer everything onto the new drive. I don't want to go through Windows and software installation again; I spent ages getting all that software installed and configured on the old drive. Need advice: what's the best way to transfer everything onto the new drive so that it behaves just like the old one? And are there any "gotchas"?
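
    Since the source drive already has bad sectors, plain disk-imaging tools may stall or abort on read errors; GNU ddrescue is designed for this case and copies everything, OS and settings included. A rough sketch from a live Linux environment (device names are placeholders: old drive /dev/sdX, new drive /dev/sdY), after which a chkdsk run on the copy is a sensible follow-up:
        # first pass: copy everything readable, skip bad areas, keep a map file
        ddrescue -f -n /dev/sdX /dev/sdY rescue.map
        # second pass: go back and retry the bad areas a few times
        ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map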

    Read the article

  • Help creating image from LVM

    - by jackhab
    I need to duplicate a CentOS hard drive image for multiple stations. The HD has the following layout:
        Disk /dev/sdb: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  107MB  107MB  primary  ext3         boot
         2      107MB   250GB  250GB  primary               lvm
    I saved /dev/sdb1 to a file with fsarchiver, but for sdb2 I get:
        fsarchiver savefs an2.fsa /dev/sdb2
        oper_save.c#1006,filesystem_mount_partition(): can't detect and mount filesystem of partition [/dev/sdb2], cannot continue. removed an2.fsa
    although fsarchiver probe simple correctly detects sdb2 as LVM2_member. Is fsarchiver the correct tool for this job? What's wrong? I'm on Ubuntu 9.1 with fsarchiver 0.6.8 and the LVM tools installed. Thanks.
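
    The second partition is an LVM physical volume, not a filesystem, which is why fsarchiver cannot mount it; fsarchiver works on the logical volumes inside it. A rough sketch, assuming the lvm2 tools are installed (the volume group and LV names below are placeholders; lvs shows the real ones):
        vgscan                      # detect volume groups on the attached disks
        vgchange -ay                # activate them so the /dev/<vg>/<lv> nodes appear
        lvs                         # list logical volumes and their volume group
        fsarchiver savefs an2.fsa /dev/VolGroup00/LogVol00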

    Read the article

< Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >