Search Results

Search found 26798 results on 1072 pages for 'difference between detach attach and restore backup a db'.

Page 5/1072 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Windows Server 2008 System State Backup

    - by MJ
    What I'm looking for is info on what is contained in the Server 2008 system state backup. It is incredibly large (10+ GB) and annoying to back up remotely. Is there a way to take a full system state backup and then do weekly incrementals? I know about the wbadmin tool, but its options are limited. I'm also looking for a way to remove the 2nd or 3rd oldest backup.
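
    For reference, wbadmin itself can both create system state backups and prune them by version count (as far as I know it always takes a full system state backup; there is no incremental mode for system state). Below is a minimal sketch of a weekly job, wrapped in Python purely for illustration; the target drive letter and the number of versions to keep are assumptions, and it would be scheduled via Task Scheduler.

    ```python
    # Hypothetical sketch: full system state backup with wbadmin, keeping only the newest few versions.
    import subprocess

    BACKUP_TARGET = "E:"   # assumed target volume
    KEEP_VERSIONS = 3      # assumed retention

    def system_state_backup():
        # Creates a (full) system state backup on the target volume.
        subprocess.run(["wbadmin", "start", "systemstatebackup",
                        f"-backupTarget:{BACKUP_TARGET}", "-quiet"], check=True)

    def prune_old_backups():
        # Deletes older system state backups, keeping the newest KEEP_VERSIONS.
        subprocess.run(["wbadmin", "delete", "systemstatebackup",
                        f"-keepVersions:{KEEP_VERSIONS}", "-quiet"], check=True)

    if __name__ == "__main__":
        system_state_backup()
        prune_old_backups()
    ```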

    Read the article

  • Backup Exec 2010 throwing error trying to restore Exchange mailbox

    - by Mindflux
    Error category: Resource Errors
    Error: e000848c - Unable to attach to a resource. Make sure that all selected resources exist and are online, and then try again. If the server or resource no longer exists, remove it from the selection list. Edit the selection list properties, click the View Selection D
    For additional information regarding this error refer to link V-79-57344-33932

    I've got the Exchange agent loaded on the Exchange server. Through talking with some other folks I've added the Exchange Management Console to the media (backup) server. None of this has helped. I can back up Exchange all day long; however, I cannot restore from it. I've followed the link given (V-79-57344-33923), which goes here, and none of that has helped either.

    Server is running: Windows Server 2008 w/ SP2 (64-bit), Backup Exec 2010. I am backing up to a Tandberg T24 tape library.

    Read the article

  • Why is rdiff-backup not compatible with encfs --reverse?

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse - which means encfs will create a virtual encrypted file system, which I can then backup using rdiff-backup. Except that it doesn't work. Rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem. It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue, but was just told to use sshfs instead (see below on that); in this question on serverfault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker(bug 731413, I can't post the link) on this bug, but it's been open since December 2013 with no response. Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs and then running encfs on that, or using Duplicity - as both require a much higher bandwidth connection than I have access to (Duplicity requires regular full backups).
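
    For context, the setup being described - the one the question reports as failing with input/output errors - is roughly the following. This is a minimal sketch; the local paths and the remote rdiff-backup destination are assumptions.

    ```python
    # Hypothetical sketch of the encfs --reverse + rdiff-backup combination discussed above.
    import subprocess

    PLAIN_DIR = "/home/user/data"                    # real, unencrypted data (assumed path)
    ENC_VIEW = "/home/user/encview"                  # virtual encrypted view (assumed path)
    REMOTE = "user@backuphost::/backups/encview"     # assumed rdiff-backup destination

    # 1. Expose a read-only encrypted view of the plaintext directory (prompts for the encfs password).
    subprocess.run(["encfs", "--reverse", PLAIN_DIR, ENC_VIEW], check=True)

    # 2. Back up the encrypted view, so only ciphertext leaves the machine.
    subprocess.run(["rdiff-backup", ENC_VIEW, REMOTE], check=True)

    # 3. Unmount the FUSE view when done.
    subprocess.run(["fusermount", "-u", ENC_VIEW], check=True)
    ```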

    Read the article

  • How to restore a Windows 7 system from a secondary drive

    - by Klas Mellbourn
    I have a desktop computer with Windows 7 Ultimate 64-bit. The primary (SSD) hard drive seems to have stopped working completely; it is not even visible in the BIOS. The computer has a secondary hard drive (non-SSD, NTFS, 2TB). I have had Windows Backup running and saving backups to that secondary drive. I am planning to buy a new SSD drive to replace the faulty one, and I want to restore the backup to this new SSD drive. What is the most straightforward way to do this? A step-by-step description would be greatly appreciated. Further information: I have a Windows install DVD and the computer has a DVD drive. The secondary drive is not bootable, so I cannot currently access it. The new SSD drive will probably not be identical to the original, so it might need different drivers.

    Read the article

  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. Initial data is likely to be about 80 MB, and I expect the data to grow by approximately 5 MB to 10 MB a day. Can anyone recommend:

    - A backup/restore policy (best practices for a small startup)?
    - Which tools to use for backup?

    Another thing that is not clear to me is where files are normally backed up to (in the case of remote servers). If the files are backed up to the same machine (or even to another machine with the same host), there is potentially a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown or the host company's server farm "catching fire" so remote as not to be worth worrying about - especially for a small (read: one-man) startup like me?

    Read the article

  • Backup strategies for linux based file servers

    - by iceman
    I want to know some enterprise-wide backup strategies used for Linux-based file servers. What are the tools and techniques used when making a backup? For example, when a backup fails on a machine, it should email the admin about the failure along with a log file. This won't happen in case the HDD fails and the system is completely out of order, but in other cases where a backup didn't take place, the admin should be able to know. What tools/scripts can be used for these particular scenarios?
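
    As a minimal sketch of the notify-on-failure behaviour described above (not any particular product's feature): run the backup command, append its output to a log file, and email the admin if it exits non-zero. The backup command, log path, SMTP host, and addresses are assumptions for illustration.

    ```python
    # Hypothetical sketch: run a backup, log the output, and email the admin on failure.
    import subprocess, smtplib
    from email.message import EmailMessage

    BACKUP_CMD = ["rdiff-backup", "/srv/files", "/backup/files"]  # assumed backup command
    LOG_FILE = "/var/log/file-server-backup.log"                  # assumed log path
    ADMIN = "admin@example.com"                                   # assumed address
    SMTP_HOST = "localhost"                                       # assumed local MTA

    result = subprocess.run(BACKUP_CMD, capture_output=True, text=True)

    with open(LOG_FILE, "a") as log:
        log.write(result.stdout)
        log.write(result.stderr)

    if result.returncode != 0:
        # Backup failed: notify the admin with the captured output.
        msg = EmailMessage()
        msg["Subject"] = "Backup FAILED on file server"
        msg["From"] = "backup@example.com"
        msg["To"] = ADMIN
        msg.set_content(result.stdout + "\n" + result.stderr)
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)
    ```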

    Read the article

  • One-Way Backup Service? [closed]

    - by Jon Rodriguez
    Up until a month ago, my girlfriend had used MobileMe to back up all the files on her MacBook. This turned out terribly when a quirk of MobileMe caused it to erase all of her files on MobileMe, and then sync the newly-erased MobileMe down to her computer, erasing everything. A week's worth of college essays and CS homework was gone. Now, I am terrified of any commercial cloud-backup solution because of the possibility of this happening. Going off the list provided in these answers, could you please help me find a good backup service that is completely one-way? I want a service where there is literally not a single line of code that has the possibility of writing to my computer's drive. I want a pure one-way backup service.

    Read the article

  • Detach an entity from a JPA persistence context (JPA 2.0 / Hibernate / EJB 3 / J2EE 6)

    - by Julien
    Hi, I wrote a stateless EJB method that returns an entity in "read-only" mode. The way to do this is to get the entity with the EntityManager and then detach it (using the JPA 2.0 EntityManager). My code is the following:

        @PersistenceContext
        private EntityManager entityManager;

        public T getEntity(int entityId, Class<T> specificClass, boolean readOnly) throws Exception {
            try {
                T entity = (T) entityManager.find(specificClass, entityId);
                if (readOnly) {
                    entityManager.detach(entity);
                }
                return entity;
            } catch (Exception e) {
                logger.error("", e);
                throw e;
            }
        }

    Getting the entity works fine, but the call to the detach method throws the following error:

        GRAVE: javax.ejb.EJBException
        at ...
        Caused by: java.lang.AbstractMethodError: org.hibernate.ejb.EntityManagerImpl.detach(Ljava/lang/Object;)V
            at com.sun.enterprise.container.common.impl.EntityManagerWrapper.detach(EntityManagerWrapper.java:973)
            at com.mycomp.dal.MyEJB.getEntity(MyEJB.java:37)

    I can't get more information and don't understand what the problem is... Could somebody help?

    Read the article

  • rsync & rdiff-backup combination giving errors

    - by Maikel van Leeuwen
    On the server I make a backup every day with rdiff-backup, like:

        rdiff-backup /home/ /backup/home

    Then every week I want to make an rsync backup offsite with sshfs, like:

        rsync -avz /home/server/backup/home /backup/server-home/

    This is giving me the following error:

        Fatal Error: Previous backup to /backup/server-home/. seems to have failed.
        Rerun rdiff-backup with --check-destination-dir option to revert directory
        to state before unsuccessful session.

    Does anybody have a good solution to deal with these errors / this situation? (Edited twice for typos.)

    Read the article

  • Windows 7 backup to 3TB Seagate external drive got 0x8078002A error [migrated]

    - by Zhang18
    I'm using the Windows 7 Backup and Restore utility to create a system image and personal file backup on an external Seagate GoFlex 3TB disk. I got the following error:

        One of the backup files could not be created.
        Details: The request could not be performed because of an I/O device error.
        Error code: 0x8078002A

    I searched all over the internet and found these two related discussions (discussion 1 and discussion 2). Note the 1st discussion is for a Western Digital drive, which seems to have a solution with the WD Quick Formatter tool. But I downloaded that software and it cannot detect my Seagate drive. The 2nd discussion is directly relevant but it does not offer a solution. I've spent days on this and am at a loss... Please help if you know what to do to make it work! Thank you!

    Read the article

  • Increase backup speed, Backup Exec 2010 - QNAP TS419U+ NAS

    - by user99912
    We have a QNAP NAS and the network shares are being backed up by Backup Exec 2010 over SMB. We can't install the remote agent on the NAS as it has an ARM processor and, as far as I am aware, there is no compatible agent. Do you have any suggestions for a faster method of backing up these shares than the current scenario? Network bandwidth is not the issue; it seems that this access method is simply not able to go any quicker. We've also added the NAS shares to the start of the selection list, but we're still running into an 18-hour total backup time (the total amount of data on the NAS is roughly 650 GB). Any comments and/or suggestions welcome. EDIT: Data is being pulled from the NAS by Backup Exec to an LTO4 tape drive.

    Read the article

  • Backup and rescue disk creation

    - by Polppan
    I am in the process of backing up my PC using "Macrium Backup and Restore". I have successfully backed up my PC (both the C and D drives) to an external hard disk. I have a question regarding creating rescue disks. I am following the steps mentioned in this document. If I am creating an ISO file based on the document, how does it relate to the backup I have taken to my external disk? I see no relation between creating rescue disks and the backup data - or am I missing something obvious? Any insight will be highly appreciated...

    Read the article

  • Ubuntu to Ubuntu backup in internal network

    - by amirash
    Hey, I've got my development "home" server, which is Ubuntu 10, and today I bought a computer in order to back up to it (the development server also makes backups of itself every day, but I'm paranoid, so I want to have two backups on different computers just in case). What is the best way to back up the system core of the development server (like Norton Ghost) and do full and incremental backups of it to the new computer I've bought? rsync? rdiff-backup? scp? Clonezilla?
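
    As one illustration of the rsync option mentioned in the question (a minimal sketch, not a recommendation of one tool over the others): pull the whole filesystem over SSH while excluding pseudo-filesystems; each run is effectively incremental because rsync only transfers what changed. Host name, destination path, and the exclude list are assumptions.

    ```python
    # Hypothetical sketch: incremental full-system copy of the dev server over SSH with rsync.
    import subprocess

    SOURCE = "root@devserver:/"       # assumed dev server
    DEST = "/backup/devserver/"       # destination on the new backup machine (assumed)
    EXCLUDES = ["/proc/*", "/sys/*", "/dev/*", "/run/*", "/tmp/*", "/mnt/*", "/media/*"]

    cmd = ["rsync", "-aAXvz", "--delete"]
    for pattern in EXCLUDES:
        cmd += ["--exclude", pattern]
    cmd += [SOURCE, DEST]

    # Only files that changed since the last run are transferred.
    subprocess.run(cmd, check=True)
    ```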

    Read the article

  • Does Windows incremental backup include system state backup?

    - by Kossel
    I'm managing my very small office server with Windows Server 2008. Since I have only one server and the user group is really small, I made the first HDD into 2 partitions: one (C:) for Windows and Active Directory, another (D:) for Tomcat and the database. I'm doing incremental backups of C: and D: daily to HDD 2 (E:) using Windows Server Backup. Is that enough to let me fully restore my server in case of disaster? I ask this because I have read there is also a system state backup - do I also have to do that periodically in order to get AD back? Can't I do a full bare-metal recovery with the incremental/full backups?

    Read the article

  • Debian/Linux backup files changed by user

    - by verhogen
    I would like to back up my server, which is hosting a few websites, in such a way that I can restore everything to the way it was after a fresh format. I know that I should back up all the home folders and probably my /etc/ folders. Is there a way to figure out which folders are relevant for backup, i.e. those that were not automatically generated or installed by apt-get? It would ideally restore all the users with their current passwords as well. Basically, enough to clone the system while only copying configuration files.

    Read the article

  • WSS 3.0 Backup/Restore Root Site Collection to Sub-Site of New Site Collection

    - by bfrancis
    Our intranet was originally set up at the root of its site collection. We are trying to change this so that our new internet site will live at the root and the intranet will be a sub-site. At this point I have created a new web application and site collection to house the internet and intranet sites. I used the 'stsadm -o backup' command to create a backup of our current intranet. I then ran the 'stsadm -o restore' command to restore the intranet site collection to wss/sites/intranet. This seems to have worked, as I am able to access the intranet from this location. The issue I now seem to have is that images, sub-sites, etc. are all referenced as if the intranet were still the root site. So, for example, a link to a sub-site points to wss/department/technology/default.aspx when it needs to point to wss/sites/intranet/department/technology/default.aspx. I am looking for help and/or clarification on two things:

    1. Am I approaching the migration of a root site collection to a sub-site the best way?
    2. How would I go about updating the link references so that they are based on the intranet now being a sub-site instead of the root site?
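
    For reference, the site-collection backup/restore step being described typically looks like the following. This is a minimal sketch wrapped in Python purely for illustration; the URLs and backup file name are placeholders, and the underlying 'stsadm -o backup' / 'stsadm -o restore' operations are the ones the question already mentions.

    ```python
    # Hypothetical sketch of the stsadm site-collection backup/restore described above.
    import subprocess

    OLD_ROOT_URL = "http://wss"                    # intranet currently at the root (placeholder)
    NEW_SUBSITE_URL = "http://wss/sites/intranet"  # target site-collection URL (placeholder)
    BACKUP_FILE = r"C:\backups\intranet.bak"       # placeholder path

    # Back up the existing root site collection.
    subprocess.run(["stsadm", "-o", "backup",
                    "-url", OLD_ROOT_URL,
                    "-filename", BACKUP_FILE], check=True)

    # Restore it into the new location under the /sites managed path.
    subprocess.run(["stsadm", "-o", "restore",
                    "-url", NEW_SUBSITE_URL,
                    "-filename", BACKUP_FILE,
                    "-overwrite"], check=True)
    ```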

    Read the article

  • Backup without overwriting old backups

    - by AbsentasLT
    I'm using Ubuntu Server 14.04 to back up all data from the '/mnt/test/' folder to '/home/john/' with tar, archiving it to stuff.tar.gz, and I want to make the backup automatic. I use cron to back it up every week, but what if I want cron to create an additional backup file each time instead of overwriting the existing one? So, after a month I'd have 4 backups, each with a unique name. Is there a way? A script or another backup tool that would do that?
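
    One straightforward way to avoid overwriting is to put a timestamp in the archive name so every cron run writes a new file. A minimal sketch (source and destination follow the question; the name pattern is an assumption) that cron could invoke weekly:

    ```python
    # Hypothetical sketch: date-stamped tar.gz so weekly cron runs never overwrite each other.
    import tarfile
    from datetime import datetime
    from pathlib import Path

    SOURCE = "/mnt/test"
    DEST_DIR = Path("/home/john")

    # e.g. stuff-2014-06-01.tar.gz, stuff-2014-06-08.tar.gz, ...
    archive = DEST_DIR / f"stuff-{datetime.now():%Y-%m-%d}.tar.gz"

    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname="test")

    print(f"Wrote {archive}")
    ```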

    Read the article

  • rdiff-backup command to restore

    - by Hulk
    Let's say I have a source directory whose contents are /foo/a and /foo/b (these are files in a directory on a remote system). Using the rdiff-backup command I make a backup as:

        rdiff-backup [email protected]::/foo backups

    Now a and b are present in my backups directory. Then I delete file a from the remote system and do a sync again, so my local directory has only file b. My question is: how do I restore file a if the deletion and sync happened on the same day? Thanks.
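
    For reference, rdiff-backup keeps increments in the destination directory, so an earlier state of a single file can be pulled back with its restore option. A minimal sketch (the local backup directory follows the question; the time specification and output path are assumptions):

    ```python
    # Hypothetical sketch: restore a single file from a previous rdiff-backup session.
    import subprocess

    # "1B" is rdiff-backup's time format for "1 backup session ago";
    # an absolute date/time string would work as well.
    subprocess.run(["rdiff-backup", "--restore-as-of", "1B",
                    "backups/a",      # the deleted file inside the local backup directory
                    "restored_a"],    # where to put the recovered copy (assumed path)
                   check=True)
    ```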

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Book On Line: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

    If you do not know how Snapshot databases work, here is a quick note on the subject. However, please refer to the official description in Book-on-Line for accuracy. A Snapshot database is a read-only database created from an original database called the "source database". This database operates at page level. When a Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to the Snapshot database, making the sparse file size increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database.

    Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only - there are a few considerations of CPU, disk I/O and memory, which will be discussed in future posts.

    - Create Snapshot
    - Delete Data from Original DB
    - Restore Data from Snapshot

    First, let us create the first Snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    Now let us see the resultset for the same. Now let us delete something from the Original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, it copies the data pages to the Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the Original Source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Now let us check the details of the select statement, and we can see that we were successfully able to restore the database from the Snapshot Database. We can clearly see that this is a very useful feature in case you encounter a business need for it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any tasks. The complete script of the afore-mentioned operation, for easy reference, is as follows:

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic). We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10 – 15 minutes of creating the image – not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach, this is an 010 level course.) From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):

    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup.
       b. Launch another instance of your starting AMI (must be EBS-backed).
       c. Stop this instance.
       d. Detach the existing system volume from the new stopped instance and discard it.
       e. Attach the newly created volume as the system volume (/dev/sda1) to the stopped instance.
       f. Re-start the new instance.

    I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?
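
    The manual console procedure above maps fairly directly onto EC2 API calls. Below is a minimal sketch of the restore half (roughly steps 2a and 2c-2f; the replacement instance from step 2b is assumed to already exist) using the modern boto3 SDK, which was not available when this was written. The region, IDs, availability zone, and the /dev/sda1 device name are assumptions for illustration.

    ```python
    # Hypothetical sketch: rebuild an instance's system volume from the last snapshot with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    SNAPSHOT_ID = "snap-0123456789abcdef0"   # last system-volume snapshot (placeholder)
    INSTANCE_ID = "i-0123456789abcdef0"      # replacement instance from the base AMI (placeholder)
    OLD_VOLUME_ID = "vol-0123456789abcdef0"  # its current system volume (placeholder)
    AZ = "us-east-1a"                        # must match the instance's availability zone

    # a. Create a new volume from the last snapshot backup.
    new_vol = ec2.create_volume(SnapshotId=SNAPSHOT_ID, AvailabilityZone=AZ)["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol])

    # c. Stop the replacement instance.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # d. Detach its existing system volume (to be discarded).
    ec2.detach_volume(VolumeId=OLD_VOLUME_ID, InstanceId=INSTANCE_ID)
    ec2.get_waiter("volume_available").wait(VolumeIds=[OLD_VOLUME_ID])

    # e. Attach the restored volume as the system volume.
    ec2.attach_volume(VolumeId=new_vol, InstanceId=INSTANCE_ID, Device="/dev/sda1")

    # f. Start the instance again on the restored volume.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ```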

    Read the article

  • How to restore an OS from an image created by Macrium Reflect

    - by user23950
    Can you recommend other OS imaging software that you use, if you haven't used Macrium Reflect yet? And how do I restore the OS from that image? Also, which is faster: reinstalling the OS and then installing the applications you need, or using the imaging software to back up the installation along with the applications? Which takes more time?

    Read the article

  • Automate BESR 8.5 Restore

    - by Mike
    I have been searching for a way, script, rain dance, to automate the restore of several BESR 8.5 created images (v2i file extension). Does anyone have any experience on how to pull this off? I have tried Ghost Solution Suite 2.5, but it doesn't seem to work with images that are password protected. Any help, tool, 3rd party program, etc, would be greatly appreciated. Thanks

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera's SQL virtual database and tested it. There are a few things about this tool which caught my attention.

    My Scenario

    It is quite common in real life that sometimes observing or retrieving older data is necessary; however, it had changed as time passed by. The full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load that is usually present on the server. This range in time varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following: when we tried to restore a large 40-GB database, we needed at least that much space on our production server. Once in a while, we even had to make changes in the restored database, and use the said changed and restored database for our purpose, making it more time-consuming.

    My Solution

    I have heard a lot about Idera's SQL virtual database tool. Well, right after we started to test this tool, we found out that it really delivers what it promises. Using this software was very easy and we were able to restore our database from backup in less than 2 minutes, sparing us from the usual longer time of 16–22 minutes. The needful was finished in a total of 10 minutes. Another interesting observation is that there is no need to have additional space for restoring the database. For complete database restoration, not a single additional MB on the drive is required anymore. We can use the database in the same way as our regular database, and there is no need for any additional configuration and setup.

    Let us look at the most relevant points of this product based on my initial experience:

    - Quick restoration of the database backup
    - No additional space required for database restoration
    - virtual database has no physical .MDF or .LDF
    - The database which is restored is, in fact, the backup file converted into the virtual database
    - DDL and DML queries can be executed against this virtually restored database
    - Regular backup operations can be run against the virtual database, creating a physical .bak file that can be used for future use
    - There was no observed degradation in performance on the original database as well as the restored virtual database
    - Additional T-SQL queries can be run on the virtual database

    Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features that I have noticed in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the mentioned activities. I am surprised with its performance, so I want to know how exactly this feature works, specifically why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a level of performance way beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it.

    The 'Virtual' Part of virtual database

    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization.
    In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually being restored or even created. This tool creates a database in SSMS from the backup of that same database. The backup, however, works virtually the same way as the original database.

    Potential Usage of virtual database

    As soon as I described this tool to my teammate, I think his very first reaction was, "hey, if we have this then there is no need for log shipping." I find his comment very interesting, as log shipping is something where logs are moved to another server. In fact, there are no updates on the database from the log; I would rather compare it with Snapshot Replication. In fact, whatever we use, a snapshot-replicated database can be similarly used and configured with virtual database. I totally believe that we can use it for reporting purposes. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

    Screenshots (click on images to see larger images): virtual database Console; Harddrive Space before virtual database Setup; Attach Full Backup Screen; Backup on Harddrive; Attach Full Backup Screen with Settings; virtual database Setup – less than 60 sec; virtual database Setup – Online; Harddrive Space after virtual database Setup; Point in Time Recovery Option – Timeline View; virtual database Summary; No Performance Difference between Regular DB vs Virtual DB.

    Please note that all SQL Server MVPs get a free license of this software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)

    Filed under: Database, Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLAuthority News, T SQL, Technology

    Tagged: Idera

    Read the article
