Search Results

Search found 1556 results on 63 pages for 'backups'.

Page 8 of 63

  • Will Yosemite Server Backup 8.1 work on Windows Server 2008 R2 to back up Exchange 2010?

    - by best
    I've inherited the backup support for some Windows Server 2003 machines that use Yosemite Server Backup 8.1. The company has joined the BizSpark program and wants to use its licenses for Windows Server 2008 R2 and Exchange 2010. I've tried emailing Barracuda, who took over Yosemite, to ask about this, but with no success (do you need a support contract?). I don't have a spare machine, or even space for a VM, to test it on. Does anybody know if it will work?

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file. Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as it stands meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason; I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind I get errors like the one quoted above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this, or possibly an alternative solution that offers the same benefits as Duplicati/Duplicity but actually works across platforms?

    Read the article

  • How do I restore a database on a remote SQL server 2005 from a local backup?

    - by MatsT
    I have been given access to (parts of) a remote SQL Server 2005 instance, with SQL Server authentication, in order to be able to make changes to a database without involving other people who aren't working on the project. The database was created on my local machine. Is there any way to restore the remote database from a backup file on my local computer? I do not currently have access to the filesystem on the remote server. EDIT: To clarify, the access I have is that I can log in to the server via SQL Server Management Studio. I have one connection to my local database server and one connection to the remote server. What I basically want to do is copy the database from one connection to the other.

    Read the article

  • CA ArcServe r11.1 - have to switch Tape Drive Offline then Online to finish backup

    - by Richard
    I'll keep it brief. I have an HP Ultrium 1 drive in a server currently running CA ArcServe r11.1. I have 5 daily backup tapes, each of which is new. 3 of the 5 work fine without intervention, but 2 of them stop at varying points through the backup asking for a new tape, even though that tape is not full. The way I have found around this is to switch the tape drive offline for 10 minutes and then switch it back online while the backup is still running. Has anyone ever seen this before? If so, any ideas how to permanently fix it? If all else fails, just some pointers in the right direction. Thanks

    Read the article

  • Bacula Volume Retention and Automatic Recycling

    - by Kyle Brandt
    Will a Bacula volume be recycled if no more volumes are appendable and the date of the last job on it is past the volume retention period, but the jobs on that volume are not yet past the job and file retention periods? This is the way the manual reads to me, but I saw a post saying all retention periods have to be up before a volume is recycled (which makes less sense to me). Does anyone know for sure from experience?

    Read the article

  • Laptop Backup Synch to the Data Center Without VPN

    - by Sameer
    We would like to synchronize our users' laptops, or back them up, to the data center. I'm looking for suggestions/alternatives for syncing them to the data center in a way the users don't have to know about. Blue-sky like-to-haves:
    • No VPN, but it needs to be secure
    • Admin can access all files
    • Global dedup
    • Select file types only – MS Office, PSTs, PDFs
    • Incremental changes only
    • Right now 60 users, but it needs to scale (all Windows 7 64-bit)
    • Can allocate budget if we have to
    I don't mean to be vague, but I'm hoping to get some proven places to start.

    Read the article

  • Server 2008 SP1 VSS Writers Not Responding

    - by Jason
    I've got a Windows Server 2008 box on SP1 that is having some problems. We've experienced backup problems and I've traced them down to VSS writers not responding. From the command line, if I type vssadmin list providers, I get:
      Provider name: 'Microsoft Software Shadow Copy provider 1.0'
      Provider type: System
      Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
      Version: 1.0.0.7
    If I type vssadmin list writers, I get this:
      vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
      (C) Copyright 2001-2005 Microsoft Corp.
      Waiting for responses. These may be delayed if a shadow copy is being prepared.
    I could wait this out for hours and it won't move. I looked up how Server 2008 handles VSS writers, and you can't re-register them like you could in Server 2003: http://social.technet.microsoft.com/Forums/en/windowsserver2008r2general/thread/062cc52c-899b-45f3-8d0c-798b92363f41 Does anyone know how to fix something like this, or where to turn next?

    Read the article

  • How do I split a large MySql backup file into multiple files?

    - by Brian T Hannan
    I have a 250 MB backup SQL file, but the limit on the new hosting is only 100 MB... Is there a program that lets you split an SQL file into multiple SQL files? It seems like people are answering the wrong question, so I will clarify: I ONLY have the 250 MB file, and the new hosting only gives me phpMyAdmin, which currently has no data in the database. I need to take the 250 MB file and upload it to the new host, but there is a 100 MB upload size limit for SQL backup files. I simply need to take one file that is too large and split it into multiple files, each containing only full, valid SQL statements (no statement can be split between two files).
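    One approach (a minimal sketch, assuming GNU split is available and that the dump keeps each SQL statement on a single line, which is mysqldump's default behaviour; file names are placeholders):

      # split into ~90 MB chunks, never breaking a line (and thus a statement) in half
      split -C 90M -d backup.sql backup_part_
      # give the pieces a .sql extension so the upload form accepts them
      for f in backup_part_*; do mv "$f" "$f.sql"; done

    The parts then need to be imported in order (backup_part_00.sql first), since later chunks rely on tables created in earlier ones.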

    Read the article

  • How to schedule a cron job to backup a MySql database every week?

    - by KevinM
    What is a command line I can use to back up a MySql database every single week into a file name with the date (so that it doesn't collide with previous backups)? Also, is this a reasonable backup strategy? My database is relatively small (a complete export is only 3.2 megs right now). The churn rate is relatively low. I need to be able to get the complete DB back if something goes wrong. And it would be extra cool if there's a way that I could see the changes that occur across a time span.
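    One way to do that (a minimal sketch, assuming mysqldump is on the PATH and the credentials live in ~/.my.cnf; the database name and paths are placeholders). Note that % must be escaped as \% inside a crontab:

      # m h dom mon dow   command   -- run every Sunday at 03:00
      0 3 * * 0  /usr/bin/mysqldump mydb | gzip > /var/backups/mysql/mydb-$(date +\%F).sql.gz

    To see what changed between two backups, dumping with --skip-extended-insert (one row per INSERT line) makes the files diff-friendly, at the cost of a larger dump and a slower restore.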

    Read the article

  • Should I be running my scheduled backups as SYSTEM or as our domain admin?

    - by MetalSearGolid
    I have a daily backup which is scheduled through the Task Scheduler. It failed with a strange error code last night, but I was able to search and find a blog post explaining how to avoid the error in the future. However, one of its recommendations was to run the backups as the domain's Administrator user. Since all of the files being backed up are local to this system, should I continue to have the backups run as SYSTEM, or is it actually better to run them as a different user? I have been running these backups for well over a year now and have only had a handful of failures, but ironically, when it does fail, the error code means it was a permissions issue (or so I read; the code seems to be undocumented by Microsoft). Thanks in advance for any insight. Might as well post the error code here too, in case anyone would like to share their insight on it, but I rarely ever get this error, so I don't care too much about it: 4294967294

    Read the article

  • How to give a Linux user permission to create backups, but not permission to delete them?

    - by ChocoDeveloper
    I want to set up automated backups that are kept safe from myself (in case a virus pwns me). The problem is that the "create" and "delete" permissions are the same thing: write permission. So what can I do about it? Is it possible to decouple the create and delete permissions? Another option could be to let the root user make the backups. The problem there is that my home directory is encrypted, and I don't want to back up everything. Any ideas? For the backups I'm using Deja Dup, which is installed by default in Fedora and Ubuntu.
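    One trick that comes close (a minimal sketch, assuming an ext3/ext4 filesystem and root access; the path and user name are placeholders) is the filesystem's append-only attribute, which, as I understand it, lets files be added to a directory but not deleted or renamed within it:

      sudo mkdir -p /var/backups/dejadup
      sudo chown myuser: /var/backups/dejadup   # the unprivileged user may create files here
      sudo chattr +a /var/backups/dejadup       # append-only: no unlinking or renaming inside

    Two caveats: existing files inside can still be overwritten unless they get the attribute too, and Deja Dup's own pruning of old backups will be blocked, so root has to clear the attribute (chattr -a) when it is time to clean up.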

    Read the article

  • Does SQL Server Maintenance Cleanup consider exact times when deleting old backups?

    - by Heinzi
    Let's say I have a daily maintenance task that backs up all databases and then removes backups that are older than 3 days. Now let's say the first backup on day 1, starting at 10:00, results in the following files:
      db1.bak  2012-01-01 10:04
      db2.bak  2012-01-01 10:06
    Now let's say that on day 4 the first step of the maintenance task (backing up the DBs) happens to finish at 10:05. Will SQL Server delete db1.bak and keep db2.bak (which would be logical, but might be surprising for the user), or keep both, or remove both?

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time," said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd party backup tool that it be way faster than the SQL Server native backup? Or that it at least save a respectable amount of storage by producing really smaller backup files? By "really smaller" I mean at least a file of half the size.

    After googling around to understand what other SQL people are using for database backups, I see that most are using one of the three main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered whether many people are simply accustomed to using the above tools since SQL 2000 and 2005. That is easy to understand: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless, and you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you also get backup file encryption and virtual restore, features that are still missing from SQL Server. But if you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. SQL Server's backup compression feature has totally changed the market picture.

    Medium size database. Take a look at the table below and check how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in a similar overall time as all three other tools. The 3rd party tools' maximum compression levels take twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay for those is very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA (11 disks) and FC (53 disks), and, obviously, on the faster storage the backup was ready in half the time.

    Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to do so.

    Very large database. As the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU; in the case above, 8 files for a 2 quad-core CPU server. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU usage are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line: 3rd party backup tool developers, we are waiting for a breakthrough release.

    There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 discs.

    Backup statements: no matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

    Native SQL backup:
      backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed:
      EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname = N'<DBName> full backup', @desc = N'Test', @compressionlevel = 8, @filename = N'\\<networkpath>\par1.bak', @filename = N'\\<networkpath>\par2.bak', @filename = N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup (Red Gate):
      EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe:
      EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:
    • Red Gate: use the THREADPRIORITY option (values 0 – 6)
    • LiteSpeed: use @throttle (a percentage, like 70%)
    • SQL Safe: the only thing I have found was the @Threads option.

    Yours, Maria

    Read the article

  • PHP postgres backup.

    - by russell kinch
    Hi. I am trying to make a PostgreSQL backup script in PHP. I have a script for the command line which looks like this:

      #!/bin/bash
      # remove backups older than 7 days
      find /home/russell/pg_bkp -type f -mtime +7 -exec rm {} \;
      # date in reverse so that the latest date appears last in the list of backup files
      time=`date +%Y-%m-%d`;
      PGPASSWORD=****** pg_dump -i -h localhost -p 5432 -U postgres -F c -b -v -f "/home/russell/pg_bkp/$time.backup" ah3

    The extension this creates is .backup. It works great and I have used it many times; the data is perfect, but doing it from inside my website would be better. How can I implement this in PHP? Thanks

    Read the article

  • Firebird .NET: Database backup not working (small file)

    - by Norbert
    Hi, I am trying to back up my Firebird 2.5 database file in code:

      FbBackup backupSvc = new FbBackup();
      backupSvc.ConnectionString = MyConnectionManager.buildConnectionString();
      backupSvc.BackupFiles.Add(new FbBackupFile(backupPathFilenameAndExtension, 2048));
      backupSvc.Verbose = true;
      backupSvc.Options = FbBackupFlags.IgnoreLimbo;
      backupSvc.Execute();

    The database gets saved to the specified directory. However, the saved file is only 168 kB, while the original database is nearly 7 MB in size. What is going wrong? Thanks, Norbert

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that for a MySQL database with a table of tens of millions of records, a very big INSERT INTO statement is generated when mysqldump some_db > some_db.sql is run to back up the database (is it one INSERT statement that handles all the records?). So when reconstructing the DB using mysql some_db < some_db.sql, the CPU is hardly busy (about 1.8% usage by the mysql process; I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either. Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing the mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such long lines when restoring the DB?
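    Two knobs worth trying (a minimal sketch; the database name is a placeholder and the exact sizes are assumptions). mysqldump caps each extended INSERT at roughly --net-buffer-length bytes, so lowering that value produces shorter statements, and wrapping the import in a single transaction with per-row checks switched off usually speeds up InnoDB restores:

      # dump with shorter extended INSERT statements (~16 KB each)
      mysqldump --opt --net-buffer-length=16384 some_db > some_db.sql

      # restore inside one transaction, with checks disabled for the session
      ( echo 'SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;'
        cat some_db.sql
        echo 'COMMIT;' ) | mysql some_db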

    Read the article

  • Is copying /var/lib/mysql a good alterntive to mysqldump?

    - by kemp
    Since I'm making a full backup of my entire Debian system, I was wondering whether having a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump. Is all the information needed contained in that directory? Can single tables be imported into another MySQL instance? Can there be problems when restoring those files on a (probably slightly) different MySQL server version?
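    It can work, but only if the files are copied while the server is not writing to them, and the copy is restored onto the same MySQL version and architecture. A minimal sketch of a consistent copy, assuming a short downtime is acceptable (paths are placeholders):

      sudo /etc/init.d/mysql stop        # or: sudo service mysql stop
      sudo rsync -a /var/lib/mysql/ /backup/mysql-datadir/
      sudo /etc/init.d/mysql start

    Note that with InnoDB the data lives partly in the shared ibdata/ib_logfile files, so you generally cannot cherry-pick a single table out of such a copy the way you can re-import one table from a mysqldump file.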

    Read the article

  • Git is ignoring .git directories in subdirectories

    - by Danny
    I'm using git as a backup tool and as a 'roaming profile' for my $HOME directory between my laptop and desktop. My problem is that under $HOME I have a Development directory with multiple git projects I'm working on. Git will not let me add those subdirectories' .git folders. So to commit to these projects I have to push the changes into my $HOME git repo, pull on the laptop (where they were created and the .git dir exists), and commit there. I've read about submodules, but that's not really what I want. I just want the child .git folders to be treated like any old directory, so I can move them around and back them up. Has anyone done this, or have an idea how I would?
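    Git records a nested .git directory as a gitlink rather than tracking its contents, so one workaround is to snapshot each inner repository into an ordinary file that the outer repo can track. A minimal sketch, assuming the nested repos live directly under ~/Development (the archive names are illustrative):

      cd ~/Development
      for d in */.git; do
          repo=$(dirname "$d")
          tar czf "$repo.git-snapshot.tar.gz" -C "$repo" .git   # bundle the inner history
      done
      cd ~ && git add Development/*.git-snapshot.tar.gz && git commit -m 'snapshot nested repos'

    On the other machine, extracting a snapshot back into its project directory restores the inner repository; git bundle does the same job a bit more cleanly, at the cost of one extra command per repo.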

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly to a remote location over SFTP. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using python + paramiko, which provides this functionality, but the performance is much worse than using the native openssh/sftp binary to transfer files. Does anyone have any idea how to do this either with the native sftp client on Linux, or with some library like paramiko (but one that performs close to the native sftp client)?
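    If plain ssh access to the remote machine is available, the simplest high-throughput option is to pipe the dump straight into ssh instead of going through the sftp subsystem. A minimal sketch (host, user, database and paths are placeholders; shown with pg_dump, but any dump command that writes to stdout works):

      pg_dump -Fc mydb | ssh backup@remote.example.com 'cat > /backups/mydb.dump'

    If the transfer really must use the SFTP protocol, a curl built with libssh2 can upload from stdin to an sftp:// URL with "curl -T -" (that your curl supports this is an assumption about the build). Either way the data streams, so nothing is staged on the local disk.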

    Read the article

  • Loop Stored Procedure in VB.Net (win form)

    - by Mo
    Hello, I am trying to run a for loop for a backup system, and inside it I want to run a stored procedure in a loop. Below is the code that does not work for me. Any ideas, please?

      Dim TotalTables As Integer
      Dim i As Integer
      TotalTables = 10
      For i = 1 To TotalTables
          objDL.BackupTables(220, i, 001)  ' a method from the DL; the 3 parameters are integers
      Next

    I tried the SP on its own and it works perfectly in SQL Server.

    Read the article

  • Move database from SQL Server 2008 to 2005

    - by pencilslate
    I have a database currently on SQL Server 2008 that needs to be moved to SQL Server 2005. I would like to back up the 2008 db to a .bak file and restore it on 2005, but I couldn't find any options for this in SSMS 2008 while taking the backup. Has anyone had a similar need in the past? How did you manage it?

    Read the article
