Search Results

Search found 5747 results on 230 pages for 'backup'.


  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database in Full Recovery Model and start taking transaction log backups. I am taking a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?

    Read the article

  • Linux Development System Layout/Configuration

    - by tom smith
    Hi. I'm looking to create a Linux-based development/test system. I'm the only one using it. It will be running a variant of RHEL/CentOS/Fedora, with a 640G drive and an external 250G drive as a kind of backup. I'm looking for thoughts/comments on the layout/config of the drive for the install/creation process. My primary goal is to be able to "backup"/restore the work product, so I'd like the OS to be separate from everything else. Thoughts/comments/pointers appreciated. Thanks

    Read the article

  • traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10. I have a local backup user called "backup". :) I would like to limit this user to a bandwidth of 1 Mbit, no matter which software wants to connect to the network. This solution doesn't work:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        tc qdisc del dev eth0 root
        tc qdisc add dev eth0 root handle 2 htb default 1
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        tc qdisc show dev eth0
        tc class show dev eth0
        tc filter show dev eth0

    Does anyone know how to do it? Thanks a lot in advance
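
    A hedged sketch of the same fw-mark approach, assuming eth0, uid 1001, and a 100 Mbit default class for everything else (adjust to the real uplink); one thing worth noting in the commands above is that the packets are marked 12 while the tc filter matches handle 50, and the fw filter handle has to equal the iptables mark:

        # mark everything the backup user sends (uid 1001 is an assumption)
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 12

        # one class for normal traffic, one 1 Mbit class for the marked traffic
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
        tc class add dev eth0 parent 1: classid 1:12 htb rate 1mbit ceil 1mbit

        # the fw filter handle (12) must match the iptables mark
        tc filter add dev eth0 parent 1: protocol ip handle 12 fw flowid 1:12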

    Read the article

  • Running cron jobs on the odd-numbered days

    - by Spacedust
    I'm currently running my MySQL backup script on every day of the week:

        0 1 * * 1 sh /root/mysql_monday.sh
        0 1 * * 2 sh /root/mysql_tuesday.sh
        0 1 * * 3 sh /root/mysql_wednesday.sh
        0 1 * * 4 sh /root/mysql_thursday.sh
        0 1 * * 5 sh /root/mysql_friday.sh
        0 1 * * 6 sh /root/mysql_saturday.sh
        0 1 * * 0 sh /root/mysql_sunday.sh

    Now I would like to keep backups for one more week, so two weeks in total, just to be safer. For example, I thought I could create one set of backup files on even days and another on the odd-numbered days. For even days I can just use:

        0 1 */2 * 1 sh /root/mysql_monday_even.sh
        0 1 */2 * 2 sh /root/mysql_tuesday_even.sh
        0 1 */2 * 3 sh /root/mysql_wednesday_even.sh
        0 1 */2 * 4 sh /root/mysql_thursday_even.sh
        0 1 */2 * 5 sh /root/mysql_friday_even.sh
        0 1 */2 * 6 sh /root/mysql_saturday_even.sh
        0 1 */2 * 0 sh /root/mysql_sunday_even.sh

    But what about the odd-numbered days?
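
    A hedged sketch of one way to handle both halves (the _odd/_even script names are placeholders): */2 in the day-of-month field actually expands to 1,3,5,..., and when both the day-of-month and day-of-week fields are restricted most cron implementations fire when either one matches, so it is often simpler to leave day-of-month as * and test the date inside the command itself (% has to be escaped as \% in a crontab):

        # Monday, odd-numbered days only (expr sidesteps the leading-zero/octal pitfall of $(( )))
        0 1 * * 1 [ $(expr $(date +\%d) \% 2) -eq 1 ] && sh /root/mysql_monday_odd.sh
        # Monday, even-numbered days only
        0 1 * * 1 [ $(expr $(date +\%d) \% 2) -eq 0 ] && sh /root/mysql_monday_even.sh

    The remaining six days follow the same pattern with their own day-of-week field.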

    Read the article

  • How to sync a folder on a remote computer to a server on a domain

    - by Pierre-Alain Vigeant
    We have a small remote office that often shares data with us. I learned that the data is shared as email attachments, but that obviously leads to versioning hell and overwriting. I am looking for a way for them to synchronize a folder directly with our main office domain controller. I personally use LiveMesh, but I would like a tool that synchronizes to our server directly, without a 3rd party hosting the data, since we already have an online backup service taking care of the offsite backup. What enterprise-class tool would let us synchronize a folder from a remote computer that is outside our domain into the file server of our domain? The synchronization has to be two-way, e.g.: someone from the remote office creates an invoice; someone from our office reviews it and makes modifications to it; the remote office needs to see the changes. Our server is on Windows 2003.

    Read the article

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista onto the computer. I forgot to copy the SyncBack logs before the clean install, and now whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of having their contents extracted to it. Also, SyncBack is not traversing the folders during the restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack. I have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts files from the RAR/ZIP files. I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)
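
    If it does come down to the do-it-yourself route mentioned at the end, a rough sketch of that traversal from Cygwin or any Linux machine (assuming unzip and unrar are installed; the mount point below is a placeholder):

        # unpack every archive in place, next to where SyncBack stored it
        SRC=/cygdrive/e/backup    # hypothetical path of the external drive
        find "$SRC" -type f -iname '*.zip' -exec sh -c 'unzip -o "$1" -d "$(dirname "$1")"' _ {} \;
        find "$SRC" -type f -iname '*.rar' -exec sh -c 'unrar x -o+ "$1" "$(dirname "$1")/"' _ {} \;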

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    Will be putting a new Windows 2008 SE Server into a single domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operational master DC as a backup DC as the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the w2k servers will be decommissioned, but until then, I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operational master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • Disk image of a Windows 2000 NTFS hard drive

    - by Federico
    Hi, I need to create a disk image from a Windows 2000, NTFS-formatted, hard drive. This image has to be used to create backup hard drives to replace the original disk in case an emergency arises. This is medical equipment, so I cannot physically disconnect the disk because I would void the warranty. The machine has a DVD R/W, Ethernet and USB 2.0 access, and we have the right to install any application we want on the Windows 2000 system. 1) Is there any way to do this without installing any new software on the Windows 2000 system, so that it is as non-invasive as possible? 2) If we have to install software to do the backup, which software do you recommend? Any hint will be greatly appreciated. Thanks in advance, Federico

    Read the article

  • No free disk space ;[

    - by skomak
    Hi, I have a weird situation because the Linux df command says that there is no free disk space:

        [root@backup cache]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3              72G   70G     0 100% /
        /dev/sda1             190M   11M  170M   7% /boot
        tmpfs                 248M     0  248M   0% /dev/shm

    but du -sh /* says:

        [root@backup cache]# du -sh /*
        4.0K    /bacula-restores
        7.4M    /bin
        5.4M    /boot
        3.6T    /data
        116K    /dev
        55M     /etc
        204K    /home
        76M     /lib
        16K     /lost+found
        12K     /media
        0       /misc
        16K     /mnt
        8.0K    /mount
        0       /net
        8.0K    /opt
        0       /proc
        2.3G    /root
        32M     /sbin
        8.0K    /selinux
        168K    /share
        8.0K    /srv
        0       /sys
        361M    /test
        20K     /tmp
        3.2G    /usr
        1.5G    /var

    Could you tell me where the problem is? Where is my space? I can't figure it out :(
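
    Two common causes worth ruling out (a hedged suggestion, not a diagnosis of this particular box): space held by files that were deleted while some process still has them open, and data that was written into a directory before another filesystem was mounted over it. A quick sketch:

        # open files whose link count is zero, i.e. deleted but still held open (often a log or backup process)
        lsof +L1

        # look underneath mount points by bind-mounting / on an unused directory and re-checking
        mount --bind / /mnt
        du -sh /mnt/data
        umount /mnt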

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file; however, when I log into the destination server to check the file, the timestamp and file size haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server. I'm running Ubuntu, and this is happening on two servers: one running Cygwin SSH and one Fedora Core 3. Anyone have any idea why this is happening? I thought scp would simply overwrite existing files. Thanks
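
    As a hedged workaround rather than an explanation, a checksum-forced rsync copy both shows whether the two copies really differ and replaces the destination file if they do (assuming rsync is installed on both ends):

        # -c compares by checksum instead of size/mtime; -n is a dry run that just lists what would change
        rsync -avcn /tmp/backup.tar.gz hostname:/home/user/
        rsync -avc  /tmp/backup.tar.gz hostname:/home/user/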

    Read the article

  • How can I move mysites to a new location

    - by Bob
    I recently restored my content and was instructed to create mysites in a different location than was originally used. Now I have several users' mysites in /personal. The new desired location is /mysites. From what I found in the documentation, I should back them up and restore them to the new location. Here's what I've done:

    Back up the individual site collection for a user's mysite:

        stsadm -o backup -url "https://myUrl/personal/john_smith" -filename johnsmith.bkup

    Restore the individual site collection for the user's mysite:

        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite

    The result of this, and the problem, is that when I enumerate sites I end up with this:

        <Site Url="https://myUrl/mysites" Owner="domainname\john.smith" ContentDatabase="WSS_Content_MySites"
              StorageUsedMB="1.6" StorageWarningMB="90000" StorageMaxMB="100000" />

    It leaves off the username part of the URL, and if I restore more than one they try to overwrite each other.

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was:

        nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &

    I've also tried including --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like:

        http://www.example.org/ht/d/sp/i/17770
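
    Two things worth checking (a hedged observation, not a definitive fix): the unquoted -R *calendar* pattern can be expanded by the shell before wget ever sees it, and wget only rewrites links after the entire crawl has finished, so a mirror that is interrupted or still running will not have any links converted. A sketch with the pattern quoted:

        nohup wget --mirror --convert-links --html-extension -l 10 \
             -P afscSnapshot -R '*calendar*' -o wget.log http://www.example.org &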

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to back up any more (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) is capable of this level of compression? I don't care about the time; I have a quad core, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed because of lack of space. Any ideas?
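
    Whether 284 GiB can squeeze into 217 GiB depends entirely on the data (already-compressed photos, music and video will barely shrink at all), but for the multi-core part, a hedged sketch assuming the spare drive is mounted at /mnt/backup:

        # xz 5.2+ can spread compression across all cores with -T0
        tar cf - /home | xz -T0 -6 > /mnt/backup/home.tar.xz

        # or pigz (parallel gzip), which is faster but compresses less tightly
        tar cf - /home | pigz -p 4 > /mnt/backup/home.tar.gz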

    Read the article

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180 GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites down into their own site collections; however, this will likely still leave me with a 100 GB DB to deal with. Whilst this isn't entirely problematic for SQL, it does mean that backups/restores take far too long. My idea is to split each of the databases into 30 GB files and then import the content into them, which should distribute the content across the filegroups, making it much easier/faster to back up and restore. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to learn about other methods of managing the size of databases.

    Read the article

  • Hard drive in the freezer: ever work for you?

    - by Stefan Thyberg
    Once upon a time, my little 10 GB drive in my webserver failed and of course I had no backup, teaching me to immediately set up an automatic backup job afterwards. Anyhow, this drive refused to start and as a last-ditch effort I put it in a plastic bag and put it in the freezer overnight, since I had heard somewhere that it might work and I really didn't have any other options. The next day I take it out, immediately plug it in outside the case and lo and behold, the drive works long enough for me to copy my data off it. Have you ever had a similar experience with this method?

    Read the article

  • Outlook 2010 - Export of an Exchange OST to PST creates files with different sizes each time

    - by Jiri Pik
    This is a most weird issue. I have a couple of Exchange OST mailboxes, and just for security, I am exporting them using File / Import / Export to a file / Export to PST file. If I run the export consecutively, it always creates files with different file sizes, WITH NO ERROR OR WARNING that something went wrong. The files should be the same size, since each run starts right after the previous backup finished. I found out that if the file size is substantially lower, a reboot and another backup can fix it. What's your insight into this problem? What could cause the files to have different sizes, and what could explain the absence of any warning? I suspect some Windows Search issue, as sometimes the backup fails with an error dialog stating that Windows Search terminated the export.

    Read the article

  • Using AutoMySQLBackup on Rackspace Cloud

    - by xref
    Since Rackspace Cloud only allows FTP access, it makes using AutoMySQLBackup a little trickier, and while it is at least creating DB dumps, I get errors in the backup log:

        ###### WARNING ######
        Errors reported during AutoMySQLBackup execution.. Backup failed
        Error log below..
        .../backups/automysqlbackup: line 1791: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1855: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 803: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1972: /usr/bin/du: Permission denied

    Since files are being created, I'm assuming the failing find command has to do with actually rotating out and deleting the old backups? Line 803:

        find "${CONFIG_backup_dir}/${subfolder}${subsubfolder}" -mtime +"${rotation}" -type f -exec rm {} \;

    Any ideas for alternatives?
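
    If the rotation step is all that is failing, one alternative (a rough sketch only; the path and the retention count of 14 dumps are assumptions) is to sidestep find entirely and prune by modification time with ls:

        # keep the 14 newest dumps in the daily folder, delete the rest
        # (assumes the dump file names contain no spaces or newlines)
        ls -1t /path/to/backups/daily/*.sql.gz | tail -n +15 | xargs -r rm -f --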

    Read the article

  • How to go about scheduling a task in Windows 7 to change the wireless connection

    - by Skindeep2366
    This may or may not be something that can be done. I cannot find anything on the wireless connection manager built into Windows 7, let alone methods for passing parameters to it. The problem is as follows: I have 2 wireless routers. One provides internet access; the other provides the sole access to the local network. Every day at 4am the main system creates a backup in 2 locations. One is an external USB drive, the other is a location on the network. This is all fine if someone remembers to change over to the local-network router before leaving. But if it is forgotten, the roof will collapse, the walls will burn, and I will be... well, you get the idea. Solution: there is already a custom event that fires an automated backup program at 4am every day. I need some way to force the wireless adapter to use the correct connection at, say, 3:58am every day. Any ideas?
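
    One hedged approach, assuming both routers already have saved wireless profiles on that machine (the profile and task names below are made up): netsh can switch profiles from the command line, and Task Scheduler can fire it a couple of minutes before the 4am job.

        rem connect to the backup-LAN profile (run this once by hand first to confirm the profile name)
        netsh wlan connect name=LocalBackupRouter

        rem register it to run every day at 03:58
        schtasks /create /tn SwitchToBackupWifi /tr "netsh wlan connect name=LocalBackupRouter" /sc daily /st 03:58

    A second task after the backup window could switch back to the internet router the same way.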

    Read the article

  • P2V using Acronis True Image Home 10 and Windows 7

    - by Anthony
    I have a full system image made with Acronis True Image Home 10 and want to run it as a virtual machine on Windows 7 Professional. I have created a virtual machine, but Windows Virtual PC doesn't allow access to a USB external hard disk when booting from the Acronis recovery CD. I've copied the backup onto the host machine and I can access it via the network using the Acronis boot CD, but I'm wondering if there is an easier way. Does any other free virtual machine software support USB devices during boot (i.e. so that I can restore a backup image from the USB hard disk directly)?

    Read the article

  • Can't login to SQL Server after moving machine to different office/domain

    - by Dan
    Our company has just been bought, and over the weekend I brought up the last few machines to plug into their network (they are under a different Windows domain). The last machine is our Vault system, and its SQL Server was using Windows Authentication. I have plugged it into their network and it's working fine, but I cannot connect to SQL Server with Management Studio and, I fear, the backup jobs won't be working either. When I try to log in under Windows Auth, it shows the user name "NEWDOMAIN\Administrator" (greyed out) and then presents a "login failed" message with error code 18456. Can anyone help me with this, or will I just have to reinstall SQL Server and Vault and restore the backup I took before the move?

    Read the article

  • Best way to compare (diff) a full directory structure?

    - by Adam Matan
    Hi, what's the best way to compare directory structures? I have a backup utility which uses rsync. I want to tell the exact differences (in terms of file sizes and last-changed dates) between the source and the backup. Something like:

        Local file                      Remote file                     Compare
        /home/udi/1.txt (date)(size)    /home/udi/1.txt (date)(size)    EQUAL
        /home/udi/2.txt (date)(size)    /home/udi/2.txt (date)(size)    DIFFERENT

    Of course, the tool can be ready-made or an idea for a Python script. Many thanks! Udi
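
    Since the backup is already made with rsync, a hedged way to get roughly that listing without extra tooling is an itemized dry run (the host and backup path below are placeholders):

        # -n = dry run, -i = itemize each difference, -a = compare times/permissions, -c = also compare checksums
        rsync -acni --delete /home/udi/ backup-host:/backups/udi/ > differences.txt

    In the itemized output, lines beginning with >f are files that differ, and *deleting marks files that exist only in the backup.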

    Read the article

  • MySQL: Auto-increment value: 0 is smaller than max used value: xx

    - by Rhodri
    Increasingly I'm getting tables that need to be repaired, with the returned message being:

        Auto-increment value: 0 is smaller than max used value: xx

    This has happened on tables with 200 rows and tables with ~3 million rows, but so far the same few tables have had the problem. I'm running MySQL 5.0.22. The repairs are run by a script which checks every minute whether any MySQL tables need repairing. I also have an automated backup of the 6 gigabyte database running every two hours, and the repairs always get triggered around the time of the backup. Any ideas?
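
    A hedged observation: if the two-hourly backup copies the MyISAM table files directly while the server is still writing to them (rather than going through mysqldump or locking/flushing the tables first), corruption that clusters around backup time is a fairly typical symptom. For checking everything in one pass, mysqlcheck may be more convenient than a per-minute repair script:

        # check every table and repair only the ones that need it
        mysqlcheck --all-databases --auto-repair --medium-check -u root -p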

    Read the article
