Search Results

Search found 329 results on 14 pages for 'tape'.


  • Tape Storage - How do I set up a tape backup system for use with my NAS

    - by John Himmelman
    I currently have a QNAP NAS with a RAID 5 config (~600 GB storage) but don't have a reliable backup solution. I've heard great things about tape backup systems (reliability, durability, etc.). How can I go about setting up a tape backup system? The tape drives seem very expensive (1k+ for a decent one, more than the price of my NAS). What are the important specs to compare and features to take into consideration? Edit: Does anyone have links to some good resources? There are a ton of articles, guides, and sites on this subject, and I'm not sure where to start.

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this were how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear-out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. The T10000D has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no-op command. Since buffered tape marks and no-op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is that tape drive buffers are meant to be just temporary data repositories, so in the event of a power loss there could be data loss in certain environments for the files that resided in the buffer.
Fortunately, we do have best practices, depending on your environment, to keep this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself. Customer Advisory Panel One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com. Photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey. - Brian Zents
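
    To make the backhitch behaviour above concrete: on a Linux host with the st driver, every close of the tape device flushes the drive's buffer and writes a filemark, so a one-archive-per-file loop forces the drive to stop (and, on a midrange drive, backhitch) after every file, while a single streaming archive keeps it moving. A rough sketch, with hypothetical paths and the non-rewinding device /dev/nst0:

        # "Yell 'Got it!' after every item": one archive, and one buffer flush
        # plus filemark, per file - the drive stops after each one.
        for f in /data/smallfiles/*; do
            tar -cf /dev/nst0 "$f"
        done

        # "Shop the whole aisle, check out once": one streaming archive,
        # one filemark at the end - the drive stays in motion.
        tar -cf /dev/nst0 /data/smallfiles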

    Read the article

  • What other manufacturers' tapes can I use in IBM LTO Gen 3 SAS tape drives

    - by timgws
    I have recently got my hands on an IBM 3580S3V. I was just wondering if there are any other brands that I can use in this IBM gear. LTO media are usually manufactured only by EMTEC, Imation, Fujifilm, Maxell, TDK and Sony (according to Wikipedia). I am trying to determine who OEMs the IBM tapes or, if possible, to find a list of media that IBM supports. So far in my Google studies, I have only been able to see IBM as the supported media manufacturer (seeing as they sell tapes, I assume they don't want anyone buying media from competitors!)

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D I can store and retrieve files like a charm using tar, both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than 1 LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and managed to transfer only 361M of data from the tape. What's wrong here?
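
    For what it's worth, one way to handle the first two points is to write each LV as a pair of tape files on the non-rewinding device: a small metadata file followed by the raw image, both pushed through mbuffer so the drive can stream. This is only a sketch - the metadata path, the buffer size, and the assumption that mbuffer's -s option sets the tape block size are all mine, not tested against your setup:

        TAPE=/dev/nst0            # non-rewinding device, so tape files accumulate
        BS=64k                    # block size; restores must use the same value

        mt -f "$TAPE" rewind
        for LV in /dev/VG/LV1 /dev/VG/LV2; do
            NAME=$(basename "$LV")
            # tape file N: XML metadata, tape file N+1: raw LV image
            mbuffer -m 256M -p 80 -s "$BS" -i "/backup/meta/${NAME}.xml" -o "$TAPE"
            dd if="$LV" bs="$BS" | mbuffer -m 256M -p 80 -s "$BS" -o "$TAPE"
        done
        mt -f "$TAPE" rewind

        # Restore the second LV's image (tape files count from 0, so it is file 3):
        #   mt -f /dev/nst0 fsf 3
        #   dd if=/dev/nst0 of=/dev/VG/LV2 bs=64k

    Note that a buffer only helps if the LV can be read at least as fast as the drive's minimum streaming speed; if the source is the bottleneck, the drive will shoe-shine no matter how the data is fed to it.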

    Read the article

  • Tape vs. Disk: Tape Refuses to be Evicted

    Like a tenant who refuses to be evicted from an area earmarked for redevelopment, tape is alive and kicking. When it comes to long-term backup retention and archiving, it is holding its own against dedupe — especially among large enterprises.

    Read the article

  • How can I make an actual compact cassette "tape" mix-tape from iTunes?

    - by MikeN
    I want to make an actual compact cassette mix-tape as a gift for someone. I use iTunes to manage all of my music. So a few questions: If I gather a bunch of songs on a playlist for sides A/B of the tape, how can I ensure that the volume for all songs is the same as it plays on the tape? I was thinking of finding an old compact cassette recorder and feeding the stereo line output of my Mac into the cassette recorder's microphone input. Is that a good way to record onto the actual tape? How long is each side of a compact cassette? Is there a default speed the tape plays at? Let's say I want to mesh some songs together so that they completely fill up one side of a tape (cutting 10 seconds off the end of one song or the beginning of another); what's the best way to do that?

    Read the article

  • To delete a tape from the ACSLS library

    - by Senthil Kumar
    Hi, can anyone out there help me with the issue below? There was a stuck tape in the drive, and the stuck tape was removed, but I need to logically delete the tape entry from NetBackup so that the same media can be inserted back for operations. NetBackup thinks that the tape is still in that location, so the entry should be removed so that NetBackup no longer believes the tape is in that location and the same tape can be taken in through inventory. The NetBackup version used is 5.1. Is there any command to delete this entry? This is a cluster-based environment (Active/Passive), and we use an ACSLS library (physical) as well as a Switch SN6000 (logical). Kindly help me out; when we tried to delete the media from the GUI it said: Could not delete - Cannot delete assigned volume (92).
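
    Status 92 generally means the media is still assigned to backup images, so NetBackup refuses to delete it until it is unassigned. A rough outline using the standard NetBackup 5.x command-line tools is below (run on the master server; ABC123 is a placeholder media ID, paths assume a default UNIX install, and expiring the media also expires the images on it, so confirm that is acceptable first):

        # Check the current state of the volume (pool, assignment, robot/slot)
        /usr/openv/volmgr/bin/vmquery -m ABC123

        # Unassign the media by expiring it
        /usr/openv/netbackup/bin/admincmd/bpexpdate -m ABC123 -d 0 -force

        # Delete the volume record, then re-run the robot inventory so the
        # tape can be taken back in cleanly through the ACSLS library
        /usr/openv/volmgr/bin/vmdelete -m ABC123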

    Read the article

  • Internal LTO tape drive becomes hot

    - by claasz
    We use an internal LTO3 tape drive (HP Ultrium 920) in a PC (no particular server hardware) running Linux. The tape drive becomes quite hot - I don't have the exact temperature, but you can touch it for a second or two before it hurts ;-) This happens when the drive has nothing to do (during reading/writing, it might become even hotter; I haven't checked that). Besides that, the system is working fine. Now I'm wondering: Why does the drive become so hot? Is this something I need to care about? Is there something like a 'standby' mode for the drive? (I think it should not consume that much energy when it is not being used.)

    Read the article

  • Is there still a place for tape storage?

    - by Jon Ericson
    We've backed up our data on LTO tapes for years and it's a real comfort to know we have everything on tape. A sister project and one of our data providers have both moved to 100% disk storage because the cost of disk has dropped so much. When we propose systems to potential customers these days we tend to downplay or not mention our use of tape systems for data storage since it might seem outdated. I feel more comfortable with having data saved in two separate formats: disks and tape. In addition, once data is securely written to tape, I feel (perhaps naively) that it's been permanently saved. Not having to rely on a RAID controller to be able to read back data is another plus for me. Do you see a place for tape backup these days?

    Read the article

  • HP Storageworks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error" The machine seems to recognise the drive (HP Storageworks Ultrium 448) and that there's a tape in it and "mt status" seems to work... "mt -f /dev/st0 rewind" or "erase" throw no errors... root@stor001:/# mt status SCSI 2 tape drive: File number=0, block number=0, partition=0. Tape block size 0 bytes. Density code 0x42 (LTO-2). Soft error count since last status=0 General status bits on (41010000): BOT ONLINE IM_REP_EN root@stor001:/# cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02 Type: CD-ROM ANSI SCSI revision: 05 Host: scsi2 Channel: 00 Id: 03 Lun: 00 Vendor: HP Model: Ultrium 2-SCSI Rev: S65D Type: Sequential-Access ANSI SCSI revision: 03 "tell" does, however: root@stor001:/# mt -f /dev/st0 tell /dev/st0: Input/output error Based on a forum post I found, I tried: root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
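
    A couple of diagnostic steps that may help narrow this down (a guess, not a definitive fix): the kernel log usually records the SCSI sense data behind a bare Input/output error, and tar's default 10 KiB record size can upset a drive that is in fixed-block mode. Sketch below; the /dev/sg number is an example and tapeinfo comes from the mtx package:

        # The kernel log often shows the sense key behind the EIO
        dmesg | tail -n 20

        # Query the drive directly: TapeAlert flags, block limits, density
        tapeinfo -f /dev/sg2

        # Force variable block mode and retry with an explicit record size
        mt -f /dev/st0 setblk 0
        tar -b 128 -cf /dev/st0 /etc      # -b 128 = 64 KiB records, as a test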

    Read the article

  • Directly reading an LTO tape drive

    - by John
    On our server (Windows Server 2003), is it possible to directly read our LTO 4 tape drive and copy the entire ntbackup-created bkf file on it to an external hard disk? (Is the backup even stored on the tape as a bkf file? I'm going off how it worked when we only used external USB HDs.)

    Read the article

  • CA ArcServe r11.1 - have to switch Tape Drive Offline then Online to finish backup

    - by Richard
    I'll keep it brief: I have an HP Ultrium 1 in a server currently running CA ArcServe r11.1. I have 5 daily backup tapes, each of which is new. 3 of the 5 work fine without intervention, but 2 of them stop at varying points through the backup asking for a new tape, even though that tape is not full. The way I have found around this is to switch the tape drive offline for 10 minutes and then switch it back online, whilst the backup is still running. Has anyone ever seen this before? If so, any ideas on how to permanently fix this? If all else fails, just some pointers in the right direction would help. Thanks

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, a few preconceived notions have built up around it that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few use-case examples of when LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk” (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011” - Brian Zents
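
    In practice, using LTFS-Open Edition on a Linux host comes down to formatting the cartridge once and then mounting it like any other filesystem. A rough sketch (the device name and mount point are examples, and the exact options vary between LTFS releases):

        # One-time: format the cartridge with the two LTFS partitions
        # (index/metadata partition plus data partition)
        mkltfs -d /dev/sg3

        # Mount the cartridge through FUSE and use ordinary POSIX tools
        mkdir -p /mnt/ltfs
        ltfs -o devname=/dev/sg3 /mnt/ltfs

        ls -l /mnt/ltfs                  # listing is served from the index partition
        cp /data/scene042/*.dpx /mnt/ltfs/

        umount /mnt/ltfs                 # the index is written back to tape on unmount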

    Read the article

  • Enterprise Tape Backup solutions

    - by Tom O'Connor
    I'm currently attempting to re-architect a backup solution where I'm working. We've got 2 NAS devices, one in the office, one in the datacentre. The servers in the DC back up to the DC NAS, which is then replicated to the Office NAS. The office NAS exports shares as CIFS and NFS; this bit is fine. At some point I'll have to expand our storage capacity; currently we've got about 1.4 TB of storage space, which is about 96% full. Previously, the tape backup was a script that ran tar a few times and squirted data onto a tape. It worked, but was by no means a perfect solution. Restores are a bit of a pest, and adding new data to the backup requires editing the script as root. It's just all a bit non-ideal. I've been evaluating a number of "enterprise-ready" backup solutions, such as Yosemite Backup from Barracuda, Acronis Backup/Restore, and something from Arkeia. In the process of evaluating these, I've found 2 big problems: (1) not all of them allow backup of mounted devices (such as an NFS-mounted NAS), and (2) many of these applications don't like our tape device. For the most part, (1) is essential. Our NAS has a feeble processor and can't run applications like backup agents. I suspect that the biggest problem is the tape device, which is an HP C7438A DAT72 connected via USB. Questions: Has anyone else got a USB DAT72 device working with similar software? Is there a better way to back up data from an "appliance" NAS device on which you can't run an agent? Would I be totally out of my mind to specify a cheap HP or Dell server with a couple of 1TB hard disks, and a SAS card to then talk to an HP Ultrium (or similar) device? The biggest drawback to this would be cost (400ish for the server, 200 for the SAS connectivity and 1700 for an LTO4 device) Notes: I'd love to be able to say that I'd get rid of tapes entirely, and use some form of hard disk backup. In a previous job, we had LaCie USB drives, which were decidedly unreliable.

    Read the article

  • Backing up Oracle to tape

    - by andreas
    Hi folks, our Oracle database has grown very large as of late (~400-500 GB), and backing up to the filesystem does not scale for us anymore. We are looking at using RMAN to back up to tape (directly, not to the filesystem and then to tape). Can anyone shed some light on this, please?
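
    RMAN talks to tape through the SBT (media management) interface, so the database host needs your backup vendor's media management library installed (Oracle Secure Backup, a NetBackup or TSM agent for Oracle, etc.); RMAN itself just targets the SBT device. A minimal sketch follows - the PARMS string is entirely vendor-specific and the library path is only a placeholder:

        # PARMS depends on the media manager; SBT_LIBRARY below is an example path only
        printf '%s\n' \
            "CONFIGURE DEFAULT DEVICE TYPE TO SBT;" \
            "CONFIGURE DEVICE TYPE SBT PARALLELISM 2;" \
            "CONFIGURE CHANNEL DEVICE TYPE SBT PARMS 'SBT_LIBRARY=/path/to/libobk.so';" \
            "BACKUP DATABASE PLUS ARCHIVELOG;" \
            | rman target /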

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • Convert a cassette tape recording to digital format

    - by Electric Automation
    Has anyone been successful with transferring audio cassette tape recordings to a digital format? I would like to preserve old cassette tape recordings of my grandparents in some digital format: MP3, WAV, etc... The quality of the tapes is mediocre. I think I can handle the quality restoration, but getting the audio from tape to digital is my question. Below is a list of the hardware that I can work with: Cassette Deck: I have a Technics stereo cassette deck model RS-B12. It has separate left and right IN and OUT RCA-type jacks on the back. In the front it has a headphone phono jack, plus left and right mic input phono jacks. On the computer side: I have a Windows Vista PC with no additional software other than what came with the machine from Costco. No sound editing software that I can see. There is no sound card on the PC. On the front panel there is a mini-phono mic input jack and there are several different types of in/out mini-phono jacks on the back. In addition, USB and FireWire. I also have access to a new (2009) iMac with a mini-phono input jack for a powered mic or other audio source, and GarageBand, which came with the computer. In addition, USB and FireWire. What are my options for getting these cassette recordings into a digital format? What's the best format? What sort of wires would I need, and will I want to utilize USB or FireWire, or can I simply use the audio inputs on the PC (or Mac) to receive the audio stream?
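
    If you end up preferring the command line over GarageBand for the capture step, sox can record from whatever line-in the OS exposes and do a first normalisation pass; it installs on the Mac via MacPorts or Homebrew, and lame handles the MP3 encode. This is only a sketch - the 46-minute duration is a guess at one side of a C90, and input selection happens in the OS sound settings, not here:

        # Record one side to 16-bit/44.1 kHz stereo WAV (keep this file as the master)
        rec -c 2 -r 44100 -b 16 side_a.wav trim 0 46:00

        # Even out the level a little and encode an MP3 copy for everyday listening
        sox side_a.wav side_a_norm.wav gain -n -3
        lame -V2 side_a_norm.wav side_a.mp3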

    Read the article

  • Convert a cassette tape recording to digital format

    - by Optimal Solutions
    Has anyone been successful with transferring audio cassette tape recordings to a digital format? I would like to preserve old cassette tape recordings of my grandparents in some digital format: MP3, WAV, etc... The quality of the tapes is mediocre. I think I can handle the quality restoration, but getting the audio from tape to digital is my question. Below is a list of the hardware that I can work with: Cassette Deck: I have a Technics stereo cassette deck model RS-B12. It has separate left and right IN and OUT RCA-type jacks on the back. In the front it has a headphone phono jack, plus left and right mic input phono jacks. On the computer side: I have a Windows Vista PC with no additional software other than what came with the machine from Costco. No sound editing software that I can see. There is no sound card on the PC. On the front panel there is a mini-phono mic input jack and there are several different types of in/out mini-phono jacks on the back. In addition, USB and FireWire. I also have access to a new (2009) iMac with a mini-phono input jack for a powered mic or other audio source, and GarageBand, which came with the computer. In addition, USB and FireWire. What are my options for getting these cassette recordings into a digital format? What's the best format? What sort of wires would I need, and will I want to utilize USB or FireWire, or can I simply use the audio inputs on the PC (or Mac) to receive the audio stream?

    Read the article

  • Symantec Backup Exec 12 Tape Alert.

    - by Adam
    Every day, I run 5 backups using 6 tapes. Each day, when I run the inventory, I get a Tape Alert error. This occurs every day, on the same job. The error is: Job 'Inventory Daily ********' has reported Multiple Tape Alerts on server '******'. Please refer to job log *****.xml for more information. When I look at the job log, the Utility Job Information says: The device has reported the following TapeAlert diagnostic information: Information - The library has been manually turned offline and is unavailable for use. Robotic library for device: PV132T 500. Warning - Library security has been compromised. Robotic library for device: PV132T 500. Critical - The library has detected an inconsistency in its inventory. 1. Redo the library inventory to correct the inconsistency. 2. Restart the operation. Check the application's user manual or hardware user manual for specific instructions on redoing the library inventory. Robotic library for device: PV132T 500. When I run the same inventory a second time, the job completes successfully. I am using Symantec Backup Exec 12 running on Windows Server 2008, with a Dell PowerVault 132T 500 tape library. If anyone can help me resolve this problem, it would be very much appreciated.

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to do tech talks and update them about the technology boom going on around tape. I had the privilege of attending some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10: I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquired StorageTek and Oracle acquired Sun, the brand lives on: all the Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html II. Disk information density limitations: Disk technology struggles with information density. You haven't seen the disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000 m of tape. There you go, you have got your physical space and don't need to stuff all that data crammed together. You can write in a reliable pattern, and have space to grow too. III. Oracle has a market share of 62% worldwide in recording head manufacturing. That's right. If you are running LTO drives, there's a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide. IV. You can store 1 exabyte of data in a single tape library. Yes, an exabyte. That is 1,000 petabytes. Or a million terabytes. A thousand million gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10 TB of data (compressed). You can stack 10 of these SL8500s together. Boom. 1,000,000 TB. (N.B.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.) V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only had disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again. VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C. With awesome numbers. The Cartridge: native 5 TB capacity, 10 TB with compression; over a kilometer of tape within the cartridge, and it's locked when unmounted, no rattling of your data; the metal-particle data layer has been replaced with BaFe (barium ferrite) - metal particles lose around 7% of their magnetism within 30 days, BaFe does not (yes, we employ solid-state physicists doing R&D on demagnetisation in our labs); and it can be partitioned, giving storage tiering within the cartridge! The Drive: a 2 GB cache; encryption implemented in hardware with no performance hit; a 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput; tape leading that never touches the data side of the tape, protecting your data physically too; data integrity checking (CRC recalculation) on tape within the drive, without having to read it back to the server; reordering of data from tape order, delivering it back in application order; and writing 32 tracks at once, reading them back for a CRC check at once. VII.
You only use 20% of your data on a regular basis. The remaining 80% is just lying around for years. On continuously spinning disks. Doubly consuming energy (power + cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand, and transparently to clients, between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10 TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data-carrying tape cartridges - built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention. IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, and analyze data flow, health, and performance of drives and libraries; see: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is a T10000C. (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:) X++. The TallBots, the robots that move cartridges from tape slots to drives in the StorageTek library, are cableless. Cables, belts, and chains running to moving parts in a library cause maintenance downtime. So StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they are running on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data. (X++)++: Tape beats SSDs and disks. In terms of throughput (252 MB/s); in terms of TCO: disks cause around 290x more power and cooling; in terms of capacity: 10 TB on a single medium, and soon more. So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in the growing datacenters? Do you find EMC's 180° turn in tape attitude interesting, but appreciate it at the same time? With all that, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old, running pretty much 24/7 - an HP server of some description. We have an Ultrium 448 cycling LTO2 400 GB tapes daily and incrementally backing up approximately 100 GB worth of data (20 GB C:\ and system state, 40 GB Exchange, 40 GB database for some crap marketing software) on BackupExec 10D. As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time, and we haven't had a proper backup for the entire 5 months - so if the server explodes tomorrow, 7 years' worth of data will just cease to exist. I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to a USB 2.0 external drive. It was excruciatingly slow. In fact it was so slow it took 40 hours and it still wasn't finished. I ended up cancelling it and reconfiguring the selections again to reduce file size. This, however, isn't a permanent solution. I concluded that the IO error was either from a faulty tape drive (which has a tape stuck in there right now and not coming out) or from a dying SCSI controller. Neither of them is good news and both are extremely expensive to fix. I'm operating on an extremely low budget so have been looking at outsourcing the backups. A company in Sydney (where I'm located) offers incremental online backups via a NAS. It costs almost double the price of a new tape drive, but offers monthly repayments, which will let us get through times when cash flow is minimal. It seems like a sweet deal but it is still a little bit pricey. So I'm looking for a cheaper, yet reliable solution. Maybe some in-house NAS or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future and I must be able to restore it to another server with different hardware.

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO 2 tapes, an LTO 2 drive (HP Ultrium 460e), and an autoloader with an LTO 3 drive in it (Tandberg T24 autoloader, with an HP drive). Performance of the LTO 2 tapes in the LTO 2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO 2 tapes in the LTO 3 drive in the autoloader is a different story. HP L&TT tells me that the tapes can be read at 82 MB/s and written at 49 MB/s; the drop in write speed seems unusual, but it's not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec though, I get about 331 MB/min backup speed and 205 MB/min verify speeds, which is not only much slower, but also much slower for reads than for writes. Notes: The comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time. The tape and drives are error-free (according to HP L&TT and Backup Exec). The SCSI card is a U160 card, which is not normally recommended for LTO 3, but we're not writing to LTO 3 tapes at LTO 3 speeds, and a U320 SCSI card is not available to me at the moment. As I'm scratching my head to determine the reason for the performance drop, my first question is: While LTO drives can write to previous-generation LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • Tape vs. SSD backups regarding long-term storage reliability

    - by user66131
    My question is very specifically about solid-state drives, not regular hard drives. I would like to put in place a grandfather-father-son backup scheme, with SSDs used for the grandfather and father portions; the yearly grandfather would be locked in a safe offsite for maybe 5-10 years. Can I expect that after this period of time the data will be preserved as well as it would be on a tape?

    Read the article
