When you record video with an iPhone 3GS and then back up using iTunes, where are those video files stored? I'm trying to retrieve some lost files.
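If it helps narrow things down, I believe the default iTunes backup folder is the one below (the same %APPDATA% path resolves correctly on both XP and Vista/7; on a Mac it would be ~/Library/Application Support/MobileSync/Backup/), and the files inside are stored under hashed names rather than their original .MOV names:
REM default iTunes backup location on Windows; each subfolder is one device backup
dir "%APPDATA%\Apple Computer\MobileSync\Backup"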
Thanks
I have a backup copy of my previous Documents and Settings folder, which only contains my original user and, within that, two directories (Favorites and Local Settings) that are visible in a cmd shell. When I try to delete them, Windows gives me this error:
If I try to delete the Documents and Settings folder, I receive this warning:
I tried doing this in a cmd shell:
attrib *.* -r -a -s -h /s
But it did not help, nor did it return any errors/warnings.
Unlocker 1.8.5 returns: No Locking handle found.
Any ideas?
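The next thing I was planning to try is taking ownership and resetting the ACLs from an elevated prompt before deleting, roughly like this (the path is just an example, and takeown/icacls assume Vista or Windows 7):
takeown /F "D:\Old\Documents and Settings" /R /D Y
icacls "D:\Old\Documents and Settings" /grant Administrators:F /T
rd /s /q "D:\Old\Documents and Settings"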
I have two SATA HDDs in my desktop PC (one for daily activity, one for storage and backup). I can use ReadyBoost with pen drives just fine, but I wonder: is there a way I could use my underutilized second HDD to participate in the caching mechanism (the same concept as having two CPU cores crunch things in parallel: have two HDDs fetch data in parallel)? Put plainly: I want to enable ReadyBoost on my separate D: drive.
I keep data on a USB drive, but I also keep a copy of all of that data on a hard disk. I like using the hard disk because it's faster and gives me a backup. What standalone tools would work to keep the files on the disk and USB drive in sync? I'd like a single command-line executable or standalone GUI app that can do the job, something I could run off the USB drive. So things like the MS Sync Tool wouldn't work.
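For what it's worth, the closest I've found so far is a plain robocopy one-liner kept on the USB drive itself, something like this (drive letters are examples; robocopy is built into Vista/7 and available separately for XP):
REM /MIR mirrors the source (and deletes extras on the target); /FFT tolerates FAT timestamp granularity
robocopy C:\Data E:\Data /MIR /FFT /R:2 /W:5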
I'm taking over IT responsibilities at a small company. Most of the servers appear to be running various releases of Fedora (file servers, backup servers, Oracle servers, etc.).
I don't have much experience with Fedora, but I was under the impression it's geared toward end-user desktops/workstations/laptops.
Is Fedora a bad choice for servers?
We need to reformat the SQL cluster disk in our SQL cluster. The drive contains the shared installation files for SQL Server as well as the databases.
My concern is how SQL Server and the cluster will react after we wipe the disk resource.
Questions:
Is there a defined procedure for this?
How should we back up and restore the disk? (A rough sketch of the kind of backup/restore I have in mind follows after these questions.)
After the reformat, how do we get the clustered SQL server back online?
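For the backup/restore part of the question, what I had in mind was a plain copy-only backup to a share off the cluster disk and a restore afterwards, along these lines (instance, database and path names are placeholders, and this only covers the databases, not the shared installation files):
sqlcmd -S SQLCLUSTER\INST1 -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'\\fileserver\backups\MyDatabase.bak' WITH COPY_ONLY, INIT"
sqlcmd -S SQLCLUSTER\INST1 -Q "RESTORE DATABASE [MyDatabase] FROM DISK = N'\\fileserver\backups\MyDatabase.bak' WITH REPLACE"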
Thanks
I recently changed my physical location and had to change my DNS server setting in Network Preferences. However, my Mac reverts back to the original DNS server IP address on each reboot, and I have to change it manually every time. How can I make my changes persist across reboots?
I am running Snow Leopard 10.6.7
UPDATE
This started to occur after I restored my entire system from a Time Machine backup.
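Right now I'm reapplying the setting by hand after each reboot with networksetup; the service name and addresses below are just examples of what I run:
networksetup -setdnsservers "Ethernet" 10.0.0.1 8.8.8.8
networksetup -getdnsservers "Ethernet"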
I have over 8GB in my "Code Library" that I maintain on a 64GB SanDisk Ultra Backup USB device.
Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives.
How can I mount the USB Thumb Drive as Fixed instead of Removable?
All suggestions welcome and greatly appreciated.
I know this is a loaded question!
What are the best ways to manage Windows (2000, XP, Vista, Win7) workstations from a centralized Linux server? I would like to replace the functionality of MS SBS Server with a Linux box. The following issues would need to be addressed (a rough Samba sketch follows the list below):
File Sharing
Authentication, Authorization, and Access Control
Software Installation
Centralized Login Script
Centralized Backup
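To make the question more concrete, the direction I've been looking at is Samba acting as an NT4-style domain controller, roughly along the lines of this untested smb.conf fragment (EXAMPLE, logon.bat and the paths are placeholders):
[global]
   workgroup = EXAMPLE
   security = user
   domain logons = yes
   domain master = yes
   logon script = logon.bat
[netlogon]
   comment = Network Logon Service
   path = /srv/samba/netlogon
   read only = yes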
There's a bucket into which some users may write their data for backup purposes.
They use s3cmd to put new files into their bucket.
I'd like to enforce a non-destruction policy on these buckets: it should be impossible for users to destroy data; they should only be able to add data.
How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let them do anything else with the bucket?
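From what I've read so far, a bucket policy can allow puts and deny deletes, but it can't make s3:PutObject conditional on the key not already existing; the usual suggestion seems to be enabling versioning so that overwrites don't destroy anything. The kind of policy I've been experimenting with looks roughly like this (the account ID, user name and bucket name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadAndList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/backup-user" },
      "Action": [ "s3:PutObject", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::example-backup-bucket",
        "arn:aws:s3:::example-backup-bucket/*"
      ]
    },
    {
      "Sid": "DenyDestructiveActions",
      "Effect": "Deny",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/backup-user" },
      "Action": [ "s3:DeleteObject", "s3:DeleteObjectVersion" ],
      "Resource": "arn:aws:s3:::example-backup-bucket/*"
    }
  ]
}
Newer s3cmd releases can apparently apply a policy file with s3cmd setpolicy; otherwise it can be pasted into the bucket's policy editor in the AWS console.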
Is there any way to untar an archive and extract only those files that are newer than a certain date, including the directory structure?
I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including directory structure) based on a date filter on the files, if possible.
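The only approach I've come up with so far is extracting into a staging directory and then copying across just the newer files, something like this (the archive name, date and destination are examples; GNU find and cp assumed):
mkdir /tmp/staging
tar -xf full-backup.tar -C /tmp/staging
# copy only files modified after the given date, preserving their relative paths
cd /tmp/staging && find . -type f -newermt '2012-01-15' -exec cp -a --parents {} /var/www/ \;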
I'm trying to clone my Windows XP installation.
If I back it up using Clonezilla and my XP machine is infected by a virus/spyware, would I also be bringing the whole mess along when I back it up?
Do I need the whole partition/whole disk if I use an external hard drive to back up to?
Would the data already on the partition that I choose be formatted?
I am using this in fstab to mount the partition at boot:
/dev/sda5 /media/virtual ntfs defaults 0 0
When I reboot, the permissions are automatically set to 777.
I want only one user, i.e. userA, to be able to read and write; all others should not see the contents of that drive.
What should I do? Anything like this:
/dev/sda5 /media/virtual ntfs userA 700 defaults 0 0
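What I'm imagining is something like the ntfs-3g line below, where 1000 would be replaced with userA's numeric UID and GID (from id userA); umask=077 should keep everyone else out:
# owner-only access to the NTFS mount; 1000 is userA's UID/GID
/dev/sda5 /media/virtual ntfs-3g uid=1000,gid=1000,umask=077 0 0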
Let's start by saying that I'm a total noob with respect to server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things to me.
Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such, these machines are critical to our company, because we do lots of releases and without a controlled environment to create them we can't ship fixes. (Currently there's no proper backup of these machines in place, because they don't hold any data as such; it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)
Therefore (and for scaling purposes) we would like to go virtual with these machines.
Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts.
Each build server (master or slave) is a fully configured Windows Server box (installs, licenses, shares in the case of the master, ...). I would now ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.
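For the physical-to-virtual step itself, the kind of thing I had in mind was running Sysinternals Disk2vhd on each build server and importing the resulting image into whatever host we pick (Hyper-V and VirtualBox can both use VHDs); the target path below is just an example:
disk2vhd * D:\p2v\buildmaster.vhd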
And here begin my questions:
Should I go for one VM per hardware box, or for a single piece of hardware running multiple VMs?
The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or does it?
Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so running more than one build node per physical machine doesn't seem to make much sense. Or does it?
With regard to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)
Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.
On Fedora 13, I tried using:
unzip -j [nameof.zip]
but this doesn't seem to maintain the folder structure of the original archive. I REALLY need to maintain this structure because the archive is a backup of all my M4As, which are being converted to MP3s. If I just convert it as is, I'll end up with a single massive directory full of MP3s, and they won't be in their respective "artist" folders.
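From the man page it looks like -j ("junk paths") is exactly what strips the directories, so I'm guessing plain unzip into a target directory should keep the structure (untested on this archive; the destination is an example):
unzip nameof.zip -d /home/me/music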
I'm looking to store some data online, but I want to encrypt the files first. Since I understand that SFTP only encrypts the data in transit, I'm wondering what program others use to encrypt their files before SFTPing them to a backup server.
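As a stopgap I've been doing symmetric GPG on each archive before the upload, roughly like this (file names are examples):
# encrypt with a passphrase (AES256) before uploading; produces backup.tar.gz.gpg
gpg --symmetric --cipher-algo AES256 backup.tar.gz
# later, to get the original back
gpg --output backup.tar.gz --decrypt backup.tar.gz.gpg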
Thanks.
Problem
Retrospect is a backup system that my organization uses, but I cannot find support for my Ubuntu 10.04 64-bit desktop.
What I have tried (but did not work)
Downloaded the Red Hat version and attempted to convert it to a .deb:
wget http://download.dantz.com/archives/Linux_Client-7_6_100.rpm
sudo alien Linux_Client-7_6_100.rpm
The Retrospect user forum has this thread, which provides an i386 .deb file for installing Retrospect.
Question
Is there a way to install this on my system?
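For completeness, this is roughly how I tried installing the converted package; the generated .deb name is whatever alien produces, and ia32-libs was my guess at the 32-bit libraries needed on 64-bit 10.04:
sudo alien --to-deb --scripts Linux_Client-7_6_100.rpm
# install whatever .deb alien produced, despite the i386 architecture
sudo dpkg -i --force-architecture ./*.deb
# 32-bit runtime libraries for 64-bit Ubuntu 10.04
sudo apt-get install ia32-libs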
Hi,
A user somehow deleted his documents from his laptop and has no backup available. How would one go about recovering these deleted files? I have zero experience with this issue.
Are there any open-source or freeware tools that I can use to attempt a recovery of these files?
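The kind of commands I've seen mentioned (but not yet tried) would be run from a Linux live CD against the laptop's disk, ideally after imaging it first so nothing gets written to it; the device names are examples:
# scan an NTFS partition for recoverable deleted files
sudo ntfsundelete /dev/sdb1 --scan
# or run PhotoRec's interactive carver against the whole disk
sudo photorec /dev/sdb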
Thanks
I want to back up my list of manually selected packages in Ubuntu, without listing packages that were installed as dependencies. For example,
dpkg --get-selections
returns a complete list of all installed packages, manually selected as well as dependencies. How can I filter out the dependencies?
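The closest I've come is diffing the full selection list against what apt considers automatically installed, roughly like this (assumes apt-mark supports showauto; on older releases aptitude's '~i !~M' search does something similar):
# installed packages minus those marked as automatically installed (dependencies)
comm -23 <(dpkg --get-selections | awk '$2 == "install" {print $1}' | sort) \
         <(apt-mark showauto | sort) > manual-packages.txt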
I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing. I would like to just move over the missing files.
I prefer moving files instead of just copying, as space is at a premium on this particular box and the files are quite large.
How can I achieve this?
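The direction I'm leaning toward is rsync, since it can skip files that already exist at the destination and delete the source copy once transferred, e.g. (paths are examples):
# copy only the files missing from the destination, then remove the transferred copies from the backup
rsync -av --ignore-existing --remove-source-files /backup/dir/ /original/dir/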
I'm trying to setup rsync to backup a remote directory to my local drive.
I cd to the directory that I want to pull the files to, then I enter:
rsync -vrtW [email protected]:~/public_html
I enter the password, then it starts running. I get all the files listed, but none of them actually transfer. What am I missing?
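I've since read that rsync given only a source just lists the files, so maybe a trailing local destination is what's missing; this is what I was going to try next (host is a placeholder):
rsync -vrtW user@remote.example.com:~/public_html/ .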
Thanks
I have a six-disk ZFS raidz1 pool and had a recent failure requiring a disk replacement. No problem normally, but this time my server hardware died before I could do the replacement (after the drive failure, and unrelated to it as far as I can tell).
I was able to get another machine from a friend to rebuild the system, but in the process of moving my drives over I had to swap their cables around a bunch until I got the right configuration where the remaining 5 good disks were seen as online. This process seems to have generated some checksum errors for the pool/raidz.
I have the 5 remaining drives set up now and a good drive installed and ready to take the place of the drive that died. However, since my pool state is FAULTED I'm unable to do the replacement.
root@zfs:~# zpool replace tank 1298243857915644462 /dev/sdb
cannot open 'tank': pool is unavailable
Is there any way to recover from this error? I would think that having 5 of the 6 drives online would be enough to rebuild the right data, but that doesn't seem to be enough now.
Here's the status log of my pool:
root@zfs:~# zpool status tank
pool: tank
state: FAULTED
status: One or more devices could not be used because the label is missing or invalid.
There are insufficient replicas for the pool to continue functioning.
action: Destroy and re-create the pool from a backup source.
see: http://zfsonlinux.org/msg/ZFS-8000-5E
scan: none requested
config:
NAME                     STATE     READ WRITE CKSUM
tank                     FAULTED      0     0     1  corrupted data
  raidz1-0               ONLINE       0     0     8
    sdd                  ONLINE       0     0     0
    sdf                  ONLINE       0     0     0
    sdh                  ONLINE       0     0     0
    1298243857915644462  UNAVAIL      0     0     0  was /dev/sdb1
    sde                  ONLINE       0     0     0
    sdg                  ONLINE       0     0     0
Update (10/31): I tried to export and re-import the array a few times over the past week and wasn't successful. First I tried:
zpool import -f -R /tank -N -o readonly=on -F tank
That produced this error immediately:
cannot import 'tank': I/O error
Destroy and re-create the pool from a backup source.
I added the '-X' option to the above command to try to make it check the transaction log. I let that run for about 48 hours before giving up because it had completely locked up my machine (I was unable to log in locally or via the network).
Now I'm trying a simple zpool import tank command and that seems to run for a while with no output. I'll leave it running overnight to see if it outputs anything.
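One more thing I'm planning to try, in case the cable shuffling just confused the sdX device names, is an import that scans by persistent IDs instead (path assumed standard for ZFS on Linux):
zpool export tank
zpool import -d /dev/disk/by-id -f tank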
I am working on a project that requires accessing the Derby database behind a CDP backup server. From the limited research I've been able to do, I have found that it is possible to access Derby databases over TCP, but I'm at a complete loss beyond that.
I'm looking to connect via PHP eventually, but first I need to know if this is at all possible with an out-of-the-box CDP server.
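To test the TCP part before touching PHP, my plan was to point Derby's ij tool at the server, assuming the Derby Network Server defaults (port 1527); the jar locations and database name below are guesses:
# start ij with the Derby client jars on the classpath
java -cp /opt/derby/lib/derbytools.jar:/opt/derby/lib/derbyclient.jar org.apache.derby.tools.ij
# then, at the ij> prompt:
#   connect 'jdbc:derby://cdp-server:1527/path/to/cdp_database';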
Answers are, as always, appreciated.
Thanks!
I have over 8GB in my "Code Library" that I maintain on a 64GB SanDisk Ultra Backup USB device.
Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot because the USB device identifies itself as a "Removable" drive and Windows 7 refuses to index removable drives.
How can I modify Windows 7 Search to index removable drives?
All suggestions welcome and greatly appreciated.
We back up a set of virtual machines to an external USB drive using rsync -a. The source directory is 145G as reported by du -sh, but the target is reporting 181G.
Both file systems are ext3 and the block size is the same, so can someone explain the discrepancy?
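To rule out sparse-file and block-allocation effects I was going to compare apparent sizes and re-run the copy with sparse handling, along these lines (GNU du assumed; paths are examples):
# compare logical file sizes rather than allocated blocks
du -sh --apparent-size /vms /mnt/usb/vms
# re-copy preserving sparse regions in the images
rsync -aS /vms/ /mnt/usb/vms/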