Search Results

Search found 5747 results on 230 pages for 'backup'.


  • Mounting a vSphere 4.0 file system in Ubuntu Linux

    - by sravan
    Hi all, I am using vSphere 4.0 for my project work, which is database-based and needed 4-5 servers; virtualisation seemed like a good way to keep all five servers running at the same time. It worked well until a few days ago. Yesterday it suddenly crashed, and I have no idea of the reason; today it did not even boot up. Now I need to recover the data from that system, so I pulled the hard drive from the machine and tried to mount it on a local Linux machine, but I was not successful: the disk was not even recognized by the Linux machine. Can someone please tell me how to mount it and get the required data? Thank you all.
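
    A note on why the disk isn't recognized: vSphere formats its datastores with VMFS, which mainstream Linux kernels cannot read natively. A minimal recovery sketch, assuming the drive shows up as /dev/sdb and the distribution packages vmfs-tools (the device name and partition number are placeholders):

        # userspace VMFS driver (Debian/Ubuntu package name)
        sudo apt-get install vmfs-tools

        # locate the VMFS partition (typically the largest one)
        sudo fdisk -l /dev/sdb

        # mount it read-only over FUSE and copy the VM files off
        sudo mkdir -p /mnt/vmfs
        sudo vmfs-fuse /dev/sdb3 /mnt/vmfs
        cp -r /mnt/vmfs/* /path/to/recovery/

    The copied .vmdk files can then be attached to another VM, or converted with qemu-img if the data needs to be read directly.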

  • ext4: error loading journal

    - by cloudyOutside
    I have an external hard drive with two partitions: a small FAT32 partition which is mostly empty and works fine, and a large ext4 partition with tons of data, most of which isn't backed up. The ext4 partition is visible but can't be mounted; I get an "error loading journal" message. The drive is a Western Digital Caviar Blue 500GB: roughly 30GB of that is the FAT32 and the rest is the ext4. The light on the enclosure (made by Cavalry) turns red when reading from the bad partition. There wasn't any warning, but coincidentally I've been thinking lately that I should get two large-capacity drives for real backups. Is there anything that can be done? I'm not even sure I have enough storage to back everything up, even if it is redeemable.
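
    For what it's worth, "error loading journal" often still allows a read-only mount that skips journal replay, which gives a chance to copy data off before attempting any repair. A sketch, assuming the ext4 partition is /dev/sdb2 (placeholder):

        # mount read-only without replaying the journal
        sudo mkdir -p /mnt/rescue
        sudo mount -t ext4 -o ro,noload /dev/sdb2 /mnt/rescue

        # only after the data is copied elsewhere, attempt a repair
        # (this writes to the disk, so it comes last)
        sudo e2fsck -v /dev/sdb2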

  • LAMP: How do I set up http://myservername.com/~user access?

    - by Travesty3
    Been trying to Google this, but I can't figure out good search terms to find any info about what I need, since I don't really know what it's called. I'm pretty much being thrown to the wolves to figure out how to set up a LAMP server. We had someone who knew how to do it, he set one up and then quit. It was set up so that when I went to "http://{myservername}.com/~travis" it showed the contents of my /home/travis/public_html folder. This worked fine, then we lost power and the server restarted (I know, battery backup, but this is a dev server in a dev building so it's OK). Now, the browser can't find that URL. I also need to know how to set this up on a new server, so instead of wasting time diagnosing this problem (probably just something dumb I did messing with settings or something), I really need to know how to set this up from scratch. Thanks for taking the time to read this and (hopefully) answer!
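
    The feature being described is Apache's per-user directories (mod_userdir). A sketch of setting it up from scratch on a Debian/Ubuntu-style LAMP server, assuming the stock apache2 layout:

        # enable the module and restart Apache
        sudo a2enmod userdir
        sudo service apache2 restart

        # the apache user needs to traverse the home directory
        chmod 711 /home/travis
        chmod 755 /home/travis/public_html

    If it worked before the power loss and stopped afterwards, it is also worth checking that Apache itself came back up (service apache2 status) before assuming the configuration is gone.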

  • Do I need to disable access to a publisher database when setting up SQL Server 2000 Transactional Replication?

    - by Kev
    I have a production database (i.e. one with constant updates) and I've configured this to be published to another server using Transactional Replication. When I configure transactional replication I've been doing the following:

    1. disable access to the source database
    2. back up the source DB, then restore it to the subscription server
    3. configure replication
    4. re-enable DB access for our apps

    The problem with this approach is scheduling in downtime: having to suspend all the various timed scheduled tasks we run and shutting down access to our various applications that are dependent on this database. Can I just configure transactional replication without disabling access to the publishing database, and the subscriber will correctly catch up? I.e. are all the DML statements queued on the publisher, and as soon as the subscriber is ready they are picked off and executed?
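
    On the core question: transactional replication does not require taking the publisher offline. With a concurrent snapshot the Snapshot Agent runs while the database stays in use, and changes made in the meantime are queued in the distribution database and replayed on the subscriber once the snapshot is applied. A hedged T-SQL sketch (the publication name is hypothetical):

        -- 'concurrent' keeps the publisher writable during snapshot generation
        EXEC sp_addpublication
            @publication = N'MyPublication',  -- hypothetical name
            @sync_method = N'concurrent',
            @repl_freq   = N'continuous';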

  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware, dedicated to hosting a MySQL server which I connect to on an internal IP. I'm trying to find out why I get the "MySQL server has gone away" error; Apache on one of my Windows machines stops because of this issue. I have been trying to fine-tune my my.cnf with the following parameters, but they did not bring the desired result:

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 0.0.0.0
        #
        # * Fine Tuning
        #
        wait_timeout = 180
        key_buffer = 384M
        max_allowed_packet = 64M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 500
        table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
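
    Two usual suspects for "server has gone away" are idle connections being dropped (wait_timeout) and oversized packets (max_allowed_packet), so a first step is confirming the running server actually picked up the edited values, and checking whether mysqld is silently restarting. A sketch, assuming shell access on the VM:

        # a my.cnf change only takes effect after a restart; verify the live values
        mysql -e "SHOW VARIABLES LIKE 'wait_timeout'"
        mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet'"

        # a low Uptime here means the server is crashing or being restarted
        mysql -e "SHOW GLOBAL STATUS LIKE 'Uptime'"
        sudo tail -n 100 /var/log/mysql/error.log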

  • How to set up a PC which can be booted from Linux AND Windows?

    - by Martin
    Our PC was running Windows XP up to now. It has become incredibly slow and I'm considering switching to Linux (Ubuntu?!) as a fresh OS. However, there are some applications we rarely use which run only on Windows, and I also want the possibility to easily go back to the old system if I find, while testing Linux, that anything is missing or not available. So the idea is to install Linux on a new (second) hard drive and run the existing Windows XP in a virtual machine (converted by Paragon Drive Backup) during the transition time. We have a lot of data on the PC, tens of GBs of photos (managed by Picasa), etc. My questions: What would be the best way to set up the new hard drive (partitions)? I assume that I cannot access the Linux data from Windows, but I could access (read/write) the Windows drives from Linux? Does anyone know good tutorials for this use case? What other things should I consider for a Windows-to-Linux transition?
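
    On the cross-OS data question: the assumption holds in one direction only. Linux reads and writes NTFS via ntfs-3g (modern Ubuntu does this automatically), while Windows cannot read ext4 without third-party tools, so shared data is often kept on an NTFS partition. A sketch of mounting the XP partition from Linux, assuming it is /dev/sda1 (placeholder):

        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g /dev/sda1 /mnt/windows   # full read/write access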

  • Does anyone know why rsync would keep sending the files over and over again?

    - by beagleguy
    I'm trying to use rsync to back up some files, about half a TB. It's now in a state where it keeps sending the same files every time it runs. For example:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    I then verify those files are copied over... then the next time it runs it does the same thing:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    Any idea why it's getting stuck on these files? I've tried wiping the whole dest directory and starting over, but no luck. Thanks,
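
    rsync can explain its own decisions: with --itemize-changes each transferred file gets a prefix showing which attribute (size, mtime, permissions, owner) triggered the transfer, which usually identifies this kind of loop. A sketch:

        # same command, as a dry run with itemized output:
        # 's' = size differs, 't' = mod-time differs, 'p' = perms, 'o' = owner
        rsync -avin /data/source/* user@host:/data/dest

        # if only timestamps differ (e.g. the destination filesystem stores them
        # coarsely), comparing by size alone may stop the re-sends
        rsync -av --size-only /data/source/* user@host:/data/dest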

  • Mount linux partition as Windows network share over internet

    - by CptEO
    I have a Linux server running RHEL 6 and two Windows servers. All servers are connected directly to the web with an external IP; they are not on a local LAN. What I would like to achieve is to set up the Linux server so that it offers a single share (the whole partition) that can be mounted as a network drive within Windows. I don't want to use any 3rd-party software to access the Linux server, because I want to use it as a backup target for Bare Metal Restore; to do so, I need to be able to access the Linux partition from within the Windows Recovery Environment, where I cannot install any 3rd-party software. The Linux server should only be accessible from given IP addresses (e.g. the 2 Windows servers). Does anyone know if the setup I would like to have is possible?
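
    Windows (including the Recovery Environment's built-in net use) speaks SMB/CIFS natively, so Samba on the RHEL box fits the "no 3rd-party software on Windows" constraint. A minimal smb.conf sketch, with placeholder IPs (203.0.113.x) standing in for the two Windows servers and /backup for the shared partition:

        [global]
            security = user
            # refuse everyone except the two Windows servers
            hosts allow = 203.0.113.10 203.0.113.11
            hosts deny = ALL

        [backup]
            path = /backup
            valid users = backupuser
            writable = yes

    From Windows it then mounts with: net use Z: \\server-ip\backup /user:backupuser. Exposing SMB directly to the internet is risky even with IP filtering, so a firewall rule restricting ports 139/445 to the same two addresses is a sensible second layer.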

  • Create a log for an encrypted tar

    - by magiza83
    I want to create an encrypted tar, but I also want a log of what tar has compressed. I'm using the following command:

        tar -cvvf - --files-from=/root/backup.cfg | openssl des3 -salt -k backuppass | dd of=/root/tmp/back.encrypted

    But I need a log of tar's listing output. I don't know how to get it, because if I use "" in the tar command the openssl result is not correct. I've also checked the tar manual hoping to find some option to write the listing to a file, but I found nothing. Any help? Thanks & regards.
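
    The detail that makes this possible: when tar writes the archive to stdout, the -v/-vv listing goes to stderr, so it can be redirected to a log file without corrupting the data stream that openssl reads. A sketch:

        tar -cvvf - --files-from=/root/backup.cfg 2>/root/backup.log \
            | openssl des3 -salt -k backuppass \
            | dd of=/root/tmp/back.encrypted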

  • netmask: command not found

    - by Ian R.
    I purchased a new server with a few IPs, so I recently modified the /etc/network/interfaces file so that my IPs can go live. While editing that file I created a backup and deleted the original file. I recreated the interfaces file using the touch command and gave it +x permissions, but now, when trying to restart the interface (/etc/network/interfaces restart) I get all sorts of errors:

        /etc/network/interfaces: line 10: iface: command not found
        /etc/network/interfaces: line 11: address: command not found
        /etc/network/interfaces: line 12: netmask: command not found
        /etc/network/interfaces: line 13: auto: command not found

    Can anyone point out what I forgot to do? Thanks.
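
    Those "command not found" errors mean the file is being run as a shell script. /etc/network/interfaces is a configuration file read by ifup/ifdown; it is never executed directly and needs no execute bit. A sketch of the usual way to apply it:

        # it is config, not a script: drop the execute bit
        sudo chmod 644 /etc/network/interfaces

        # apply the configuration
        sudo /etc/init.d/networking restart
        # or per interface:
        sudo ifdown eth0 && sudo ifup eth0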

  • Working with a copy of my Virtual Machine

    - by Gaby Reyna
    Hi there. I'm trying to make a backup/copy of my virtual machine; it's installed on a Windows 2000 Server host and I want to make some modifications/tests without changing the original one. The copy is to be used in Windows 7. What I'm trying to do is work on/modify an application that communicates with a DB; this application is hosted on the VM, and the DB too, and since I don't want to screw up the stable version, I want to know how to copy the VM to my desktop PC to experiment without worries. Now, someone told me I might have problems with the IP, because the original will have the same IP, and if I change it, it won't work properly. Is this true? If it is indeed true, any suggestions?

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database server that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups. At the moment I simply use mysqldump with a few arguments, and it's proven to be fine... until now. Obviously mysqldump is a quick-and-dirty method, but I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution.

    Succinct short version:

    - I have fairly large MySQL databases I need to back up
    - the current method using mysqldump is inefficient and slow (causing issues)
    - I'm looking into something such as mk-parallel-dump or LVM snapshots

    Any recommendations or ideas would be appreciated. Since I have to redo how we're doing things, I'd rather have it done properly / most efficiently :).
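
    One relevant split: InnoDB tables can be dumped consistently without locking, while MyISAM cannot, which is a large part of why mixed workloads outgrow plain mysqldump. A sketch of the InnoDB-friendly flags:

        # consistent snapshot for InnoDB, row-at-a-time streaming instead of buffering
        mysqldump --single-transaction --quick --all-databases > backup.sql

    For the LVM route, the key constraint is that FLUSH TABLES WITH READ LOCK must be held in a single session while lvcreate --snapshot runs, then released; tools such as mylvmbackup script exactly that sequence, so the lock lasts only an instant and the snapshot can be copied off while the live volume keeps serving traffic.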

  • Need an alerting system if my cloning script fails

    - by rahum
    I've configured a nightly rsync to mirror one server to a standby offsite backup server. The total datastore on the primary is 1.5TB. In the course of getting this working, I ran into numerous instabilities with the environment, which I seem to have sorted out, but even though it's now working, I am still nervous. This is intended to be a disaster-scenario standby server, and if disaster strikes and the standby does not have all the proper data synchronized, I'm out of a job. Thus, I want to script a system that will confirm, after each nightly sync, that the destination data matches the source. I realize that rsync does this, but if rsync doesn't complete fully (which was happening during the setup troubleshooting), I need to know. Any suggestions? I'm best with Ruby, if that is relevant for the solution.
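
    rsync reports trouble through its exit status (0 means a fully successful run), so the simplest alert is a wrapper around the nightly job; Ruby would work equally well, but a shell sketch keeps it close to cron (paths and addresses are placeholders, and `mail` assumes a configured MTA):

        #!/bin/bash
        # nightly mirror; send an alert when rsync exits non-zero
        rsync -az --delete /data/ backup@standby.example.com:/data/
        status=$?
        if [ "$status" -ne 0 ]; then
            echo "rsync exited with code $status at $(date)" \
                | mail -s "BACKUP FAILED on $(hostname)" admin@example.com
        fi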

  • Is Rsync like subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync. I want to create daily backups of my production server. Right now I run the command:

        rsync -azr /var/www/* [email protected]:/var/www

    Now let's say one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? On reading that rsync only copies differences between src and dest, I assumed rsync works like Subversion, where you commit changes to a destination and keep track of every version, with the option to check out any version at any time. Is that the way rsync works? Is it like Subversion but for an entire server? That would be great, because then it means I don't have to do full ssh copies for my nightly backups.
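
    rsync keeps no history of its own: each run simply makes the destination match the source, so there is no "version N" to retrieve. The usual way to get snapshot-style backups out of it is --link-dest, which hard-links files that didn't change against the previous snapshot, so each day costs only the changed files. A sketch (user, host, and paths are placeholders):

        # each run writes a dated snapshot, reusing unchanged files via hard links
        today=$(date +%F)
        rsync -az --delete --link-dest=/backups/latest \
            /var/www/ backup@backupserver:/backups/$today/
        ssh backup@backupserver "ln -sfn /backups/$today /backups/latest"

    For true commit-style history with diffs, tools built on this idea (e.g. rsnapshot) or an actual VCS are the closer fit.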

  • Duplicate name exists solution

    - by user978733
    I have about 70 PCs with exactly the same hardware. I decided to automate turning them on and off. I took 1 PC. Here is what I've done:

    - Changed the BIOS configuration so that the PC now wakes when I turn on the AC switch
    - Installed Windows XP and configured it so that I can turn it off remotely; changed the workgroup name to "WG1" and the PC name to "ExamPC"
    - Created an Acronis backup image of this PC

    I installed this image on several PCs and tried to test. All worked well until Windows opened. The problem is, all the tested PCs started Windows at nearly the same time, and all of them popped up the error "Duplicate name exists". I can't figure out any solution. Any suggestions?
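
    The clones all carry the master's computer name, hence the NetBIOS "duplicate name exists" collisions. The standard fix is to run Sysprep on the master immediately before imaging, so every clone goes through mini-setup and takes a unique name on first boot. A sketch for XP (Sysprep ships in \SUPPORT\TOOLS\DEPLOY.CAB on the install CD; c:\sysprep is the conventional location):

        rem on the master PC, as the last step before taking the Acronis image
        c:\sysprep\sysprep.exe -mini -reseal -quiet

    An answer file (sysprep.inf) with a ComputerName=* entry can auto-generate names instead of prompting during mini-setup.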

  • Activating Windows 7 generates error code 0xc004F061

    - by Jon
    I got a new SSD and wanted to start over with Windows 7 on that disk. I did a clean install (my mistake) on the SSD and just went past the activation part (left the key blank). Now that I have my system all set up, configured, files pulled back from backup, and ready to go, I'd like to activate Windows 7. However, I now get this error:

        The following failure occurred while trying to use the product key:
        Code: 0xC004F061
        Description: The Software Licensing Service determined that this specified product key can only be used for upgrading, not for clean installations.

    Do I really need to wipe my system again, install Windows Vista, and then do the Windows 7 upgrade in order to use my upgrade key? Is there some kind of workaround?
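
    A widely reported workaround exists for using an upgrade key on a clean install, though it is unsupported, so treat this hedged sketch accordingly (run from an elevated command prompt, then reboot and activate normally):

        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v MediaBootInstall /t REG_DWORD /d 0 /f
        slmgr /rearm

    The by-the-book alternative remains installing a qualifying previous version first and then upgrading over it.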

  • A space-efficient guest filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation, and creation speed. Since file systems are usually designed for physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, via aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general; I have a very specific goal: space-efficiency.

    [1] Like page caching uses all the free physical memory
    [2] Canonical example: online defragmentation
    [3] Canonical example: snapshotting
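
    Tangential but related: whatever filesystem is chosen, blocks that were touched once and later freed stay allocated in the virtual disk unless they are explicitly given back. Where the virtual disk honors TRIM/UNMAP, ext4's discard support attacks the same problem from the other end. A sketch (device and mount point are placeholders):

        # discard blocks as they are freed ...
        mount -o discard /dev/vda1 /data

        # ... or reclaim in batches instead
        fstrim -v /data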

  • Run disk error check on NTFS file?

    - by paulius_l
    I have a feeling that my system hard drive is dying; a benchmark kind of confirms it (one benchmark of the system drive during low activity and one of the backup drive for comparison; screenshots not reproduced here). Furthermore, there are some files which I just can't touch, because I get CRC errors, and the hard drive activity spikes to 100% with operating speeds of less than 1 MB/s while working with such files. I haven't yet tried swapping the SATA cable, as I have read this might cause such problems. Anyway, I would like to run some tests on the specific clusters where those files I am interested in are stored. I don't want to do a full chkdsk because it takes a very long time. I would like to find either a utility that runs a disk check directly on the clusters where a file resides, or a couple of utilities where one tells me the cluster locations and another checks just those locations. How do I check, and possibly fix, disk errors where the files I am interested in are stored? Edit: S.M.A.R.T. info: [screenshot not reproduced]
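
    If the goal is to test just the regions backing particular files, two building blocks may help (hedged: querycluster requires a reasonably recent Windows, and the cluster number here is hypothetical). Reading a file end-to-end forces the drive to touch exactly its sectors, so a CRC failure pinpoints the spot without a full chkdsk:

        rem which file owns a given cluster (e.g. one named in a disk-error event)?
        fsutil volume querycluster C: 123456

        rem force a full read of a suspect file without modifying anything
        copy /b "C:\path\to\suspect.file" NUL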

  • ARR servers in the Load Balancing pool automatically go from unavailable to available

    - by Chris
    I have 3 IIS web servers in an ARR web farm. When we do rolling releases, we take one server offline as a backup server and move it into an "Unavailable State" I have noticed that with ARR, servers will not stay in this state...they come back online automatically hours or days later. Does anyone know how to remedy this situation? This is very bad as the server that is down is typically not running the correct version of our code. I need to keep a server unavailable until i tell it otherwise.
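
    One plausible cause: if the farm has health checking enabled, a server that passes the health test URL gets pulled back into rotation regardless of having been marked unavailable. A hedged sketch of making the removal stick by disabling the server entry itself (farm and server names are hypothetical; assumes the standard webFarms schema in applicationHost.config):

        %windir%\system32\inetsrv\appcmd.exe set config -section:webFarms ^
            "/[name='MyFarm'].[address='web03'].enabled:false" /commit:apphost

    Re-enable it the same way with enabled:true once the release is rolled out.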

  • How to restore one contact from Address Book with Time Machine

    - by doekman
    I want to restore one contact in my Address Book with Time Machine. To do so, I select the contact in Address Book, then press the Time Machine icon in the dock, and my address book is "taken into space". However, when I browse back in time (either pressing the back arrow or selecting a time on the right), the contact details do not change, and I am sure the data has changed between those dates. Also, when I do press restore, I still get the new data, not the backup. Is this a bug, or am I doing something wrong? I'm using OS X 10.6.3 in combination with an external USB drive on an iMac.

  • Shrinking physical volumes in LVM on a Linux Guest in ESXi 5.0

    - by Stew
    The problem: a Linux guest (openSUSE 12.1) with multiple virtual disks attached. Three disks are in a logical volume, two of which are exactly 2TB. None of the disks are independent and, due to the backup software we use, they cannot be independent. When the two 2TB virtual disks are dependent, the snapshot fails, stating that the file is too large for the datastore. When I put those two disks in independent mode, snapshots work fine (the other disk is 1.8TB). I have therefore concluded that shrinking the two disks by even 100GB should solve the problem; however, I am having trouble conceptualizing how to get those disks smaller without breaking the LVM entirely. The actual LV has 1.3TB free, so there is plenty of space to shrink into. What I need to accomplish:

    1. Deallocate 100GB from the two 2TB virtual disks within the Linux guest.
    2. Shrink the two virtual disks by 100GB within vSphere (not as complicated).

    Are there any vSphere/LVM gurus who can give me a clue?
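
    The order of operations matters: filesystem first, then LV, then PVs, and only then the virtual disks, leaving a safety margin at each layer. A sketch under stated assumptions (ext4 on /dev/vg0/data; the two 2TB disks are PVs /dev/sdb1 and /dev/sdc1; sizes illustrative):

        # 1. shrink the filesystem (ext4 must be offline for this)
        umount /data
        e2fsck -f /dev/vg0/data
        resize2fs /dev/vg0/data 2500G

        # 2. shrink the LV, keeping it a bit larger than the filesystem
        lvreduce -L 2600G /dev/vg0/data

        # 3. shrink each PV; pvmove first if extents sit past the new boundary
        pvresize --setphysicalvolumesize 1.9T /dev/sdb1
        pvresize --setphysicalvolumesize 1.9T /dev/sdc1

        # 4. finally shrink the corresponding .vmdks by the matching amount in vSphere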

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that the backups stopped happening. Both times, the backup script (backupscript.sh) located in my home folder was no longer there. No one else has access to this server, so nothing was manually changed on the server and no one deleted the file by mistake. The cron job (nano /etc/crontab) still references the script, but the script itself disappears. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?
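
    Ubuntu doesn't delete user files on its own, so something (a script typo, a cleanup job, a compromised account) is removing it, and the audit subsystem can catch the culprit the next time it happens. A sketch, with the home directory path as a placeholder:

        sudo apt-get install auditd

        # watch the directory for writes/attribute changes (covers deletion)
        sudo auditctl -w /home/kris/ -p wa -k backupscript

        # after the script disappears again, see which process removed it
        sudo ausearch -k backupscript

    One classic self-inflicted cause worth ruling out: a script that builds paths from an unset variable (e.g. rm -rf "$DIR/tmp" with $DIR empty) can delete files in unexpected places.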

  • I modified my registry and now my laptop doesn't report its battery or power settings

    - by Crouch
    I saved a backup of my registry and then made a change to it. After the change, the Windows 7 battery meter no longer reported how much battery power was left. I also was no longer able to change between Power Profiles in the Control Panel. I tried to restore the original registry but it didn't restore the lost power features. Now I have to keep my laptop plugged in all day because I never know how much power I have left. Anyone know what to do here?
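
    Two low-risk recovery steps worth trying before a reinstall (a hedged sketch; the second step is done in Device Manager rather than the registry): rebuild the stock power plans, then force Windows to re-detect the battery device.

        rem from an elevated command prompt: recreate the default power schemes
        powercfg -restoredefaultschemes

        rem then in Device Manager: under "Batteries", uninstall
        rem "Microsoft ACPI-Compliant Control Method Battery" and
        rem choose "Scan for hardware changes" to reinstall it

    If neither helps, a System Restore point from before the edit is more thorough than re-importing a .reg backup, since restore also rolls back keys the export didn't cover.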

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations for cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go: all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week, but we do have time from 12:00am to 9:00am to perform the necessary steps for a clone, and we're talking about under 10 gigs of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step, if you could be so kind.
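
    With under 10GB of data and a nightly window, a raw block copy of the mirror onto the spare disk is the simplest route, with the caveat that the spare then needs its own bootloader and an fstab/mdadm setup that doesn't expect the RAID. A sketch under stated assumptions (mirror at /dev/md0, spare partition /dev/sdc1, database service name hypothetical; writes must be quiesced for a consistent copy):

        # stop writers so the copy is consistent
        service mysqld stop

        # raw copy of the mirrored volume onto the spare disk's partition
        dd if=/dev/md0 of=/dev/sdc1 bs=4M

        service mysqld start

    Making /dev/sdc bootable on its own (grub on its MBR, fstab pointing at sdc1 instead of md0) is a one-time setup that is worth testing before it is ever needed.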

  • Restore SBS 2003 CompanyWeb calendar from flat files?

    - by BrandonS
    OK, so I have the job of recovering a calendar that was used for an event schedule. The situation is that there was never a backup done, except for using Carbonite on the C drive. I have reinstalled the server with the same server name and domain. I tried stopping the MSSQL$SHAREPOINT service and overwriting the two DB files (STS_1.mdf and STS_1_log.LDF), both together and each one individually, each time stopping and restarting the service. Now, I knew before I started that this had a slim chance of working, but I had to try, because everything else I could find involved backing up before restoring, and that just wasn't done. Please, someone help; it is driving me mad, I tell you, MADDDDDD. PS: I am not the genius that set this up, just the fool tasked to clean up the mess. :-)
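
    Overwriting the live MDF/LDF in place rarely works, because the fresh install's configuration database knows nothing about the restored content. The usual WSS 2.0 route is to attach the recovered files to the SHAREPOINT instance under a new name, then register them as a content database. A hedged sketch (instance, paths, and names assume the SBS defaults):

        rem attach the recovered database files under a new name
        osql -E -S %COMPUTERNAME%\SHAREPOINT -Q "EXEC sp_attach_db N'STS_recovered', N'C:\recovered\STS_1.mdf', N'C:\recovered\STS_1_log.LDF'"

        rem point SharePoint at the recovered content database
        stsadm -o addcontentdb -url http://companyweb -databasename STS_recovered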
