Search Results

Search found 15160 results on 607 pages for 'boot disk'.

  • How to correctly set up home directories and permissions on a mounted partition

    - by user36505
    I'm setting up a Fedora 12 server. I have a root (/) partition where the boot (/boot) partition is mounted, and then a separate partition (/files) for keeping home directories and shares away from the other partitions. The filesystem mounts fine, and users can be created with home directories in /files/home/[user] just fine. However, when I log in as one of those users, I get an error saying "Cannot chdir into /files/home/[user]: permission denied". If I create a user under the default /home using the same process, everything works fine. The same goes for when I try to browse a share from Windows: I can see the shares, but cannot access them. The permissions and owners on /files and /files/home are the same as on /home. When the user is created, the user directory owner and permissions are also the same. How can I set the /files partition up so that it can be used for home directories and Samba sharing rather than using the root (/) partition? Thanks.
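    Since the mode bits reportedly match /home, a likely culprit on Fedora is the SELinux label on the new path: /home carries home-directory contexts, while a freshly created /files tree usually gets default_t, which blocks both local logins and Samba even when ownership looks right. A minimal sketch of what to check and fix (the user name "alice" is hypothetical):

        # compare ownership and modes along the whole path
        ls -ld /files /files/home /files/home/alice
        # compare SELinux contexts against the working /home
        ls -Zd /home /files/home
        # label /files/home equivalently to /home, then apply the labels
        semanage fcontext -a -e /home /files/home
        restorecon -Rv /files/home
        # allow Samba to share home directories as well
        setsebool -P samba_enable_home_dirs on

    If ls -Z shows default_t or unlabeled_t under /files, that alone explains the chdir denial despite correct mode bits.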

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VMs as an additional measure to traditional backups. Unfortunately, all existing solutions I have come across so far (as of August 2013) either do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or require lengthy downtime (complete VM shutdown while backing up). I'm aware of QEMU/libvirt's snapshot functionality, but it's not yet usable, since:
    - image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all);
    - one cannot yet merge a currently active external snapshot back into the original backing image ("blockcommit").
    For the above reasons, I'm now implementing a script that:
    1. Saves the VM's state and halts it
    2. Sets up devicemapper snapshot(s) where the VM's disk images and state reside
    3. Resumes the VM
    4. Mounts the snapshot(s) from step 2
    5. Backs up the VM's disk and state (plus the configuration, for convenience)
    6. Merges back the snapshot(s)
    If I got everything right, this will take consistent backups of VMs with only seconds of downtime (if any at all, since steps 1-3 are fast, possibly sub-second). Of course, when restoring, the VM will be way in the past, but at least it gives me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone already implemented this?
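    For reference, a minimal sketch of that six-step cycle using LVM as the devicemapper layer; the guest name vm1, volume group vg0, and logical volume vmstore (holding both the image and the saved state) are all hypothetical:

        #!/bin/bash
        set -e
        virsh save vm1 /vmstore/vm1.state                   # 1: save RAM state, halt guest
        lvcreate -s -n vmstore-snap -L 5G /dev/vg0/vmstore  # 2: CoW snapshot of the volume
        virsh restore /vmstore/vm1.state                    # 3: resume the guest
        mkdir -p /mnt/snap                                  # 4: mount the frozen view
        mount -o ro /dev/vg0/vmstore-snap /mnt/snap         #    (ext3/4 may need -o ro,noload)
        virsh dumpxml vm1 > /backup/vm1.xml                 # 5: config, disk and state
        cp /mnt/snap/vm1.qcow2 /mnt/snap/vm1.state /backup/
        umount /mnt/snap                                    # 6: drop the snapshot
        lvremove -f /dev/vg0/vmstore-snap

    The guest is only paused between steps 1 and 3, so the observed downtime is the time to dump guest RAM to disk plus one lvcreate.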

  • Unable to resize EC2 EBS root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:
    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx
    However, step 7 is where the problems start. Running resize2fs always tells me the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continued with the tutorials, which again all say the same thing:
    1. Delete all partitions
    2. Recreate them as they were, except with bigger sizes
    3. Reboot the instance and run resize2fs
    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says the partition has an invalid magic number and the superblock is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?
    Edit: On my new volume of 20GB with the 6GB image, df -h says:
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvde1            5.8G  877M  4.7G  16% /
        tmpfs                 836M     0  836M   0% /dev/shm
    And fdisk -l /dev/xvde says:
        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.
    Also, sudo resize2fs /dev/xvde1 says:
        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
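    The df/fdisk output explains the "Nothing to do!": /dev/xvde1 still ends at the old 6GB boundary, and resize2fs can only grow a filesystem up to the edge of its partition. A minimal sketch of growing the partition in place, assuming growpart from the cloud-utils package is available and the volume is attached (unmounted) as /dev/xvde:

        growpart /dev/xvde 1    # rewrites only the end of partition 1; the
                                # start sector must stay exactly where it was
        e2fsck -f /dev/xvde1    # the filesystem must be clean first
        resize2fs /dev/xvde1    # now there are new blocks to grow into

    If recreating the partition by hand with fdisk instead, the crucial detail is that the new partition 1 must start at exactly the same sector as the old one; a shifted start is precisely what produces the "invalid magic number / corrupt superblock" symptom.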

  • Nvidia RAID 1 problem: degraded drives

    - by Vedat Kursun
    I had a RAID 1 on my system, which has a Gigabyte GA-8N SLI motherboard with an Nvidia chipset (Nvidia RAID IDE ROM BIOS 4.84). When the system was working properly, there used to be an icon in the system tray showing my two RAID disks. But after my friend accidentally clicked on the "Safely Remove Hardware" icon while trying to disconnect her USB drive, I noticed that the RAID system wasn't working. After a reboot, a failure message suddenly appeared during the boot screen. When I enter the Nvidia RAID setup utility (F10), I can see that both drives are degraded, and that won't change even if I select them and press R for Rebuild. The only other options are Delete and Exit. When I boot into Windows (XP Pro 32-bit), I can see both of my disks with the same data on each of them, but my RAID 1 is broken. It's a relief to see that at least my RAID 1 was active, but it's annoying not being able to rebuild it. Is there a way I can rebuild my RAID 1 without having to delete the array and build it again? Because I don't want to back up 400 gigs of data and then recopy it to my drives... (Disks: 2 x Seagate ST3500418AS SATA drives)

  • initrd problem and kernel panics after openSUSE 11.2 upgrade

    - by unixbhaskar
    I upgraded from openSUSE 11.1 to openSUSE 11.2 by running: zypper dup. Now the system fails to boot: it fails to sync with VFS and kernel panics, so it's clearly an initrd problem, if I'm not mistaken. A bit of explanation about the problem: while upgrading, it showed an error updating the initramfs (I forget the exact error; it might have been a warning), and some GRUB warnings too. I had been doing all of this from a chroot environment, with all the required filesystems mounted in the proper places. After a bit of googling and painfully searching the susegeek.com and opensuse.org forums, I decided to recreate the initrd, but mkinitrd keeps failing on me, as a few forum members warned it would. I tried to make an initrd image by myself and failed: if I boot into a SUSE live CD and mount the partition, it shows an error that the device is not found; if I try from the chrooted environment, it says "there is no space left on the device". A bit bemused :( Yeah, as most of you will rightly point out, this may be a lack of knowledge on my part. Kindly suggest the steps to do this correctly and get openSUSE 11.2 up and running. TIA
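    A minimal sketch of the usual live-CD chroot recipe for rebuilding the initrd, assuming the root filesystem is on /dev/sda2 (adjust device names to your layout); the two errors mentioned map onto the two pitfalls flagged in the comments:

        mount /dev/sda2 /mnt
        mount --bind /dev  /mnt/dev     # without /dev bound in, mkinitrd
        mount --bind /proc /mnt/proc    # tends to fail with "device not found"
        mount --bind /sys  /mnt/sys
        chroot /mnt
        df -h /boot                     # "no space left on device" usually means
                                        # /boot (or /) really is full; clear out
                                        # old kernels and initrds first
        mkinitrd                        # openSUSE's wrapper rebuilds the initrd
                                        # for every installed kernel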

  • PCI configuration method error (Linux Kernel)

    - by user326580
    (I'm not sure if this is the best place for this question, so I'll be pleased if anyone suggests a more appropriate forum for it.) I'm trying to install Ubuntu 12.04.4 on a netbook (from a USB stick), but the kernel stops very early in the initialization process. After two days of research, I've found that it boots with the parameter pci=conf2, but not with the default conf1 method. Nevertheless, after the kernel boots, it seems that Ubuntu can't find any USB devices, and I'm not able to install it. Trying Debian, which has a graphical installer, I found that the mouse wasn't working either, so I think PCI devices are not working in general. I tried about 50% of the kernel PCI boot options listed in the kernel-parameters file (in conjunction with the implicit default conf1) without luck. Any suggestions? PS: The problem is the same with kernel 2.6 or 3.
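    For anyone else whose hardware only probes correctly via configuration mechanism 2, a minimal sketch of carrying the working parameter through: applied once at the boot menu, then persisted after installation (assuming a standard GRUB 2 setup on the installed system):

        # one-off: at the GRUB menu press 'e', append pci=conf2 to the line
        # beginning with 'linux', then boot with F10/Ctrl-x
        # permanent, once the installed system is reachable:
        sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&pci=conf2 /' /etc/default/grub
        sudo update-grub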

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100GB of free space). We tried restarting the machine MSSQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single-user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect (even in single-user mode) to run them. Specific error messages:
        Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)
        The SQL Server (MSSQLSERVER) service is starting.
        The SQL Server (MSSQLSERVER) service could not be started.
        A service specific error occurred: 1814.
    We need the server up and running again ASAP.

  • Solaris 10: How to remove devices from a zpool with /usr currently mounted?

    - by cali-spc
    I use Solaris 10 on SPARC. I have /usr legacy mounted on a zpool 'usr-pool'. I now need to move some of the devices in usr-pool to another zpool which is running out of room. What is the safest way for me to do this? I already know that (since my zpool is not mirrored) I need to destroy and recreate the zpool. I know how to backup and restore a zfs snapshot. However... I'm stumped on how to unmount usr-pool without losing access to the commands I need on /usr to complete the backup/restore. Cursory research indicated that I should boot to OpenBoot (init 0) and then 'boot cdrom -s'. I did this but none of the zpools are accessible on that runlevel. I also read I could just copy /usr to another location, symlink /usr to that location, then do my backup/restore. Is that safe to do? I would appreciate some guidance. S.
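    One gap in the 'boot cdrom -s' route: the miniroot does not import any pools automatically, so they look inaccessible until imported by hand. A minimal sketch of the whole move, assuming the dataset is usr-pool/usr and the roomier pool is named bigpool (both names hypothetical):

        zpool import -f usr-pool          # pools stay invisible in the
        zpool import -f bigpool           # miniroot until explicitly imported
        zfs snapshot usr-pool/usr@move
        zfs send usr-pool/usr@move | zfs receive bigpool/usr
        # verify the copy, then usr-pool can be destroyed and recreated
        # with only the devices it should keep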

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand, it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:
        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]
    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate your hints on interpreting the recommendation and optimizing the database configuration.
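    A minimal sketch of the counters worth checking before raising the limit any further; a high prunes-per-day figure next to a healthy 79.9% hit rate usually points at query cache fragmentation rather than lack of space:

        mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
        # Qcache_lowmem_prunes climbing while Qcache_free_memory stays large
        # indicates fragmentation; defragment in place without losing the cache:
        mysql -e "FLUSH QUERY CACHE;"
        # if most result sets are small, a lower query_cache_min_res_unit
        # (default 4096 bytes) also reduces fragmentation

    Note also the [!!] line above it: with maximum possible memory usage at 139% of installed RAM, shrinking per-thread buffers is more urgent than growing any cache.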

  • How to find which files / directories have not been copied yet?

    - by user8676
    Hi all, I found myself in the following 'nice' situation: An archive of a few disks (three, actually) which holds a bunch of photos, more or less organized. Well, this is good. A big disk shared on a network which holds a bunch of photos in a different folder structure (even if it is somewhat recognizable for a human being) than the archive described above, but some of the files on this big network share are the same as the files from the archive. Well, this is bad. What we need is to move the different (new) files from the network share into the archive (perhaps using a new disk added to the archive). The program that we need is different from a regular duplicate-file finder, because a duplicate finder looks for duplicates across all sources by comparing each file with the others; we want to find the differences between the two sources. It is fine for us to have a report generated as a text file, which we'll then use to do our move. A Windows solution would be preferred. Any ideas? TIA
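    Since the two trees are organized differently, comparing by path won't work; comparing by content will. A minimal sketch that reports share files whose content appears nowhere in the archive (mount points are hypothetical; on Windows this would run under Cygwin or a similar environment):

        # hash every archive file, keep only the set of checksums
        (cd /mnt/archive && find . -type f -exec md5sum {} +) | awk '{print $1}' | sort -u > archive.sums
        # hash the network share, keeping checksum and path
        (cd /mnt/share && find . -type f -exec md5sum {} +) | sort > share.sums
        # print share entries whose checksum is absent from the archive set
        awk 'NR==FNR {seen[$1]; next} !($1 in seen)' archive.sums share.sums > new-files.txt

    new-files.txt is then exactly the report described: the list of files to move into the archive.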

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.
    Old:
    - courier-imapd on several (18+) 1TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers have no replicas, and no backups either
    New:
    - dovecot2 on a single huge server with 16TB (SATA) of storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day onto the SATA disks
    - there is an identical server which holds an rsync backup, at most 15 minutes old, of the primary
    - higher-ups/management wanted to pack as much storage as possible into each server, to minimise the cost of SSDs per server
    - the rsync'ing is done because GlusterFS wasn't replicating well under that kind of heavy small/random IO
    - scaling out was expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, email accounts would again be moved manually
    Concerns/doubts:
    - I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access; if one of the servers went down for whatever reason (planned or unplanned), we'd move its IP to the other server in the pair.
    - In filesystems like Lustre, I would get the advantage of a single namespace, so I would not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions:
    1. What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
    2. Does traditional software that stores mail on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
    3. Does one have to re-write (parts of) these to work with an object store of some sort, such as OpenStack object storage?

  • How to fix TrueCrypt MBR using Command Prompt or Linux live USB?

    - by Michal Stefanow
    I was playing with TrueCrypt and decided to make a fresh installation of Windows 7 from a USB stick. Unfortunately, the Windows 7 installer says: "Setup was unable to create a new system partition". My entire HDD has been formatted and is visible as 320GB of unallocated space, but neither fdisk nor the Windows 7 installer nor the Windows XP installer could help. (Windows XP doesn't even see the HDD; it sees only the USB stick and says there's "not enough space to install".) It may be related to TrueCrypt's pre-boot authentication, boot loader, and/or MBR. As I don't have an optical drive, I could not create a rescue disk. Right now I need a rescue of some kind, presumably by erasing/fixing the MBR using a Linux live USB or the Command Prompt. Another approach is to click "Repair your computer" in the Windows 7 installer menu, then "Restore your computer", then click OK on the error to get access to a Command Prompt. Yet another approach: if I start the computer without the Linux USB, I receive this:
        error: unknown filesystem.
        grub rescue>
    Any help would be greatly appreciated, as my laptop is kind of not fully operational now. UPDATE: This was asked a long time ago; I ended up formatting everything (eventually it worked using a different bootable USB)...
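    For anyone else stranded at the same grub rescue> prompt: the TrueCrypt boot loader sits in the MBR's boot-code area, and clearing just that area stops it from interfering while leaving the partition table alone. A minimal sketch from a Linux live USB, assuming the internal disk is /dev/sda (verify with lsblk first):

        # back up the full MBR (boot code + partition table) before anything else
        dd if=/dev/sda of=mbr-backup.bin bs=512 count=1
        # zero only the first 446 bytes: the boot code, not the partition table
        dd if=/dev/zero of=/dev/sda bs=446 count=1
        # if the partition table itself is gone, recreate it afterwards with fdisk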

  • Unnamed, hidden partitions on my 500 GB HD, HP Pavilion dm4 Laptop

    - by emotionull
    I have multiple doubts here. It's a Seagate 500GB 7200RPM HD, which I installed a few months back after my original laptop HD stopped working. The current drives on my laptop, as shown by Windows Disk Management, are four (screenshot not included). After installing the new HD, I did a complete clean install of Windows 7, and I didn't create any partition myself, manually. So there are 4 drives. Even previously, before I installed this new HD, my laptop had 4 partitions, but there were no un-named partitions like the two in this case; the other two were HP Tools and Recovery or something, from the pre-configured, factory-installed Windows. Also, when I right-click on the unnamed drives in Disk Management, all the options are greyed out except Delete Partition. So how do I know what's inside those partitions? Will it be OK if I delete them? I want to install Ubuntu and dual-boot it with my current Windows installation. I cannot do that in the current setup, as my HD already has 4 partitions, and if I try to make a new one, it will be a logical partition (correct me if I am wrong here). So can I delete the un-named, hidden partitions and use them for Ubuntu? A somewhat unrelated question: as a backup option, can I use Windows 7's Backup and Restore facility to keep a complete backup of all the drivers and system software?

  • Windows Server 2008 R2 Standard to Enterprise Problems

    - by boburob
    A few months ago I set up a Citrix XenApp cluster running on Windows Server 2008 R2 Standard Edition using the temporary 180-day license key. Recently the company bought a Windows Server 2008 R2 Enterprise DataCenter license. This means I need to upgrade the Windows edition from Standard to Enterprise. I attach the disk to the VM and start the upgrade process through XenCenter; it runs through all the checks, unpacks all the Windows files, and seems to create a Windows Setup partition. It then reboots, tries to boot into this partition, and I get a blue screen telling me to CHKDSK the hard drive, with the following error message:
        STOP: 0x0000007B
    As XenApp is already set up and working, I really do not want to go down the route of rebuilding this server (I already had to do that once due to issues with XenApp). The server did have 8GB of RAM assigned to it; I have tried reducing this to 2GB, as I read that can cause an issue. Also, I can boot back into the Windows Server 2008 R2 Standard partition without any problems. UPDATE: I have managed to get around the urgency by re-arming the license, giving me another 180-day trial... but it would be nice to work out why this is happening!

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere on the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; this software is smart enough to look at the modified-file timestamp, and if that timestamp is less than 10 seconds old, it will skip the file in that iteration of the import scan. The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP Server. When the transmitter starts to upload a file, the FTP server creates a small 300-500 KB file with the correct filename on the disk, but then does nothing with the file until it has completely received it via FTP. So it seems to create this small dummy file, buffer the remainder of the FTP upload until it's finished, and then dump the rest into the dummy file, making it the correct size. Problem is, these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool already tries to import any file older than 10 seconds, it always tries to import the small dummy files, which obviously fails, as they're not complete yet. Is there any way to 'disable' this behaviour? Ideally, I would like my file to show up only once it's been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?

  • Last step in HDD recovery (fixing Windows)

    - by Atom Computing
    My dad's hard drive got corrupted as a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it completely (recreating the MBR and MFT) and run a series of chkdsks on it. I can now see all the files and folders on it, and everything is intact. I currently have it as a slave in my computer (where I was doing all the repairs). When putting it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to restart". I don't know why this is happening, but I think it might have something to do with file permissions. I have tried a start-up recovery with the Vista boot CD, and it found no problems. When trying to apply file permissions (and creating permissions for the SYSTEM group, as it didn't have any), it couldn't apply them for some of the files in the System32 folder. I have tried applying them as admin and with the most powerful privileges I can get, all to no avail. When the drive is in my PC, I can boot it up (I added it to my bootloader) and it boots fine, except that when it logs in it comes up with the error "Rundll32.exe - Windows cannot access the specified device, path or file. You may not have the appropriate permissions to access the item". This message keeps coming back and nothing loads at all. Any help would be greatly received, as I have got this far with the data recovery and want to avoid a reformat at all costs, due to the vast number of programs installed, and I don't have much time on my hands! Thanks

  • What should I know about memory management?

    - by bua
    First of all: I don't use stackadmin or similar, so please don't vote for moving this there. I'm reading man top and the paper "What every programmer should know about memory...", and I need a really simple explanation ;) Take the following top dump:
        top - 11:21:19 up 37 days, 21:16,  4 users,  load average: 0.41, 0.75, 1.09
        Tasks: 313 total,   5 running, 308 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.4%us,  0.6%sy,  0.9%ni, 96.2%id,  0.1%wa,  0.0%hi,  1.9%si,  0.0%st
        Mem:  132103848k total, 131916948k used,   186900k free,    54000k buffers
        Swap:  73400944k total,  73070884k used,   330060k free, 13931192k cached

          PID USER      PR  NI  VIRT   RES  SHR S %CPU %MEM    TIME+    COMMAND
         3305 tudb      25  10  144m   52m  940 R  6.0  0.0   1306:09   app
         3011 tudb      15   0 71528   19m  604 S  3.3  0.0   171:57.83 app
         3373 tudb      25  10  209m   93m  940 S  3.0  0.1   1074:53   app
         3338 tudb      25  10  144m   47m  940 R  2.7  0.0   780:48.48 app
         4227 tudb      25  10  208m   99m  904 S  1.3  0.1   198:56.01 app
         8506 tudb      25  10 80.7g   49g  932 S  2.0 39.6   458:31.22 app
    I'm wondering what the following are:
    - RES (my explanation: physical memory consumption? see the 49g)
    - VIRT (memory-mapped disk to cache? see the 80.7g)
    - SHR (shared pages?)
    - Swap: (is the "cached" label here the cache for memory mapped into swap?)
    Should the sum of RES add up to Mem "used"? Or maybe the sum of VIRT?
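    A minimal sketch of where those same numbers live per process, which makes the definitions easier to poke at; VmSize corresponds to VIRT and VmRSS to RES:

        # VIRT = VmSize: all mapped address space - files, shared libraries,
        #        plus pages that are swapped out or were never touched
        # RES  = VmRSS:  only pages currently resident in physical RAM
        grep -E 'VmSize|VmRSS' /proc/self/status
        # summing RES over all processes double-counts shared pages (SHR),
        # so it will not match the "used" figure on top's Mem line:
        awk '/VmRSS/ {s += $2} END {print s " kB"}' /proc/[0-9]*/status

    In this dump, the big process's 49g RES is real RAM occupancy, while its 80.7g VIRT is mostly address space backed by swap (note the nearly full 70GB swap) and mappings, not RAM.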

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on KVM/QEMU. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need virt-manager to show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):
        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
            -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
            -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
            -monitor chardev:monitor -localtime -boot c \
            -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
            -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
            -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
            -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
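    Worth noting: guests started behind libvirt's back never show up in virt-manager, but libvirt can itself start domains at host boot, which removes the need for a hand-rolled init.d script. A minimal sketch using the existing domain name:

        virsh autostart BORON    # mark the domain to start whenever libvirtd starts
        virsh start BORON        # manual start; the guest stays visible in virt-manager
        virsh shutdown BORON     # orderly ACPI shutdown, usable from scripts

    The fd=41 in the tap options also hints at why the copied command line can't simply be rerun: it references a file descriptor that libvirt had opened and passed to the process.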

  • How can I create two partitions and clone one to the other (using Clonezilla)?

    - by johnny
    I was hoping someone could help. I want to create a "backup" partition. That is, I want to create two partitions on my drive: one is a good install, which I then want to copy with Clonezilla to the broken/unused partition, so that the restored partition boots up as usual. Example: C: goes bad (say it gets a corrupt registry), and D: is a "good" copy of C. I restore D to C, and C will then boot up as usual. So I need to do the clone with Clonezilla and the restore with the same. I see the part_...clone and restore options. Will this do it? How do I create the partitions? EDIT: I am using XP. How can I do this? Also, I know this is not the best solution for all occasions; I have an offline backup as well. I would like to have both. Thanks for any help. I'm using Clonezilla, if it matters.

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off, I just tried opening it from my own PC. Windows prompted me to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as
        Disk /dev/sdc - 6144 B - USB Flash Drive
    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:
        Partition sector doesn't have the endmark 0xAA55
    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:
        No partition found or selected for recovery
    This left me stumped. I tried a couple of other programs without success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use in it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but that returned the message "Windows was unable to complete the format". From googling that, the suggestion was to delete the partition, but there is no partition to delete in this case. Most recently, I tried formatting from cmd and got this result:
        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32
    Anyone got any suggestions? UPDATE: As per the suggestion from @Karen, I tried running a CLEAN from DISKPART; results as follows:
        DiskPart has encountered an error: The request could not be performed because of an I/O device error.
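    If the stick is still worth experimenting on, a minimal sketch of rebuilding it from a Linux machine, assuming it shows up as /dev/sdX (confirm with lsblk; this destroys anything left on it). Fair warning: a drive reporting a 6144-byte capacity has most likely suffered a controller failure, and no software rebuild survives that:

        dd if=/dev/zero of=/dev/sdX bs=1M count=16    # wipe boot sector and old table
        echo 'type=c' | sfdisk /dev/sdX               # one FAT32 (0x0c) partition
                                                      # (sfdisk script syntax, util-linux 2.26+)
        mkfs.vfat -F 32 /dev/sdX1                     # fresh filesystem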

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by the mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:
        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52
    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
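    Two client-side settings are worth testing here, offered as a sketch rather than a guaranteed fix: a longer attribute cache (the NFS client drops cached pages whenever a file's attributes look changed on revalidation) and FS-Cache for persistent local caching. Assuming the autofs map can be edited and cachefilesd is installed for the fsc option:

        # example manual mount; mirror the options in the autofs map
        mount -t nfs -o vers=3,actimeo=600,fsc server:/export/home /mnt/home
        # actimeo=600 - trust cached attributes for 600s, so clean pages
        #               are not discarded on every revalidation
        # fsc         - route reads through FS-Cache (requires cachefilesd running)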

  • How to prevent an SSD from disappearing from BIOS

    - by Midimatt
    I've only recently upgraded my old machine to a new one with a brand new 60GB SSD as my boot drive and a 1TB main drive. Paranoid about completely breaking my SSD, I read up on a lot of issues that I needed to watch out for, including making sure AHCI was turned on and TRIM enabled. The PC had been working fine for a few weeks, until today. My wife was watching some TV on the machine when it started to act strangely and eventually blue-screened. She rebooted, and the boot manager was missing. When I got home from work, I checked the BIOS, and the drive had disappeared. I panicked and looked up some possible fixes, and I discovered a large number of people having problems with the drive firmware, especially on OCZ Vertex and Agility drives; my drive is an Agility 3. The problems included blue screens followed by missing drives, and one suggested solution was to reset the CMOS and try again. This worked, and now everything seems to be working fine. My question is: is there any way to prevent this from happening? Am I missing a setting for my SSD? All of the posts I found were from early to mid-2011, nothing from the end of 2011 to 2012, so I am wondering if I've missed anything. EDIT: I checked my drive's firmware and it is 2.15, which has had issues reported by users.

  • How do I prevent my computer from freezing when it starts to swap?

    - by cdauth
    I work as a Java programmer, so I often have to run several programs at the same time that consume a lot of memory. When my memory is full and Linux starts swapping, my computer almost completely freezes. I can see that it is writing heavily to the hard disk, and everything reacts really slowly, often not at all. Moving the mouse in X sometimes doesn't work at all, sometimes it has a delay of several seconds; clicking usually has a delay of several minutes. Sometimes it is possible to change to a TTY (with a long delay); there I can usually type without delay, but when I try to log in, it takes several minutes after typing the user name until the password prompt appears, and usually an error message appears telling me that the login timed out. So the only option is usually to restart the computer. I have noticed that other intensive writing to the hard disk also significantly slows down my computer. Sometimes I have used rsync to limit the bandwidth when copying files around on my own computer, because otherwise the system would be almost unusable. How can this be? At the moment it seems more useful to me to turn off swapping completely. That might crash some processes, which is unfortunate, but the alternative at the moment is to crash all processes by turning off my computer. I am using Gentoo Linux with kernel 3.6.2-gentoo, and I have a 10GB swap partition on an HDD.
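    Before disabling swap outright, two sysctl knobs are worth a try; a minimal sketch, with values as illustrative starting points rather than tuned recommendations:

        sysctl vm.swappiness=10           # default 60; lower makes the kernel
                                          # prefer dropping page cache over
                                          # swapping out process pages
        sysctl vm.vfs_cache_pressure=50   # keep dentry/inode caches around longer
        echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # persist across reboots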

  • Deleting old system folders from a drive that is no longer the Windows installation drive

    - by grenade
    I dropped my laptop and was no longer able to boot; there were error messages about a corrupt boot record. Replacing the hard drive and reinstalling Win 7 was how I dealt with it. The old drive still appears to be good, and I can read and write to it when I connect it as a second drive mounted as D:. However, if I try to recover the space used by the Windows, ProgramData, Program Files and Program Files (x86) folders by deleting them, I get error messages about needing permission from TrustedInstaller. If I set myself as the owner of the folders and retry the delete, I get error messages about needing permission from myself! Since I'm pretty sure that I have permission from myself to delete the folders, I can only assume that the OS or file system has gotten its panties twisted. I have tried Shift+right-click+Delete from Explorer, and if I run "del /f /s /q D:\Windows" from an admin command prompt, I get a succession of Access is denied messages as well. How do I delete D:\Windows, D:\ProgramData, D:\Program Files and D:\Program Files (x86) from a drive that is not the Windows installation drive?
