Search Results

Search found 398 results on 16 pages for 'lvm'.

Page 3/16

  • Dead drive in LVM/XFS configuration

    - by Freddie Witherden
    I had three drives in LVM: a 2 TB drive and two 1 TB drives (added later). One of the 1 TB drives -- I believe the third one -- has died. An XFS partition spanned all three drives. Reading http://www.novell.com/coolsolutions/appnote/19386.html, I see that one way of handling this is to replace the dead drive and copy the metadata over. However, I am not currently in possession of a 1 TB drive and cannot readily acquire one. Given this, what are my options? There was nothing important on the drives (if there was, I would have them in RAID 1), but I would not mind attempting a recovery. Is there a simple way of forcing LVM to carry on with just the two remaining drives and null out anything else, so that fsck can do its thing?
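    One possible route, sketched under the assumption that the VG is named vg and the dead drive's PV UUID is still recorded in /etc/lvm/backup/vg (all names here are illustrative): stand in a sparse loop device for the missing 1 TB drive, restore the metadata, then let the XFS tools clean up. Reads from the stand-in simply return zeros.
      # create a sparse ~1 TB file to impersonate the missing drive
      dd if=/dev/zero of=/root/fake-pv.img bs=1M count=1 seek=1000000
      losetup /dev/loop0 /root/fake-pv.img
      # give the loop device the dead PV's UUID, then restore VG metadata
      pvcreate --uuid <dead-pv-uuid> --restorefile /etc/lvm/backup/vg /dev/loop0
      vgcfgrestore vg
      vgchange -ay vg
      xfs_repair /dev/vg/<xfs-lv>   # XFS has no fsck; xfs_repair does the job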


  • LVM incorrectly reported missing after power failure

    - by mensi
    We have had a major power failure in the data center. We use a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm. The resulting /dev/mdX devices are LVM physical volumes and belong to one big volume group holding all our data. After the power loss, we had the problem that one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes due to the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs, but one LV still does not come up. vgdisplay shows:
      [...]
      Cur PV  3
      Act PV  2
      [...]
    Neither vgscan nor pvscan show any missing devices. What went wrong? How can we force LVM to activate all PVs?
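    For reference, a hedged sketch of the usual reactivation sequence once every mdadm array is assembled again (the VG and LV names are placeholders):
      vgscan --mknodes                  # rescan PVs and recreate /dev entries
      vgchange -ay <vgname>             # activate every LV in the VG
      lvchange -ay <vgname>/<lvname>    # or poke the one stubborn LV directly
      lvs -o lv_name,lv_attr <vgname>   # an 'a' in the attr column means active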


  • How to install Ubuntu 12.04.1 in EFI mode with Encrypted LVM?

    - by g0lem
    I'm trying to properly install Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate install CD ".iso" on a Lenovo ThinkPad X220. The default hard disk (with a pre-installed version of Windows 7) has been replaced with a brand new SSD. The UEFI BIOS of the ThinkPad X220 is set to "UEFI Boot only" and "USB UEFI BIOS Support" is enabled (I'm using an external USB DVD reader to perform the Ubuntu installation). The BIOS is a Phoenix SecureCore Tiano, BIOS version 8DET56WW (1.26). The attempts below were made with the UEFI BIOS settings described above. Here's what I've tried so far:
      Boot on a live GParted CD:
        - Create a GPT partition table
        - Create a FAT32 partition for the UEFI system, set the partition to "EF00" type ("boot" flag)
        - Leave the remaining space unformatted
      Boot on the Ubuntu 12.04.1 LTS 64-bit PC (AMD64) alternate CD:
        - Perform the install with network updates enabled
        - Use manual partitioning; the FAT32 partition created with GParted is used as the "EFI System Partition", and the remaining space is set to be used as a "Physical volume for LVM"
        - "Configure encrypted volumes", using the previous "Physical volume for LVM" as the encrypted container; a passphrase is set up
        - "Configure the Logical Volume Manager", creating a volume group on the encrypted container /dev/mapper/sda2_crypt
        - "Create logical volume", choosing the previously created volume group, then assign a mount point and file system to each logical volume: LV-root for /, LV-var for /var, LV-usr for /usr, LV-usr-local for /usr/local, LV-swap for swap, LV-home for /home (NOTE: /tmp would be in RAM only, using tmpfs)
      Bootloader step: neither my ESP partition (/dev/sda1), /dev/sda, nor the MBR seems to be the right place for GRUB; I get the following message (the X suffix is for demonstration only):
        unable to install grub in /dev/sdaX
        Executing 'grub-install /dev/sdaX' failed
        This is a fatal error.
      Finishing the installation without the bootloader and rebooting leaves a system that doesn't start; there is no EFI/GRUB menu at startup.
    What are the steps to perform a clean and working installation of Ubuntu 12.04.1 Precise Pangolin, 64-bit version, in (U)EFI mode using the encrypted LUKS + LVM scheme described above?
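    One hedged way to finish such an install: boot a live session in UEFI mode, unlock the stack, and install the EFI build of GRUB from a chroot. Device and LV names follow the question; the package name and exact commands are Ubuntu 12.04-era assumptions, not a verified recipe.
      cryptsetup luksOpen /dev/sda2 sda2_crypt
      vgchange -ay
      mount /dev/<vg>/LV-root /mnt
      mount /dev/<vg>/LV-usr /mnt/usr       # /usr is a separate LV here
      mount /dev/sda1 /mnt/boot/efi         # the ESP
      for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
      chroot /mnt apt-get install grub-efi-amd64
      chroot /mnt grub-install /dev/sda     # grub-efi targets the ESP, not the MBR
      chroot /mnt update-grub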


  • How do you add more space to a Fedora (LVM) partition?

    - by Trevor Boyd Smith
    In a nutshell, I have a VM that ran out of space. I increased the size of the VM's hard drive to be 4 times bigger, but the OS partition is still only using 1x the space. I need to change the LVM partition to take up the extra space, but I don't know how to extend it. (NOTE: to make the screenshots given below I had to boot from a live CD for gnome-partition-manager (aka GParted). Very unfortunately, GParted is only able to "detect LVM" and can't do any LVM operations.) Here is what GParted shows; please notice that the "resize" option is not available. The problem: I can't find good directions on how to grow the LVM partition via GUI or command line! How do you grow an LVM partition that was created by the default Fedora install? If you are giving command-line directions, please explain what each command does.
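    A hedged command-line sketch of the usual sequence, assuming the default Fedora layout of a PV in /dev/sda2 and a root LV at VolGroup00/LogVol00 (all three names are illustrative). Each command grows one layer of the stack: partition, then PV, then LV, then filesystem.
      fdisk /dev/sda        # delete /dev/sda2 and recreate it with the same
                            # start sector but a larger end, then write
      partprobe /dev/sda    # make the kernel re-read the table (or reboot)
      pvresize /dev/sda2    # grow the PV into the enlarged partition
      lvextend -l +100%FREE /dev/VolGroup00/LogVol00
      resize2fs /dev/VolGroup00/LogVol00   # grow ext3/ext4, works online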


  • Linux hardware RAID 10 / LVM / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in guest OSes. My main questions are:
      1. When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128 kB for ext4 in my situation), the stripe size of the RAID set (256 kB), something else altogether, or do I not need this at all?
      2. When I create ext2/3/4 or xfs filesystems in guests on the logical volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?
      3. Does anyone see any glaring errors in my setup below? I'm running some benchmarks now but haven't done enough to start comparing results.
    I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below), giving me a 6.0 TB device at /dev/sda. Here is my partition table:
      Model: LSI 9750-4i DISK (scsi)
      Disk /dev/sda: 5722024MiB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt

      Number  Start      End         Size        File system     Name      Flags
      1       1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
      2       257MiB     4353MiB     4096MiB     linux-swap(v1)
      3       4353MiB    266497MiB   262144MiB   ext4
      4       266497MiB  4460801MiB  4194304MiB
    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume to be used by LVM (for those who are counting, I left about 1.2 TB unallocated for now). For my Xen guests, I usually create a logical volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation. Here's the hardware of interest on my CentOS 6.3 Xen host:
      - 4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical / 4096 physical)
      - 3ware 9750-4i w/BBU (sector size reported: 512 logical / 512 physical)
      - All four drives make up a RAID 10 array. Stripe: 256kB. Write cache enabled. Read cache: intelligent. StoreSave: Balance.
    Thanks!
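    A hedged sketch of one self-consistent answer to questions 1 and 2, assuming the 256 kB stripe and the two data disks of this 4-drive RAID 10 (so a 512 kB full stripe); aligning the PV to the stripe size rather than the full stripe is the other defensible choice:
      # align PV data so LV starts fall on full-stripe boundaries
      pvcreate --dataalignment 512k /dev/sda4
      # inside a guest, pass the RAID geometry through to ext4:
      # stride = 256k stripe / 4k block = 64; stripe-width = 64 * 2 data disks = 128
      mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/xvda1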


  • LVM mirroring space unavailable

    - by Bryan Ward
    I am trying to migrate my data on LVM to two new disks and set up mirroring between the two. I have successfully migrated all of the data to the first of the two disks, leaving the second one completely available as a mirror. I verified this using pvdisplay -m /dev/sd{g,h}1:
      --- Physical volume ---
      PV Name       /dev/sdg1
      VG Name       vg
      PV Size       931.51 GiB / not usable 3.19 MiB
      Allocatable   yes
      PE Size       4.00 MiB
      Total PE      238466
      Free PE       82866
      Allocated PE  155600
      PV UUID       v2nc3j-EFBR-QpuG-xgro-Rm59-fmu6-IB3QcR

      --- Physical Segments ---
      Physical extent 0 to 49999:
        Logical volume  /dev/vg/videos
        Logical extents 0 to 49999
      Physical extent 50000 to 99999:
        Logical volume  /dev/vg/home
        Logical extents 0 to 49999
      Physical extent 100000 to 129999:
        Logical volume  /dev/vg/music
        Logical extents 0 to 29999
      Physical extent 130000 to 155599:
        Logical volume  /dev/vg/videos
        Logical extents 50000 to 75599
      Physical extent 155600 to 238465:
        FREE

      --- Physical volume ---
      PV Name       /dev/sdh1
      VG Name       vg
      PV Size       931.51 GiB / not usable 3.19 MiB
      Allocatable   yes
      PE Size       4.00 MiB
      Total PE      238466
      Free PE       238466
      Allocated PE  0
      PV UUID       LuTrem-WcsZ-qw7l-2CDS-lLKI-wdq0-QEXhLf

      --- Physical Segments ---
      Physical extent 0 to 238465:
        FREE
    Then when I try to mirror the home logical volume, for example, it says that I do not have sufficient space. I used lvconvert -m1 vg/home and the output was:
      Insufficient suitable allocatable extents for logical volume : 50000 more required
      Unable to allocate extents for mirror(s).
    This is puzzling to me because there appears to be plenty of space on the second disk for the mirror. Is there something I have done wrong here? Or is there a way to explicitly tell LVM where to put each leg of the mirror? I'm using lvm2.
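    A hedged note on what commonly triggers this: the classic LVM mirror wants its small disk log on a device of its own, so a two-PV mirror can fail allocation despite ample free space. Two sketched workarounds:
      # keep the mirror log in memory instead of on a third disk:
      lvconvert -m1 --mirrorlog core vg/home
      # if allocation is still refused, relax the policy and name the PVs:
      lvconvert -m1 --mirrorlog core --alloc anywhere vg/home /dev/sdg1 /dev/sdh1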


  • Rebuilding LVM after RAID recovery

    - by Xiong Chiamiov
    I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM to create one partition. There was a power outage while I was gone, and when I got back, it looked like one of the disks in md1 was out of sync -- mdadm kept claiming that it could only find 3 of the 4 drives. The only thing I could do to get anything to happen was to use mdadm --create on those four disks, then let it rebuild the array. This seemed like a bad idea to me, but none of the stuff I had was critical (although it'd take a while to get it all back), and a thread somewhere claimed that this would fix things. If this trashed all of my data, then I suppose you can stop reading and just tell me that. After waiting four hours for the array to rebuild, md1 looked fine (I guess), but LVM was complaining about not being able to find a device with the correct UUID, presumably because md1 changed UUIDs. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run lvchange -a y on it, however, gives me a "resume ioctl failed" message. Is there any hope of recovering my data, or have I completely mucked it up?
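    For reference, a hedged sketch of the pvcreate/vgcfgrestore recovery being described, with the UUID and VG names as placeholders pulled from the newest file under /etc/lvm/archive:
      # recreate the PV label on the rebuilt array with its old UUID
      pvcreate --uuid <old-pv-uuid> --restorefile /etc/lvm/archive/<vg>_NNNNN.vg /dev/md1
      vgcfgrestore -f /etc/lvm/archive/<vg>_NNNNN.vg <vg>
      vgchange -ay <vg>
      # if activation still fails, 'dmsetup table' and dmesg usually say why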


  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500 GB disks. The partition tables look like this:
      /dev/sda1  primary   465.52GB
      /dev/sda2  extended  243.17MB
        -> /dev/sda5  logical  243.14MB
      /dev/sdb1  primary   465.76GB
    sda1 and sdb1 are in a single LVM physical volume group containing a single logical volume containing a single filesystem which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247 MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find out how to shrink the logical volume, but I'm not at all clear on how to clear out the end part of sda1 so that I can shrink the physical volume. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch, or to find out whether GRUB 2 supports using LVM for /boot.
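    A hedged answer sketch: pvresize will not relocate anything on its own; it refuses to shrink a PV whose tail extents are in use, so those extents have to be evacuated first. Sizes below are illustrative, the ext shrink must happen unmounted (so from a live CD), and step 2 assumes sdb1 has room to absorb the moved extents:
      # 1. shrink the filesystem and LV a little to create free extents
      lvresize -r -L -1G /dev/<vg>/<rootlv>
      # 2. migrate extents off sda1's tail, then shrink the PV
      pvmove --alloc anywhere /dev/sda1
      pvresize --setphysicalvolumesize 464.5G /dev/sda1
      # 3. shrink sda1 and grow sda2/sda5 with parted, then resize2fs /boot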


  • All Xen domU LVM volumes corrupt after reboot

    - by zcs
    I'm running a Debian Squeeze dom0, and after rebooting it, all 7 of my domUs have data corruption. Each is set up as an ext3 partition directly on a separate lvm2 volume. None of the LVM volumes will mount; all have bad superblocks. I've tried e2fsck with each backup superblock, to no avail. What else can I try? Each domU has two LVM volumes connected to it, one for the disk and one for swap. The disk is mounted at root, formatted as a normal ext3 partition, as a xen-blk device. The volumes are never mounted outside of the guest OS. The domUs run Ubuntu 11.04, set up using the instructions here. I'm not sure whether they shut down properly; all I know is they were corrupt after I issued a clean 'reboot' on the dom0. Here's a sample Xen config file; the rest are the same except for name, vcpus, memory, vif and disk:
      name = 'load1'
      vcpus = 2
      memory = 512
      vif = ['bridge=prbr0', 'bridge=eth0']
      disk = ['phy:/dev/VolGroup00/load1-disk,xvda,w','phy:/dev/VolGroup00/load1-swap,xvdb,w']

      #============================================================================
      # Debian Installer specific variables

      def check_bool(name, value):
          value = str(value).lower()
          if value in ('t', 'tr', 'tru', 'true'):
              return True
          return False

      global var_check_with_default
      def var_check_with_default(default, var, val):
          if val:
              return val
          return default

      xm_vars.var('install',
                  use='Install Debian, default: false',
                  check=check_bool)
      xm_vars.var("install-method",
                  use='Installation method to use "cdrom" or "network" (default: network)',
                  check=lambda var, val: var_check_with_default('network', var, val))

      # install-method == "network"
      xm_vars.var("install-mirror",
                  use='Debian mirror to install from (default: http://archive.ubuntu.com/ubuntu)',
                  check=lambda var, val: var_check_with_default('http://archive.ubuntu.com/ubuntu', var, val))
      xm_vars.var("install-suite",
                  use='Debian suite to install (default: natty)',
                  check=lambda var, val: var_check_with_default('natty', var, val))

      # install-method == "cdrom"
      xm_vars.var("install-media",
                  use='Installation media to use (default: None)',
                  check=lambda var, val: var_check_with_default(None, var, val))
      xm_vars.var("install-cdrom-device",
                  use='Installation media to use (default: xvdd)',
                  check=lambda var, val: var_check_with_default('xvdd', var, val))

      # Common options
      xm_vars.var("install-arch",
                  use='Debian mirror to install from (default: amd64)',
                  check=lambda var, val: var_check_with_default('amd64', var, val))
      xm_vars.var("install-extra",
                  use='Extra command line options (default: None)',
                  check=lambda var, val: var_check_with_default(None, var, val))
      xm_vars.var("install-installer",
                  use='Debian installer to use (default: network uses install-mirror; cdrom uses /install.ARCH)',
                  check=lambda var, val: var_check_with_default(None, var, val))
      xm_vars.var("install-kernel",
                  use='Debian installer kernel to use (default: uses install-installer)',
                  check=lambda var, val: var_check_with_default(None, var, val))
      xm_vars.var("install-ramdisk",
                  use='Debian installer ramdisk to use (default: uses install-installer)',
                  check=lambda var, val: var_check_with_default(None, var, val))
      xm_vars.check()

      if not xm_vars.env.get('install'):
          bootloader="/usr/sbin/pygrub"
      elif xm_vars.env['install-method'] == "network":
          import os.path
          print "Install Mirror: %s" % xm_vars.env['install-mirror']
          print "Install Suite: %s" % xm_vars.env['install-suite']
          if xm_vars.env['install-installer']:
              installer = xm_vars.env['install-installer']
          else:
              installer = xm_vars.env['install-mirror']+"/dists/"+xm_vars.env['install-suite'] + \
                  "/main/installer-"+xm_vars.env['install-arch']+"/current/images"
          print "Installer: %s" % installer
          print
          print "WARNING: Installer kernel and ramdisk are not authenticated."
          print
          if xm_vars.env.get('install-kernel'):
              kernelurl = xm_vars.env['install-kernel']
          else:
              kernelurl = installer + "/netboot/xen/vmlinuz"
          if xm_vars.env.get('install-ramdisk'):
              ramdiskurl = xm_vars.env['install-ramdisk']
          else:
              ramdiskurl = installer + "/netboot/xen/initrd.gz"
          import urllib
          class MyUrlOpener(urllib.FancyURLopener):
              def http_error_default(self, req, fp, code, msg, hdrs):
                  raise IOError("%s %s" % (code, msg))
          urlopener = MyUrlOpener()
          try:
              print "Fetching %s" % kernelurl
              kernel, _ = urlopener.retrieve(kernelurl)
              print "Fetching %s" % ramdiskurl
              ramdisk, _ = urlopener.retrieve(ramdiskurl)
          except IOError, _:
              raise
      elif xm_vars.env['install-method'] == "cdrom":
          arch_path = { 'i386': "/install.386", 'amd64': "/install.amd" }
          if xm_vars.env['install-media']:
              print "Install Media: %s" % xm_vars.env['install-media']
          else:
              raise OptionError("No installation media given.")
          if xm_vars.env['install-installer']:
              installer = xm_vars.env['install-installer']
          else:
              installer = arch_path[xm_vars.env['install-arch']]
          print "Installer: %s" % installer
          if xm_vars.env.get('install-kernel'):
              kernelpath = xm_vars.env['install-kernel']
          else:
              kernelpath = installer + "/xen/vmlinuz"
          if xm_vars.env.get('install-ramdisk'):
              ramdiskpath = xm_vars.env['install-ramdisk']
          else:
              ramdiskpath = installer + "/xen/initrd.gz"
          disk.insert(0, 'file:%s,%s:cdrom,r' % (xm_vars.env['install-media'],
                                                 xm_vars.env['install-cdrom-device']))
          bootloader="/usr/sbin/pygrub"
          bootargs="--kernel=%s --ramdisk=%s" % (kernelpath, ramdiskpath)
          print "From CD"
      else:
          print "WARNING: Unknown install-method: %s." % xm_vars.env['install-method']

      if xm_vars.env.get('install'):
          # Figure out command line
          if xm_vars.env['install-extra']:
              extras=[xm_vars.env['install-extra']]
          else:
              extras=[]
          # Reboot will just restart the installer since this file is not
          # reparsed, so halt and restart that way.
          extras.append("debian-installer/exit/always_halt=true")
          extras.append("--")
          extras.append("quiet")
          console="hvc0"
          try:
              if len(vfb) >= 1:
                  console="tty0"
          except NameError, e:
              pass
          extras.append("console="+ console)
          extra = str.join(" ", extras)
          print "command line is \"%s\"" % extra

      root
    There are two LVM logical volumes connected to each VM. Here's the fdisk -l output for the disk volume:
      Disk /dev/VolGroup00/VMNAME-disk: 8589 MB, 8589934592 bytes
      255 heads, 63 sectors/track, 1044 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00029c01

                             Device Boot  Start  End   Blocks   Id  System
      /dev/VolGroup00/VMNAME-disk1            1  1045  8386560  83  Linux
    And the swap volume:
      Disk /dev/VolGroup00/VMNAME-swap: 536 MB, 536870912 bytes
      37 heads, 35 sectors/track, 809 cylinders
      Units = cylinders of 1295 * 512 = 663040 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0004faae

                             Device Boot  Start  End  Blocks  Id  System
      /dev/VolGroup00/VMNAME-swap1            2   809  522240  82  Linux swap / Solaris

      Partition 1 has different physical/logical beginnings (non-Linux?):
        phys=(0, 32, 33) logical=(1, 21, 19)
      Partition 1 has different physical/logical endings:
        phys=(65, 36, 35) logical=(808, 4, 28)

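    A hedged observation from the fdisk output above: the guests partitioned their virtual disks, so each ext3 superblock lives inside partition 1 of the LV rather than at offset 0. Checking the bare LV from the dom0 would then report a bad superblock even over intact data. kpartx can expose the inner partition (names follow the question; the generated mapping name is approximate):
      kpartx -av /dev/VolGroup00/load1-disk
      # creates something like /dev/mapper/VolGroup00-load1--disk1
      e2fsck -f /dev/mapper/VolGroup00-load1--disk1
      mount /dev/mapper/VolGroup00-load1--disk1 /mnt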

  • LVM mirror attempt results in "Insufficient free space"

    - by MattK
    Attempting to add a disk to mirror an LVM volume on CentOS 7 always fails with "Insufficient free space: 1 extents needed, but only 0 available". Having searched for a solution, I have tried specifying disks, multiple logging options, and adding a 3rd log partition, but have not found a solution. Not sure if I am making a rookie mistake, or if there is something more subtle wrong (I am more familiar with ZFS, new to using LVM):
      # lvconvert -m1 centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog --alloc anywhere centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --mirrorlog mirrored --alloc anywhere centos_bi/home /dev/sda2
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog --alloc anywhere centos_bi/home /dev/sdi2 /dev/sda2
      Insufficient free space: 1 extents needed, but only 0 available
    The two disks are of the same size and have identical partition layouts via "sfdisk -d /dev/sdi > part_table; sfdisk /dev/sda < part_table". The current configuration is detailed below.
      # pvs
        PV         VG        Fmt  Attr PSize   PFree
        /dev/sda1  centos_bi lvm2 a--  496.00m 496.00m
        /dev/sda2  centos_bi lvm2 a--  465.27g 465.27g
        /dev/sdi2  centos_bi lvm2 a--  465.27g       0
      # vgs
        VG        #PV #LV #SN Attr   VSize   VFree
        centos_bi   3   3   0 wz--n- 931.02g 465.75g
      # lvs -a -o +devices
        LV   VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert Devices
        home centos_bi -wi-ao---- 391.64g                                              /dev/sdi2(6050)
        root centos_bi -wi-ao----  50.00g                                              /dev/sdi2(106309)
        swap centos_bi -wi-ao----  23.63g                                              /dev/sdi2(0)
      # pvdisplay
        --- Physical volume ---
        PV Name       /dev/sdi2
        VG Name       centos_bi
        PV Size       465.27 GiB / not usable 3.00 MiB
        Allocatable   yes (but full)
        PE Size       4.00 MiB
        Total PE      119109
        Free PE       0
        Allocated PE  119109

        --- Physical volume ---
        PV Name       /dev/sda2
        VG Name       centos_bi
        PV Size       465.27 GiB / not usable 3.00 MiB
        Allocatable   yes
        PE Size       4.00 MiB
        Total PE      119109
        Free PE       119109
        Allocated PE  0

        --- Physical volume ---
        PV Name       /dev/sda1
        VG Name       centos_bi
        PV Size       500.00 MiB / not usable 4.00 MiB
        Allocatable   yes
        PE Size       4.00 MiB
        Total PE      124
        Free PE       124
        Allocated PE  0
      # vgdisplay
        --- Volume group ---
        VG Name               centos_bi
        System ID
        Format                lvm2
        Metadata Areas        3
        Metadata Sequence No  10
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                3
        Open LV               3
        Max PV                0
        Cur PV                3
        Act PV                3
        VG Size               931.02 GiB
        PE Size               4.00 MiB
        Total PE              238342
        Alloc PE / Size       119109 / 465.27 GiB
        Free  PE / Size       119233 / 465.75 GiB
      # lvdisplay
        --- Logical volume ---
        LV Path                 /dev/centos_bi/swap
        LV Name                 swap
        VG Name                 centos_bi
        LV Write Access         read/write
        LV Creation host, time  localhost, 2014-08-07 16:34:34 -0400
        LV Status               available
        # open                  2
        LV Size                 23.63 GiB
        Current LE              6050
        Segments                1
        Allocation              inherit
        Read ahead sectors      auto
        - currently set to      256
        Block device            253:1

        --- Logical volume ---
        LV Path                 /dev/centos_bi/home
        LV Name                 home
        VG Name                 centos_bi
        LV Write Access         read/write
        LV Creation host, time  localhost, 2014-08-07 16:34:35 -0400
        LV Status               available
        # open                  1
        LV Size                 391.64 GiB
        Current LE              100259
        Segments                1
        Allocation              inherit
        Read ahead sectors      auto
        - currently set to      256
        Block device            253:2

        --- Logical volume ---
        LV Path                 /dev/centos_bi/root
        LV Name                 root
        VG Name                 centos_bi
        LV Write Access         read/write
        LV Creation host, time  localhost, 2014-08-07 16:34:37 -0400
        LV Status               available
        # open                  1
        LV Size                 50.00 GiB
        Current LE              12800
        Segments                1
        Allocation              inherit
        Read ahead sectors      auto
        - currently set to      256
        Block device            253:0
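    A hedged suggestion for this exact symptom: the legacy 'mirror' segment type insists on placing its one-extent log by its own rules, while the newer md-backed raid1 type keeps per-leg metadata and sidesteps that allocator entirely. On CentOS 7 this is usually the path of least resistance:
      lvconvert --type raid1 -m1 centos_bi/home /dev/sda2
      # raid1 stores metadata alongside each leg, so no separate
      # log device (the failing '1 extent') is needed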


  • Kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is:
      - VMware ESXi environment
      - cPanel installed
      - CentOS release 6.5 (Final)
      - 4 CPUs
      - 2G RAM
      - 2x VM disks, 100G each
      - LVM system
    This was my previous storage configuration (the server was working fine at this time):
      # df -h
      Filesystem                     Size  Used Avail Use% Mounted on
      /dev/mapper/vg_test01-lv_root   95G  1.4G   88G   2% /
      tmpfs                          939M     0  939M   0% /dev/shm
      /dev/sdb1                       99G  188M   94G   1% /tmp
      /dev/sda1                      485M   54M  407M  12% /boot
    My web developer asked me to merge the /tmp and / disks, so this is what I did (as shown in the sketch after this list):
      1. Delete the /dev/sdb1 partition using fdisk
      2. Create a new partition of LVM type on /dev/sdb1 using fdisk
      3. Create a new physical volume -- pvcreate /dev/sdb1
      4. Extend the volume group -- vgextend /dev/sdb1 vg_test01
      5. Extend the logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root
      6. Resize the filesystem -- resize2fs /dev/vg_test01/lv_root
    This is the new configuration:
      # df -h
      Filesystem                     Size  Used Avail Use% Mounted on
      /dev/mapper/vg_test01-lv_root  213G  105G   97G  52% /
      tmpfs                          939M     0  939M   0% /dev/shm
      /dev/sda1                      485M   54M  407M  12% /boot
      /usr/tmpDSK                    4.0G  145M  3.6G   4% /tmp
    Since applying the new settings, my web server has been throwing kernel panics quite often (around every 2 days). The message says:
      INFO: task <taskName>:<pid> blocked for more than 120 seconds.
    The affected processes that I can see from the console are: mysqld, queueprocd, httpd, suphp, vmtoolsd, loop0 and auditd. The only way I can fix this is by resetting (cold reboot) the VM. I don't think it is a hardware issue, as sar is not showing any bottleneck:
      Linux 2.6.32-431.3.1.el6.x86_64 (test01)  08/22/2014  _x86_64_  (4 CPU)

      12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
      12:10:01 AM  all  26.86   0.01     0.98     0.57    0.00  71.57
      12:20:01 AM  all   1.78   0.02     1.03     0.08    0.00  97.09
      12:30:01 AM  all  26.34   0.02     0.85     0.05    0.00  72.74
      12:40:01 AM  all  27.12   0.01     1.11     1.22    0.00  70.54
      12:50:01 AM  all   1.59   0.02     0.94     0.13    0.00  97.32
      01:00:01 AM  all  26.10   0.01     0.77     0.04    0.00  73.07
      01:10:01 AM  all  27.51   0.01     1.16     0.14    0.00  71.18
      01:20:01 AM  all   1.80   0.07     1.06     0.08    0.00  96.99
      01:30:01 AM  all  26.19   0.01     0.78     0.05    0.00  72.96
      01:40:01 AM  all  26.62   0.02     0.87     0.05    0.00  72.45
      01:50:02 AM  all   1.35   0.01     0.87     0.02    0.00  97.75
      02:00:01 AM  all  26.11   0.02     0.69     0.02    0.00  73.17
      02:10:01 AM  all  26.73   0.02     0.89     0.14    0.00  72.21
      02:20:01 AM  all   1.45   0.01     0.92     0.04    0.00  97.58
      02:30:01 AM  all  26.59   0.01     1.06     0.03    0.00  72.31
      02:40:01 AM  all  26.27   0.01     0.72     0.05    0.00  72.95
      02:50:01 AM  all   0.86   0.01     0.50     0.09    0.00  98.53
      03:00:01 AM  all  25.61   0.02     0.39     0.03    0.00  73.96
      03:10:01 AM  all  26.30   0.08     0.66     0.14    0.00  72.82
      03:20:01 AM  all   0.81   0.01     0.51     0.04    0.00  98.63
      03:30:02 AM  all  26.15   0.02     0.53     0.07    0.00  73.24
      03:40:01 AM  all  26.06   0.01     0.47     0.04    0.00  73.42
      03:50:01 AM  all   0.96   0.02     0.51     0.03    0.00  98.48
      Average:     all  17.69   0.02     0.79     0.14    0.00  81.36

      06:58:14 AM  LINUX RESTART

      07:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
      07:10:01 AM  all   1.04   0.02     0.57     0.95    0.00  97.42
      07:20:02 AM  all   0.66   0.01     0.39     0.06    0.00  98.87
      07:30:01 AM  all  25.71   0.01     0.45     0.16    0.00  73.67
      07:40:01 AM  all  25.88   0.01     0.35     0.08    0.00  73.68
      07:50:01 AM  all   1.13   0.02     0.55     0.11    0.00  98.19
    As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to bring the website up again. I would appreciate any help or assistance fixing this issue. Thank you very much.
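    For comparison, a hedged sketch of the canonical merge sequence (note the argument order of vgextend: the volume group comes first, then the new PV):
      pvcreate /dev/sdb1
      vgextend vg_test01 /dev/sdb1           # VG name before the PV
      lvextend -l +100%FREE /dev/vg_test01/lv_root
      resize2fs /dev/vg_test01/lv_root       # online grow is fine for ext4
    Separately, the /usr/tmpDSK entry in the new df output appears to be cPanel's loopback /tmp (its "securetmp" feature), which would match loop0 showing up among the blocked tasks; that layer may be worth ruling out on its own.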


  • How does Ubuntu LVM encryption work?

    - by Sridhar Ratnakumar
    While installing Ubuntu Server, during the partitioning step, one of the options is "use entire disk and set up encrypted LVM" (see screenshots). Can anyone explain how it works under the hood? What kinds of tools/technologies/algorithms are used? How exactly does this prevent thieves from getting access to the data on the hard disk?
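    Roughly what the installer builds, sketched with the underlying commands (device and VG names illustrative): a dm-crypt/LUKS container on the disk with the LVM physical volume inside it, so root and swap are both encrypted while /boot stays cleartext.
      cryptsetup luksFormat /dev/sda5           # LUKS header; an AES master key
                                                # is wrapped by the passphrase
      cryptsetup luksOpen /dev/sda5 sda5_crypt  # plaintext view at /dev/mapper/sda5_crypt
      pvcreate /dev/mapper/sda5_crypt
      vgcreate ubuntu-vg /dev/mapper/sda5_crypt
      lvcreate -n root   -L 18G ubuntu-vg
      lvcreate -n swap_1 -L  2G ubuntu-vg
    Without the passphrase the drive holds only ciphertext, which is what defeats a thief who walks off with the hardware; it offers no protection while the system is unlocked and running.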


  • LVM snapshot size

    - by Devator
    I currently have 2 volumes:
      [root@compute4 /]# lvscan
      ACTIVE  '/dev/vps/vm108_img' [30.00 GB] inherit
      ACTIVE  '/dev/vps/vm109_img' [90.00 GB] inherit
    Now, I use the LVM snapshot function to create backups. My question is, what size does the snapshot need to be? At least the same as the volume in question, or can it be a lot smaller, with only the changes being saved into it?
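    Snapshots are copy-on-write, so the snapshot LV only needs room for the blocks that change while it exists; it can be far smaller than the origin. A sketch at roughly ten percent (the snapshot name is illustrative):
      lvcreate --snapshot --size 9G --name vm109_snap /dev/vps/vm109_img
      lvs /dev/vps/vm109_snap   # the Data%/Snap% column shows CoW usage
      # if usage reaches 100% the snapshot is invalidated, so monitor or extend it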


  • Can't create LVM due to: not found (or ignored by filtering)

    - by James
    I'm planning to use LVM for KVM, and when I try to create a VG it fails, so how can I create my VG and LV? Thanks
      [root@server ~]# vgcreate virtual-machines /dev/sda
      Device /dev/sda not found (or ignored by filtering).
      Unable to add physical volume '/dev/sda' to volume group 'virtual-machines'.
      [root@server ~]# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda3       2.0T  929G  976G  49% /
      tmpfs           3.9G  124K  3.9G   1% /dev/shm
      /dev/sda1       194M   57M  128M  31% /boot
      [root@server ~]# pvscan
      No matching physical volumes found
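    A hedged reading of the failure: /dev/sda is the whole disk and already carries the mounted partitions sda1 and sda3, so LVM filters it out; the PV needs to go on an unused partition or another disk. With a hypothetical spare partition /dev/sda4:
      pvcreate /dev/sda4                    # sda4 is an assumed free partition
      vgcreate virtual-machines /dev/sda4
      lvcreate -n vm01 -L 50G virtual-machines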


  • How to verify /boot partition on encrypted LVM setup

    - by ml43
    Isn't an unencrypted /boot partition a weakness of an encrypted LVM setup? An attacker could install malware on the /boot partition so that it sniffs the encryption password the next time the system boots. This could even be done by malware installed in Windows on a dual-boot system, without any physical access. Am I missing some protection scheme, or is there at least a way to verify that the /boot contents didn't change since the last system shutdown?
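    One low-tech verification scheme, sketched: checksum everything in /boot (plus the boot sector) at shutdown, keep the list off-machine, and verify from trusted media before typing the passphrase. The file names here are illustrative:
      # at shutdown, from the running system:
      find /boot -type f -exec sha256sum {} + | sort > boot.sha256
      dd if=/dev/sda bs=512 count=1 | sha256sum >> boot.sha256   # boot sector too
      # store boot.sha256 on a USB key; later, from a trusted live system:
      sha256sum -c boot.sha256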


  • Testdisk won't list files for an ext4 partition inside an LVM volume inside a LUKS partition

    - by user1598585
    I have accidentally deleted a file that I want to recover. The partition is an ext4 partition inside an LVM volume that is encrypted with dm-crypt/LUKS. The encrypted LUKS partition is /dev/sda2, which contains a physical volume with a single volume group, mapped to /dev/mapper/system. The logical volume holding the ext4 partition is mapped to /dev/mapper/system-home. Running # testdisk /dev/mapper/system-home will notice it as an ext4 partition, but tells me that the partition seems damaged when I try to list the files. If I run # testdisk /dev/mapper/system it will detect all the partitions, but the same happens if I try to list their files. Am I doing something wrong, or is this a known bug? I have searched but haven't found any clue.
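    For a single deleted file, a filesystem-level undelete tool may succeed where TestDisk's file lister fails; a hedged sketch with extundelete (the file path is illustrative, and the filesystem must stay unmounted so the freed blocks aren't reused):
      umount /home
      extundelete /dev/mapper/system-home --restore-file user/Documents/lost.txt
      # recovered files land under ./RECOVERED_FILES/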


  • Damaged XenServer Storage LVM partition table

    - by Fiolek
    I have a home server running under XenServer control, with 3x 1 TB discs inside: one for XenServer and two mirrored (using Intel's fakeRAID and dmraid) for the VMs and user data (but now I think the RAID didn't work). I tried to pass a PCI card to a VM using PCI passthrough, and I read somewhere that I needed to recompile the kernel with the pciback module, but something went wrong (I made a mistake in /boot/extlinux.conf and the server couldn't boot) and I had to use a GParted live CD (I already had it on a USB key) to correct this. But when I re-ran the server, all the VDIs were gone. I have completely no idea what could have gone wrong. I tried to repair the RAID using dmraid -R in the hope that everything would return to normal, but now I think this did more harm than good (and corrupted the rest of the LVM table...). Is there any possibility of recovering this SR, or at least the data from one (~100 GB) of the VDIs? I also want to apologise for my English; I'm not from an English-speaking country and I'm only 16 years old, so I haven't had "time" to learn it properly (school isn't a good place to do this).
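    A cautious first step, sketched: LVM stores its volume group metadata as readable text near the start of each PV and archives old copies under /etc/lvm, so the mapping can sometimes be restored without touching the VDI contents. The VG name is a placeholder, and everything below is read-only except the final restore:
      vgcfgrestore --list <sr-vg>                    # list archived metadata versions
      dd if=/dev/sdb of=/tmp/pv-head bs=1M count=1   # read-only peek at the PV header
      strings /tmp/pv-head | less                    # the VG metadata is plain text
      vgcfgrestore -f /etc/lvm/archive/<file>.vg <sr-vg>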


  • Reversing an lvreduce of LVM to original size

    - by praspa
    On a RHEL system that uses LVM2 with 4K blocks, I have been successful in reducing the LV, but am trying to work out the steps to reverse the operation so that the LV returns to its original size. I am using these steps to reduce the LV by 1 GB:
      # umount /foo
      # e2fsck -f /dev/mylvm/foo
      # resize2fs /dev/mylvm/foo <current LV block count - 1GB/4K>
      # lvreduce --size <current #GB - 1GB> /dev/mylvm/foo
    Then, to reverse the reduction:
      # lvextend --size <original #GB> /dev/mylvm/foo
      # resize2fs /dev/mylvm/foo
    The reversal gets close to the original size; 'df -h' reports that it ends up about 0.1 GB shy of the original. Using these utilities, is there a better procedure to shrink and grow the LV so that the original state can be recovered exactly?
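    A hedged alternative that avoids the by-hand block arithmetic: let lvresize drive resize2fs itself, so the filesystem always matches the LV exactly and the round trip is symmetric:
      lvresize -r -L -1G /dev/mylvm/foo   # shrinks the fs first, then the LV
      lvresize -r -L +1G /dev/mylvm/foo   # grows the LV, then fills it with the fs
    Comparing 'lvdisplay' extent counts before and after is also a more reliable check than 'df -h', which reflects filesystem metadata overhead as well as size.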


  • Linux LVM snapshot commit or revert?

    - by Shewfig
    Hi, I'm about to perform an experimental upgrade on my CentOS 5 server. If the upgrade fails, I want to be able to back out the changes to the filesystem. This scenario seems similar to the example in Section 3.8 of the LVM HOWTO for LVM2 read-write snapshots -- but the example is rather lacking in actual how-to.
      1. How would I commit the changes, merging them back into the original partition?
      2. How would I revert the changes, restoring the filesystem back to its original state? Should I assume that I'll need to restart several services, if not outright reboot?
      3. Is it possible to snapshot only certain directories on a partition, or is it a partition-wide operation?
    Thanks...
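    A sketch of the mechanics, with the caveat that snapshot merging needs lvm2 >= 2.02.58 on a 2.6.33+ kernel, which a stock CentOS 5 install likely predates. Snapshots are also per-LV, so question 3 is answered: the operation is effectively partition-wide. VG and LV names are placeholders:
      lvcreate -s -n pre_upgrade -L 5G /dev/vg/root   # take the snapshot
      # commit: keep the changed origin and simply drop the snapshot
      lvremove /dev/vg/pre_upgrade
      # revert: merge the snapshot back into the origin (the merge completes
      # when the origin is next activated, e.g. after a reboot)
      lvconvert --merge /dev/vg/pre_upgrade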


  • Shrink a mounted LVM partition

    - by javanix
    I fear I already know the answer to this question, but here goes. I need to carve out a new partition on a running system. /var is mounted from an LVM volume (hdd1_vg-var) and has only 3% used disk space. / is mounted separately (hdd1_vg-root) and has about 80% used disk space:
      Filesystem             Size  Used Avail Use% Mounted on
      /dev/**/hdd1_vg-root   2.0G  1.4G  481M  75% /
      /dev/**/hdd1_vg-var     33G  699M   31G   3% /var
    Unfortunately, I don't have any free extents to grow this partition organically -- vgdisplay shows:
      Total PE         10000
      Alloc PE / Size  10000 / 39.06 GB
      Free  PE / Size  0 / 0
    So, seeing that I have all this free disk space on /var, can I shrink /var without unmounting it, or is this just a pipe dream? I am really hoping to be able to do this work on a running system -- unmounting would of course not be difficult, but it would interfere with system functionality.
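    The short, hedged answer: ext2/3/4 can be grown online but only shrunk offline, so /var must come unmounted (or the work done from a rescue shell after stopping services that write to it). A sketch of the offline path; the target size is illustrative and the device paths follow the df output above:
      umount /var
      e2fsck -f /dev/mapper/hdd1_vg-var
      resize2fs /dev/mapper/hdd1_vg-var 20G
      lvreduce -L 20G /dev/mapper/hdd1_vg-var   # never reduce below the fs size
      mount /var
      # the freed extents can now back a new LV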


  • Debian RAID 5 LVM

    - by Develman
    I want to set up a Debian server mainly used as data storage. I have 4 devices:
      /dev/sda (160GB) - Debian is installed on it
      /dev/sdb, /dev/sdc, /dev/sdd (all 500GB) - created a RAID 5 array with them
    Now I am not sure how to go on. Is it useful to create LVM on top of the RAID 5 array /dev/md0? How can I do this? Is there a good HowTo? Or can I just create a filesystem on the RAID 5 array and create different partitions?
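    One common arrangement, sketched: make md0 a single PV and carve the "partitions" as logical volumes, which stay resizable later (VG name, LV names and sizes are all illustrative):
      pvcreate /dev/md0
      vgcreate data /dev/md0
      lvcreate -n media  -L 500G data
      lvcreate -n backup -L 300G data
      mkfs.ext4 /dev/data/media
      mkfs.ext4 /dev/data/backup
    A plain filesystem directly on /dev/md0 works too, but it cannot later be split into independently resizable pieces the way LVs can.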


  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). The disk is getting this error:
      Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
      Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4
    which is causing problems with all of the LVs on it. pvs shows the PV as "unknown device". I can ls the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables are secure on the USB drive. What should I do to get this back up and running for the meanwhile? Should I unmount each LV and run fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname?
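    Before unmounting and fsck'ing, it may be worth confirming which LV dm-3 actually is and whether the kernel still sees the USB disk at all; a hedged sketch:
      ls -l /dev/mapper                     # the symlinks reveal which LV is dm-3
      lvs -a -o +devices,lv_kernel_minor    # kernel minor 3 here means dm-3
      dmesg | grep -i usb                   # did the drive drop off the bus?
      pvs                                   # 'unknown device' means the PV itself vanished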


  • Getting an error when mounting LVM snapshot

    - by Sandra
    I have migrated a file-based Xen guest to LVM using:
      dd bs=1M if=/dev/zero of=/dev/vg00/vm10
      qemu-img convert ~/vm10.qcow2 -O raw /dev/vg00/vm10
    and changed the Xen domain file for the VM to use the LV instead of the old file. The VM boots up, and now on the Xen host I would like to make a snapshot of the running VM:
      # lvcreate --size 10G --snapshot --name vm10-snapshot /dev/vg00/vm10
      Logical volume "vm10-snapshot" created
      # mount /dev/vg00/vm10-snapshot /mnt/snapshot/
      mount: you must specify the filesystem type
      # dmesg |tail
      EXT3 FS on dm-3, internal journal
      EXT3-fs: mounted filesystem with ordered data mode.
      hfs: unable to find HFS+ superblock
      VFS: Can't find ext3 filesystem on dev dm-4.
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock
      VFS: Can't find ext3 filesystem on dev dm-2.
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock
    For some reason it can't see that it is an EXT3 filesystem. I have also tried to mount with -t ext3, but it still didn't mount.
      # lvdisplay
      --- Logical volume ---
      LV Name                /dev/vg00/vm10
      VG Name                vg00
      LV UUID                I1y1vQ-Bac5-5jwW-melh-TY5h-l9NO-qaelKk
      LV Write Access        read/write
      LV snapshot status     source of /dev/vg00/vm10-snapshot [active]
      LV Status              available
      # open                 2
      LV Size                8.00 GB
      Current LE             2048
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:2

      --- Logical volume ---
      LV Name                /dev/vg00/vm10-snapshot
      VG Name                vg00
      LV UUID                GWsOx3-TPpr-GW64-uiMz-u1YN-QU4h-l0Kala
      LV Write Access        read/write
      LV snapshot status     active destination for /dev/vg00/vm10
      LV Status              available
      # open                 0
      LV Size                8.00 GB
      Current LE             2048
      COW-table size         10.00 GB
      COW-table LE           2560
      Allocated to snapshot  0.00%
      Snapshot chunk size    4.00 KB
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:4
    What could the problem be?
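    A hedged explanation that fits the dmesg lines: the qcow2 was a whole-disk image, so the snapshot (dm-4) begins with a partition table and the ext3 filesystem sits inside partition 1 rather than at block 0. kpartx can map the inner partition (the generated mapping name is approximate):
      kpartx -av /dev/vg00/vm10-snapshot
      # maps something like /dev/mapper/vg00-vm10--snapshot1
      mount /dev/mapper/vg00-vm10--snapshot1 /mnt/snapshot/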


  • Very slow write performance on Debian 6.0 (AMD64) with DMCRYPT/LVM/RAID1

    - by jdelic
    I'm seeing very strange performance characteristics on one of my servers. This server is running a simple two-disk software-RAID1 setup with LVM spanning /dev/md0. One of the logical volumes, /dev/vg0/secure, is encrypted using dm-crypt with LUKS and mounted with the sync and noatime flags. Writing to that volume is incredibly slow, at 1.8 MB/s, and the CPU usage stays near 0%. There are 8 crypto/1-8 kernel threads running (it's an Intel quad-core CPU). I hope that someone on Server Fault has seen this before :-(.
      uname -a
      2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux
    Interestingly, when I read from the device I get good performance numbers. Reading without encryption:
      $ dd if=/dev/vg0/secure of=/dev/null bs=64k count=100000
      100000+0 records in
      100000+0 records out
      6553600000 bytes (6.6 GB) copied, 68.8951 s, 95.1 MB/s
    Reading with encryption:
      $ dd if=/dev/mapper/secure of=/dev/null bs=64k count=100000
      100000+0 records in
      100000+0 records out
      6553600000 bytes (6.6 GB) copied, 69.7116 s, 94.0 MB/s
    However, when I try to write to the device:
      $ dd if=/dev/zero of=./test bs=64k
      8809+0 records in
      8809+0 records out
      577306624 bytes (577 MB) copied, 321.861 s, 1.8 MB/s
    Also, when I read I see CPU usage; when I write, the CPU stays at almost 0% usage. Here is the output of cryptsetup luksDump:
      LUKS header information for /dev/vg0/secure

      Version:        1
      Cipher name:    aes
      Cipher mode:    cbc-essiv:sha256
      Hash spec:      sha1
      Payload offset: 2056
      MK bits:        256
      MK digest:      dd 62 b9 a5 bf 6c ec 23 36 22 92 4c 39 f8 d6 5d c1 3a b7 37
      MK salt:        cc 2e b3 d9 fb e3 86 a1 bb ab eb 9d 65 df b3 dd
                      d9 6b f4 49 de 8f 85 7d 3b 1c 90 83 5d b2 87 e2
      MK iterations:  44500
      UUID:           a7c9af61-d9f0-4d3f-b422-dddf16250c33

      Key Slot 0: ENABLED
          Iterations:          178282
          Salt:                60 24 cb be 5c 51 9f b4 85 64 3d f8 07 22 54 d4
                               1a 5f 4c bc 4b 82 76 48 d8 a2 d2 6a ee 13 d7 5d
          Key material offset: 8
          AF stripes:          4000
      Key Slot 1: DISABLED
      Key Slot 2: DISABLED
      Key Slot 3: DISABLED
      Key Slot 4: DISABLED
      Key Slot 5: DISABLED
      Key Slot 6: DISABLED
      Key Slot 7: DISABLED
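    A hedged suspicion given an idle CPU and 1.8 MB/s: the sync mount option forces synchronous writes, which serializes small requests through the dm-crypt + RAID1 stack. A quick A/B test (the mount point is a placeholder):
      # write through the current (sync) mount:
      dd if=/dev/zero of=./test bs=64k count=5000 conv=fsync
      # remount async and repeat; a jump to near-disk speed points at 'sync':
      mount -o remount,async /path/to/secure
      dd if=/dev/zero of=./test2 bs=64k count=5000 conv=fsync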


  • Recovering a mdadm+lvm+ext4 partition with read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM technology for its filesystems. I do have a backup of most of the contents, but not of the very last changes, and if possible I'd like to recover that from this failing disk. The disk (a 'green drive' WD10EARS, 1 TB in size) throws this kind of error:
      Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
      Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
      Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
      Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
      Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
      Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
      Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
      Oct 3 12:00:41 kernel: [ 3625.650000]          res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
      Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }
    However, while testing with dd, I noticed that if I skip the first 4 kB the read seems to be OK, i.e. a command like
      dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1
    doesn't return any read error. Supposing that there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as I mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, except for the first 4 kB?
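    GNU ddrescue is built for exactly this copy-around-a-bad-spot job; a hedged sketch cloning onto a same-size or larger partition on a healthy disk (the target device is a placeholder):
      ddrescue -f /dev/sde5 /dev/sdX5 /root/rescue.map   # /dev/sdX5 = healthy target
      # unreadable sectors stay zero-filled in the copy; the mdadm/LVM metadata
      # and the ext4 superblock can then be repaired on the clone, not the original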

