Search Results

Search found 398 results on 16 pages for 'lvm'.

  • Solaris to Linux conversion: Use VxFS or GFS?

    - by w00t
    We're a Solaris shop looking at Red Hat Enterprise Linux, and one of the things we're wondering is whether we should keep Veritas Volume Manager + FileSystem, go with LVM+ext3, or go with Red Hat's preferred cluster filesystem solution, GFS. One of the things we like about Veritas is that it can use Veritas Volume Replicator to keep a remote copy of important filesystems. That functionality seems to be missing from Red Hat; DRBD doesn't seem to be packaged in RHEL. So my questions are: Does anybody use VxFS/VxVM/VVR on Linux? Thoughts, experiences? How does it compare with LVM+ext3? Anybody using GFS? Thoughts, experiences? Do you do remote replication for disaster recovery, and if so, how? Is there a standard Red Hat way?

    Read the article

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL server that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups. At the moment I simply use mysqldump with a few arguments, and it's proven to be fine... until now. mysqldump is a quick-and-simple method, but a slow one, and I believe we've outgrown it. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution. Short version: I have some fairly large MySQL databases I need to back up; the current mysqldump-based method is inefficient and slow (and causing issues); I'm looking into something such as mk-parallel-dump or LVM snapshots. Any recommendations or ideas would be appreciated - since I have to redo how we're doing things, I'd rather have it done properly and as efficiently as possible :).
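
    A rough sketch of the LVM-snapshot approach (volume group, sizes and paths are placeholders; the FLUSH TABLES WITH READ LOCK has to be held by one session while the snapshot is created - tools such as mylvmbackup automate exactly this dance):

      # in a mysql session: FLUSH TABLES WITH READ LOCK;   -- keep that session open
      lvcreate --size 10G --snapshot --name mysql-snap /dev/vg0/mysql
      # back in the mysql session: UNLOCK TABLES;
      mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
      rsync -a /mnt/mysql-snap/ /backup/mysql-$(date +%F)/
      umount /mnt/mysql-snap
      lvremove -f /dev/vg0/mysql-snap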

    Read the article

  • Mirroring a MySQL server with a different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to master later on. I previously used LVM snapshots and Percona XtraBackup for this purpose, but this time I've tuned the MySQL configuration in a way that prevents me from using these methods.
    Old server (backup):
      innodb_log_file_size = 256M
      innodb_log_files_in_group = 3
    New server (restore):
      innodb_log_file_size = 512M
      innodb_log_files_in_group = 2
    The XtraBackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB log files have a different size. Is there any way to avoid downtime in this case? I can't use mysqldump, as there are around 8000 databases and I would have to take the server down for a couple of hours. I was also thinking of using the old settings with XtraBackup and then changing them once the new server is promoted to master - less downtime, but I'm not sure if this will work? Thank you. Regards
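
    One commonly used approach (a sketch; paths and service commands are placeholders) is to restore with the old log settings and then recreate the InnoDB redo logs at the new size - on MySQL 5.1/5.5 this requires a clean shutdown before the log files are replaced:

      mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"    # ensure the shutdown fully flushes InnoDB
      service mysql stop
      mkdir -p /root/old-ib-logs
      mv /var/lib/mysql/ib_logfile* /root/old-ib-logs/   # keep the old logs until the server is confirmed healthy
      # edit my.cnf: innodb_log_file_size = 512M, innodb_log_files_in_group = 2
      service mysql start                                # InnoDB recreates the log files at the new size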

    Read the article

  • Make services not start automatically after reboot (as they require access to an encrypted partition)

    - by Binary255
    Hi, I use Ubuntu Server 10.04. After a reboot I more or less only want the server to be accessible over SSH. I will then log in and mount the encrypted partition myself, after which I will start the services that use it. How would I go about setting something like that up? (My first idea was to have everything except /boot in an encrypted LVM, but I never got logging in through SSH and mounting the LVM to work - initramfs was a bit too complicated for me. Otherwise I think that would have been the best solution.)
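
    A minimal sketch of one way to do this with classic init scripts (service names, the device and the mount point are placeholders; Upstart-managed jobs on 10.04 would need their /etc/init/*.conf start conditions edited instead):

      # one-time setup: take the services out of the boot sequence
      update-rc.d -f apache2 remove
      update-rc.d -f mysql remove

      # /usr/local/sbin/unlock-and-start.sh - run by hand over SSH after a reboot
      #!/bin/sh
      cryptsetup luksOpen /dev/sdb1 secure      # prompts for the LUKS passphrase
      mount /dev/mapper/secure /srv/secure
      service apache2 start
      service mysql start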

    Read the article

  • Software RAID1 with TRIM

    - by Penetal
    I have two Crucial C300 SSDs that I would like to use as the OS disks in my new home server. I have read around a little, and some places say that TRIM is simply not supported on any RAID config, hardware or software. On some other sites I have seen that new support has arrived for software RAID via LVM somehow, and this is what I'm curious about. Can I get RAID 1 and still have TRIM enabled on software RAID by abstracting it with LVM, or in any other way? I will most likely be using either Debian or CentOS.
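
    A sketch of the usual things to check, assuming a kernel recent enough to pass discards down through md and device-mapper (device and volume names are placeholders):

      hdparm -I /dev/sda | grep -i trim        # confirm each SSD reports TRIM support
      # /etc/lvm/lvm.conf:  issue_discards = 1   (LVM discards space freed by lvremove / lvreduce)
      # /etc/fstab:         add 'discard' to the filesystem's mount options, or run batched TRIM instead:
      fstrim -v /                              # needs a reasonably new util-linux and filesystem support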

    Read the article

  • Xen: How to install a bootloader for a domU (guest OS)?

    - by holms
    I tried to install GRUB for the guest OS (which is Debian) from the host (CentOS) with "grub-install". I tried with chroot, tried with debootstrap, tried with the netinstaller. CentOS is running on two RAID HDDs, under LVM. The LVM volumes are created, everything is formatted and works; only the bootloader problem is left. The netinstaller just throws a window with an error saying GRUB can't be installed, the debootstrap instructions are not clear to me here, and grub-install doesn't work either in the chroot or out of the chroot (grub-install /dev/mylvm/partition). Can somebody please explain how to install GRUB for a domU (guest) OS from CentOS?
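
    A sketch of the chroot route, assuming the guest's root LV is /dev/mylvm/guest-root and the domU sees a whole virtual disk rather than a bare partition (both names are placeholders); note that many Xen setups skip an in-guest bootloader entirely and boot the domU with pygrub or an external kernel instead:

      mount /dev/mylvm/guest-root /mnt/guest
      mount --bind /dev  /mnt/guest/dev
      mount --bind /proc /mnt/guest/proc
      mount --bind /sys  /mnt/guest/sys
      chroot /mnt/guest grub-install /dev/mylvm/guest-disk   # point at the *disk* the domU sees, not a partition
      chroot /mnt/guest update-grub                          # Debian: regenerate grub.cfg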

    Read the article

  • Resize2fs at 81h and counting

    - by Adam
    Setup: 12x 1 TB drives in a RAID6 (mdadm); cryptsetup (LUKS) running on top of the mdadm array; LVM running on the encrypted device; ext4 on the LVM. Background: I added a new drive to the RAID (increasing from 11 to 12 drives) and 'bubbled' up through the layers (mdadm, etc.) to resizing the ext4 filesystem. This machine is used as a centralized repository for photography and as a backup server (for both Windows and Mac machines), so bringing it down to add the drive and wait for the resizing wasn't really an option. So I started the resize operation several days ago. htop reports the resize2fs operation as having been running for 81h now. dmesg and syslog are both clear, and the drives are still accessible. The resize command reported that it had started an online resize of the filesystem, so the process IS running, and it is burning through 100% of one of my cores. Question: Is it normal for the operation to take this long, or has something gone horribly wrong? Where would I start looking for signs of trouble?
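
    A few non-destructive checks that can help distinguish "slow but progressing" from "wedged" (pidstat needs the sysstat package; the PID lookup is just an example):

      iostat -x 5                                    # are the member disks actually doing I/O?
      pidstat -d -p $(pgrep resize2fs) 5             # per-process read/write rates of the resize
      strace -c -e trace=read,write -p $(pgrep resize2fs)   # attach briefly; syscalls should keep flowing
      dmesg | tail                                   # any ext4/md/dm-crypt errors since the resize started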

    Read the article

  • GRUB menu will not show the first time I try to boot my Ubuntu Server 12.04 after it has been shut down for a long time

    - by user211477
    I am running into a booting issue after installing Ubuntu Server 12.04 LTS. The following are the symptoms of the problem.
    SYSTEM DESCRIPTION: Dual-core AMD Athlon 64; 3 disks: two SATA (one of them an SSD) and one PATA. Using LVM for disk partition management; /boot is not under LVM, the rest of the partitions are; / is on the SSD. The BIOS boot sequence is correct and points to the disk with /boot, and the boot loader is installed on this disk.
    SYMPTOMS:
    1. POST messages.
    2. Blinking cursor on the first line, then it moves to the second line.
    3. Screen flickers, then becomes black.
    4. Everything is unresponsive; hard reboot.
    5. POST messages will not show up on screen.
    6. Monitor displays a powersave message.
    7. Force shutdown the machine again.
    8. Shut off power to the machine for a few minutes.
    9. Restart the machine.
    10. POST messages show up.
    11. The GRUB menu shows up.
    12. Ubuntu Server 12.04 boots normally.
    13. From now on Ubuntu Server boots normally, until the machine is shut down for a long time (for example, 30 minutes).
    Repeat steps 1 through 13 once the machine is started after a long time.
    WHAT DID I TRY? I read several posts and have tried: radeon.modeset=0, setting the gfxmode, edd=0, nolacpi, and boot-repair. Nothing seems to work. In my search I did see only one post with this same symptom; unfortunately, I am not able to locate that post anymore. The interesting fact is that with this same machine configuration, if I install Ubuntu Desktop 12.04 then everything works fine. Any help will be appreciated.

    Read the article

  • Can't get a good install: 11.10 server

    - by jack
    I apparently screwed up my partitioning trying to get LVM and RAID1 going. The machine is an Intel dual-core desktop with 2 GB of RAM and 2 SATA drives, one 250 GB and the other 500 GB. This is a build for my school in N.E. Thailand. We have 20+ clients now, a website, and email. Our old server is dying fast and we are going to add another 12 stations next week. I really need some help here!
    1. The onboard gigabit Ethernet apparently uses the same driver as the Realtek 811c. I also installed a PCIe gigabit card, also an 811c. At several points eth0 has accessed the internet fine, but eth1 will not communicate.
    2. I saw a "fix" for this online, which was to run, as root: rmmod r8169. This immediately killed the working onboard card.
    3. I tried to re-install 11.10, figuring that would re-install r8169. However, I messed something up in my partitioning and can't get a clean boot now.
    6. So I think, after 12 or so re-installs and 2 days: I can get through it right if I can start over with clean drives, but I can't figure out how to empty them out, what with the soft-RAID and LVM partitions. It seems like I've had it going well, and then, trying to fix that one little problem, I go backwards. Please help! Please send email. - thanks
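
    A sketch of one way to wipe the old software-RAID and LVM metadata so the installer sees clean drives - run from a live/rescue environment; the device names are examples, and this destroys everything on those disks:

      vgchange -an                                    # deactivate any LVM volume groups
      mdadm --stop /dev/md0                           # repeat for every array listed in /proc/mdstat
      mdadm --zero-superblock /dev/sda1 /dev/sdb1     # erase md metadata on each member partition
      pvremove -ff /dev/sda1 /dev/sdb1                # drop LVM physical-volume labels, if any
      dd if=/dev/zero of=/dev/sda bs=512 count=2048   # blank the partition table area
      dd if=/dev/zero of=/dev/sdb bs=512 count=2048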

    Read the article

  • Boot error after clean Ubuntu 13.04 install: [Reboot and select proper boot device]

    - by IcarusNM
    I am having the same problem as this guy, where a fresh Ubuntu install completes beautifully but will not boot. I get the ASUS "Reboot and select proper boot device" error, first with Xubuntu 13.10 and, after finally giving up there, with Xubuntu 13.04; I am now back to regular Ubuntu 13.04. ASUS Z77 motherboard, Intel chipset, standard internal 500 GB SATA HD, 64-bit. All hardware is new, less than 3 months old. It was running Ubuntu 12.04 LTS great until I tried this upgrade. I have re-installed from scratch every which way: with LVM, without LVM; with the default partitions, with my own partitions; with ext3 or ext4; alongside, replace, upgrade. No difference. On the last two tries, I booted afterward from the same USB stick, downloaded and ran boot-repair, and now I guess I am off to the boot-repair support email with my URLs from that. It did all kinds of cool stuff but ultimately made no difference. I never got anything like this with Ubuntu 12.04. I've now probably re-installed Ubuntu 13.04 ten times, slightly differently each time. I finally found how to skip the language packs, so at least that sped things up! :) This starts from the ubuntu-13.04-desktop-amd64.iso and UNetbootin, as suggested in the official instructions for USB thumb drive creation from OSX. That part all works fine (booting the USB on the PC and trying Ubuntu and/or installing from there onto the PC HD). I have no CD drive on this PC, but I suppose I could get one; I would rather find some Linux install that works from USB like I've always done. After running boot-repair twice, in the ASUS BIOS I now see three different UEFI boot options in the priority list, all labeled exactly the same: ubuntu (P6: WDC WD5000AAKX-00U6AA0). Then there's a non-UEFI option: P6: WDC WD5000AAKX-00U6AA0 (476940MB). And a fifth option appeared after the first boot-repair: Windows Boot Manager (P6: WDC WD5000AAKX-00U6AA0). I have tried all 5 of these, and I get exactly the same error. I have never had Windows installed on this HD; ASUS is calling it Windows Boot Manager, but I presume that's a mistaken label for whatever boot-repair did. I can boot on USB and run GParted and it looks great; the partitions all look normal. I found another case of this online with no solution posted, and I can't find much else about it. Does it need a Master Boot Record wipe/redo? I'm not sure how.
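
    When booted from the live USB in UEFI mode, the firmware's boot entries can be inspected and pruned from Linux; a sketch (the entry numbers are examples - check the listing before deleting anything):

      sudo efibootmgr -v             # list the NVRAM boot entries and the paths they point to
      sudo efibootmgr -b 0003 -B     # delete a stale or duplicate entry by its Boot#### number
      sudo efibootmgr -o 0001,0000   # set the boot order explicitly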

    Read the article

  • How to disable the RAID in x3400 M2

    - by BanKtsu
    Hi, I just want to disable the default RAID in my IBM System x3400 M2 server (7837-24X); I have 3 SAS disk drives. I want to make them a JBOD ("Just a Bunch Of Disks"), because I want to install CentOS on drive 0 and use the other two for cache files for a Squid server. I disabled the RAID in the BIOS:
      System Settings / Adapters and UEFI drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)
      LSI Logic MPT Setup Utility: RAID Properties / Delete Array
    Later I booted the CentOS live CD and installed the OS on drive 0, with the other two set up like this:
      LVM Volume Groups
        vg_proxyserver 139508
          lv_root 51200 / ext4
          lv_home 84276 /home ext4
          lv_swap 4032
      Hard Drives
        sdb (/dev/sdb) free 140011
        sdc (/dev/sdc) free 140011
        sdd (/dev/sdd)
          sdd1 500 /boot ext4
          sdd2 139512 vg_proxyserver physical volume (LVM)
    But when I restart the server it gives me the error:
      Boot failed Hard Disk 0
      UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15130,0x0) ........ PXE-E18: Server response timeout.
      UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15132,0x0) ........ PXE-E18: Server response timeout.
    and the OS does not start. Is IBM forcing me to use RAID? Why?

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync but the time it takes to perform a cp -al of the tree locally on the backup machine. It takes more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
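
    One incremental change worth testing before switching filesystems: rsync's --link-dest builds the hard-linked snapshot while it syncs, which removes the separate cp -al pass entirely. A sketch (paths are placeholders, and older snapshots would be rotated the same way):

      mv /backup/vol1/daily.0 /backup/vol1/daily.1       # yesterday's snapshot becomes the link target
      rsync -a --delete \
            --link-dest=/backup/vol1/daily.1 \
            server:/vol1/ /backup/vol1/daily.0/
      # unchanged files in daily.0 are hard links into daily.1; only changed files consume new space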

    Read the article

  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to take snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits ZFS offers in terms of data integrity, and to this year's stable release of native ZFS support in Linux - http://zfsonlinux.org. ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers are using ext3 filesystems, not managed under LVM. Question in detail: So, I have two questions. (1) Which of the two filesystems would be the better choice, running on a Xen Linux guest, considering all three of the following aspects: snapshots, data integrity, and performance? (2) If ZFS is a viable option, is it practical to use ZRAID across Xen disk images to further enhance the solution for data integrity? Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
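
    For reference, the snapshot/rollback workflow the question is after looks roughly like this under ZFS (pool, dataset and snapshot names are placeholders):

      zfs snapshot tank/mail@2013-06-01      # instant, space-efficient snapshot
      zfs list -t snapshot                   # see what exists and how much space each snapshot holds
      zfs send -i tank/mail@2013-05-31 tank/mail@2013-06-01 | ssh backuphost zfs receive backup/mail
      zfs rollback tank/mail@2013-06-01      # revert the dataset to the snapshot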

    Read the article

  • Changes to GRUB in Ubuntu 10.04

    - by jdege
    I've been running CentOS 5 for some years. I've decided to upgrade to Ubuntu, and with 10.04 just out, this seemed like a good time. I'm a tad paranoid, so I started off with a new set of drives - one to install on, one to back up to, and one as a spare. I removed my existing CentOS 5 drives and did an install, and had no problems. I installed the server version and used the default full-disk LVM installation. Next, I copied my backup scripts over, edited them to work with the new configuration, and did a test backup. That worked fine as well. Then comes the real test: could I restore the backup onto the spare drive? (I won't put anything of importance on a system that doesn't have a reliable backup, and if I've never done a restore, it's not reliable.) I booted from a System Rescue CD (ver 1.5.3), with the spare drive as /dev/sda and the backup drive as /dev/sdb. I had no trouble partitioning, configuring LVM, formatting, making swap, or restoring the file systems. But when I got to restoring GRUB to the MBR, I ran into problems. My restore instructions from CentOS 5 said to run grub, then enter two commands: root (hd0,0) and setup (hd0). The first command exits with an error: "Checking if /boot/grub/stage1 exists ... no". I did some googling around and found that the GRUB 2 included in recent Ubuntus is very different from the GRUB 0.97 included in CentOS 5. One site suggested I use: grub-install --root-dir=/mnt/restore /dev/sda. That appeared to work, but when I booted from the drive, I ended up at a grub prompt. Any ideas as to what I need to do? It seems like a simple problem, but my attempts at searching out answers on the web are being swamped by references to the old version of GRUB. Help would be appreciated.
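
    With GRUB 2 the usual rescue-disc procedure is to chroot into the restored system and reinstall from there, so that grub.cfg and the device map are regenerated against the real root; a sketch (mount points and the volume group name are placeholders):

      mount /dev/vg_restore/root /mnt/restore
      mount /dev/sda1 /mnt/restore/boot          # if /boot is a separate partition
      mount --bind /dev  /mnt/restore/dev
      mount --bind /proc /mnt/restore/proc
      mount --bind /sys  /mnt/restore/sys
      chroot /mnt/restore grub-install /dev/sda
      chroot /mnt/restore update-grub            # regenerates /boot/grub/grub.cfg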

    Read the article

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain. The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write-intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1 MB). cryptsetup luksDump shows a payload offset of 4096, so I think the alignment is correct and fits the RAID chunk size. The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit, as expected, but by far not that much (write speed varying, but around 100 MB/s). Summary:
    Disks + RAID5: good
    Disks + RAID5 + ext4: good
    Disks + RAID5 + encryption: bad
    SSD + encryption + LVM + ext4: good
    The read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID? [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M). Edit: If this helps: I'm using Ubuntu 11.04 64-bit with Linux 2.6.38. Edit2: The performance stays approximately the same if I pass a block size of 4KB, 1MB or 10MB to dd.

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical box only has one PSU. The P2V only completed 99% of the way; we have a VMware ticket open, but they marked the ticket as low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and made by an outside provider. We do not have a document outlining the modifications. My question is: is it at all possible to copy the initrd off of the physical server, replace it on the virtual machine, and somehow have the virtual machine boot? Thanks for any input. Edit: I copied the initrd img from the physical machine and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg Edit2: From the init script inside the initrd:
      echo Scanning logical volumes
      lvm vgscan --ignorelockingfailure
      echo Activating logical volumes
      lvm vgchange -ay --ignorelockingfailure VolGroup00
      resume /dev/VolGroup00/LogVol01
      echo Creating root device.
      mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
      echo Mounting root filesystem.
      mount /sysroot
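
    Rather than carrying the physical box's initrd over, it is usually cleaner to regenerate one inside the VM (from the rescue chroot) with the virtual hardware's drivers forced in; a sketch for CentOS 5 - the module names are the common VMware SCSI ones and may need adjusting to the controller the VM is actually configured with:

      chroot /mnt/sysimage
      mkinitrd -f -v \
          --with=mptbase --with=mptscsih --with=mptspi \
          /boot/initrd-2.6.18-164.el5xen.img 2.6.18-164.el5xen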

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less, so some of them I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100 GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more RAID-6 space, for example, I'd take a 100 GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure that would reflect the logical structure of the data rather than that of the storage. But now that I've heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it can help me. If so, what do you think would be the best setup?
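
    The md+LVM plan described above would look roughly like this each time more RAID-6 space is needed (device names, partition numbers, sizes and the VG name are placeholders):

      mdadm --create /dev/md10 --level=6 --raid-devices=5 \
            /dev/sd[abcde]7                      # one spare 100 GB partition from each disk
      pvcreate /dev/md10
      vgextend vg_raid6 /dev/md10                # grow the RAID-6 volume group
      lvextend -l +100%FREE /dev/vg_raid6/data
      resize2fs /dev/vg_raid6/data               # grow the filesystem (ext3/ext4)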

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and are planning to turn our online server into a virtualized one, mainly for maintenance, backup and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. We would like, if possible, the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Due to this and the offerings of our hosting supplier we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and as I understand it the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian, so we can personalise it to our requirements. A default Proxmox installation creates an LVM partition for the VMs. Our doubts are about what the best partition structure would be, considering that:
    - we would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via the control panel)
    - we would ideally like to have a partition that could be shared among the VM systems; we still don't know if this is possible directly with OpenVZ containers, otherwise we are considering sharing it via NFS on the host machine
    - we want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine
    With this, based on a Linux RAID of approx. 750 GB, we are considering something like:
      ext3_1   /             (20 GB)
      ext3_2   /bak_root     (20 GB)   mostly unmounted, root partition sync
      LVM_1    /var/lib/vz   (390 GB)  partition for virtual images
      LVM_2    /shared_data  (30 GB)
      LVM_3    /backups      (300 GB)  where all backups would be allocated
    Our initial tests with Proxmox seem to have issues with snapshot backups like this, perhaps caused by the fact that they cannot be made to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we are unsure if we can do this with the 4-primary-partition limitation). Does this make more or less sense? Would it be mad, for example, to write the VMs' /var/log to an NFS-mounted partition (on the host system)? Are there any other easier ways to mount host system partitions (or folders) in the VMs?
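
    On the snapshot error: vzdump in snapshot mode creates a temporary LVM snapshot in the same volume group as the container's storage, so that VG needs unallocated extents - it cannot place the snapshot on a separate, non-LVM partition. A sketch of the check (the VG name 'pve' is taken from the error message above; sizes are illustrative):

      vgdisplay pve | grep -E 'VG Size|Alloc|Free'   # 'Free  PE / Size' must cover the snapshot vzdump creates
      # when carving up the VG, leave a few GB unallocated for the vzsnap-* snapshots,
      # e.g. give /var/lib/vz ~370 GB of the 390 GB instead of all of it
      vzdump --snapshot 101                          # snapshot-mode backup of container 101 should then work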

    Read the article

  • VMware VMFS vs. NFS datastore with vmdk?

    - by CarpeNoctem
    I want to add a new hard disk to an existing VM and want the best performance possible. The new hard disk will live on an NFS datastore. Currently I did the following:
    1. Created a new vmdk on the NFS datastore
    2. Created a new LVM partition using fdisk
    3. Created a new physical volume, volume group, and logical volume (2 TB)
    4. Created an ext3 filesystem on the logical volume
    Is there a better way to do this? Should I be using some VMware-ish file system instead?

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options "rw,data=journal,usrquota". I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:
      [root@lnxutil1 ~]# parted -s /dev/sda unit s print
      Model: DELL PERC H700 (scsi)
      Disk /dev/sda: 134217599s
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start    End         Size        Type     File system  Flags
       1      63s      465884s     465822s     primary  ext2         boot
       2      465885s  134207009s  133741125s  primary               lvm
      [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
      Model: DELL PERC H700 (scsi)
      Disk /dev/sdb: 5720768639s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Number  Start  End          Size         File system  Name  Flags
       1      34s    5720768606s  5720768573s                     lvm
    Edit 1
    Using the cfq IO scheduler (default for CentOS 5.6):
      # cat /sys/block/sd{a,b}/queue/scheduler
      noop anticipatory deadline [cfq]
      noop anticipatory deadline [cfq]
    Chunk size is the same as strip size, right? If so, then 64kB:
      # /opt/MegaCli -LDInfo -Lall -aALL -NoLog
      Adapter #0
      Number of Virtual Disks: 2
      Virtual Disk: 0 (target id: 0)
      Name:os
      RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
      Size:65535MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:7
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Number of Spans: 1
      Span: 0 - Number of PDs: 7
      ... physical disk info removed for brevity ...
      Virtual Disk: 1 (target id: 1)
      Name:share
      RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
      Size:2793344MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:7
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Number of Spans: 1
      Span: 0 - Number of PDs: 7
    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
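
    To answer the "is it mis-aligned" part: with a 64 kB strip (128 sectors of 512 bytes), a partition is aligned when its starting sector is a multiple of 128. A quick check against the start sectors shown above (pure arithmetic, nothing touches the disks):

      echo $(( 465885 % 128 ))    # sda2 (the LVM PV under the OS volumes) -> 93, not 0: mis-aligned
      echo $(( 34 % 128 ))        # sdb1 (the NFS share PV)                -> 34, not 0: mis-aligned
      echo $(( 2048 % 128 ))      # a 1 MiB-aligned start such as sector 2048 -> 0: aligned

    LVM adds its own data offset on top of the partition start; pvs -o +pe_start shows it if you want to include that in the calculation.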

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the maximum throughput out of my setup. The hardware is as follows: dual quad-core AMD Opteron 2376 processors, 16 GB DDR2 ECC RAM, dual Adaptec 52245 RAID controllers, and 48 1 TB SATA drives set up as 2 RAID-6 arrays (256 KB stripe) plus spares. Software: plain vanilla 2.6.32.25 kernel, compiled for AMD64, optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results; no discrepancy here. IO scheduler used: noop. Yeah, no trick here. Up until now I basically assumed that striping (RAID 0) several physical devices should increase performance roughly linearly. However, this is not the case here: each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained. Writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s. However, when I stripe both arrays together, using either mdadm or LVM, the performance is about 850 MB/s writing and 1.4 GB/s reading - at least 30% less than expected! Running two parallel writer or reader processes against the striped array doesn't improve the figures; in fact it degrades performance even further. So what's happening here? Basically I ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reaches 1.5 GB/s and read speed tops 2 GB/s. So it's not the PCIe bus, and I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a single process from reaching the maximum possible throughput? Is Linux striping defective? What other tests could I run?
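
    A couple of knobs worth testing on a stacked setup like this before concluding that md/LVM striping itself is at fault - device names and values are placeholders and starting points, not recommendations:

      blockdev --getra /dev/md0                      # read-ahead of the striped device, in 512-byte sectors
      blockdev --setra 65536 /dev/md0                # much larger read-ahead often helps single-stream reads
      cat /sys/block/sda/queue/max_sectors_kb        # request-size limit presented by each Adaptec array
      echo 1024 > /sys/block/sda/queue/nr_requests   # deeper queue to keep both controllers busy
      # re-test with a different RAID-0 chunk size (destroys the current striped set):
      mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=1024 /dev/sda /dev/sdb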

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images in KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Win2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that lets me instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
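
    For reference, the "instant difference between two points in time" workflow is exactly what ZFS incremental send provides; a sketch, assuming the VM images lived on a ZFS dataset (pool, dataset and host names are placeholders, and the first transfer has to be a full send of the base snapshot):

      zfs snapshot tank/vmimages@0600
      # ... an hour later ...
      zfs snapshot tank/vmimages@0700
      # send only the blocks that changed between the two snapshots, compressed, to the backup host
      zfs send -i tank/vmimages@0600 tank/vmimages@0700 | bzip2 | \
          ssh backuphost "bzcat | zfs receive backup/vmimages"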

    Read the article

  • Running resize2fs on /

    - by Paul Steckler
    I'm trying to resize an ext4 filesystem on a Fedora 11 box. Using fdisk and LVM, I was able to grow the partition and the logical volume containing the filesystem. When I try to run resize2fs on the device containing the filesystem (/dev/sda2 in this case), I get: "Device or resource busy while trying to open /dev/sda2, Couldn't find valid filesystem superblock". I've tried this from a rescue disk that doesn't have the filesystem mounted; no joy. Maybe resize2fs doesn't know about ext4?
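
    For orientation, the usual grow sequence when the filesystem sits on LVM is to resize each layer in turn and point resize2fs at the logical volume device rather than the partition that holds the physical volume; a sketch with placeholder names:

      pvresize /dev/sda2                          # tell LVM that the underlying partition grew
      lvextend -l +100%FREE /dev/VolGroup/lv_root
      resize2fs /dev/VolGroup/lv_root             # online grow of a mounted ext4 volume works here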

    Read the article
