Linux Software RAID1: How to boot after (physically) removing /dev/sda? (LVM, mdadm, Grub2)

Posted by flight on Server Fault
Published on 2011-02-28T11:37:39Z

A server set up with Debian 6.0/squeeze. During the squeeze installation, I configured the two 500 GB SATA disks (/dev/sda and /dev/sdb) as a RAID1 (managed with mdadm). The RAID array holds a 500 GB LVM volume group (vg0). In the volume group, there's a single logical volume (lv0). vg0-lv0 is formatted with ext3 and mounted as the root partition (no dedicated /boot partition). The system boots using GRUB2.
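To make the layout concrete, it roughly corresponds to commands like the following (a sketch only: the array name /dev/md0 and the single RAID partition per disk, sda1/sdb1, are my assumptions, since the Debian installer did the partitioning):

    # assumed: one RAID partition per disk, mirrored
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    pvcreate /dev/md0               # LVM physical volume on top of the RAID1 array
    vgcreate vg0 /dev/md0           # the 500 GB volume group
    lvcreate -l 100%FREE -n lv0 vg0 # single logical volume
    mkfs.ext3 /dev/vg0/lv0          # formatted ext3, mounted as /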

In normal use, the system boots fine.

Also, when I removed the second SATA drive (/dev/sdb) after a shutdown, the system came up without problems, and after reconnecting the drive I was able to --re-add /dev/sdb1 to the RAID array.
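The re-add step was essentially this (again assuming the array is named /dev/md0):

    mdadm /dev/md0 --re-add /dev/sdb1   # put the partition back into the mirror
    cat /proc/mdstat                    # watch the resync progress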

But: After removing the first SATA drive (/dev/sda), the system won't boot any more! A GRUB welcome message shows up for a second, then the system reboots.

I tried to install GRUB2 manually on /dev/sdb ("grub-install /dev/sdb"), but that doesn't help.
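For completeness, what I ran was roughly the following (the dpkg-reconfigure route is Debian's usual way to register both disks as GRUB install targets, though I haven't confirmed it changes anything in this setup):

    grub-install /dev/sdb    # write GRUB2 to the second disk's MBR
    update-grub              # regenerate /boot/grub/grub.cfg
    # alternatively, select both disks interactively:
    dpkg-reconfigure grub-pc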

Apparently squeeze fails to set up GRUB2 so that it can boot from the second disk when the first disk is removed, which seems like an essential feature when running this kind of software RAID1, doesn't it?

At the moment, I'm not sure whether this is a problem with GRUB2, with LVM, or with the RAID setup. Any hints?
