Rebuilding LVM after RAID recovery

Posted by Xiong Chiamiov on Server Fault, 2011-01-03.
I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM into one logical volume.
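For reference, the layout was originally built with something like the following (device and volume names here are stand-ins; I don't have the exact commands any more):

    # Two 4-disk RAID-5 arrays (device names are illustrative)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[efgh]1

    # Both arrays joined into one volume group, with a single
    # logical volume spanning all the free space
    pvcreate /dev/md0 /dev/md1
    vgcreate vg0 /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n data vg0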

There was a power outage while I was gone, and when I got back, one of the disks in md1 appeared to be out of sync - mdadm kept claiming it could only find 3 of the 4 drives. The only thing that got anything to happen was running mdadm --create on those four disks and letting the array rebuild. This seemed like a bad idea to me, but none of the data was critical (although it'd take a while to get it all back), and a thread somewhere claimed this would fix things. If that step trashed all of my data, then I suppose you can stop reading and just tell me so.
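The recreate was something along these lines (device names are stand-ins and the order is from memory - I understand that if the device order or chunk size didn't match the original array, that alone could have scrambled things):

    # Recreate md1 from its four members. mdadm --create rewrites the
    # superblocks, so device order and chunk size must match the original.
    mdadm --create /dev/md1 --level=5 --raid-devices=4 \
        /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

    # Watch the rebuild progress
    cat /proc/mdstat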

After waiting four hours for the array to rebuild, md1 looked fine (I guess), but LVM complained that it couldn't find a device with the correct UUID, presumably because recreating md1 gave it a new UUID. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run lvchange -a y on it, however, gives me a "resume ioctl failed" message.
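Concretely, the restore steps looked roughly like this (the UUID, backup file path, and volume names are placeholders for my actual values):

    # Give the recreated md1 the PV UUID that the volume group metadata
    # expects, restoring from the LVM metadata backup (placeholder names)
    pvcreate --uuid "<old-pv-uuid>" --restorefile /etc/lvm/backup/vg0 /dev/md1
    vgcfgrestore vg0

    # This is the step that fails with the "resume ioctl failed" message:
    lvchange -a y vg0/data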

Is there any hope for me to recover my data, or have I completely mucked it up?
