Reusing slot numbers in Linux software RAID arrays

Posted by thkala on Server Fault
Published on 2012-11-11T18:27:09Z, indexed on 2012/11/11 23:02 UTC


When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array.

At the time of the migration I did not have all six drives; specifically, the fourth and fifth drives (slots 3 and 4) were still in use in the originating array, so I created the RAID6 array with two missing devices. I now need to add drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch: the new drives are placed in new slots, which results in this /proc/mdstat snippet:

...
md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
      25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
...
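The sequence described above can be sketched roughly as follows. The device names and the exact --create invocation are assumptions reconstructed from the /proc/mdstat snippet, not a transcript of the original commands:

```shell
# Create the 6-disk RAID6 with slots 3 and 4 left empty; drives still
# in use in the originating array are given as the keyword "missing":
mdadm --create /dev/md0 --level=6 --raid-devices=6 --metadata=1.0 \
      --chunk=64 /dev/sda1 /dev/sdb1 /dev/sdc1 missing missing /dev/sdf1

# Later, once the old array is retired, add the freed drives. They
# rebuild into the vacated roles, but are assigned fresh device
# numbers (6 and 7) rather than reusing 3 and 4:
mdadm --add /dev/md0 /dev/sdd1 /dev/sde1
```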

mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird.

I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

UPDATE:

I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of that field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?
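A minimal read-only sketch of locating and inspecting that field, assuming version 1.0 metadata (superblock stored near the end of the device, at least 8 KiB back and 4 KiB aligned, following the sector arithmetic in mdadm's super1.c) and the field layout of struct mdp_superblock_1 from md_p.h:

```python
import struct

MD_SB_MAGIC = 0xA92B4EFC  # mdp_superblock_1.magic, stored little endian

def sb_offset_1_0(dev_size_bytes):
    """Byte offset of a v1.0 superblock: at least 8 KiB from the end
    of the device, rounded down to a 4 KiB boundary (computed in
    512-byte sectors, as mdadm does)."""
    sectors = dev_size_bytes // 512
    sb_sector = (sectors - 8 * 2) & ~(4 * 2 - 1)
    return sb_sector * 512

def parse_sb1(sb):
    """Pull a few fields out of a raw v1.x superblock buffer.
    dev_number sits at byte offset 160: after the 128-byte constant
    array section and the four 64-bit per-device fields (data_offset,
    data_size, super_offset, recovery_offset)."""
    magic, major = struct.unpack_from("<II", sb, 0)
    if magic != MD_SB_MAGIC:
        raise ValueError("not an md v1.x superblock")
    (dev_number,) = struct.unpack_from("<I", sb, 160)
    return {"major_version": major, "dev_number": dev_number}
```

Opening a component device read-only, seeking to sb_offset_1_0(device size) and feeding the bytes to parse_sb1 would show the persisted dev_number. Writing the field back would additionally require recomputing the superblock checksum, which is what makes hand-editing risky.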

