Red Hat 5.3 on HP Proliant DL380 G5 and failed drive on RAID controller

Posted by thinkdreams on Server Fault
Published on 2010-11-09T14:04:37Z Indexed on 2012/06/17 3:19 UTC

I have a development ERP server in my office that I assist with support on, and originally the DBA requested a single-drive setup for some of the drives on the server. Thus the configuration on the hardware RAID controller (an embedded HP controller) looks like:

  • c0d0 (2 drive) RAID-1
  • c0d1 (2 drive) RAID-1
  • c0d2 (1 drive) No RAID <-- Failed
  • c0d3 (1 drive) No RAID
  • c0d4 (1 drive) No RAID
  • c0d5 (1 drive) No RAID

c0d2 has failed. I hot-swapped the drive immediately with a spare, but the controller continues to mark c0d2 as failed, even after I unmount the partition. I'm loath to reboot the server since I'm concerned about it coming back up in rescue mode, but I'm afraid that's the only way to get the system to re-read the drive. I assumed there was some sort of auto-detection routine for this, but I haven't been able to figure out the proper procedure.

I have installed the HP ACU CLI utilities, so I can see the hardware RAID setup.
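With the ACU CLI (hpacucli) installed, the controller and drive status can be inspected from the shell. A minimal sketch, with two assumptions called out below: the embedded controller's slot number (slot 0 here) and the logical-drive numbering (the cciss device c0d2 would typically map to logical drive 3, since hpacucli numbers logical drives from 1):

```shell
# Show the full configuration of every detected Smart Array controller
hpacucli ctrl all show config

# Assumes the embedded controller is in slot 0; adjust to the slot
# reported by the command above.
hpacucli ctrl slot=0 ld all show status

# Detailed status of the failed logical drive; c0d2 typically maps to
# logical drive 3 (hpacucli counts logical drives from 1).
hpacucli ctrl slot=0 ld 3 show

# Physical drive status, to confirm the replacement disk is detected as OK
hpacucli ctrl slot=0 pd all show status
```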

I'd really like to find out what the proper procedure should have been, where I went wrong, and how to correct it now.
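For context: a single-drive logical volume is effectively RAID-0, so the controller has no redundant copy to rebuild from, and swapping the disk alone typically leaves the logical drive marked failed. A sketch of the commonly described recovery path (an assumption, not verified on this box), using the same slot and logical-drive numbers assumed above; the data on the volume would then need to be restored from backup:

```shell
# Attempt to force the failed logical drive back online
# (any data on it may already be lost).
hpacucli ctrl slot=0 ld 3 modify reenable forced

# If that fails, delete and recreate the logical drive, then mkfs,
# remount, and restore from backup. The drive address 1I:1:5 is a
# placeholder; use the address reported by "pd all show".
hpacucli ctrl slot=0 ld 3 delete forced
hpacucli ctrl slot=0 create type=ld drives=1I:1:5 raid=0
```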

It goes without saying that I should NOT have listened to the DBA: I should have set the drives up as RAID-1 throughout, as was my first instinct. He wasn't worried about data loss, but it sure would have made replacing the failed drive easier. :)

© Server Fault or respective owner
