FC4 RAID5 failed

Gilboa Davara gilboad at gmail.com
Sat Jan 28 10:05:23 UTC 2006


On Sat, 2006-01-28 at 16:05 +0800, 毛睿 wrote:
> I've run into a strange problem with FC4 software RAID5.
> 
> I have two RAID5 arrays in my FC4 box. One contains six
> partitions, /dev/hd[cdefgh]1; the other contains eight, /dev/hd[ab]3
> + /dev/hd[cdefgh]2. They both worked fine before.
> 
> After I replaced one failed disk, a strange problem appeared. I
> removed the failed disk and added the new one. Syncing went fine and
> finished after a few hours; /proc/mdstat also looked normal. But after
> I rebooted the box, both RAID5 arrays were in degraded mode again! The
> new disk had been kicked out! I never hit this problem with RH9. I
> tried many times, and the result was always the same. I can manually
> stop/start the arrays, and the superblocks and /proc/mdstat all look
> fine. But whenever I reboot, the new disk gets kicked out again. I can
> guarantee the new disk is good. In /var/log/messages I didn't see any
> error messages during shutdown, and during boot the RAID startup
> didn't even check the failed disk.
> 
> Has anybody run into the same problem?

Two things.
First, when you re-added the new disk and partitioned it, did you
remember to set the partition type to fd (Linux raid autodetect)?
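For example (device names here are only placeholders; use whatever
your new disk actually is), you can check and fix the type from a
shell:

  # List the partition table; RAID members should show
  # "Linux raid autodetect" in the System column.
  fdisk -l /dev/hdc

  # If the type is wrong, change it interactively:
  fdisk /dev/hdc
  #   t   - change a partition's system id
  #   1   - partition number
  #   fd  - Linux raid autodetect
  #   w   - write the table and exit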
Second, if you did, what did the kernel log have to say about it?
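Something along these lines should pull out the md-related messages
(paths are the stock FC4 ones):

  # md driver messages from the current boot:
  dmesg | grep 'md:'

  # ...and from earlier boots:
  grep 'md:' /var/log/messages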
You might want to try booting from the rescue CD and starting the
arrays manually (mdadm --assemble) in order to get a meaningful error
description.
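A rough sketch, assuming the six-disk array is /dev/md0 and using the
partition names from your mail:

  # Check the superblock on the re-added member first:
  mdadm --examine /dev/hdc1

  # Then assemble by hand and watch the output for errors:
  mdadm --assemble /dev/md0 /dev/hd[cdefgh]1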

Gilboa



