Are you sure that you don't have a spare designated somewhere?
On Tue, Dec 2, 2008 at 2:32 PM, Eitan Tsur <eitan.tsur(a)gmail.com> wrote:
Ok, so here's what /proc/mdstat says after a clean reboot:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[0] sdc1[1]
      1465143808 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]

md_d0 : inactive sdd[2](S)
      732574464 blocks

unused devices: <none>
I basically have to do:
>mdadm --stop /dev/md_d0
>mdadm --add /dev/md0 /dev/sdd1
>cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdb1[0] sdc1[1]
      1465143808 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery = 0.4% (3269252/732571904) finish=153.7min speed=79057K/sec

unused devices: <none>
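In case it helps: the "md_d0 : inactive sdd[2](S)" line above suggests the raw disk (/dev/sdd, as opposed to the partition /dev/sdd1) is carrying a stale superblock, so it gets auto-assembled as a partitionable array (md_d0) at boot before /dev/sdd1 can join md0. One thing that makes assembly independent of which drive letter each disk lands on is pinning the array by UUID in /etc/mdadm.conf. A sketch, with a placeholder UUID (substitute the value that `mdadm --detail /dev/md0` actually reports, or regenerate the whole line with `mdadm --examine --scan`):

```
# /etc/mdadm.conf -- sketch only; the UUID below is a placeholder
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

If the stale superblock on the raw disk is what keeps spawning md_d0, `mdadm --zero-superblock /dev/sdd` should stop it from being auto-assembled; double-check you are pointing it at the raw disk and not the partition, and only run it while no rebuild is in progress.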
Furthermore, the first time I saw this, it was /dev/sdb that had dropped.
Yesterday it was /dev/sdc. Today it's /dev/sdd. That's what throws me off
about this whole thing.
Regards,
-Eitan-
On Sun, Nov 30, 2008 at 2:41 PM, Lonni J Friedman <netllama(a)gmail.com>
wrote:
>
> On Sun, Nov 30, 2008 at 2:38 PM, Eitan Tsur <eitan.tsur(a)gmail.com> wrote:
> > I just recently installed a 3-disk RAID5 array in a server of mine,
> > running FC9. Upon reboot, one of the drives drops out and is
> > allocated as a spare. I suspect there is some sort of issue where
> > DBUS re-arranges the drive-to-device maps between boots, but I am
> > not sure. It's annoying to have to stop and re-add a drive on every
> > boot and then wait a couple of hours for the array to rebuild the
> > third disk. Any thoughts? Has anyone else encountered this before?
> > What should I be looking for? I'm new to the world of RAID, so any
> > information you can give would be helpful.
>
> What's in /etc/mdadm.conf, /proc/mdstat and dmesg when this fails ?
>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L. Friedman netllama(a)gmail.com
LlamaLand
https://netllama.linux-sxs.org