I just recently installed a 3-disk RAID5 array in a server of mine, running FC9. Upon reboot, one of the drives drops out and is allocated as a spare. I suspect there is some sort of issue where DBUS re-arranges the drive-to-device maps between boots, but I am not sure... It's just annoying to have to stop and re-add a drive every boot, and then wait a couple of hours for the array to rebuild the 3rd disk. Any thoughts? Has anyone else encountered such an issue before? What should I be looking for? I'm new to the world of RAID, so any information you can give may be helpful.
Regards, -Eitan-
On Sun, Nov 30, 2008 at 2:41 PM, Lonni J Friedman netllama@gmail.com wrote:
What's in /etc/mdadm.conf, /proc/mdstat and dmesg when this fails?
On Sun, Nov 30, 2008 at 3:11 PM, Eitan Tsur eitan.tsur@gmail.com wrote:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root@localhost
ARRAY /dev/md0 level=raid5 num-devices=3 spares=1 UUID=0c21bf19:83747f05:70a4872d:90643876
If I replace "DEVICE partitions" with "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1", the drives are no longer allocated as spares; however, the array still seems to rebuild on every boot.
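For clarity, the file I'm booting with now looks roughly like this (the device names are just what the three members happen to show up as at the moment, which may be exactly the part that isn't stable):

# mdadm.conf written out by anaconda
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
MAILADDR root@localhost
ARRAY /dev/md0 level=raid5 num-devices=3 spares=1 UUID=0c21bf19:83747f05:70a4872d:90643876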
I don't remember the specifics of what was in /proc/mdstat at the time, but the array is currently being rebuilt; I'll reboot after it is complete and give you a copy of it. Basically it allocates the dropped drive as a spare, which I have to mdadm --stop and mdadm --add back into the original array after every boot, manually. Give me an hour or two and I'll get you the output of mdstat.
On Sun, Nov 30, 2008 at 3:53 PM, Lonni J Friedman netllama@gmail.com wrote:
If you only have 3 disks, then you can't have: spares=1
On Sun, Nov 30, 2008 at 4:20 PM, Eitan Tsur eitan.tsur@gmail.com wrote:
Even if I remove that it still happens.
On Sun, Nov 30, 2008 at 4:23 PM, Lonni J Friedman netllama@gmail.com wrote:
Is it always the same disk that gets marked offline ? Perhaps the disk is actually bad?
Lonni J Friedman wrote:
Is it always the same disk that gets marked offline ? Perhaps the disk is actually bad?
Check and make sure that the UUID number you're specifying in your /etc/mdadm.conf file is correct. You can verify the UUID numbers by typing "ls -l /dev/disk/by-uuid".
Next, verify that the UUID numbers in the /etc/mdadm.conf file stored in the initrd are correct; you'll have to extract the initrd file with cpio. I don't remember the full procedure, but you should be able to find it pretty easily with your favorite search engine.
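From memory it's something along these lines on FC9, though I haven't double-checked it, and the exact image name depends on your kernel version:

mkdir /tmp/initrd-check && cd /tmp/initrd-check
# the initrd is a gzip-compressed cpio archive; unpack a scratch copy of it
zcat /boot/initrd-$(uname -r).img | cpio -idmv
# compare the copy the boot process actually sees with the one in /etc
diff etc/mdadm.conf /etc/mdadm.conf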
Jeff
Eitan Tsur wrote:
It is not always the same disk. Most of the time it was /dev/sdb1; however, recently it has switched to /dev/sdc1.
On Tue, Dec 2, 2008 at 2:32 PM, Eitan Tsur eitan.tsur@gmail.com wrote:
Ok, so here's what /proc/mdstat says after a clean reboot:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[0] sdc1[1]
      1465143808 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]

md_d0 : inactive sdd[2](S)
      732574464 blocks

unused devices: <none>

I basically have to do:

mdadm --stop /dev/md_d0
mdadm --add /dev/md0 /dev/sdd1
cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdb1[0] sdc1[1]
      1465143808 blocks level 5, 256k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.4% (3269252/732571904) finish=153.7min speed=79057K/sec

unused devices: <none>
Furthermore, the first time I saw this, it was /dev/sdb that had dropped. Yesterday it was /dev/sdc. Today it's /dev/sdd. That's what throws me off about this whole thing.
Regards, -Eitan-
On Tue, Dec 2, 2008 at 3:53 PM, Lonni J Friedman netllama@gmail.com wrote:
Are you sure that you don't have a spare designated somewhere?
On Tue, Dec 2, 2008 at 8:50 PM, Eitan Tsur eitan.tsur@gmail.com wrote:
Not in /etc/mdadm.conf... Where else could it be defined?
Lonni J Friedman wrote:
That would be the only static location that I'm aware of. What does 'mdadm --detail /dev/md0' return ?
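It might also be worth comparing what the superblocks on the individual members report, since that's what mdadm assembles from. Something along these lines, using the partition names from your earlier mails:

mdadm --detail /dev/md0
# compare the array UUID, event counter and state recorded on each member
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep -E 'UUID|Events|State'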
Eitan Tsur wrote:
Not in /etc/mdadm.conf... Where else could it be defined?
Again, I'll suggest you check the mdadm.conf file that's stored in the initrd image.
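If the copy inside the initrd turns out to be stale (say it still carries the spares=1 line, or a UUID that doesn't match), then fixing /etc/mdadm.conf and regenerating the initrd should pick up the corrected file. On FC9 that should be roughly the following; double-check the kernel version and keep a backup of the old image first:

# keep a copy of the current image in case the new one doesn't boot
cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# rebuild the initrd so it includes the current /etc/mdadm.conf
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)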