The system is Fedora 26, with all current updates.
I'm running RAID1 on two identical 500 GB disks, partitioned as follows:
/dev/sda1         2048  789501951  789499904  376.5G  fd  Linux raid autodetect
/dev/sda2    789501952  948973567  159471616     76G  fd  Linux raid autodetect
/dev/sda3    948973568  975749119   26775552   12.8G  fd  Linux raid autodetect
/dev/sda4    975749120  976773119    1024000    500M   5  Extended
/dev/sda5 *  975751168  976773119    1021952    499M  fd  Linux raid autodetect
raid volumes:
cat /proc/mdstat
Personalities : [raid1]
md124 : active raid1 sdb1[0] sda1[1]
      394617856 blocks super 1.2 [2/2] [UU]
      bitmap: 0/3 pages [0KB], 65536KB chunk

md125 : active raid1 sda5[1] sdb5[0]
      510656 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sda2[1] sdb2[0]
      79670272 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda3[1] sdb3[0]
      13379584 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Mounted partitions:
Filesystem  1K-blocks      Used Available Use% Mounted on
/dev/md126   79631372   5608820  74022552   8% /
/dev/md124  388294468  23798252 344748940   7% /home
/dev/md125     486308    283858    172822  63% /boot
md127 is swap
I get the following errors in logwatch:

mdadm: cannot open /dev/md/boot: No such file or directory
mdadm: cannot open /dev/md/root: No such file or directory
mdadm: cannot open /dev/md/swap: No such file or directory
and I see that /dev/md contains the following:
# ls -l /dev/md
total 0
lrwxrwxrwx 1 root root 8 Aug 24 16:14 home -> ../md124
lrwxrwxrwx 1 root root 8 Aug 24 16:14 xyzzy2.bubble.org:boot -> ../md125
lrwxrwxrwx 1 root root 8 Aug 24 16:14 xyzzy2.bubble.org:root -> ../md126
lrwxrwxrwx 1 root root 8 Aug 24 16:14 xyzzy2.bubble.org:swap -> ../md127
the entries in /etc/mdadm.conf are:
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=3b187b00:b3b1a1f9:6d75f8f1:62f82999
ARRAY /dev/md/home level=raid1 num-devices=2 UUID=d124bd7f:80519efc:a28d80db:617eafed
ARRAY /dev/md/root level=raid1 num-devices=2 UUID=aed6ed78:840451fc:f101760f:79960f8a
ARRAY /dev/md/swap level=raid1 num-devices=2 UUID=f84d0bd4:fe7be888:c048d500:cca10896
I have verified that the UUIDs in mdadm.conf do match the respective arrays.
So the reason for the error is clear: /dev/md/boot, /dev/md/root, and /dev/md/swap don't exist. If I create the symbolic links by hand, the error will go away until the next reboot. The question is what actually creates those symbolic links at boot time. Is it /etc/mdadm.conf, and do I simply rebuild that file? And how can I change the names from the hostname-prefixed form to plain boot, root, and swap, or change the mdadm command so that it looks for the hostname-prefixed entries instead?
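My guess, and I could be wrong, is that the links under /dev/md are created by mdadm's udev rule when the arrays are assembled, using the array name stored in the superblock. These commands should at least show where that name comes from (md125 is just an example taken from the listing above):

# locate mdadm's udev rule; the exact filename may vary by release
ls /usr/lib/udev/rules.d/ | grep -i md

# dump the variables mdadm exports for udev; the output should include
# MD_NAME (and, I believe, MD_DEVNAME), which is where the hostname prefix lives
mdadm --detail --export /dev/md125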
If I run "mdadm --detail --scan" I get the following:

ARRAY /dev/md/xyzzy2.bubble.org:swap metadata=1.2 name=xyzzy2.bubble.org:swap UUID=f84d0bd4:fe7be888:c048d500:cca10896
ARRAY /dev/md/xyzzy2.bubble.org:root metadata=1.2 name=xyzzy2.bubble.org:root UUID=aed6ed78:840451fc:f101760f:79960f8a
ARRAY /dev/md/xyzzy2.bubble.org:boot metadata=1.2 name=xyzzy2.bubble.org:boot UUID=3b187b00:b3b1a1f9:6d75f8f1:62f82999
ARRAY /dev/md/home metadata=1.2 name=xyzzy2.bubble.org:home UUID=d124bd7f:80519efc:a28d80db:617eafed
The array names there match what is actually in /dev/md. However, before I change the entries in mdadm.conf I want to make sure I'm not going to cause myself grief and end up having to log in directly on the console of the system, which is about 30 minutes and a phone call or two away.
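What I'm tempted to try, unless someone tells me it's a bad idea, is roughly this (untested sketch, keeping the old file around so I can put it back over ssh if the names come out wrong):

# keep a copy of the current config
cp /etc/mdadm.conf /etc/mdadm.conf.bak

# append the ARRAY lines as the arrays actually report themselves,
# then edit the file and delete the old ARRAY lines
mdadm --detail --scan >> /etc/mdadm.conf

# the initramfs carries its own copy of mdadm.conf (I believe), so rebuild it
dracut -f

Going the other way, renaming the arrays in the superblocks to plain boot, root, and swap, would as far as I can tell mean stopping each array and re-assembling it with --update=name, which doesn't look practical for root and swap on a running system.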
Thanks, Jeff
On Thu, 2017-08-24 at 18:06 -0400, Jeffrey Ross wrote:
My mdadm.conf uses /dev/md1 for "boot" and has a "name" directive in it, and my /dev/md/ links were not being created either, so I simply added a line (shown below) to rc.local to create them at each boot if they are missing. I figure that if something changes and fixes this oddity it won't matter, since the `if` tests only create the link when it is not already there.
# sample line, remember that it should not be wrapped: if [ -b /dev/md1 ] ; then if [ ! -L /dev/md/boot ]; then /bin/ln -s /dev/md1 /dev/md/boot ; fi ; fi
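The same check spread over several lines, if it ever moves into its own script (equivalent as far as I can tell, same /dev/md1 and /dev/md/boot names as above):

# create the /dev/md/boot link if the array exists and the link is missing
if [ -b /dev/md1 ] && [ ! -L /dev/md/boot ]; then
    mkdir -p /dev/md
    ln -s /dev/md1 /dev/md/boot
fi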
On 08/24/2017 07:58 PM, Doug H. wrote:
That sounds like a workaround for an improper configuration somewhere. I went ahead and rebuilt the mdadm.conf file, and I'll deal with the long array names. I'll see tomorrow whether the error in the logs has been eliminated.
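Before then I'll run a couple of sanity checks, mostly so I hopefully don't end up needing that console trip (if I understand it correctly, lsinitrd -f prints the copy of a file inside the current initramfs; lsinitrd comes with dracut):

# compare the new ARRAY lines against what the arrays report
mdadm --detail --scan
grep '^ARRAY' /etc/mdadm.conf

# check the copy inside the initramfs, since that's what is read at boot
lsinitrd -f /etc/mdadm.conf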
Thanks, Jeff
On 2017-08-25 16:10, Jeffrey Ross wrote:
In case anybody is interested, rebuilding the /etc/mdadm.conf file did resolve the error.
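For anyone who finds this in the archives later: the rebuilt ARRAY lines are essentially the mdadm --detail --scan output quoted earlier, i.e.

ARRAY /dev/md/xyzzy2.bubble.org:swap metadata=1.2 name=xyzzy2.bubble.org:swap UUID=f84d0bd4:fe7be888:c048d500:cca10896
ARRAY /dev/md/xyzzy2.bubble.org:root metadata=1.2 name=xyzzy2.bubble.org:root UUID=aed6ed78:840451fc:f101760f:79960f8a
ARRAY /dev/md/xyzzy2.bubble.org:boot metadata=1.2 name=xyzzy2.bubble.org:boot UUID=3b187b00:b3b1a1f9:6d75f8f1:62f82999
ARRAY /dev/md/home metadata=1.2 name=xyzzy2.bubble.org:home UUID=d124bd7f:80519efc:a28d80db:617eafed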