On 12/04/2014 03:35 PM, Chris Murphy wrote:
> On Thu, Dec 4, 2014 at 12:21 PM, Ian Pilcher
> <arequipeno(a)gmail.com> wrote:
>> I just tried to install RC5 on my home system, which uses Intel BIOS
>> RAID, and anaconda crashed during its initial storage scan.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1170755
>>
>> Short of major surgery (pulling the RAID drives, breaking the RAID,
>> etc.), can anyone think of a way to get F21 installed on this system?

> When booted from any install media, in a shell, what do you get for:

Here's what I get when booted into F20:

> cat /proc/mdstat

Personalities : [raid1]
md126 : active raid1 sdb[1] sdc[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdc[1](S) sdb[0](S)
      5288 blocks super external:imsm

unused devices: <none>

> mdadm -D /dev/md126

/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid1
Array Size : 976759808 (931.51 GiB 1000.20 GB)
Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 3d7bd72f:82a8cbcc:2d217397:12f3ff95
    Number   Major   Minor   RaidDevice State
       1       8       16        0      active sync   /dev/sdb
       0       8       32        1      active sync   /dev/sdc

> mdadm -E /dev/sd[ab]

I assume you mean "mdadm -E /dev/sd[bc]".

/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : d7e8a7e3
Family : d7e8a7e3
Generation : 00114ca2
Attributes : All supported
UUID : 1ebd7712:2a74af1f:34298316:cb855b50
Checksum : d1e422ca correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk00 Serial : MSK5235H29X18G
State : active
Id : 00000001
Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
[Volume0]:
UUID : 3d7bd72f:82a8cbcc:2d217397:12f3ff95
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 1953519616 (931.51 GiB 1000.20 GB)
Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
Sector Offset : 0
Num Stripes : 7630936
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty
Disk01 Serial : MSK5235H2PJ7TG
State : active
Id : 00000002
Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
/dev/sdc:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : d7e8a7e3
Family : d7e8a7e3
Generation : 00114ca2
Attributes : All supported
UUID : 1ebd7712:2a74af1f:34298316:cb855b50
Checksum : d1e422ca correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : MSK5235H2PJ7TG
State : active
Id : 00000002
Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
[Volume0]:
UUID : 3d7bd72f:82a8cbcc:2d217397:12f3ff95
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 1953519616 (931.51 GiB 1000.20 GB)
Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
Sector Offset : 0
Num Stripes : 7630936
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty
Disk00 Serial : MSK5235H29X18G
State : active
Id : 00000001
Usable Size : 1953519880 (931.51 GiB 1000.20 GB)

> From the program.log, I'm seeing sd[ab]5 are bcache partitions, and it
> looks like they're outside of the imsm container, is that correct? I
> don't know whether this is a supported layout; my expectation when
> using firmware RAID is that the firmware (and later the md driver)
> asserts complete control over the entire block device. Any partitions
> are created within the container. If that's correct, the bcache
> partitions would need to be inside the imsm container as well. But in
> any case the installer shouldn't crash; it should inform. The version
> of mdadm in F21 has been the same for three months, so if there's a
> regression here it's not recent.

The RAID device (md126) is composed of sdb and sdc. md126p5 is the
bcache backing device. sda is an (un-RAIDed) SSD; sda2 is the bcache
cache device.
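
For reference, the layout can be double-checked from sysfs. This is just
an illustrative Python sketch I'm adding here (not anything from the
installer); it only relies on the standard /sys/block/<dev>/slaves
directory and the partition subdirectories:

#!/usr/bin/python
# Illustrative sketch: confirm the layout described above by walking sysfs.
# md126 should list sdb and sdc as slaves and md126p5 as a partition;
# sda should list sda2 (the bcache cache device) among its partitions.
import os

def slaves(dev):
    # Component devices of an MD array show up in /sys/block/<dev>/slaves.
    path = "/sys/block/%s/slaves" % dev
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

def partitions(dev):
    # Partitions show up as subdirectories of /sys/block/<dev> named after it.
    base = "/sys/block/%s" % dev
    return sorted(entry for entry in os.listdir(base)
                  if entry.startswith(dev)
                  and os.path.isdir(os.path.join(base, entry)))

if __name__ == "__main__":
    for dev in ("md126", "sda"):
        print("%s: slaves=%s partitions=%s" % (dev, slaves(dev), partitions(dev)))
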
As I just posted to the Bugzilla ...
I did some playing around with running "anaconda --askmethod" from a
live CD, and the results were ... interesting. At first, I got the
expected crash, so I edited /usr/lib/python2.7/site-packages/blivet/
devicelibs/mdraid.py and added some additional logging to both
name_from_md_node and md_node_from_name. Just to ensure that the
updated version was used, I also deleted the mdraid.pyc and mdraid.pyo.
Once I did this, the crash went away.
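
The logging itself was nothing fancy; the sketch below shows the general
idea rather than the exact change, appended in mdraid.py after the
original helpers are defined (the "blivet" logger name is an assumption):

# General idea of the extra logging, not the exact edit.  This assumes it
# lives inside blivet/devicelibs/mdraid.py, where name_from_md_node and
# md_node_from_name are already defined above this point.
import logging
log = logging.getLogger("blivet")   # assumed module-level logger name

_orig_name_from_md_node = name_from_md_node
_orig_md_node_from_name = md_node_from_name

def name_from_md_node(node):
    # Record every node -> name lookup and its result.
    log.debug("name_from_md_node(%r) called", node)
    name = _orig_name_from_md_node(node)
    log.debug("name_from_md_node(%r) -> %r", node, name)
    return name

def md_node_from_name(name):
    # Record every name -> node lookup and its result.
    log.debug("md_node_from_name(%r) called", name)
    node = _orig_md_node_from_name(name)
    log.debug("md_node_from_name(%r) -> %r", name, node)
    return node
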
This suggests a couple of different possibilities to me:
1. There was something messed up about the pre-compiled files.
2. (Far more likely) A race condition with udev. Partitions on top of
   MD RAID seem to take a particularly long time for all of the helper
   programs to run, so perhaps udev simply hasn't finished creating all
   of the symlinks that anaconda needs by the time anaconda looks for
   them; adding the logging and/or removing the pre-compiled files slows
   anaconda down just enough to hide the race. (A rough sketch of what
   waiting for those symlinks explicitly might look like follows below.)
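
If possibility 2 is right, the obvious experiment would be to wait for
the device nodes explicitly instead of relying on the extra logging to
slow things down. Something along these lines (just a standalone sketch,
not installer code; md126p5 is only used as an example path):

#!/usr/bin/python
# Standalone sketch: wait for udev to finish creating a device node instead
# of depending on incidental delays.  Not anaconda/blivet code.
import os
import subprocess
import time

def wait_for_node(path, timeout=10.0, interval=0.1):
    # Ask udev to flush its event queue first; ignore a missing udevadm.
    try:
        subprocess.call(["udevadm", "settle"])
    except OSError:
        pass
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return os.path.exists(path)

if __name__ == "__main__":
    # e.g. the partition node from the layout above
    print(wait_for_node("/dev/md126p5"))

If the crash stops with a wait like that in place (and comes back without
it), that would pretty much confirm the race.
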
--
========================================================================
Ian Pilcher arequipeno(a)gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================