I spent some time trying to arm wrestle my machine's fake raid controllers (ck804, sil 3118) and a PCI board I had lying around (Adaptec 1210SA), and came to the conclusion that dmraid was causing me serious grief.
First off, I could _not_ get dmraid to go away during the install on x86_64. Even the nodmraid kernel option failed me. Each time I booted up with two disks connected to any controller, the only drive I had at my disposal was /dev/mapper/mapper0. I did notice a 'disable dmraid' toggle behind the advanced setup button in the installer, but it was grayed out in every situation I could come up with.
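For anyone retracing my steps, this is roughly what I was typing at the installer boot prompt (the exact prompt and syntax can vary by release, so treat it as a sketch rather than gospel):

    boot: linux nodmraid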
Second, all three of the SATA raid controllers I tested failed in similar, yet different, ways during the install process. They mainly failed during file system creation, but there was some interesting variance in behavior. For example, some would cause anaconda to dump and crash, while others would simply cause the installer to abort and reboot the machine.
Third, and most annoying, I couldn't install a plain s/w raid configuration with two drives connected. There was just no way to keep the system from assuming that I meant to use /dev/mapper/mapper0, even after wiping the superblock metadata with a quick 'dmraid -Er /dev/sdX' to be sure it was really gone.
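For reference, the check-and-wipe sequence I used was along these lines; the device names are examples, so substitute your own:

    # show the raid sets and member disks dmraid has discovered
    dmraid -r
    # erase the fakeraid metadata from each member disk
    dmraid -Er /dev/sda
    dmraid -Er /dev/sdb
    # re-run discovery to confirm the metadata is really gone
    dmraid -r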
On a related note, it would be nice if the installer allowed you to create a broken raid 1. Anaconda should definitely warn you that what you are doing is loopy, but there should be a way to just say, "go ahead if you really wanna". I was unable to find this magic, and had to install on a single drive. I guess I'll have to migrate to s/w raid after the install is finished.
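As I understand it, the usual migration dance after the install looks roughly like this; it's an untested sketch, and /dev/sda1 (the installed disk's partition) and /dev/sdb1 (the spare) are placeholder names:

    # build a degraded raid 1 with only the spare disk as a member
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    # ...copy the filesystem over, fix fstab and the bootloader...
    # then add the original disk and let the array resync
    mdadm /dev/md0 --add /dev/sda1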
I am now more than interested in diagnosing and fixing the issues I've spewed forth here. I've even gone so far as to start some bugzilla goodness on tkt:246817.
Any takers in dev land want to give me a hand with this one?
Sean
--- Sean Bruno sean.bruno@dsl-only.net wrote:
On a related note, it would be nice if the installer allowed you to create a broken raid 1. Anaconda should definitely warn you that what you are doing is loopy, but there should be a way to just say, "go ahead if you really wanna".
I thought it was going to be loopy too...
http://www.redhat.com/archives/fedora-livecd-list/2006-August/msg00000.html
I might just have to post a 7-line patch tomorrow that implements the above... without using so much as a single loop device ;)
-dmc/jdog
On Fri, Jul 06, 2007 at 01:59:53 -0700, Jane Dogalt jdogalt@yahoo.com wrote:
--- Sean Bruno sean.bruno@dsl-only.net wrote:
On a related note, it would be nice if the installer allowed you to create a broken raid 1. Anaconda should definitely warn you that what you are doing is loopy, but there should be a way to just say, "go ahead if you really wanna".
I thought it was going to be loopy too...
http://www.redhat.com/archives/fedora-livecd-list/2006-August/msg00000.html
I might just have to post a 7-line patch tomorrow that implements the above... without using so much as a single loop device ;)
I wouldn't let it create a 'broken' (i.e. degraded) raid 1 device. You can create raid 1 arrays with only one device, rather than with two devices and one of them missing. It isn't hard to add other devices to a raid 1 array later.
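Something like this, I mean; a sketch with placeholder device names, not a tested recipe:

    # create a raid 1 array with a single member (--force is required,
    # since mdadm considers a one-device raid 1 unusual)
    mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/sda1
    # later: add the second disk as a spare, then grow the array so the
    # spare becomes an active member and resyncs
    mdadm /dev/md0 --add /dev/sdb1
    mdadm --grow /dev/md0 --raid-devices=2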
Letting the installer *use* a degraded array is a grayer area. I think it would be better to require the admin to fix the array first; otherwise you can have unexpected things happen when people split arrays by failing a drive, and the 'failed' device ends up getting used when the array is restarted later, instead of the intended device.
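Concretely, the kind of 'modify the array first' step I have in mind would be something like this (again a sketch, with placeholder device names):

    # after splitting the array, explicitly fail and remove the member...
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # ...and wipe its md superblock so a later assemble won't pick it up
    mdadm --zero-superblock /dev/sdb1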
I have asked in the past (bug 188314) for anaconda to be changed to allow one-element raid 1 arrays to be used, and have been turned down.
Sean Bruno wrote:
<snip> Third, and most annoying, I couldn't install a plain s/w raid configuration with two drives connected. There was just no way to keep the system from assuming that I meant to use /dev/mapper/mapper0, even after wiping the superblock metadata with a quick 'dmraid -Er /dev/sdX' to be sure it was really gone.
<snip>
Sean
On my x86 box, I used the "nompath" parameter during install to prevent /dev/mapper/mapper0 from being used, and could then see all disks properly, including pre-existing software raid arrays. It seems that even if you are not using the BIOS raid capability of a SATA setup, anaconda thinks you want to. There must be a way to check the BIOS to see whether the setup is for raid or not.
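For the record, roughly what I typed at the installer boot prompt was the following (from memory, so double-check it against your release):

    boot: linux nompath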
On Fri, 2007-07-06 at 12:27 -0400, Clyde E. Kunkel wrote:
Sean Bruno wrote:
<snip> Third, and most annoying, I couldn't install a plain s/w raid configuration with two drives connected. There was just no way to keep the system from assuming that I meant to use /dev/mapper/mapper0, even after wiping the superblock metadata with a quick 'dmraid -Er /dev/sdX' to be sure it was really gone.
<snip>
Sean
On my x86 box, I used the "nompath" parameter during install to prevent /dev/mapper/mapper0 from being used, and could then see all disks properly, including pre-existing software raid arrays. It seems that even if you are not using the BIOS raid capability of a SATA setup, anaconda thinks you want to. There must be a way to check the BIOS to see whether the setup is for raid or not.
That didn't work for me. I am still presented with the /dev/mapper/mapper0 device as my only install target.
Sean
Sean Bruno wrote:
I spent some time trying to arm wrestle my machine's fake raid controllers (ck804, sil 3118) and a PCI board I had lying around (Adaptec 1210SA), and came to the conclusion that dmraid was causing me serious grief.
First off, I could _not_ get dmraid to go away during the install on x86_64. Even the nodmraid kernel option failed me. Each time I booted up with two disks connected to any controller, the only drive I had at my disposal was /dev/mapper/mapper0. I did notice a 'disable dmraid' toggle behind the advanced setup button in the installer, but it was grayed out in every situation I could come up with.
You're sure that's "mapper0"? That looks suspicious to me. Aside from lvm (which would typically say "VolGroupMM-LogVolNN" or similar), the names we create for device mapper devices such as multipath and dmraid are of the forms "mpathN" (for multipath) and "format_$METADATAINFO", e.g. "sil_ahadejcacefa", where "sil" reflects that it's a SiL metadata format and "ahadejcacefa" is a hash of information from the metadata itself. "mapper0" isn't something the installer would choose for a device name.
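If it helps, you can double-check the actual name from a shell (tty2 during the install should work) with the standard device-mapper tools; a quick sketch:

    # list every device-mapper node by name
    dmsetup ls
    # show the active raid sets dmraid itself has built
    dmraid -s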
On Wed, 2007-07-11 at 10:31 -0400, Peter Jones wrote:
Sean Bruno wrote:
I spent some time trying to arm wrestle my machine's fake raid controllers (ck804, sil 3118) and a PCI board I had lying around (Adaptec 1210SA), and came to the conclusion that dmraid was causing me serious grief.
First off, I could _not_ get dmraid to go away during the install on x86_64. Even the nodmraid kernel option failed me. Each time I booted up with two disks connected to any controller, the only drive I had at my disposal was /dev/mapper/mapper0. I did notice a 'disable dmraid' toggle behind the advanced setup button in the installer, but it was grayed out in every situation I could come up with.
You're sure that's "mapper0"? That looks suspicious to me. Aside from lvm (which would typically say "VolGroupMM-LogVolNN" or similar), the names we create for device mapper devices such as multipath and dmraid are of the forms "mpathN" (for multipath) and "format_$METADATAINFO", e.g. "sil_ahadejcacefa", where "sil" reflects that it's a SiL metadata format and "ahadejcacefa" is a hash of information from the metadata itself. "mapper0" isn't something the installer would choose for a device name.
I just went back and retested; it is definitely "/dev/mapper/mpath0", as you stated, Peter. I have no idea why I said mapper0, or why I repeated it so frequently.
Anyway, what things can I test out for you folks?
Sean