Very rough storage validation matrix draft

Chris Murphy lists at colorremedies.com
Mon Mar 17 20:58:50 UTC 2014


On Mar 17, 2014, at 1:33 PM, Adam Williamson <awilliam at redhat.com> wrote:
> 
> 
>> Beta	QA:Testcase_partitioning_guided_multi_empty
>> What is this?
> 
> More than one disk. (We *could* have multiple tests covering various
> scenarios here, but I was trying to keep things relatively compact.)

Patch: rename Testcase_partitioning_guided_multi_empty -> Testcase_partitioning_guided_multidev_empty

Basically, pick two devices and see if anything blows up during install or first boot.


>> - What is QA:Testcase_partitioning_custom_existing_precreated? Layout
>> created elsewhere and this tests the ability of the installer to use
>> that without making changes? Basically assigning mount points to
>> existing? 
> 
> Yeah, I think that's what I was thinking of.
> 
>> Needs a RAID column I think, if we're going to test the anaconda
>> supported "create raid elsewhere" and use it in anaconda workflow.
> 
> Thanks.

We can make that a bonus column, *shrug*. It's not obviously supported in the installer, but the anaconda team has said it's supposed to work.


> 
>> - Seems like in general we need more RAID tests. I don't see a
>> hardware raid test. 
> 
> I can't recall whether I dropped this intentionally or inadvertently,
> I'll try and check. But, of course, HW raid and BIOS RAID are really
> rather different cases from software RAID.

Mmm, well, I'm not sure what the failure vectors are for HW and BIOS RAID. The hardware RAID case should just look and behave like an ordinary single device. The firmware RAID case starts out the same way at boot time, but then becomes a variation of software RAID, since it's implemented by mdadm; the only difference is the on-disk metadata format.
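To illustrate that last point, here's a tiny sketch (my own illustration, not anything from the installer): mdadm reports a metadata format per array, and the firmware RAID cases are just the external container formats, while everything else is the same md driver with native metadata.

```python
# Sketch: classify an md array by its on-disk metadata format, the one
# real difference between firmware RAID and plain software RAID once
# mdadm is driving both. The function name is hypothetical.
def raid_flavor(metadata: str) -> str:
    # "imsm" (Intel Matrix Storage) and "ddf" (SNIA DDF) are the external
    # container formats mdadm uses for firmware RAID; "0.90" and "1.x"
    # are the native software RAID formats.
    firmware_formats = {"imsm", "ddf"}
    return "firmware RAID" if metadata.lower() in firmware_formats else "software RAID"

print(raid_flavor("imsm"))  # firmware RAID
print(raid_flavor("1.2"))   # software RAID
```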

It looks like in Rawhide's installer, firmware RAID is listed under specialized disks, which is different from hardware RAID, I think.

Anyway, I see why they're tested separately.

> 
>> Or any explicit software raid0, raid1, raid10, raid4 (a.k.a. nutcase),
>> raid5 or raid6 tests. Should there be a separate software raid matrix
>> section? And should the matrix show only what we want to
>> "support/test"? Or only those we'd block on? Or all possible
>> checkboxed options, and subjectively list some of them as "bonus"
>> release level, rather than alpha/beta/final?
> 
> We certainly need to cover SW RAID in the custom testing, you're right,
> it's an obvious miss. Not sure of the best way to approach it offhand.
> If you'd like to draft something up that'd be great, or else I'll try
> and do it.

I think any RAID layout covers a small fraction of the user base. But I also think there's broad benefit to resiliently bootable raid1, so it makes sense for us to care about /boot, rootfs, and /home on raid1, and hopefully refine it so that one day it works better on UEFI than it does now. Then expand scope as resources permit.

Everything else, I think, is totally esoteric. Ideology-wise, since it's offered in the installer it ought to work. But I also don't want to test esoteric stuff when basic, broadly useful stuff needs attention. I think the iSCSI/SAN stuff is far more useful than enabling install-time creation of, or installation to, software RAID at any level other than 1.
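To make the "raid1 blocks, everything else is bonus" idea concrete, here's a tiny sketch of how the RAID rows might be prioritized. The milestones are my guesses for illustration, not actual release criteria.

```python
# Hypothetical RAID rows for the validation matrix. Milestones are
# illustrative only; "Optional" means bonus/non-blocking coverage.
RAID_TESTS = {
    "raid1": "Beta",      # broadly useful: /boot, rootfs, /home on raid1
    "raid0": "Optional",
    "raid10": "Optional",
    "raid4": "Optional",  # a.k.a. nutcase
    "raid5": "Optional",
    "raid6": "Optional",
}

def blocking_tests(matrix):
    """Return the RAID layouts whose failure would block a milestone."""
    return sorted(level for level, milestone in matrix.items()
                  if milestone != "Optional")

print(blocking_tests(RAID_TESTS))  # -> ['raid1']
```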

Anyone know if we can boot off a glusterfs volume? Random question…

I'm not going to be much use for anything but occasional emails, taking pot shots, etc., for the next 3-4 weeks: I crashed into a tree while skiing Friday, and I have a week to prepare 3 presentations for Libre Graphics Meeting, a wedding, and travel to/from Germany for LGM. Plus recovery. And mostly I'm emailing now because I'm procrastinating.



Chris Murphy


