Very rough storage validation matrix draft

Adam Williamson awilliam at redhat.com
Sat Dec 14 18:08:35 UTC 2013


On Sat, 2013-12-14 at 10:56 -0700, Chris Murphy wrote:
> On Dec 14, 2013, at 12:48 AM, Adam Williamson <awilliam at redhat.com> wrote:
> 
> > On Sat, 2013-12-14 at 00:06 -0700, Chris Murphy wrote:
> >> In my opinion, every one of those tests requires a feature owner. If
> >> no one volunteers, if a hand off isn't made, the functionality for the
> >> feature represented by the sanity check shall be removed from the next
> >> version of Fedora.
> > 
> > Do you mean someone who is responsible for development of the feature,
> > or testing it?
> 
> Testing it.
> 
> > Right now I'm simply trying to figure out a vaguely practical approach
> > for testing what we can of the installer's storage functions. That's
> > really all I'm shooting for.
> 
> I understand that. I'm suggesting an approach that ties functionality
> retention to community interest. If we can't recruit even temp "QA
> people" to adopt a test case, then maybe the community doesn't really
> value the functions indicated by those test cases.

Personally I don't think that approach really works out. What are we
going to do, keep a big database of who 'owns' which test at any given
time? Make people file paperwork to keep tests in the matrix? Who's
going to own the process of tracking who owns which tests?

I think tests with difficult requirements are something we have to deal
with, but I don't think having 'feature owners' is the way to go,
personally.

> For example the iSCSI test case. It seemed pretty much no one in QA
> really cared about that functionality, as they didn't depend on or use
> it themselves. It was just a test case to them. So where are the
> people who actually want that function to work?

I was perfectly willing to test this one, actually, only it turns out my
iSCSI target has some kind of weird issue that others' targets don't
hit. So in practice I can't, at the moment.

I can see how your approach kind of feels like it makes sense if you
think of every little attribute of the installer as a 'feature', but
that framing doesn't quite work for me. I mean, 'install into free
space' isn't really a 'feature', or at least I don't see that it gets us
anywhere to think of it as one. What does it mean to be the 'feature
owner' for the 'install onto an empty disk' 'feature'?

> Another example is LVM Thin P. I want to know where the feature's
> owners have been this whole time, and how it is they didn't test RC1
> to see if their own feature, given prime real estate in the installer,
> was working. On that basis alone I'd say LVM Thin P should be pulled
> due to lack of community interest, including apparently lack of
> interest by the feature's own owners.

I'm honestly willing to cut them some slack here, given that RC1 existed
for *one whole day* before Go/No-Go.

One thing I should probably unpack explicitly here is that I'm worried
the way we've done releases the last few cycles is becoming the New
Normal, especially for people whose involvement with validation only
goes back a few releases. I don't think it's a good thing at all that
we've got 'accustomed' to spinning the RC we wind up releasing about 16
hours before we sign off on its release; we only really started doing
that a lot around F18, and it is not at all the optimal way to do
things. It is much better if we have the RC we're planning to ship built
for, like, 3-4 days, to give us a chance to find issues in it that
aren't immediately screamingly obvious from the validation matrices.
Building the release image late on Wednesday and then doing go/no-go on
Thursday is absolutely not how we really _want_ to be doing things.
People outside QA aren't reading the test@ list 24/7, ready to jump on a
new RC; I don't think it's entirely realistic to expect every Change
owner to catch the final RC in a 16-hour window and test their Change.

LVM thinp support as a feature is actually owned by dlehman -
https://fedoraproject.org/wiki/Changes/InstallerLVMThinProvisioningSupport
- who I expect was taking a well-earned break during the very short
window in which we were testing RC1.
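
(For what it's worth, thinp is also one of the easier cases in the
matrix to exercise without the UI, since IIRC the Change added kickstart
support as well. Something along these lines - names and sizes made up,
and I'm going from memory on the exact options, so check the pykickstart
docs before relying on it - should create a thin pool and a thin root LV
non-interactively:

  zerombr
  clearpart --all --initlabel
  part /boot --fstype=ext4 --size=500
  part pv.01 --size=20000 --grow
  volgroup vg_test pv.01
  logvol none --vgname=vg_test --name=pool00 --size=16000 --thinpool
  logvol / --fstype=ext4 --vgname=vg_test --name=root --size=8000 --thin --poolname=pool00

That's exactly the kind of case the CI work I mention below could pick
up cheaply, even if we keep a manual thinp test in the matrix too.)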

> > This is one possible approach; there are many others. I mean, prior
> > to newUI, we placed a _much_ lower emphasis on custom partitioning.
> 
> Yes, well you know how I feel about manual partitioning. The test
> cases are necessary to qualify that the function works sufficiently
> well for release. But to execute the test cases requires either
> people or some magical automation.

The anaconda team is currently working on integrating CI into their
development process, and may come to us for help with that later on (to
help write test cases). I think that will help quite a lot; we could
probably do much better 'sanity checking' with automated tests that
stress obvious corner cases like setting things to invalid values,
repeatedly changing values, and so on. I would expect we would also be
able to cover a lot of the cases in that matrix, at least in non-UI
form. I think there are always likely to be bugs in the UI stuff that
only become apparent when you run the full interactive installer and
click on things, and I'm personally fairly sceptical about the
practicality of automated interactive testing, so I think we'll need to
maintain some kind of set of manual interactive partitioning tests for
the foreseeable future. But I'm definitely hoping this CI project works
out, as I think it will take a lot of the load - outside of
interactive-specific bugs - off of us.
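
To make 'sanity checking' concrete: the kind of non-UI test I have in
mind is just throwing every invalid value we can think of at whatever
code validates storage input, and checking that it refuses politely
rather than crashing. A toy pytest sketch - parse_size() here is a
stand-in I made up for illustration, not a real anaconda/blivet entry
point; real tests would call whatever the team exposes:

  import re
  import pytest

  def parse_size(spec):
      """Toy stand-in for a storage size parser: accepts strings
      like '500 MiB' or '2.5 GiB' and returns the size in MiB."""
      m = re.match(r"(\d+(?:\.\d+)?)\s*(MiB|GiB|TiB)$", spec.strip())
      if m is None:
          raise ValueError("invalid size spec: %r" % spec)
      factor = {"MiB": 1, "GiB": 1024, "TiB": 1024 ** 2}[m.group(2)]
      return float(m.group(1)) * factor

  # The interesting part is just feeding it broken input and checking
  # that it fails cleanly with ValueError instead of blowing up.
  @pytest.mark.parametrize("bad", ["", "GiB", "-1 GiB", "1e999 GiB", "lots"])
  def test_invalid_sizes_are_rejected(bad):
      with pytest.raises(ValueError):
          parse_size(bad)

Multiply that by every input field in the custom partitioning spoke and
you get a lot of the 'sanity' coverage from the matrix without anyone
having to click through the UI.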
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


