Draft desktop validation test matrix

James Laska jlaska at redhat.com
Wed Jan 13 14:14:29 UTC 2010

On Tue, 2010-01-12 at 23:26 +0000, Adam Williamson wrote:
> On Tue, 2010-01-12 at 18:32 +0800, Li Ming wrote:
> > Yes, currently the cases are not enough for a new page, but if you can 
> > add/extend more cases, I think it is worth a new page. And the 
> Well, it's a set of tests to validate the release, hence it has tests to
> check all existing release criteria. We can't really just invent new
> tests out of thin air if they're not things we actually need to test to
> validate the release :)
> > classification of test results is different: the existing installation 
> > matrix is split by architecture (i386, x86_64), but your cases are split by session.
> These tests are not likely to vary according to architecture, in my
> opinion.
> >  Putting 
> > them together seems a bit incompatible?
> Not necessarily, we can have multiple tables on one page.
> >  and also the classification of 
> > criteria is different.

The frequency of each test run will determine how we store the results.
Presently, install test runs are scheduled at specific dates [1] during
the release.  If we also plan to include additional test plans during
each of these test runs, the applicable templates can be pulled into the
test run wiki page for that event:
      * 3.  Pre-Alpha Rawhide Acceptance Test Plan #1   Thu 2010-01-21
      * 9.  Pre-Alpha Rawhide Acceptance Test Plan #2   Thu 2010-01-28
      * 10. Pre-Alpha Rawhide Acceptance Test Plan #3   Thu 2010-02-04
      * 12. Test Alpha 'Test Compose' (boot media testing)  Thu
      * 14. Test Alpha Candidate    Thu 2010-02-18
      * 29. Pre-Beta Rawhide Acceptance Test Plan   Wed 2010-03-10
      * 30. Test Beta 'Test Compose' (boot media testing)   Thu
      * 32. Test Beta Candidate     Thu 2010-03-25  Thu 2010-04-01
      * 39. Pre-RC Rawhide Install Test Plan    Wed 2010-04-14
      * 46. Test 'Final' Test Compose (boot media testing)  Thu
      * 49. Test 'Final' RC     Thu 2010-04-29

My suggestion: let's go with a different wiki "template" for each focus
area (desktop, install, other).  When the time comes to create a test
run page, each of the applicable focus areas will be substituted [2]
into that single page.

Each test run wiki page would consist of:

        {{subst:Fedora 13 Test Run Template}}
        {{subst:Fedora 13 Milestone Checklist Template}}
        {{subst:Fedora 13 Install Results Template}}
        {{subst:Fedora 13 Desktop Results Template}}
        <insert additional templates here>
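
To make the mechanism concrete, here's a hypothetical sketch of what one
of those results templates might contain (the template name, test case
links, and columns are illustrative, not the actual Fedora QA templates):

```wikitext
== Desktop validation results ==
<!-- Hypothetical skeleton for "Fedora 13 Desktop Results Template";
     real template content would differ -->
{| class="wikitable"
! Test case !! GNOME !! KDE !! References
|-
| [[QA:Testcase_desktop_login|Desktop login]] ||  ||  ||
|-
| [[QA:Testcase_desktop_browser|Web browser]] ||  ||  ||
|}
```

Because the templates are pulled in with subst: rather than plain
transclusion, saving the test run page copies each template's wikitext
into the page itself, leaving ordinary editable tables that testers can
fill in without touching the shared template.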

> In fact, the classification of the installation tests should now be
> adjusted to this system - classifying tests by release stage rather than
> the arbitrary 'tier' concept - since we now have proper per-release
> release criteria.

I hear your point, but "arbitrary" isn't the most accurate way to
describe the years of accumulated test knowledge that went into the
current method for prioritizing install testing [3].

> > So I suggest create a new matrix and add more 
> > cases to it in the future

I have a ticket assigned to address that point (see
https://fedorahosted.org/fedora-qa/ticket/35).  I'm still wrestling with
a decent way to visualize this change, and implement it in a repeatable
manner in the wiki.  I've got some thoughts, just need to put pen to
paper so folks can review.


[2] http://en.wikipedia.org/wiki/Help:Substitution