#483: Help design a new release validation testing system
Reporter: adamwill | Owner:
Type: Web Design | Status: new
Priority: medium | Severity: Long-Term / Complex Issue
Keywords: | Blocked By:
* What's your deadline (could be date, could be Fedora release milestone)
No strict deadline.
* Who's the developer writing the code (IRC nick + email + wiki profile)
For now, me (adamw / adamwill / adamwill(a)fp.o)
* If you can, please provide us with example URLs of web designs that are
similar to the result you're looking for
Well, the thing I wanna replace is:
There's nothing precisely similar to what I want instead (which is kinda
why we're thinking of making a new thing), but in the same vein, there's
and Moztrap, which we were looking at using for a while:
* What type of web project is this?
A system for reporting and viewing Fedora release validation testing results.
* Wireframes or mockups for a website / web application
Will attach my SUPER AWESOME literally-on-the-back-of-an-envelope sketch.
* Is this for a new or existing site? (if existing, provide URL)
* Do you need CSS/HTML for the design?
This would be an entirely new webapp.
* Provide a link to the application project page or github page
Don't have one yet.
* Provide a link to the theming documentation if available
* Provide a link to the deployment to be themed, if available
* Set up a test server and provide connection/login information
App doesn't exist yet. :)
So for a long time we (Fedora QA) have been using the wiki for storing
validation test results. There is a hilariously complex mess of stuff -
clever wiki templates, python-wikitcms, and the relval fedmsg consumer -
all conspiring to produce all the wiki validation pages for new Fedora
composes when appropriate. Then we ask the squishy humans who actually do
(some of) the testing to either edit the wiki pages directly or use the
`relval report-results` command (basically a crappy TUI which knows how to
edit the wiki) to report their results.
We don't like this for one really important reason and a few less
important ones. The really important reason is, it's a terrible interface
for humans to report test results; needlessly hard to understand and easy
to get wrong (wiki syntax is awful). The less important reasons are, it
needs an awful lot of complicated (and just plain dumb) code to keep it
all working, and it's a really stupid way to store results, which makes
pretty much any kind of analysis of said results more work than it ought to be.
So we'd quite like to come up with a completely new way to do release
validation testing. We've gone through several versions of this plan in
the past and none has quite worked out. The current idea is to write a new
webapp from scratch which would be tuned to the release validation
workflow and would store the results in ResultsDB (which will make it easy
to consolidate them with results from automated test systems in future).
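Just to sketch what 'store the results in ResultsDB' might mean in practice,
here's a rough illustration of reporting a single result over ResultsDB's
JSON API; the URL, the testcase name and the extra data keys are all made up
for the example, none of this is a decided schema:

    # Rough sketch only: assumes ResultsDB's JSON-over-HTTP API. The URL,
    # the testcase name and the extra 'data' keys are placeholders.
    import requests

    RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0"  # hypothetical instance

    def report_result(testcase, outcome, image, arch, tester):
        payload = {
            "testcase": testcase,        # e.g. "compose.install_default_boot"
            "outcome": outcome,          # e.g. "PASSED" or "FAILED"
            "data": {
                "item": image,           # the deliverable that was tested
                "arch": arch,
                "tester": tester,
            },
        }
        resp = requests.post(RESULTSDB_URL + "/results", json=payload)
        resp.raise_for_status()
        return resp.json()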
My current very rough idea for approximately how this could look is in the
image I'm gonna attach, if you can read it. It's basically somewhat
similar to how the wiki pages look, but smarter.
The basic flow would be that you'd pick a deliverable and report results
for that deliverable. The 'pick a deliverable' stuff would happen at the
top of the page: my first thought is to have two lists (drop-downs?) side-
by-side, one for 'arch' and one listing the deliverables; picking an arch
would cause the deliverable list to only show deliverables for that arch.
In the arch list, release-blocking arches would have clear prominence over
non-blocking arches, and similarly in the deliverable list, release-
blocking deliverables would have clear prominence over non-blocking.
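To make that a bit more concrete, here's a tiny sketch of the selection
logic; the data shape, field names and image names are invented for
illustration:

    # Sketch only: 'deliverables' is an imagined list describing the images
    # in a compose; 'arch', 'blocking' and the names are placeholders.
    def deliverables_for_arch(deliverables, arch):
        """Deliverables for one arch, release-blocking ones listed first."""
        matching = [d for d in deliverables if d["arch"] == arch]
        # sorted() is stable, so blocking images float to the top while the
        # rest keep their original order
        return sorted(matching, key=lambda d: not d["blocking"])

    deliverables = [
        {"name": "Server-dvd-x86_64", "arch": "x86_64", "blocking": True},
        {"name": "Xfce-live-x86_64", "arch": "x86_64", "blocking": False},
        {"name": "Workstation-live-x86_64", "arch": "x86_64", "blocking": True},
        {"name": "Server-dvd-aarch64", "arch": "aarch64", "blocking": True},
    ]
    # only x86_64 images, with the blocking ones first
    print(deliverables_for_arch(deliverables, "x86_64"))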
Once you'd picked a deliverable we'd show a download button with the image
size, and show a table of the test cases that can be run with that deliverable.
The default sort for the test cases would prioritize important tests that
had not yet been run: there's kinda a few different properties of tests
that could be used for sorting, I'm not sure yet exactly how to combine
them and whether to offer any sorting options to the user. But there's the
test's 'milestone' - in the current implementation these are Alpha, Beta,
Final and Optional, that's basically the effective order of importance -
whether the test has been run by anyone else, and the test's 'type'
(Installation, Base, Server etc - we aren't tied to these test types for
the new system, but they are actually not a bad concept and could probably
stand to stick around).
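As a sketch of that default sort (the milestone names are the current ones;
everything else - the field names, the test names, the exact ordering rules -
is just illustrative, not decided):

    # Sketch of one possible default sort for the test case table.
    MILESTONE_ORDER = {"Alpha": 0, "Beta": 1, "Final": 2, "Optional": 3}

    def test_sort_key(test):
        return (
            test["already_run"],                 # not-yet-run tests come first
            MILESTONE_ORDER[test["milestone"]],  # then Alpha before Beta before Final...
            test["type"],                        # then grouped by type
        )

    tests = [
        {"name": "base startup", "milestone": "Alpha",
         "type": "Base", "already_run": True},
        {"name": "install to hard drive", "milestone": "Alpha",
         "type": "Installation", "already_run": False},
        {"name": "server roles", "milestone": "Beta",
         "type": "Server", "already_run": False},
    ]
    for test in sorted(tests, key=test_sort_key):
        print(test["name"])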
One more concept we'd need to keep in mind is that running every relevant
test for every image is likely impossible, so we need to be smart about
saying 'as long as this test has been run for any live image, we're OK' or
things along those lines. In the wiki system we mess around with the
result columns to achieve this - if you look at the titles of the columns
where results go they flip around constantly, sometimes we use arch,
sometimes 'product', there's all kinds. In the new system I want the
*user* to only have to worry about what ISO they're testing, but for the
*admins* and people involved in the release process, we'll need it to be
possible to specify 'groups' of images for each test case. So say for a
single test case we'd set up four groups of images and say 'as long as the
test has been run with at least one image from each group, we're covered'.
The way this would be significant to the user is that it'd be less
important for them to run a given test on their chosen ISO if its 'group'
was already covered, so we'd ideally indicate that somehow.
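Here's a tiny sketch of how that coverage check could work for a single test
case (the group names and image names are made up):

    # Sketch of the 'image groups' idea. 'GROUPS' would be defined by
    # admins; 'tested_images' is the set of images the test already has a
    # passing result for. All names here are invented.
    GROUPS = {
        "any live image": {"Workstation-live-x86_64", "KDE-live-x86_64"},
        "any install image": {"Server-dvd-x86_64", "Everything-netinst-x86_64"},
    }

    def uncovered_groups(groups, tested_images):
        """Names of groups that still need at least one result."""
        return [name for name, images in groups.items()
                if not images & tested_images]

    # after someone reports a pass on the Workstation live image:
    print(uncovered_groups(GROUPS, {"Workstation-live-x86_64"}))
    # -> ['any install image'], so the UI could de-emphasize this test for
    #    other live images and highlight it for the install images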
I figured it would be good to get some help with the design before we run
off and start coding stuff, which is why I'm opening this ticket; I'm
basically hoping you folks can help us think intelligently about what
we're going to build before we start, and come up with some nice
blueprints for us to work off (nicer than my envelope...).
Ticket URL: <https://fedorahosted.org/design-team/ticket/483>