openQA live image testing: ready for merge?

Adam Williamson adamwill at fedoraproject.org
Wed Mar 11 23:42:13 UTC 2015


Hey, folks. So I've been tweaking and testing my openQA live image 
test stuff today, and I think it's probably about ready for merging.

I've got branches of both openqa_fedora and openqa_fedora_tools with 
relevant changes:

https://www.happyassassin.net/cgit/openqa_fedora/log/?h=live
https://www.happyassassin.net/cgit/openqa_fedora_tools/log/?h=live

On the openQA side there's all the expected work to add new needles 
and test cases, and to modify existing ones where appropriate to 
handle both live and non-live cases. It's quite a lot of change, and 
this mail will get very long if I try to summarize it all, so just 
poke me with questions about any specific bits that aren't obvious. I 
might go back through and sprinkle some comment-fu on there today.

On the fedora_tools side, various tweaks were needed. Image 
downloading is tweaked so the filenames for different images should 
always be unique (we can't use sub-directories, I asked :<).
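
To give a flavour of the idea, here's an illustrative sketch (not the 
actual openqa_fedora_tools code; the function and metadata names are 
made up): since all images have to land in one flat directory, the 
downloader can build a name that bakes in enough compose metadata to 
be unique:

    # Illustrative sketch only, not the real downloader code.
    def unique_image_name(release, milestone, compose, payload,
                          imagetype, arch, origname):
        """Prefix the original file name with compose metadata so
        images from different composes never collide in the flat
        asset directory."""
        prefix = "_".join(str(part) for part in
                          (release, milestone, compose, payload,
                           imagetype, arch))
        return "{0}_{1}".format(prefix, origname)

    # e.g. unique_image_name('22', 'Branched', '20150310', 'server',
    #                        'boot', 'x86_64', 'boot.iso')
    # -> '22_Branched_20150310_server_boot_x86_64_boot.iso'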

One of the most awkward bits was making sure we run all the tests 
that aren't particularly image-specific *exactly once* per compose - 
not zero times, and not twice or more. This is tricky because nightly 
composes have a 'generic boot.iso' image (but no 'server boot.iso', 
'server DVD', or 'server netinst'), while TC/RC composes have a 
'server boot.iso', 'server DVD', and 'server netinst' (which I think 
is always 100% identical to 'server boot.iso', but fedfind really has 
to treat them as two separate things), but no 'generic boot.iso'.

So I introduced a new openQA flavor called 'universal', and added a 
bit of logic to openqa_trigger which makes it effectively 'nominate' 
one of the images from the compose it's running on as the one that 
will have the 'universal' tests run against it. It *also* then 
schedules every image downloaded - including the one nominated as 
'universal' - under its 'natural' flavor name (which is now 
payload_imagetype). To match this, on the openQA side, all the tests 
which can run with any non-live image are now associated with the 
'universal' flavor/product, and only image-specific tests (which 
currently means just 'default boot and install') are associated with 
the image-specific flavors/products.
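
If it helps to picture it, here's a rough sketch of the nomination 
idea (the helper names and the image dict layout are invented for 
illustration; the real logic lives in openqa_trigger in the live 
branch):

    # Rough illustrative sketch; the image dicts and helper names
    # are made up, the real logic lives in openqa_trigger.
    def nominate_universal(images):
        """Pick one non-live image to carry the 'universal' flavor:
        prefer a generic boot.iso (nightlies), fall back to the
        server boot.iso (TC/RC composes)."""
        for payload in ('generic', 'server'):
            for img in images:
                if img['payload'] == payload and img['imagetype'] == 'boot':
                    return img
        return None

    def flavors_for(img, universal):
        """Every image is scheduled under its natural
        payload_imagetype flavor; the nominated image is also
        scheduled as 'universal'."""
        flavors = ['{0}_{1}'.format(img['payload'], img['imagetype'])]
        if img is universal:
            flavors.append('universal')
        return flavors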

The upshot of that is that when you schedule a run against a compose, 
you get most of the tests run just once against one of the non-live 
images, plus one instance of default_boot_install per image, which 
is, I think, exactly what we want.

The other cute hack I added is a way for the result reporting stuff 
to figure out how to report the default_boot_install results to the 
wiki correctly. I added an optional key for the per-testcase dicts in 
the TESTCASES dict-of-dicts: if a testcase has a 'name_cb' key, its 
value should be a callback function which provides the correct 
testcase name when called with the openQA job's 'flavor' as the sole 
argument. report_job_results.py checks for the callback, calls it if 
it's there, and uses the return value as the --testcase parameter it 
passes to relval. Aaaand that results in us reporting the result 
against the correct 'test instance' (row in the results table) for 
the image the test was run against. SIMPLES! Actually I kinda like 
that approach, and the general idea should be extensible to other 
cases where we need to do something like this. (We'll probably need a 
section callback for the Desktop page, for instance.)
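
In sketch form it looks something like this (the wiki testcase names 
and the flavor-to-name mapping below are invented for illustration; 
the 'name_cb' key and the flavor argument are the real mechanism):

    # Sketch of the name_cb mechanism; the testcase names and the
    # mapping are illustrative, not the real values.
    def _boot_install_name(flavor):
        """Return the wiki testcase name matching the image flavor."""
        names = {
            'universal': 'QA:Testcase_Boot_default_install',
            'server_boot': 'QA:Testcase_Boot_default_install_Server_boot_iso',
        }
        return names.get(flavor, 'QA:Testcase_Boot_default_install')

    TESTCASES = {
        'default_boot_install': {
            # ...the usual per-testcase keys...
            'name_cb': _boot_install_name,
        },
    }

    # report_job_results.py then does, roughly:
    def testcase_name(testcase, job):
        entry = TESTCASES[testcase]
        if 'name_cb' in entry:
            return entry['name_cb'](job['flavor'])
        return testcase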

Anyhow, it's a tad tricky to test this with just a single run ATM, 
because we have netinsts (but no Workstation lives) for F22 composes, 
and Workstation lives (but no netinsts) for Rawhide. But I tested it 
out quite a bit with various different composes and it seems to be 
working pretty well. You can of course see all the various test runs 
at https://openqa.happyassassin.net .
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


