New automated test coverage: openQA tests of critical path updates
by Adam Williamson
Hi folks!
I am currently rolling out some changes to the Fedora openQA deployment
which enable a new testing workflow. From now on, a subset of openQA
tests should be run automatically on every critpath update, both on
initial submission and on any edit of the update.
For the next little while, at least, this won't be incredibly visible.
openQA sends out fedmsgs for all tests, so you can sign up for FMN
notifications to learn about these results. They'll also be
discoverable from the openQA web UI - https://openqa.fedoraproject.org
. The results are also being forwarded to ResultsDB, so they'll be
visible via ResultsDB API queries and the ResultsDB web UI. But for
now, that's it...I think.
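For anyone who wants to poke at the raw data before any Bodhi integration lands, a ResultsDB query can be sketched roughly like this. To be clear, this is only a sketch: the API base URL and the example item value are illustrative assumptions, so check the production ResultsDB deployment for the real ones.

```python
from urllib.parse import urlencode

# Assumed base URL; check the production ResultsDB deployment for the real one.
RESULTSDB_API = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"

def results_query_url(item, testcase=None, limit=20):
    """Build a ResultsDB /results query URL for a given item (e.g. an update)."""
    params = {"item": item, "limit": limit}
    if testcase:
        params["testcases"] = testcase
    return "%s/results?%s" % (RESULTSDB_API, urlencode(params))

if __name__ == "__main__":
    # With network access you could then fetch and inspect the outcomes:
    # import requests
    # for res in requests.get(results_query_url("SOME-UPDATE-ID")).json()["data"]:
    #     print(res["testcase"]["name"], res["outcome"])
    print(results_query_url("SOME-UPDATE-ID"))
```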
Our intent is to set up the necessary bits so that these results will
show up in the Bodhi web UI alongside the results for relevant
Taskotron tests. There's an outside possibility that Bodhi is actually
already set up to find these results in ResultsDB, in which case
they'll just suddenly start showing up in Bodhi - we should know about
that soon enough. :) But most likely Bodhi will need a bit of a tweak
to find them. This is probably a good thing, because we need to let the
tests run for a while to find out how reliable they are, and if there's
an unacceptable number of false negatives/positives. Once we have some
info on that and are happy that we can get things sufficiently reliable
for the results to be useful, we'll hook up the Bodhi integration.
The tests run here are most of those that, in the 'compose test'
workflow, get run on the Server DVD and Workstation Live images
after installation. Between them they do a decent job of covering basic
system functionality. They also cover FreeIPA server and client setup,
and Workstation browser (Firefox) and terminal functionality. So
hopefully, if your critpath update completely breaks one of those basic
workflows, you'll find out about it before pushing it stable.
At present it looks like the Workstation tests may sometimes fail
simply because the base install gets stuck during boot for some reason;
I'm going to look into that this week. In testing so far the Server
tests seem fairly reliable, but I want to gather data from a few days
worth of test runs to see how those look. Once we start sending results
to Bodhi, I'll try and write up some basic instructions on how to
interpret and debug openQA test results; QA folks will also be
available in IRC and by email for help with this, of course.
You can see sample runs on Server:
https://openqa.stg.fedoraproject.org/tests/overview?groupid=1&build=FEDOR...
and Workstation:
https://openqa.stg.fedoraproject.org/tests/overview?version=25&distri=fed...
The 'desktop_notifications_live' failure is a stale bit of data - that
test isn't actually run any more because obviously it makes no sense in
this context, but because it got run one time in early development,
openQA continues to show it for that update (it won't show for any
*other* update). The 'desktop_update_graphical' fail is a good example
of the kind of issue I'll have to look into this week: it seems to have
failed because of an intermittent crasher bug in PackageKit, rather
than an issue in the update. We'll have to look at skipping known-
unreliable tests, or marking them somehow so you know the deal in
Bodhi, or automatically re-running them, or things along those lines.
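As a rough illustration of the 'automatically re-running them' option, a re-run wrapper could look something like the following. This is a sketch only, with made-up names, not anything that exists in our tooling today:

```python
import time

def run_with_retries(test_func, attempts=3, delay=0):
    """Re-run a flaky check up to `attempts` times; pass if any run passes.

    Returns (passed, attempts_used). This only papers over intermittent
    failures, so pairing it with logging keeps flaky tests visible.
    """
    for attempt in range(1, attempts + 1):
        if test_func():
            return True, attempt
        if delay:
            time.sleep(delay)
    return False, attempts

# Example: a check that fails once, then passes on its second attempt.
outcomes = iter([False, True])
passed, used = run_with_retries(lambda: next(outcomes))
```

The trade-off is the usual one: retries reduce noise in the reported results, but they also slow down runs and can hide genuinely intermittent bugs (like the PackageKit crasher above) unless the first-attempt failures are still logged somewhere.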
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
6 years, 9 months
Semi-automated (and automated) testing of laptops for Fedora
by Benjamin Berg
Hi,
we are currently looking into enabling us to test laptops more
effectively. There are two main parts to the issue, which are to
1. have a system to run semi-automated tests on a standalone machine
and submit the results to an online server ("Fedora Tested Laptops")
and to
2. run parts of the tests in a fully automated fashion in a lab here in
Munich.
For now I am probably going to concentrate on the first part, but full
automation is still something to keep in mind. Some automation might
also happen without a full CI setup (e.g. simulate the lid switch or
plugging in different monitors using the chameleon board).
Focusing on the feature set the test runner should have, I see the
following requirements:
* Online submission of results
- Initially probably just manual updates and uploads to the wiki
- Fedora has resultsdb, but it is not designed to store larger blobs
* Ability to run standalone on a machine
- Resume test after interruptions like kernel panics
- Show test status and user instructions for tests requiring
interaction, but allow the tests to run unattended when a servo is
available.
- Allow skipping any tests requiring user interaction
* Possible to integrate into a CI setup
* Gathering of data about hardware before and during the test
- e.g. dmidecode, power usage, CPU states, firmware tests
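To make that last point concrete, the data-gathering step could be sketched like this in Python. The command list is illustrative (dmidecode also needs root), and missing tools are simply skipped rather than failing the run:

```python
import platform
import shutil
import subprocess

# Illustrative probes; dmidecode needs root, and absent tools are skipped.
PROBE_COMMANDS = {
    "dmidecode": ["dmidecode", "--type", "system"],
    "cpu_info": ["lscpu"],
}

def gather_hardware_info():
    """Collect basic hardware data before a test run, skipping missing tools."""
    info = {"kernel": platform.release(), "machine": platform.machine()}
    for name, cmd in PROBE_COMMANDS.items():
        if shutil.which(cmd[0]) is None:
            info[name] = None  # tool not installed (or not in PATH)
            continue
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            info[name] = out.stdout if out.returncode == 0 else None
        except (OSError, subprocess.TimeoutExpired):
            info[name] = None
    return info
```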
So far I have had a closer look at the following tools:
* OpenQA (http://open.qa/)
* autotest (python, http://autotest.github.io/)
* avocado (python, https://avocado-framework.github.io/)
* resultsdb
* taskotron
Right now I think that avocado (a successor to autotest) is the best
fit and can be adapted to the above needs. The only real advantage of
autotest is that Google uses it on a large scale for testing
chromebooks, but it seems harder to adapt and use. Most of the other
tools cover other parts of a CI infrastructure.
With this in mind, my current plan would be to work on the following
items using avocado as a base:
1. Integrate a test status plugin, including the ability to prompt with
fine-grained user instructions (maybe using DBus)
2. Work on support to resume interrupted runs (e.g. after a kernel panic)
3. Create data collection plugins and add features where sensible (e.g.
maybe add RAPL power monitoring into upower)
4. Start writing test cases to exercise the above
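For item 2, the core of resumable runs can be sketched with a simple state file recording which tests already finished. The names and state-file location here are hypothetical, and a real implementation would presumably live in an avocado plugin rather than look like this:

```python
import json
from pathlib import Path

STATE_FILE = Path("/tmp/laptop-test-state.json")  # placeholder location

def run_suite(tests, state_file=STATE_FILE):
    """Run (name, func) tests in order, recording completions so a crashed
    or panicked run can be resumed without repeating finished tests."""
    done = set()
    if state_file.exists():
        done = set(json.loads(state_file.read_text()))
    results = {}
    for name, func in tests:
        if name in done:
            continue  # already completed in a previous (interrupted) run
        results[name] = func()
        done.add(name)
        # persist after every test, so an abrupt reboot loses at most one
        state_file.write_text(json.dumps(sorted(done)))
    return results
```

Writing the state file after every single test keeps the window for lost progress to one test, which matters when the interruption is a kernel panic rather than a clean shutdown.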
Opinions? Have I missed something important maybe?
Benjamin
Official packages for python-openqa_client and resultsdb_conventions
by Adam Williamson
Hi folks! Just a heads up that there are now official Fedora packages
for the openQA python client -
https://github.com/os-autoinst/openQA-python-client - and
resultsdb_conventions -
https://pagure.io/taskotron/resultsdb_conventions . The source packages
are 'python-openqa_client' and 'resultsdb_conventions' respectively,
binary packages are python(pyver)-openqa_client, python2-
resultsdb_conventions and python2-resultsdb_conventions-fedora (split
out to keep the fedfind dependency separate); there is no Python 3
package for resultsdb_conventions as there is no Python 3 package for
resultsdb_api.
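As a quick illustration of what using the client looks like, something along these lines should work. I'm sketching this from memory, so treat the class and method names in the comments as assumptions and check the client's README; the parameter-building helper here is purely illustrative:

```python
def overview_params(distri="fedora", version="25", build=None, groupid=None):
    """Assemble query parameters for an openQA jobs-style request."""
    params = {"distri": distri, "version": version}
    if build:
        params["build"] = build
    if groupid is not None:
        params["groupid"] = groupid
    return params

if __name__ == "__main__":
    # With python-openqa_client installed, a query would look roughly like:
    # from openqa_client.client import OpenqaClient
    # client = OpenqaClient(server="openqa.stg.fedoraproject.org")
    # jobs = client.openqa_request("GET", "jobs", params=overview_params(groupid=1))
    print(overview_params(groupid=1))
```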
The packages are in Rawhide now and updates are submitted for F24 and
F25. python-openqa_client updates are also available for EPEL 6 and 7;
resultsdb_conventions will be available for EPEL 7, but for some reason
the branch wasn't created as I requested, so it's not available yet.
I'd do EPEL 6 as well, but resultsdb_api isn't packaged for EPEL 6.
I'll switch the openQA roles and deployments over to using the packages
shortly, and update the relevant docs for fedora_openqa etc.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
New list for ResultsDB users
by Adam Williamson
Hi folks! So I've been floating an idea around recently to people who
are currently using ResultsDB in some sense - either sending reports to
it, or consuming reports from it - or plan to do so. The idea was to
have a group where we can discuss (and hopefully co-ordinate) use of
ResultsDB - a place to talk about result metadata conventions and so
forth.
It seemed to get a bit of traction, so I've created a new mailing list:
resultsdb-users . If you're interested, please do subscribe, through
the web interface:
https://lists.fedoraproject.org/admin/lists/resultsdb-users.lists.fedorap...
or by sending a mail with 'subscribe' in the subject to:
resultsdb-users-join(a)lists.fedoraproject.org
Please note: despite the list being a fedoraproject one, the intent is
to co-ordinate with folks from CentOS, Red Hat and maybe even further
afield as well; we're just using an fp.o list as it's a quick
convenient way to get a nice mailman3/hyperkitty list without having to
go set up a list server on taskotron.org or something.
Thanks folks!
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
Taskotron CI in Taskotron
by Josef Skladanka
Gang,
I finally got some work done on the CI task for Taskotron in Taskotron. The
idea here is that after each commit (of a relevant project - trigger,
execdb, resultsdb, libtaskotron) to pagure, we will run the whole stack in
docker containers, and execute a known "phony" task, to see whether it all
goes fine.
The approach I devised is that I'll build a 'testsuite' container based on the
Trigger, and instead of running the fedmsg hub, I'll just use the CLI to
"replay" what would happen on a known, predefined fedmsg.
The testsuite will then watch execdb and resultsdb to check whether
everything went fine.
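That "watch execdb and resultsdb" step basically boils down to polling until a matching result shows up or a timeout hits. A generic sketch, with a stubbed fetcher standing in for the real API calls (everything here is made-up names, not actual task-taskotron-ci code):

```python
import time

def wait_for_result(fetch, matches, timeout=300, interval=5):
    """Poll a result source (e.g. a ResultsDB query wrapped in `fetch`)
    until `matches` accepts one of the returned results, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for result in fetch():
            if matches(result):
                return result
        time.sleep(interval)
    raise TimeoutError("no matching result before timeout")

# Example with a stubbed fetcher: empty batch first, then the phony result.
batches = iter([[], [{"testcase": "phony", "outcome": "PASSED"}]])
found = wait_for_result(lambda: next(batches),
                        lambda r: r["outcome"] == "PASSED",
                        timeout=10, interval=0)
```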
It is not at all finished, but I started hacking on it here:
https://pagure.io/taskotron/task-taskotron-ci
I hope to finish it (to a point where it runs the phony task) by the end
of the week. At that point, I'd be glad for any actual, sensible task ideas
to ideally test as much of the capabilities of the
libtaskotron/execdb/resultsdb as possible.
The only problem with this kind of testing is that we still don't really
have a good way to test the trigger, as it is tied to external events. My idea
here was that I could add something like wiki edit consumer, and trigger
tasks off of that, making that one "triggering" edit from inside the
testsuite. But as it's almost 4am here, I'm not sure it is the best idea.
Once again, I'll be glad for any input/ideas/evil laughter.
Joza
fedora_openqa (scheduler/reporter) and createhdds split from
openqa_fedora_tools, moved to Pagure
by Adam Williamson
Hi folks! So continuing with the agreed plan (for openQA bits) to move
git repos to Pagure and split up openqa_fedora_tools, I've split out
and moved the scheduler/reporter library and CLI - now called
'fedora_openqa', since 'fedora_openqa_schedule' was a dumb name for a
thing that actually does more than just scheduling - and createhdds:
https://pagure.io/fedora-qa/fedora_openqa
https://pagure.io/fedora-qa/createhdds
I have renamed the Phabricator projects for the openQA tests and
scheduler too:
openqa_fedora -> os-autoinst-distri-fedora
openqa_fedora_tools -> fedora_openqa
The new names match the git repo names. Almost all the issues and diffs
for openqa_fedora_tools were really for the scheduler; for createhdds I
think we can just use Pagure issues / PRs. I will move the one
outstanding createhdds diff to Pagure manually. For fedora_openqa we
will still be using Phab for issues and diffs for now, just with the
new project name. The .arcconfig in the new repo should be correct.
I'm trying to clean up all the appropriate bits in Pagure, READMEs,
wiki, ansible tasks etc. this afternoon. If it doesn't all look to be
in line tomorrow AM, please do let me know (or just go ahead and fix
anything straightforward you find that I've overlooked). Yes, I know
that right now the fedora_openqa README references a 'Fedora openQA
wiki page' that doesn't exist, I'm planning to write it soon, and move
the 'how to install openQA' content from openqa_fedora_tools into it.
Thanks!
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net