On Sat, 2019-12-07 at 09:22 -0500, pmkellly(a)frontier.com wrote:
I'm thinking about a project for myself, and I'm curious: is there a
tool available to automate (relval report-results)? Something
like a template with check boxes and text fields you could fill in as
you test and then do something like (relval post) and Yeah! all the
results would get posted.
Is there a text file posted somewhere that contains all the command line
commands that get used in the course of release testing? Such might be
handy for those who might like to automate some or all of their testing.
Just trading searching and typing for grabbing one file, then cut and paste.
I want to experiment with automating most of my release testing. Then I
could test more often and not contend with my typos and such.
So just to talk about the relval side here: no, nothing like this
exists ATM (at least AFAIK). If you want to implement something like
this, I wouldn't do it by scripting relval report-results, FWIW. relval
is just a CLI front-end to python-wikitcms, really; that's where all
the major heavy lifting is done. If you wanted to
write some sort of batch-submission tool I think it'd make sense to
just write it as another front-end to python-wikitcms - either as a
separate little app, or as a new subcommand for relval, `relval
batch-report` or something like it. I'd be happy to review a pull request to
add a feature like that.
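To make the idea concrete, here's a rough sketch of what a batch-report front-end could look like. The one-result-per-line file format is invented for illustration, and the wikitcms calls shown in the comment are an assumption about that library's API to be verified against python-wikitcms itself before use:

```python
# Sketch of a batch-report front-end in the spirit of python-wikitcms.
# The "testcase | environment | status" file format is invented for
# illustration; the commented-out submission path is an assumption
# about the python-wikitcms API and must be checked against the library.

from collections import namedtuple

# Stand-in for a wikitcms result tuple, with a minimal set of fields.
Result = namedtuple("Result", "testcase env status")

def parse_results(text):
    """Parse 'testcase | environment | status' lines into Result tuples."""
    results = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        testcase, env, status = (field.strip() for field in line.split("|"))
        results.append(Result(testcase, env, status))
    return results

def submit(results, dry_run=True):
    """Report parsed results to the wiki (submission path hypothetical)."""
    if dry_run:
        return results
    # Hypothetical submission sketch -- verify against python-wikitcms:
    # from wikitcms.wiki import Wiki
    # site = Wiki()
    # site.login()
    # site.report_validation_results(...)  # build real result tuples here

if __name__ == "__main__":
    sample = """
    # testcase | environment | status
    QA:Testcase_base_startup | x86_64 | pass
    QA:Testcase_base_services_start | x86_64 | fail
    """
    for res in parse_results(sample):
        print(res.testcase, res.env, res.status)
```

The point is that all the wiki logic stays in python-wikitcms; the new tool is only parsing and looping.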
My idea is to connect 2 PCs together via USB. One PC would be running
Python pretending to be the keyboard on the test machine. So of course I
need a Python library that provides for keyboard emulation. I would
setup the commands so stdout and stderr go to files I could use to parse
results to a relval template. I understand that posting back to relval
presents more problems than just having a template. To bypass all of
that I have considered that I could restrict all such testing to
Rawhide, and just send a template of results to @test at the end. Though
I might need to do something other than that as I doubt everyone would
enjoy getting my frequent Rawhide test reports.
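The "run commands, capture stdout and stderr to files, parse a verdict" part of that plan can be sketched quickly in Python. The pass/fail check here (nonzero exit code or 'error' on stderr) is an illustrative assumption, not a real release-validation criterion:

```python
# Minimal sketch of running a test command, saving its output for
# later inspection, and deciding pass/fail. The verdict rule is a
# placeholder assumption, not a real validation criterion.

import subprocess

def run_test(name, command):
    """Run one shell command, save stdout/stderr, return (name, verdict)."""
    proc = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    # Keep the raw output around so a human can double-check verdicts.
    with open(f"{name}.out", "w") as out, open(f"{name}.err", "w") as err:
        out.write(proc.stdout)
        err.write(proc.stderr)
    # Naive rule: nonzero exit or 'error' on stderr means failure.
    failed = proc.returncode != 0 or "error" in proc.stderr.lower()
    return (name, "fail" if failed else "pass")

if __name__ == "__main__":
    print(run_test("echo-check", "echo hello"))
```

This runs locally rather than over a USB keyboard link, but the parsing side would look much the same either way.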
I understand that some tests require ears and eyes; so results can't be
parsed from a file. Yes, it's sort of just duplicating what coconut
does, but at least it would be done on a different bare metal machine.
Also, I doubt that coconut does it this way; so it would be a bit of a different approach.
I suspect I'm lining up a lot of hours for myself. I doubt that the
Python library I need exists; so my first task would be to write the
library. The elapsed time is probably measured in years, but I have and
have had long term projects. The possibility that this would be a
foolish waste of time has occurred to me.
Yes, this would be a lot of work :)
To give you some context, almost all reports submitted by 'coconut'
come from the openQA tests:
The fedora_openqa library (https://pagure.io/fedora-qa/fedora_openqa)
has configuration and logic for deciding when one or more openQA test
results correspond to a passed wiki test, and reporting those results
to the wiki. (It does this using python-wikitcms, of course).
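In rough outline, that "decide when openQA results correspond to a passed wiki test" logic amounts to a mapping plus a check that every relevant job passed. The mapping table below is entirely made up for illustration; the real one lives in fedora_openqa's configuration:

```python
# Illustrative sketch of the kind of mapping fedora_openqa maintains:
# which openQA test suites, when passed, satisfy a wiki validation
# test case. The table contents here are hypothetical.

TESTSUITES = {
    # openQA test suite -> wiki test case it satisfies (hypothetical)
    "install_default": "QA:Testcase_Install_Default",
    "install_default_upload": "QA:Testcase_Install_Default",
    "base_services_start": "QA:Testcase_base_services_start",
}

def passed_wiki_tests(jobs):
    """Given {suite: result} openQA outcomes, return the wiki test
    cases for which every mapped suite that ran passed."""
    verdicts = {}
    for suite, testcase in TESTSUITES.items():
        if suite in jobs:
            ok = verdicts.get(testcase, True) and jobs[suite] == "passed"
            verdicts[testcase] = ok
    return sorted(tc for tc, ok in verdicts.items() if ok)
```

Note that several openQA suites can feed one wiki test case, which is why the check is "all mapped jobs passed" rather than one-to-one.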
Currently we run all openQA tests in qemu virtual machines. However,
openQA does actually have the capacity to run tests in other
environments, and SUSE does this in production; we've just never used it
(yet) in Fedora. For instance, one of these other 'backends' is called
ikvm and it's designed for bare metal testing: it uses a feature called
iKVM on Supermicro server motherboards, where the system firmware
actually provides a VNC server, so this fits in great with openQA,
which was initially written to interact with qemu virtual machines via
VNC (this is still the most common way to use it, and how we use it).
With iKVM, openQA can interact with an install running on bare metal
via VNC much the same way it interacts with a VM via VNC.
If we wanted to automate bare metal testing, using one of the other
openQA backends might be the easiest way to do it, since we have openQA
running and tests written already; the work would involve adjusting our
test configuration to run some of the tests on a bare metal 'machine'
using an appropriate backend, and setting up the actual hardware to do
it. You could do this with a local test openQA deployment initially; to
put it in production we'd need to deploy hardware in the Fedora data center.
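For flavor, the openQA side of that configuration change is mostly a machine definition. BACKEND and WORKER_CLASS are real openQA machine variables, but the values and any extra keys below are site-specific guesses, not a working config:

```
# Hypothetical openQA machine definition for a bare metal box.
# BACKEND/WORKER_CLASS are real openQA settings; the values and any
# further backend-specific keys (credentials, host addresses) would
# depend on the chosen backend and local setup.
name:         baremetal-01
BACKEND:      ikvm
WORKER_CLASS: baremetal
```

The existing test suites could then be scheduled against that machine instead of (or alongside) the qemu one.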
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net