Hey folks! Just a heads-up to the openQA-interested: I'm working on
another update to current upstream git. staging is now running the
latest git of both os-autoinst and openQA. There have been some changes
upstream which are related to working with Mojolicious 7, but aaannz
assures me they're Mojo 6-compatible, and they still have one
deployment running on Mojo 6. So far, it seems to be working OK.
We're slightly suspicious about one of the changes, though;
apparently they're still arguing upstream about whether it's the right
thing to do. I'll keep an eye on it, and if any uploads go squiffy, I'll
revert it in the package. So far, though, at least one upload test has
run and passed.
One nice thing about this git bump is it disables the extremely verbose
myjsonrpc logging which was going on and making the logs quite
difficult to read and follow.
If any of you want to play along with your pet deployments, the scratch
builds are here:
if this runs OK in staging for the next day or two I'll do official
builds and submit an F24 update, then bump prod later next week.
One significant change with the new openQA is how 'softfails' work.
Previously, a soft failure wasn't a 'real' result: tests could only be
'passed' or 'failed' as a whole, and the concept of a 'soft failure' was
synthesized by the web UI from the individual test module results. It
wasn't expressed by the API at all; the API 'result' for soft-failed
tests was just 'passed'. If you wanted to catch soft fails, you had to
parse the test module results yourself and replicate the logic the web
UI used.
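To give a feel for what that meant, here's a rough sketch of what a consumer had to do under the old scheme. The field names ('result', 'modules', 'soft_failure') are my assumptions about the general shape of the job data, not the exact logic the web UI used:

```python
def was_soft_failure(job):
    """Guess whether an overall-'passed' job was really a soft failure
    by scanning its per-module results, roughly as the old web UI did.

    `job` is assumed to be a dict in the general shape of openQA's job
    JSON; the real field names and web UI logic may have differed.
    """
    if job.get("result") != "passed":
        return False
    # Treat the job as a soft failure if any module recorded one
    # (a hypothetical 'soft_failure' marker stands in for whatever
    # the module results actually carried).
    return any(mod.get("soft_failure") for mod in job.get("modules", []))
```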
Now, 'softfailed' is simply a result state: both a test as a whole and
individual test modules can have 'softfailed' as their result. For now,
I've patched everything we have that consumes openQA results (that's
fedora_openqa_schedule, fedora_nightlies, and check-compose) to handle
the new state; they all treat 'softfailed' the same as 'passed', except
check-compose, which does distinguish between passes and soft fails.
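As a sketch of what that patching amounts to (a hypothetical helper for illustration, not the actual code in any of those tools):

```python
def normalize_result(result, distinguish_softfail=False):
    """Map an openQA job result for downstream reporting.

    Hypothetical helper: most of our consumers fold 'softfailed' into
    'passed', while check-compose keeps the distinction.
    """
    if result == "softfailed" and not distinguish_softfail:
        return "passed"
    return result
```

In these terms, fedora_openqa_schedule and fedora_nightlies would use the default, while check-compose would pass `distinguish_softfail=True`.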
In future, we could get cleverer with this, and maybe report 'warn'
rather than 'pass' to the wiki for soft fails, that kind of thing. But
for now this should preserve current behaviour. Most of the changes I
could just commit; only one requires review:
BTW, in case any of you were trying to do needle edits using
interactive mode and were annoyed that, when a needle match fails and
you go to the editor, you can't use any existing needle as a base for
the new one: that's a known bug in the recent interactive mode rewrite,
unfortunately without a fix for now:
I'm hoping coolo will show up with a fix next week. If it doesn't get
fixed soon I might take a shot at it myself, because it's a really
annoying bug, but this is a complex area to grok well enough to be sure
you're fixing it right; I think it'll be much easier for coolo, since he
already understands exactly how all the bits interact there.
For now my 'workaround' is to hack up the post_fail_hook to do nothing
(so you don't have to wait around for a bunch of log uploads every time
the test fails), then just keep re-running the test, waiting for it to
fail, and editing the failed needle until they're all done. The needle
editor works properly when you use it on a failed test (as opposed to an
interactive test that's paused and waiting for the needle editor).