Resultsdb v2.0 - API docs
by Josef Skladanka
Hey gang,
I spent most of today working on the new API docs for ResultsDB, making use
of the Apiary.io tool.
Before I put even more hours into it, please let me know whether you think
this approach is fine at all. I have yet to find a better tool for
describing APIs, so I'm definitely biased, but since this is the
documentation, it also needs to be actually useful.
http://docs.resultsdb20.apiary.io/
I am also trying to put more work into documenting the attributes and
the "usual" queries, so please think about this aspect of the docs too.
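To make that concrete, here's a rough sketch of what one of the "usual"
queries could look like from Python. The endpoint, parameter names
(testcases, outcome, limit), deployment URL and JSON fields below are
assumptions based on the v2.0 draft, not a finalized API:

import requests

# Hypothetical deployment URL; the real one depends on where v2.0 lands.
RESULTSDB_URL = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"

def latest_results(testcase, outcome=None, limit=20):
    """Fetch the most recent results for a single testcase."""
    params = {"testcases": testcase, "limit": limit}
    if outcome is not None:
        params["outcome"] = outcome  # e.g. "PASSED" or "FAILED"
    resp = requests.get(RESULTSDB_URL + "/results", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]

# A "usual" query: the latest failures for one testcase.
for result in latest_results("dist.rpmlint", outcome="FAILED"):
    print(result["outcome"], result.get("data", {}).get("item"))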
Thanks, Joza
openQA: nother git bump comin'
by Adam Williamson
Hey folks! Just a heads-up to the openQA-interested: I'm working on
another update to current upstream git. staging is now running the
latest git of both os-autoinst and openQA. There have been some changes
upstream which are related to working with Mojolicious 7, but aaannz
assures me they're Mojo 6-compatible, and they still have one
deployment running on Mojo 6. So far, it seems to be working OK.
We're slightly suspicious about
https://github.com/os-autoinst/openQA/commit/f2547e9bcc0a166f993426bceeac...
apparently they're still arguing about whether it's the right thing to
do. I'll keep an eye on it, and if any uploads go squiffy, I'll revert
it in the package. So far, though, at least one upload test has run and
worked.
One nice thing about this git bump is it disables the extremely verbose
myjsonrpc logging which was going on and making the logs quite
difficult to read and follow.
If any of you want to play along with your pet deployments, the scratch
builds are here:
http://koji.fedoraproject.org/koji/taskinfo?taskID=15391589
http://koji.fedoraproject.org/koji/taskinfo?taskID=15391751
If this runs OK in staging for the next day or two, I'll do official
builds and submit an F24 update, then bump prod later next week.
One significant change with the new openQA is that they've changed how
'softfails' work. Previously, a soft failure wasn't a 'real' result:
tests could only be 'passed' or 'failed' as a whole. The concept of a
'soft failure' was kind of synthesized by the web UI based on the
individual test module results, but wasn't expressed by the API; the
API 'result' for soft-failed tests was just 'passed'. If you wanted to
catch soft fails, you had to parse the test module results and
replicate the logic the web UI used.
Now, 'softfailed' is simply a result state; both a test as a whole and
individual test modules can have 'softfailed' as their result. For now,
I've patched everything we have that considers openQA results (that's
fedora_openqa_schedule, fedora_nightlies, and check-compose) to treat
'softfailed' the same as 'passed' (except check-compose, which does
distinguish between passes and soft fails).
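For illustration, the gist of those patches is just a wider notion of
"passing". A minimal sketch, assuming a job dict with a 'result' field
as returned by the openQA API; the function name and sample data are
hypothetical, not the actual fedora_openqa_schedule code:

# Treat the new 'softfailed' job result the same as 'passed' when
# deciding overall success.
PASSING_RESULTS = frozenset(("passed", "softfailed"))

def job_passed(job):
    """True if an openQA job dict counts as passing."""
    return job.get("result") in PASSING_RESULTS

jobs = [{"id": 1, "result": "passed"},
        {"id": 2, "result": "softfailed"},
        {"id": 3, "result": "failed"}]
print(all(job_passed(j) for j in jobs))  # False: job 3 failed outright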
In future, we could get more clever with this, and maybe report 'warn'
rather than 'pass' to the wiki for soft fails, that kinda thing. But
for now this should preserve current behaviour. Most of the changes I
could just commit; only one requires review:
https://phab.qadevel.cloud.fedoraproject.org/D987
BTW, in case any of you were trying to do needle edits using
interactive mode and were annoyed that, when a needle match fails and
you go to the editor, you can't use any existing needle as a base for
the new one: that's a known bug in the recent interactive mode rewrite,
unfortunately without a fix for now:
https://progress.opensuse.org/issues/13456
https://progress.opensuse.org/issues/12680
I'm hoping coolo will show up with a fix next week. If it isn't fixed
soon I might try myself, because it's a really annoying bug, but this
is kind of a complex area to grok and be sure you're fixing it right; I
think it'll be much easier for coolo to do since he already understands
exactly how all the bits interact there.
For now, my 'workaround' is to hack up the post_fail_hook to do nothing
(so you don't have to wait around for a bunch of log uploads every time
the test fails), then just keep re-running the test, waiting for it to
fail, and editing the failed needle, until all the needles are done.
The editor works properly when you use it on a failed test (as opposed
to an interactive test that's paused and waiting for the needle
editor).
Thanks folks!
"500 Internal Server Error" when trying to submit a new blocker bug
by Christian Stadelmann
Whenever I try to submit a new blocker bug¹, I get an HTTP 500 status code error page. I've tried these browsers:
* one recent release of Firefox (heavily configured)
* one vanilla Fedora Firefox release
* one Firefox LTS based release
With all of these I get the error page and am unable to propose blocker bugs. I remember this working during the last release cycle; I think I proposed some F24 blocker bugs.
¹ https://qa.fedoraproject.org/blockerbugs/propose_bug
If I could propose blocker bugs, I'd propose these ones as Final blockers:
* https://bugzilla.redhat.com/show_bug.cgi?id=1026119
Title: "fails to unmount encrypted filesystem (/dev/mapper/luks-partition) containing /var/log on every shutdown"
Reason: This bug violates the final release criterion "Shutdown, reboot, logout": "Similar to the Alpha criterion for shutting down, shutdown and reboot mechanisms must take storage volumes down cleanly and correctly request a shutdown or reboot from the system firmware." There is no clean shutdown here.
* https://bugzilla.redhat.com/show_bug.cgi?id=1370889
Reason: gnome-weather is completely broken due to a backend service URL change or shutdown.
This violates section "Default application functionality" of Fedora final release criteria: "All applications that can be launched using the standard graphical mechanism of a release-blocking desktop after a default installation of that desktop must start successfully and withstand a basic functionality test. […]"
Basic functionality of gnome-weather includes showing weather for at least one user-selected location, which is not working right now.
libtaskotron: new mock needed to run the test suite
by Kamil Paral
Please note that I've bumped the required version of mock in libtaskotron and removed some workarounds we had for bugs in the older version. Please make sure to run
$ git pull
$ pip install -r requirements.txt
otherwise the test suite might not pass the next time you run it, and the errors are very cryptic.
Cheers,
Kamil