rpmlintrc in dist-git?
by Miro Hrončok
Hi,
I've recently come across a package where an rpmlint error is bogus, and I
want to ignore it in the Taskotron check.
I think I have heard about a way to ignore such an error by placing an
rpmlintrc file in dist-git. However, I cannot find any documentation
for this, and inspecting the task-rpmlint source does not indicate that
this is possible.
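For reference, the rpmlint side of this is just its Python-syntax config
file; something like the following in a <pkgname>.rpmlintrc next to the
spec file would silence a specific message (the error names here are made
up, and whether task-rpmlint actually picks such a file up from dist-git
is exactly my question):

    # Hypothetical <pkgname>.rpmlintrc; rpmlint config files are plain
    # Python, and addFilter() takes a regexp matched against the full
    # rpmlint message line.
    addFilter(r'E: explicit-lib-dependency libfoo')
    # Filters stack; each call suppresses all matching messages.
    addFilter(r'W: spelling-error .*')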
Maybe it is not possible at all.
Could you please point me to the docs for this feature if it exists, or
tell me it doesn't?
Thanks,
--
Miro Hrončok
--
Phone: +420777974800
IRC: mhroncok
Upcoming changes for rpmgrill
by Róman Joost
Heya,
just wanted to give everyone a heads up for some of the upcoming changes
in a new rpmgrill release.
The rpmgrill tool now exits with an exit code of 1 in case of test
failures, and 0 if all tests ran successfully. While this might sound
like a trivial change, I thought I'd better share it before the release,
since it will obviously be picked up by CI systems the tool might be
running in.
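To illustrate how a CI wrapper might consume this (a minimal sketch in
Python; only the 0/1 exit code contract is from this announcement, the
invocation path is made up):

    # Sketch: react to rpmgrill's new exit code in a CI wrapper.
    import subprocess
    import sys

    proc = subprocess.run(["rpmgrill", "path/to/unpacked-build"])
    if proc.returncode == 0:
        print("all rpmgrill tests passed")
    else:
        print("rpmgrill reported test failures")
    # propagate the result so the CI system marks the job accordingly
    sys.exit(proc.returncode)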
PS: Changelog: https://github.com/default-to-open/rpmgrill#v0-31-unreleased
Kind Regards,
--
Róman Joost
Senior Software Engineer, Products & Technologies Operations (Brisbane)
Red Hat
Proposal to CANCEL: 2017-08-28 QA Devel Meeting
by Tim Flink
More than one of us is traveling to Flock on Monday, so I propose
that we cancel the regularly scheduled QA Devel meeting.
If there is some urgent topic to discuss, please reply to this thread
and the meeting can happen if there is someone around who is willing to
lead it.
Tim
Re: Fedora's beaker instance
by Dan Callaghan
Sorry for my very slow reply, Petr...
Excerpts from pschindl's message of 2017-08-08 17:24 +02:00:
> Hi Dan,
>
> I have a few questions regarding beaker.qa.fp.org. What is the state of
> the project right now? I now work with the DesktopQA team on upstreaming
> their tests, and because they run their tests on an internal Beaker, I'd
> like to mimic their workflow. So I'd like to try to run tests on our
> Fedora Beaker.
>
> So, my questions: does beaker.qa.fp.org work at all? Could you give
> me access to at least one machine? Can I help you somehow with making
> it work? What can I do to put F26 or Rawhide trees into Beaker?
>
> Thank you for your answers. Have a nice day. With regards
> Petr Schindler (Fedora QA)
I've cc'ed the Fedora qa-devel list. Tim Flink did a lot of work getting
the Fedora Beaker instance set up.
If you wanted to get access to use it, I'm sure we could do that. We
would just need to add you to this group:
https://beaker.qa.fedoraproject.org/groups/qa-beaker-users#members
As far as I know, Beaker itself is fully up and running now (we had some
issues with Fedora authentication but they were all sorted out a while
back). However it looks like the Dell machines attached to Beaker are
not successfully booting into the installer right now. Tim might know
more about what is missing/broken with them. I'm not entirely sure
what's wrong because I don't think I have access to their serial
console.
That reminds me, one thing we never did set up properly is a Conserver
instance so that Beaker can scrape the serial console logs. That would
certainly make it easier to see why the Dell machines aren't netbooting
successfully.
I should try and see about writing a playbook to deploy Conserver...
--
Dan Callaghan <dcallagh(a)redhat.com>
Senior Software Engineer, Products & Technologies Operations
Red Hat
Ansiblizing Questions
by Lukas Brabec
Hey, gang!
As I read through the standard interface and tried the ansiblized branch
of libtaskotron, I found things that were not exactly clear to me, and I
have some questions. My summer afternoon schedule involves feeding
rabbits (true story!) and I keep missing people on IRC, hence this
email.
= Test output and its format =
The standard test interface specifies that [1]:
1) "test system must examine the exit code of the playbook. A zero
exit code is successful test result, non-zero is failure"
and
2) "test suite must treat the file test.log in the artifacts folder as
the main readable output of the test"
ad 1) Examining the exit code is pretty straightforward. The mapping
to outcome would be zero to PASSED and non-zero to FAILED. Currently
we use more than these two outcomes, i.e. INFO and NEEDS_INSPECTION.
Are we still going to use them, and if so, what would be the cases? The
playbook can also fail on its own (e.g. command not found, or
permission denied), but I presume this failure would be reported to
ExecDB, not to ResultsDB. Any thoughts on this?
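A minimal sketch of the mapping I have in mind (everything here beyond
the PASSED/FAILED outcomes is assumption):

    # Sketch: map the ansible-playbook exit code to an outcome.
    import subprocess

    def run_task_playbook(playbook, inventory):
        proc = subprocess.run(
            ["ansible-playbook", "-i", inventory, playbook])
        if proc.returncode == 0:
            return "PASSED"
        # The exit code alone cannot distinguish "tests failed" from
        # "playbook crashed" (command not found, permission denied...),
        # and gives no obvious trigger for INFO or NEEDS_INSPECTION.
        return "FAILED"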
ad 2) The standard interface does not specify the format of the test
output, just that test.log must be readable. Does this mean that
the output can be in any arbitrary format, and that parsing it would
be left to the people who care, i.e. packagers? Wouldn't this be a
problem if, for example, Bodhi wanted to extract/parse this information
from ResultsDB and show it on the update page?
= Triggering generic tasks =
The standard interface is centered around dist-git style tasks and
doesn't cover generic tasks like rpmlint or rpmdeplint. As these tasks
are Fedora QA specific, are we going to create a custom extension to the
standard interface, used only by our team, to be able to run generic
tasks?
= Reporting to ResultsDB =
The gating requirements for CI and CD contain [2]:
"It must be possible to represent CI test results in resultsdb."
However, the standard interface does not mention ResultsDB.
Does this mean that a task playbook won't contain something like a
ResultsDB module (in contrast to the ResultsDB directive in formulae),
as the task playbook should be agnostic to the system in which it runs,
and the reporting will be done by our code in runtask?
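If the reporting does stay on our side, it could be as small as a POST
against ResultsDB's REST API; a rough sketch (the instance URL, testcase
and item are invented, the /api/v2.0/results endpoint and payload shape
follow the ResultsDB API docs):

    # Sketch: runner-side reporting to ResultsDB after the playbook ran.
    import requests

    def report(outcome, testcase, item):
        payload = {
            "testcase": {"name": testcase},
            "outcome": outcome,  # e.g. "PASSED" or "FAILED"
            "data": {"item": item, "type": "koji_build"},
        }
        resp = requests.post(
            "https://resultsdb.example.org/api/v2.0/results",
            json=payload)
        resp.raise_for_status()

    report("PASSED", "dist.rpmlint", "libwbxml-0.11.5-1.fc27")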
= Output of runtask =
Libtaskotron's output is nice and readable, but the output of the parts
now handled by Ansible is not. My knowledge of Ansible is still
limited, but as far as my experience goes, debugging Ansible playbooks
or even Ansible modules is kind of a pain. Are we going to address this
in some way, or just bite the bullet and move along?
= Params of runtask =
When I tried the ansiblized branch of libtaskotron, I ran into issues
such as unsupported params: Ansible told me to re-run it with the "-vvv"
param, which runtask does not understand. Is there a plan for how we are
going to forward such parameters (--ansible-opts= or just forwarding any
params we don't understand)?
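The "forward anything we don't understand" variant is easy to prototype
with argparse (a sketch; the option names are invented for illustration):

    # Sketch: keep runtask's own options, pass the rest through
    # verbatim to ansible-playbook.
    import argparse
    import subprocess

    parser = argparse.ArgumentParser(prog="runtask")
    parser.add_argument("--item")  # runtask's own option (invented)
    parser.add_argument("playbook")
    known, passthrough = parser.parse_known_args()

    # passthrough now holds e.g. ["-vvv"]
    cmd = ["ansible-playbook", *passthrough,
           "-e", "subject=" + known.item, known.playbook]
    subprocess.run(cmd, check=True)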
At the moment, runtask maps our params to ansible-playbook params and
to those defined by the standard interface. Are we going to stick with
this, or change our params to match the ones of ansible-playbook and
the standard interface (e.g. item would become subject, etc.)?
= Future of runtask =
For now, runtask is the user-facing part of Taskotron. However, the
standard interface is designed in such a way that authors of task
playbooks shouldn't have to care about Taskotron (or any other system
that will run their code). They can develop the tasks by simply using
ansible-playbook. Does this mean that runtask will become a convenience
script for us that parses arguments and spins up a VM? Because
everything else is just wrapping the Ansible playbook...
Lukas
[1] https://fedoraproject.org/wiki/Changes/InvokingTests
[2] https://fedoraproject.org/wiki/Fedora_requirements_for_CI_and_CD
Deprecating rpmgrill-fetch-build and rpmgrill-analyze-local
by Róman Joost
Hi,
The rpmgrill tool ships with two commands which I think can be
deprecated:
rpmgrill-fetch-build in favour of `koji download-task`
rpmgrill-analyze-local in favour of rpmgrill-unpack; rpmgrill unpacked
Does anybody use these binaries and have any objections?
Kind Regards,
--
Róman Joost
Senior Software Engineer, Products & Technologies Operations (Brisbane)
Red Hat
Outdated clamav database in Taskotron
by Petr Pisar
I received this test failure
<https://taskotron.fedoraproject.org/artifacts/all/ad295f84-7690-11e7-a988...>:
    {
       "module" : "VirusCheck",
       "order" : 2,
       "results" : [
          {
             "arch" : "src",
             "code" : "ClamAV",
             "context" : {
                "path" : "/libwbxml-0.11.5.tar.bz2"
             },
             "diag" : "ClamAV <b>Win.Trojan.Ramnit-5550</b> subtest triggered (ClamAV 0.99.2/21723/Mon Jun 13 13:53:00 2016)",
             "subpackage" : "libwbxml"
          }
       ],
       "run_time" : 9,
       "status" : "completed"
    },
I checked it locally on Fedora 27 with the same result. The tarball contains
win32/expat/libexpat.dll, which triggers the antivirus.
But look at the ClamAV database version (21723/Mon Jun 13 13:53:00 2016). It's
more than a year old. After updating the database (you need to install the
clamav-update package and wait for the cron job, or run /usr/bin/freshclam as
root manually), the hit went away. It's a false positive caused by Taskotron
not updating the virus database.
Please update the virus database regularly.
-- Petr
openQA test failures due to typing errors
by Adam Williamson
Hi folks! Another note for anyone paying attention to openQA test
results.
Since 2017-08-09 there's been kind of a flood of failures caused by
'typing errors' - that is, when the test runner is trying to type a
string into the test VM and it doesn't get through correctly (usually
due to one or more keypresses being dropped). We've always seen this
kind of failure *very occasionally* in openQA tests, but it suddenly
became massively more common (as in, half the tests for every update
were failing).
With some invaluable help from Cole Robinson I'm pretty sure we have
this figured out now; it was caused by a change to qemu which was
introduced to address a potential denial-of-service issue. For now I've
reverted qemu to the known good older version on the openQA worker
hosts (there isn't any realistic vector for anyone to cause any harm by
exploiting that DoS in the case of the openQA deployment, or any of the
other similar issues the updated qemu fixed), and we've identified some
later upstream commits that look like they may well resolve the
problem, so we should hopefully be able to put a more permanent fix in
place soon.
I'll be re-running all the tests that have run since 2017-08-09 with a
working qemu, so we have more accurate results.
Very sorry for this problem; I'd usually have noticed and addressed
this sooner, but it came at an unfortunate time.
While I'm here - there's also a consistent failure in the
'desktop_browser' test which is just caused by a screenshot that needs
updating. I'll get on that ASAP.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net