Package Update Acceptance Test Plan - final call

Kamil Paral kparal at
Wed Apr 21 14:13:02 UTC 2010

This is a final call for comments on our Package Update
Acceptance Test Plan [1]. This test plan is closely related
to the Package Update Acceptance Criteria [2] approved by FESCo.
It basically describes how we, the QA team, will decide whether
a package update should be accepted or rejected, and it tries
to map the requirements of the policy into an actionable
test plan.

If you have any comments about the test plan, please post
them now, so we can include them in the document.

Although AdamW said he was pretty pleased with the state of
the test plan (yay!), I still think there are a few things
that should be clarified. I attach a list of them below,
grouped by the test plan's individual sections:

= Test Priority =

1. Do we want to have the strict requirement that introspection and
advisory tests may be started only after all mandatory tests have
finished? In other words, do we want to have the testing sequence
like this:

-> mandatory tests -> (wait for completion) -> introspection + 
advisory tests

or this:

-> mandatory + introspection + advisory tests (no particular order)

Reason for (the first sequence): It prioritizes the most important
tests, so they will finish sooner. It can also save resources in
case some mandatory test fails and we don't run any subsequent
tests (see question 2).
Reason against: More difficult to implement. Less efficient from
an overall performance point of view.
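To make the two proposed sequences concrete, here is a minimal Python sketch. The test names and the run_test() helper are purely illustrative assumptions, not part of any real tooling:

```python
# Hypothetical test names; the real sets would come from the test plan.
MANDATORY = ["installability", "repo_sanity"]
INTROSPECTION = ["rpmlint"]
ADVISORY = ["upgradepath"]

def run_test(name):
    # Stand-in for actually executing a test.
    return "PASS"

def sequence_strict(update):
    """Option 1: mandatory tests first, wait for completion, then the rest."""
    results = {t: run_test(t) for t in MANDATORY}
    # Introspection and advisory tests start only after all
    # mandatory tests have finished.
    for t in INTROSPECTION + ADVISORY:
        results[t] = run_test(t)
    return results

def sequence_loose(update):
    """Option 2: everything scheduled together, no particular order."""
    return {t: run_test(t) for t in MANDATORY + INTROSPECTION + ADVISORY}
```

Both sketches end up with the same set of results; the difference is only in when the introspection and advisory tests are allowed to start.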

= Test Pass/Fail Criteria =

2. Do we want to continue executing tests when some mandatory
test fails?
Reason for: We will have more results; the maintainer may look at
all of them and fix more issues at once (not only the failed
mandatory test).
Reason against: It wastes resources, since the update will not be
accepted anyway. Also, when a mandatory test fails (like
installability or repo sanity), many follow-up tests may fail
because of it, so they may not produce interesting output anyway.

3. We should complement the requirement "all tests have finished"
with a definition of what happens if some test crashes. It is
obvious that we can't accept a package for which some substantial
(read: mandatory or introspection) test has crashed. So we can add
a requirement like this:
"all mandatory and introspection tests have finished cleanly (no
test crashes)"
The question remains: what about advisory tests? Do we require that
they also don't crash? Or is a crash in one of them not an obstacle
to accepting the update?
Reason for: Advisory tests are not important for accepting the
update, so a test crash should not cause rejection.
Reason against: Some information won't be available. It could happen
that this information would have caused the maintainer to withdraw
or rework the updated package.
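The acceptance rule from point 3 (taking the "reason for" position, i.e. advisory crashes are tolerated) can be sketched like this. The test names, outcome strings, and results layout are all assumptions for illustration only:

```python
# Hypothetical test sets; "CRASHED" marks a test that did not
# finish cleanly.
MANDATORY = {"installability", "repo_sanity"}
INTROSPECTION = {"rpmlint"}

def update_acceptable(results):
    """results maps test name -> "PASS" / "FAIL" / "CRASHED".

    Mandatory and introspection tests must have finished cleanly
    and passed; advisory tests may fail or crash without blocking
    the update.
    """
    for name in MANDATORY | INTROSPECTION:
        if results.get(name) != "PASS":
            return False
    return True
```

Under this sketch a crashed advisory test does not block acceptance, while a crashed mandatory or introspection test does (it counts the same as a failure).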

= Introspection tests =

4. Rpmlint - "no errors present" vs "no new errors present": It
is obvious we have to provide an option for package maintainers to
whitelist some rpmlint messages. The actual implementation of the
whitelist is still to be worked out, but that doesn't matter; it
will be there. In this respect it seems to me that the "no new
errors" requirement has no benefit over the "no errors"
requirement, because the latter does everything the former does
and more. When it is possible to whitelist errors, there's no
reason for us to allow any errors in the output.
Implementation note: "no new errors" is more rpmguard's task
than rpmlint's. We could implement it as an rpmguard check and put
it into the introspection tests. But that's not needed if we agree
on the "no errors" requirement.
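To illustrate why whitelisting makes the plain "no errors" requirement sufficient, here is a minimal sketch. The whitelist format, package name, and rpmlint error tags are pure assumptions, since the real implementation is still undecided:

```python
# Hypothetical per-package whitelist of rpmlint error tags.
WHITELIST = {"mypackage": {"no-documentation"}}

def unwaived_errors(package, errors):
    """Return the rpmlint errors not covered by the whitelist."""
    allowed = WHITELIST.get(package, set())
    return [e for e in errors if e not in allowed]

def no_errors_present(package, errors):
    # "No errors present": every reported error must be whitelisted.
    # This subsumes the weaker "no new errors" check, since any old
    # error the maintainer wants to keep simply gets whitelisted.
    return not unwaived_errors(package, errors)
```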

= Responsibilities and permissions =

5. "Introspection tests failures may be waived by:"
5a. Should Release Engineering be there?
5b. Should we add the new proventesters group?

Thanks for your ideas.

Kamil Paral

