Package Update Acceptance Test Plan - final call

James Laska jlaska at redhat.com
Thu Apr 22 19:12:41 UTC 2010


On Wed, 2010-04-21 at 10:13 -0400, Kamil Paral wrote:
> Hello,
> this is a final call for comments to our Package Update 
> Acceptance Test Plan [1]. This test plan is tightly related
> to Package Update Acceptance Criteria [2] approved by FESCo.
> It basically says how we, the QA team, will decide whether
> a package update should be accepted or rejected. It also
> tries to map the requirements of the policy into an actionable
> test plan.

Honestly, it's pretty exciting to see this coming together.  Thanks
for moving this forward!

> If you have any comments about the test plan, please post
> them now, so we can include them in the document.
> 
> Although AdamW said he was pretty pleased with the state of 
> the test plan (yay!), I still think there are a few things
> that should be clarified. I attach a list of them below,
> sorted by the individual test plan's paragraphs:
> 
> 
> = Test Priority =
> 
> 1. Do we want to have the strict requirement that introspection and
> advisory tests may be started only after all mandatory tests have
> finished? In other words, do we want to have the testing sequence
> like this:
> 
> -> mandatory tests -> (wait for completion) -> introspection + 
> advisory tests
> 
> or this:
> 
> -> mandatory + introspection + advisory tests (no particular order)
> 
> Reason for (the first sequence): It prioritizes the most important
> tests, so they will finish sooner. It can also save resources if
> some mandatory test fails and we don't run any subsequent tests
> (see question 2).
> Reason against: More difficult to implement. Less efficient from an
> overall performance point of view.

My preference would be the path of least resistance, which seems like
the second approach.  Should we suffer from slow mandatory test results
due to introspection and advisory tests being run first, we have test
scheduling options to explore to address the problem.  Whether it's
prioritized/weighted jobs or grouping the tests into the three buckets
I'm not sure yet ... but I feel like that's something we can address in
the future.
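
For illustration only, here is a rough Python sketch of the "three
buckets" idea -- a single priority queue where mandatory jobs drain
first but nothing blocks on them finishing.  Only the group names come
from the test plan; the weights and the example test-to-group mapping
are made up:

    # Hypothetical sketch of weighted scheduling for the three test groups.
    # Only the group names come from the test plan; the rest is invented.
    import heapq

    # Lower number = higher priority; mandatory jobs are picked first,
    # but introspection/advisory jobs never wait for them to finish.
    PRIORITY = {"mandatory": 0, "introspection": 1, "advisory": 2}

    def schedule(tests):
        """tests: iterable of (group, test_name) tuples."""
        queue = []
        for order, (group, name) in enumerate(tests):
            # 'order' keeps submission order stable within a group
            heapq.heappush(queue, (PRIORITY[group], order, name))
        while queue:
            _, _, name = heapq.heappop(queue)
            yield name

    pending = [
        ("advisory", "upgradepath"),
        ("mandatory", "repoclosure"),
        ("introspection", "rpmlint"),
        ("mandatory", "installability"),
    ]
    print(list(schedule(pending)))
    # -> ['repoclosure', 'installability', 'rpmlint', 'upgradepath']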

> = Test Pass/Fail Criteria =
> 
> 2. Do we want to continue executing tests when some mandatory
> test fails?
> Reason for: We will have more results; the maintainer may look at
> all of them and fix more issues at once (not only the failed
> mandatory test).
> Reason against: It wastes resources - the update will not be
> accepted anyway. When a mandatory test fails (like installability
> or repo sanity), many follow-up tests may fail because of that,
> so they may not produce interesting output anyway.

Let's make our lives simpler at first ... schedule all the tests.  Even
if mandatory results fail, having the introspection and advisory results
available for later review or comparison will be helpful.

That said, as soon as the mandatory tests have failed, we can initiate
whatever process needs to happen to ensure that the package update is
not accepted.  However, the other tests would continue to run and
report results into the appropriate place.
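
Just to sketch what I mean (hypothetical code, not an existing AutoQA
API -- the report_result/flag_rejected hooks are made up):

    # Sketch: every result gets reported, but the first failed mandatory
    # result immediately kicks off the "do not accept" process while the
    # remaining tests keep running.
    def process_results(results):
        """results: iterable of (group, test_name, passed) tuples,
        arriving in whatever order the tests happen to finish."""
        rejected = False
        for group, name, passed in results:
            report_result(name, passed)       # every result is recorded
            if group == "mandatory" and not passed and not rejected:
                rejected = True
                flag_rejected(name)           # start the rejection process early
        return rejected

    def report_result(name, passed):
        print("%s: %s" % (name, "PASS" if passed else "FAIL"))

    def flag_rejected(name):
        print("update not accepted: mandatory test %s failed" % name)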

> 3. We should complement the requirement "all tests have finished"
> with a definition of what happens if some test crashes. It is obvious
> that we can't accept a package for which some substantial (read
> mandatory or introspection) test has crashed. So we can add a
> requirement like this:
> "all mandatory and introspection tests have finished cleanly (no
> test crashes)"
> The question remains - what about advisory tests? Do we require that
> they also don't crash? Or is a crash in one of them no obstacle to
> accepting the update?
> Reason for: Advisory tests are not important for accepting the
> update, so a test crash should not cause rejection.
> Reason against: Some information won't be available. It could happen
> that this information would cause the maintainer to withdraw/renew
> the updated package.

Long-term, all these tests need to run and their results be presented
to the maintainer.  Short-term, while we are piloting this effort, I
don't have a problem allowing a build to proceed once the mandatory
tests have completed, even if advisory/introspection tests failed or
aborted.
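
To make that pilot-period criterion concrete, something like the
following is what I have in mind (a sketch only; the status values,
including "crashed", are my own, not defined in the test plan):

    # Sketch of the pilot-period acceptance criterion: every mandatory
    # test must finish cleanly and pass; introspection and advisory
    # failures or crashes are reported but do not block the update.
    def update_acceptable(results):
        """results: list of dicts with 'group' and 'status'
        ('pass', 'fail' or 'crashed')."""
        for r in results:
            if r["group"] == "mandatory" and r["status"] != "pass":
                return False
        return True

    print(update_acceptable([
        {"group": "mandatory", "status": "pass"},
        {"group": "introspection", "status": "crashed"},  # reported, not blocking
        {"group": "advisory", "status": "fail"},
    ]))
    # -> True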

> = Introspection tests =
> 
> 4. Rpmlint - "no errors present" vs "no new errors present": It
> is obvious we have to provide an option for package maintainers to
> whitelist some rpmlint messages. The actual implementation of the
> whitelist is still to be worked out, but that doesn't matter, it
> will be there. In this respect it seems to me that the "no new
> errors" requirement has no benefit over the "no errors" requirement,
> because the latter covers everything the former does and more. Once
> it is possible to whitelist errors, there's no reason for us
> to allow any errors in the output.
> Implementation note: "no new errors" is more rpmguard's task
> than rpmlint's. We could implement it as an rpmguard check and put
> it among the introspection tests, that's possible. But it's not
> needed if we agree on the "no errors" requirement.

"No new errors" seems like an achievable short-term approach to add
value while not overloading the maintainer with a potentially large list
of rpmlint failures (/me thinking kernel) that they haven't been
tracking since the package was initially incorporated into Fedora.  Down
the road, I agree a whitelist mechanism would be ideal (as well as a
shortcut to file a bug against rpmlint).
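
As a strawman for how "no new errors" and a whitelist could combine --
treating rpmlint output as plain lines of text and matching whitelist
entries as substrings; both are assumptions for illustration, not the
agreed implementation:

    # Strawman: "no new errors" computed as a diff between rpmlint output
    # for the old and new builds, with a maintainer whitelist on top.
    def new_errors(old_output, new_output, whitelist=()):
        """old_output/new_output: rpmlint output split into lines;
        whitelist: substrings the maintainer has waived (assumed format)."""
        old = set(line for line in old_output if " E: " in line)
        result = []
        for line in new_output:
            if " E: " not in line:
                continue                  # only error lines matter here
            if line in old:
                continue                  # error already present in the old build
            if any(w in line for w in whitelist):
                continue                  # explicitly waived by the maintainer
            result.append(line)
        return result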

> =  Responsibilities and permissions =
> 
> 5. "Introspection tests failures may be waived by:"
> 5a. Should Release Engineering be there?

No, let's just go with the group in 5b below.  And anyone can
demonstrate the required testing skills and join the group, including
release engineers.

> 5b. Should we add the new proventesters group?

Oh good catch, yes!

Hope this helps,
James