updates improvements/changes ideas

James Laska jlaska at redhat.com
Mon Nov 29 19:03:18 UTC 2010


On Mon, 2010-11-29 at 10:08 -0800, Adam Williamson wrote:
> On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
> 
> > > * Just drop all the requirements/go back to before we had any updates
> > >   criteria. 
> > 
> > Hmm, certainly an idea.  I feel like this is definitely a step backward,
> > not forward.  Has the initial motivation for an updates policy gone away
> > or changed?  Have we encountered problems that didn't yet exist, or
> > weren't as painful, when the policy was first enabled?  Are there other
> > problems we need to focus on resolving (I suspect this is the case)?
> 
> As I see it, the thing that everyone agrees is problematic is critpath
> updates for old releases not getting pushed or taking a very long time
> to push. It's also generally agreed that the quality of critpath testing
> can be improved by taking some steps we're already looking at
> (package-specific test cases).
> 
> Things that some people see as problematic are:
> 
> 1 Having to wait a week to push an update if you can't find testing
> 2 Testing being required for packages with automated test suites
> 3 The delay to security updates which is introduced by the testing
> requirements

(Note: I've numbered your bullets above so I can respond to them
individually.)

#1 - If we don't have testers familiar with a package, and it's a
popular package (in terms of number of installed systems, general
interest, or critpath membership), we (maintainers included) should be
looking to create and develop testing interest, right?  Or, if no
engaged testers can be found, do we
     1. Wait X days for exploratory test results, or
     2. Push immediately?

#2 - I'd take this case by case ... it depends on what "automated"
means.  Certainly, I could see a scenario where %check runs the
upstream unit tests and the automated package update acceptance plan
passes; in that case, the update is in good shape and ready for
additional functional testing ('updates-testing') or stable.

TODO - draft an AutoQA test that confirms whether %check ran at
build time, and ensures that it passed.
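
To make that TODO concrete, here's a rough standalone sketch (not
wired into the real AutoQA harness; the log-line patterns are my
assumption about what rpmbuild prints into build.log, so verify them
against real logs):

    import re
    import sys

    # rpmbuild announces each section as it executes; a failing
    # section produces a "Bad exit status ... (%check)" error.
    CHECK_RAN = re.compile(r'^Executing\(%check\)')
    CHECK_FAILED = re.compile(r'Bad exit status from .* \(%check\)')

    def check_status(logfile):
        ran = failed = False
        with open(logfile) as log:
            for line in log:
                if CHECK_RAN.search(line):
                    ran = True
                elif CHECK_FAILED.search(line):
                    failed = True
        if not ran:
            return 'NO_CHECK'    # no %check section was executed
        return 'FAILED' if failed else 'PASSED'

    if __name__ == '__main__':
        # usage: check_check.py build.log
        print(check_status(sys.argv[1]))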

#3 - Seems similar to #1 to me.  Testing is *always* a delay.  In a
way, that's by design.  That's not to say we as testers intentionally
want to slow the process to a crawl.  So if there are unnecessary
delays unrelated to the act of testing ... let's work on those.  It
sounds like the delay here is similar to #1, in that testers aren't
providing feedback on security updates?

TODO - draft general security update test procedure

> > > * allow packages with a %check section to go direct to stable.
> > 
> > Interesting, I like the spirit of this idea, but would like to see if we
> > can incorporate this with autoqa karma plans.  So, perhaps packages with
> > %check get automated karma?  Just the same as with packages that pass
> > automated tests ... they'll eventually get positive karma of some form.
> 
> Yes, I was going to suggest the same thing. I'd suggest packages with a
> %check section should get +1 proventester karma. 

Note, though, that the %check also needs to pass.  I've seen plenty
of builds that include a %check section but don't fail the build when
%check fails.  There is an AutoQA test in the making here; anyone
interested in helping?

> Of course, that relies
> on the automated test suite actually testing the things proventester
> testing is meant to cover; 

http://fedoraproject.org/wiki/QA:Package_Update_Acceptance_Test_Plan

> do we want to audit the test suites in question?

No, but I think we would at least want to ensure that they pass.  I
don't think it would be horrible to pull together an AutoQA test that
checks for the presence of a %check section and confirms it passed.
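
As a starting point, a naive sketch of the spec-side half of that
test.  The "swallowed failure" patterns are just my guesses at common
idioms, and the section parsing is deliberately simplistic:

    import re

    # e.g. "make test || :" or "make check || true" hide a failing suite
    SWALLOW = re.compile(r'\|\|\s*(:|true)\b')

    def check_section(spec_text):
        """Return the %check section's lines, or None if absent."""
        body, in_check = [], False
        for line in spec_text.splitlines():
            if line.startswith('%check'):
                in_check = True
            elif in_check and line.startswith('%'):
                # naive: assume any '%'-leading line starts a new section
                break
            elif in_check:
                body.append(line)
        return body if in_check else None

    def audit(spec_text):
        body = check_section(spec_text)
        if body is None:
            return 'no %check section'
        if any(SWALLOW.search(line) for line in body):
            return '%check present, but failures look swallowed'
        return '%check present; failures should fail the build'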

> > > * setup a remote test env that people could use to test things.
> > 
> > I could use more details on this point.  Is this talking about setting
> > up QA systems hosted in Fedora infrastructure that any tester could
> > login and use to test updates?
> 
> Yes - it's an idea to make it easier to test older releases, or packages
> you don't want to / can't install (or configure) on your active systems.

I don't object to this.  We have many years of experience using
shared test systems internally at Red Hat.  There are a lot of
system-state issues we'd need to work through, but there are
certainly benefits to having public test systems.

I'm inclined to think the more immediate problem, though, is the lack
of test instructions, not the lack of hardware.  Additionally, over
time the lack of hardware should become less of an issue, right
(e.g. with virtualization)?  But again, no major objections to shared
test hardware, other than that it involves some setup and
maintenance, and doesn't address the lack of test instructions.

> > > * reduced karma requirement on other releases when one has gone stable
> > > * aggregated karma across the releases for the same package version.
> > 
> > I don't have data to indicate how many updates have been released, and
> > then reverted/obsoleted on only a subset of releases.
> 
> Yeah, I'd really like to see some data here. I did ask Luke on the
> -devel thread, but so far no response. Everyone more or less agrees on
> the factors here (on the one hand it's hard to get testing for old
> releases, on the other hand it *is* possible for the 'same' update to
> work fine on one distro but not another), but it's hard to balance
> these without hard numbers. I think you can possibly draw a distinction
> between different types of updates here too: as I wrote on the -devel
> list, I can see the argument for a single leaf package update, but
> pushing an update to, say, an entire desktop environment and relying on
> testing from another release seems scary.

Ah, good point.
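
(For the leaf-package case, a minimal sketch of what "aggregated
karma across the releases" might mean mechanically; the data shape
here is hypothetical, not bodhi's real model:)

    def aggregate_karma(updates, version):
        """updates: list of dicts with 'version', 'release', 'karma'."""
        return sum(u['karma'] for u in updates if u['version'] == version)

    updates = [
        {'version': 'foo-1.2-1', 'release': 'F14', 'karma': 2},
        {'version': 'foo-1.2-1', 'release': 'F13', 'karma': 1},
    ]
    # 3 combined -- would cross a +3 threshold neither release hits alone
    print(aggregate_karma(updates, 'foo-1.2-1'))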

> > > * allow anon karma to count. 
> > 
> > Or maybe it counts, but counts less (.5 karma or something).
> 
> Something else to consider here is to make more people login; I suspect
> relatively few people are actually doing testing who don't have a FAS
> account, but I think we could make the login link more prominent, and
> try harder to get people to log in (have a big scare-step when posting
> anonymous feedback which says 'your feedback will not count unless you
> log in!' and requires you to re-confirm to submit the feedback
> anonymously; a nag screen, basically).

Good thinking.  Is this something we can get on the bodhi roadmap
(https://fedorahosted.org/bodhi/roadmap)?  I think Luke monitors this
list, perhaps he'll jump in.
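
(Circling back to the ".5 karma" idea above: a minimal sketch of how
weighted anonymous feedback could be tallied.  The 0.5 weight and the
feedback shape are hypothetical, not bodhi's actual data model.)

    ANON_WEIGHT = 0.5    # hypothetical discount for anonymous feedback

    def weighted_karma(feedback):
        """feedback: list of (karma, is_anonymous) pairs, karma +/-1."""
        return sum(k * (ANON_WEIGHT if anon else 1.0)
                   for k, anon in feedback)

    # two authenticated +1s plus one anonymous +1 -> 2.5,
    # still short of a +3 auto-push threshold
    print(weighted_karma([(1, False), (1, False), (1, True)]))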

> > > Security updates: 
> > > 
> > > * allow security updates to go direct to stable
> > 
> > Risky
> 
> Right. As was pointed out on -devel, the update which caused us to start
> thinking about an update testing process in the first place - the
> infamous udev update - was a security update.

Yeah.

> > > * ask QA to commit to testing security updates
> > 
> > We can't commit to testing without guidance or instructions.  Let's
> > commit to documenting repeatable procedures that testers can follow and
> > expand upon.
> > 
> > For some security updates, they may have already been functionally
> > tested upstream.  I think it's reasonable to provide proxy karma linking
> > to upstream functional tests.  Though, I don't think upstream functional
> > tests alone can bless a security update.
> 
> That seems like it'd be tricky to automate.

Eeew, it certainly would be!  I was just adding some extra flavor to
the karma-"by-proxy" practice.

> > > Non critpath/security: 
> > > 
> > > * reduce timeout for non critpath from 7 to 3 days. 
> > > 
> > > * change default autokarma to 2 or 1. 
> > 
> > No immediate thoughts on these points.
> 
> I suggested the default auto-push karma change, though really what
> should change is the linking of auto-push and approval. Right now,
> whatever you set as the threshold for auto-push is *also* the threshold
> for approval, which is more of a hack/unintended consequence than
> intentional design.

I see what you mean.
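
To illustrate the decoupling (field names are made up, not bodhi's
real schema): keep the auto-push trigger and the minimum-for-approval
as two separate values, e.g.

    class Update(object):
        def __init__(self, karma, stable_karma=3, approval_karma=2):
            self.karma = karma                    # current net karma
            self.stable_karma = stable_karma      # triggers auto-push
            self.approval_karma = approval_karma  # minimum to *allow* a push

        def auto_push(self):
            return self.karma >= self.stable_karma

        def may_push_manually(self):
            # approval no longer inherits the auto-push threshold
            return self.karma >= self.approval_karma

    u = Update(karma=2)
    print(u.auto_push(), u.may_push_manually())    # False True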

Thanks,
James