Provide more testing feedback (was: Re: Refining the update queues/process)

Adam Williamson awilliam at redhat.com
Thu Mar 4 00:07:42 UTC 2010


On Thu, 2010-03-04 at 00:55 +0100, Till Maas wrote:

> > So - for the third time - a package being in updates-testing for a few
> > days and getting no negative feedback is a moderate-strength indicator
> > that it's not egregiously broken. Not a super-strong indicator, but
> > better than a kick in the teeth.
> 
> It probably only means that the metadata of the installed package is
> not broken; if testers do not actually use all of their installed
> packages daily, there is not much real test coverage.

The types of breakage that most worry us are the ones where some update
causes really big and obvious problems that affect lots of people.
Happily, this is the kind of breakage you're most likely to get negative
feedback on when it happens. :)

So yes, the current process probably isn't very good at testing whether
a given update does absolutely everything it's supposed to do, in all
cases. It's not brilliant even at testing whether a given update works
at all, if that update is for a fairly obscure package. What it _can_ do
reasonably well is catch the situation where an update mistakenly breaks
the world - where you install it and then suddenly you can't boot, or
GNOME won't start, or your network connection is broken, or whatever.
And that's the kind of thing we're really trying to prevent.

I don't think a system where all updates had to stay in -testing for a
few days would catch all update problems. We'd still probably ship some
buggy updates. But hopefully we wouldn't again have the situation where
we're standing around scratching our heads and thinking 'how the *hell*
did that get shipped, when it breaks normal functionality for thousands
of people?'
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net


