Refining the update queues/process [Was: Worthless updates]

Adam Williamson awilliam at redhat.com
Sat Mar 6 02:43:50 UTC 2010


On Fri, 2010-03-05 at 23:52 +0100, Michael Schwendt wrote:
> On Fri, 05 Mar 2010 13:46:34 -0800, Adam wrote:
> 
> > Ah. You're looking at it on a kind of micro level; 'how can I tell this
> > package has been tested?'
> 
> Exactly. Because I don't like to act on assumptions.
> 
> And "zero feedback" is only an indicator for "doesn't break badly", if
> there are N>1 testers with N>1 different h/w and s/w setups who have
> installed the update actually and have not rolled back without reporting a
> problem. This may apply to certain core packages, but _not_ to all pkgs.

I did say it was only a medium-strength indicator (in most cases), not
an infallible one, which was kinda intended to cover the above. IOW, I
agree, mostly. The more people we have running updates-testing, the more
likely we are to catch big breakages, of course.

> It takes days for updates to be distributed to mirrors. A week may be
> nothing for that important power-user of app 'A', who would find a
> problem as soon as he actually *tried* out a test update.

In my experience, I get testing updates only a few hours after the email
listing them hits the mailing lists.

> Further, I hear about users who have run into problems with Fedora but
> have never reported a single bug before. ABRT may help with that, but
> they would still need to create a bugzilla account, which is something
> they haven't done before and maybe won't do. Only when a problem
> annoys them for long enough do they feel forced to look into how to
> report a bug.

I'd hope this wouldn't describe anyone who takes the trouble to manually
activate updates-testing, but of course I could be wrong :)

> > Maybe it helps if I explain more clearly that that's not exactly
> > how I look at it, nor (I think) how the rest of QA sees it, or
> > what the proposal to require -testing is intended to achieve. We're
> > thinking more about 'the big picture', and we're specifically thinking
> > about - as I said before - the real brown-paper-bag,
> > oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> > don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> > What we believe is that requiring packages to go to updates-testing for
> > some time improves our chances of avoiding that kind of issue.
> 
> The key questions are still: Which [special] packages do you want to cover?
> CRITPATH only? Or arbitrarily enforced delays for all packages?

The initial proposal to FESCo would cover all packages. There is a
reason to cover all packages: there _are_ cases where a package which
isn't in CRITPATH can cause really serious breakage, though you could
argue that's sufficiently unlikely not to warrant holding up
non-critpath packages. It could do with more discussion, I guess.

> For example, it would make sense to keep packages in updates-testing
> for an extended period if they have received feedback in bodhi
> _before_ and have high bug-reporting activity in bugzilla.

I'd say it's almost the opposite - you could hold those packages up only
for a little while, because you can be reasonably confident you'll find
out if they're badly broken *really fast* :) Obviously, it's a tricky
area.

> > Obviously, the more testing gets done in updates-testing, the better.
> > Hopefully Till's script will help a lot with that, it's already had a
> > very positive response. But the initial trigger for the very first
> > proposal from which all this discussion sprang was wondering what we
> > could do to avoid the really-big-duh kind of problem.
> 
> I cannot answer that, especially since a package that works fine for
> you and other testers may be a really-big-duh for other users. ;)
> This also leads to a not-so-funny scenario, where the big-duh is not
> noticed by any tester during F-N development, but is found by
> ordinary users shortly after release.
> 
> When I give +1 karma, I either acknowledge only the fix for a
> specific bug that's linked, or I mention the type of usage, e.g.
> "basic daily usage" or "didn't try the new features", so as not to
> give the false impression that I have tested everything. In general I
> hope that feedback about me using the software is more helpful than
> zero feedback. However, it may still be that a certain feature/plugin
> I don't use is badly broken. That's not a guess; it has happened
> before and will happen again, with updates or shortly after a new
> Fedora release.

Yeah, this is a real problem with the Bodhi system: it's not
particularly clear what +1 means or what it should mean, and different
reporters use it differently. It's definitely not something we've
nailed yet.
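
Just to illustrate that ambiguity, here's a purely hypothetical example
of a karma comment scoped the way Michael describes (the bug number and
details are made up):

  +1 - verified only the fix for bug #123456; basic daily usage for two
  days on x86_64, didn't exercise the new plugin

That at least makes clear what the +1 does and doesn't cover.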
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net


