Refining the update queues/process [Was: Worthless updates]

Michael Schwendt mschwendt at gmail.com
Fri Mar 5 22:52:24 UTC 2010


On Fri, 05 Mar 2010 13:46:34 -0800, Adam wrote:

> Ah. You're looking at it on a kind of micro level; 'how can I tell this
> package has been tested?'

Exactly. Because I don't like to act on assumptions.

And "zero feedback" is only an indicator for "doesn't break badly", if
there are N>1 testers with N>1 different h/w and s/w setups who have
installed the update actually and have not rolled back without reporting a
problem. This may apply to certain core packages, but _not_ to all pkgs.

Not everyone runs "yum -y update" daily. Not everyone installs updates
daily. It may be that there are broken dependencies only in conjunction
with 3rd-party repos (the Audacious 2.2 test update as an example
again - the bodhi ticket warned about such dependency issues, and nobody
complained about them - all I know is that there are users who use
Audacious, just no evidence that the test updates get tested, too).
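
For illustration, a minimal way a user could give such a test update a
spin without enabling updates-testing permanently might look like this
(assuming the stock Fedora repo id "updates-testing" and using
audacious as the example package):

  # list what updates-testing currently offers; --enablerepo only
  # applies to this one run, the repo stays disabled otherwise
  yum --enablerepo=updates-testing check-update

  # pull in just the Audacious test update
  yum --enablerepo=updates-testing update audacious

Dependency breakage against enabled 3rd-party repos would surface right
in the depsolving step of such a run - which is exactly the feedback
that never arrived for the Audacious ticket.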

It takes days for updates to be distributed to mirrors. A week may be
nothing for that important power-user of app 'A', who *would* find a
problem as soon as he tried out a test update.

Further, I hear about users who have run into problems with Fedora but
have never reported a single bug. ABRT may help with that, but they
would still need to create a bugzilla account, which is something they
haven't done before and maybe won't do. Only sometimes does a problem
annoy them for so long that they see themselves forced to look into how
to report a bug.

> Maybe it makes things clearer if I explain that that's not exactly
> how I look at it, nor (I think) how the rest of QA sees it, or
> what the proposal to require -testing is intended to achieve. We're
> thinking more about 'the big picture', and we're specifically thinking
> about - as I said before - the real brown-paper-bag,
> oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> What we believe is that requiring packages to go to updates-testing for
> some time improves our chances of avoiding that kind of issue.

The key questions are still: Which [special] packages do you want to cover?
CRITPATH only? Or arbitrarily enforced delays for all packages?

For example, it would make sense to keep packages in updates-testing
for an extended period if they have received feedback in bodhi _before_
and show high bug-reporting activity in bugzilla.
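
As a rough sketch of that heuristic (the two helper commands below are
pure assumptions standing in for real bodhi and bugzilla queries, and
the threshold is made up):

  #!/bin/sh
  # Hypothetical policy sketch - bodhi-feedback-count and
  # bugzilla-bug-count do not exist as real tools.
  PKG="$1"
  FEEDBACK=$(bodhi-feedback-count "$PKG")  # prior karma/comments in bodhi
  BUGS=$(bugzilla-bug-count "$PKG")        # recent bug reports in bugzilla
  if [ "$FEEDBACK" -gt 0 ] && [ "$BUGS" -gt 10 ]; then
      echo "$PKG: extended period in updates-testing"
  else
      echo "$PKG: standard period in updates-testing"
  fi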

> Obviously, the more testing gets done in updates-testing, the better.
> Hopefully Till's script will help a lot with that, it's already had a
> very positive response. But the initial trigger for the very first
> proposal from which all this discussion sprang was wondering what we
> could do to avoid the really-big-duh kind of problem.

I cannot answer that, especially since a package that may work fine for
you and other testers may be a really-big-duh for other users. ;)
This also leads to a not-so-funny scenario where the big-duh has not
been noticed by any tester during F-N development, but is found by
ordinary users shortly after release.

When I give +1 karma, I either acknowledge only the fix for a specific
bug that's linked, or I mention the type of usage, e.g. "basic daily
usage" or "didn't try the new features", so as not to give the false
impression that I may have tested everything. In general I hope that
feedback about me using the software is more helpful than zero feedback.
However, it may still be that a certain feature/plugin I don't use is
badly broken. That's not a guess; it has happened before and will happen
again, with updates or shortly after a new Fedora release.

