Refining the update queues/process [Was: Worthless updates]

Adam Williamson awilliam at redhat.com
Sat Mar 6 02:39:02 UTC 2010


On Fri, 2010-03-05 at 23:47 +0100, Till Maas wrote:
> On Fri, Mar 05, 2010 at 01:46:34PM -0800, Adam Williamson wrote:
> 
> > Ah. You're looking at it on a kind of micro level; 'how can I tell this
> > package has been tested?'
> 
> For a package maintainer it is especially interesting whether their own
> update has been tested.
> 
> > Maybe it makes it clearer if I explain more clearly that that's not
> > exactly how I look at it, nor (I think) how the rest of QA sees it, or
> > what the proposal to require -testing is intended to achieve. We're
> > thinking more about 'the big picture', and we're specifically thinking
> > about - as I said before - the real brown-paper-bag,
> > oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> > don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> > What we believe is that requiring packages to go to updates-testing for
> > some time improves our chances of avoiding that kind of issue.
> 
> Afaics, this misunderstanding is a big problem; e.g. my expectations of
> updates-testing also differ. Maybe you can add some more information to
> the wiki about what QA for updates-testing currently tries to ensure,
> what it actually ensures, and what is planned for the future. E.g. I always noticed

Yeah, that may be a good idea. For the record, we certainly hope the
updates-testing system makes it possible to do far more intensive
testing, and we would love to see a real in-depth evaluation of every
package in updates-testing. At present we don't have enough people using
it to ensure that, so we'll continue to try to encourage more people to
use updates-testing and report their experiences. Your script could
definitely help with that.

> to how these numbers have changed in a week. I hope that everyone from
> the QA SIG is using the script to report feedback, so it will be safe to
> say that an update was not tested at all if it did not receive any
> feedback.

Well, I'm using your script, but still intentionally skipping certain
updates. I don't think it's a good idea to give a +1 on an update that I
haven't really directly tested just because it didn't blow up my system,
though if it *did* blow up my system I'd certainly give it a -1. We
could institute an 'I booted with this installed and nothing exploded'
button, but I'm not sure that would ultimately be valuable...?
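To illustrate the distinction being drawn here (purely a hypothetical sketch, not the actual Bodhi logic or Till's script — the function name and categories are my own), the difference between "nobody gave any feedback" and "testers explicitly reported +1/-1" could be modeled like this:

```python
# Hypothetical sketch: classify an update from karma-style feedback,
# distinguishing "no feedback at all" from explicit tester reports.
# This is not how Bodhi actually computes anything; names are invented.

def classify_update(feedback):
    """feedback: list of ints, +1 (tested OK) or -1 (broke something)."""
    if not feedback:
        return "untested"       # nobody reported anything at all
    if any(k < 0 for k in feedback):
        return "regression"     # at least one tester hit a real problem
    return "tested-ok"          # only positive reports so far

print(classify_update([]))        # no feedback: "untested"
print(classify_update([+1, +1]))  # "tested-ok"
print(classify_update([+1, -1]))  # "regression"
```

The point of the thread is exactly the first branch: without a way to record "I ran it and nothing exploded," an untested update and a lightly-used one look identical.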
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net



More information about the devel mailing list