Update testing policy: how to use Bodhi

Adam Williamson awilliam at redhat.com
Fri Mar 26 23:46:01 UTC 2010


On Fri, 2010-03-26 at 16:31 -0700, Jesse Keating wrote:
> On Fri, 2010-03-26 at 15:49 -0700, Adam Williamson wrote:
> > 
> > The system could do a count of how many of each type of feedback any
> > given update has received, but I don't think there's any way we can
> > sensibly do some kind of mathematical operation on those numbers and
> > have a 'rating' for the update. Such a system would always give odd /
> > undesirable results in some cases, I think (just as the current one
> > does). I believe the above system would be sufficiently clear that there
> > would be no need for such a number, and we would be able to evaluate
> > updates properly based just on the information listed.
> > 
> > What are everyone's thoughts on this? Thanks! 
> 
> Unless we use some sort of value for this feedback, there won't be any
> way to autopush the update once the criteria are reached.  Relying on
> everybody to remember to go back to their update and click the button to
> push it won't work.

In that case I'd favour a formula along the lines of "X reports of
'successful fix' or 'no regression' with no reports of
'regression'" (where the maintainer sets X). I think the fact that an
update can go out even if several people report encountering regressions
is one of the biggest flaws in the current setup. Any report of
'regression found' should prevent an auto-push.
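
To make the rule concrete, here's a minimal sketch of what such an
auto-push check might look like. This is purely illustrative: the
feedback category names and the function are hypothetical, not Bodhi's
actual data model or API.

    # Sketch of the proposed auto-push rule: push automatically once an
    # update has at least X positive reports ('successful fix' or
    # 'no regression') and not a single 'regression found' report.
    # Names and structures are illustrative, not Bodhi's real API.

    POSITIVE = {"successful fix", "no regression"}
    NEGATIVE = {"regression found"}

    def should_autopush(feedback, threshold):
        """feedback: list of feedback category strings for one update;
        threshold: the maintainer-set X."""
        positives = sum(1 for f in feedback if f in POSITIVE)
        regressions = sum(1 for f in feedback if f in NEGATIVE)
        # Any single regression report blocks the auto-push outright.
        if regressions > 0:
            return False
        return positives >= threshold

    # Example: three positive reports, no regressions, maintainer set X=3.
    print(should_autopush(
        ["successful fix", "no regression", "successful fix"], 3))  # True
    # Example: one regression report blocks the push even with a positive.
    print(should_autopush(["successful fix", "regression found"], 1))  # False
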
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net


