measuring success [was Re: Bodhi 0.7.5 release]

Kevin Kofler kevin.kofler at chello.at
Sat Jul 3 01:16:14 UTC 2010


Will Woods wrote:
> The main reasons we want to perform testing are things like: to avoid
> pushing updates with broken dependencies

The right way to prevent that is to get AutoQA completed, which will, if it 
works as intended, automatically detect and throw out updates with broken 
dependencies, without needlessly delaying all the updates that don't have 
broken dependencies. Once AutoQA is completed, the manual testing process 
will do NOTHING whatsoever to prevent broken dependencies, because updates 
with broken dependencies wouldn't make it through AutoQA in the first place.
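
To make the idea concrete, here is a toy sketch of what such an automated 
check amounts to. This is only an illustration under simplified assumptions: 
the Package type and the depcheck() helper are invented for this example, 
and the real AutoQA depcheck would run an actual dependency resolver over 
real repository metadata rather than plain string matching.

from collections import namedtuple

# Invented stand-in for real RPM metadata, for illustration only.
Package = namedtuple("Package", ["name", "provides", "requires"])

def depcheck(update_pkgs, repo_pkgs):
    """Return the unresolved requirements an update would introduce."""
    # Collect every capability provided by the repo plus the update itself.
    available = set()
    for pkg in repo_pkgs + update_pkgs:
        available.add(pkg.name)
        available.update(pkg.provides)
    # Any requirement of the update that nothing provides is a broken dep.
    unresolved = []
    for pkg in update_pkgs:
        for req in pkg.requires:
            if req not in available:
                unresolved.append((pkg.name, req))
    return unresolved

# A broken update is thrown out immediately; a clean one is not delayed.
repo = [Package("glibc", ["libc.so.6"], [])]
update = [Package("foo", [], ["libc.so.6", "libbar.so.1"])]
problems = depcheck(update, repo)
if problems:
    print("REJECT: unresolved deps %s" % problems)  # [('foo', 'libbar.so.1')]
else:
    print("PASS: no broken deps, no reason to hold the update")

The point is that the check is mechanical and per-update: a failing update 
gets bounced immediately with a concrete list of unresolved requirements, 
while a passing one has no dependency-related reason to sit in 
updates-testing.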

> or updates that cause serious regressions requiring manual intervention /
> emergency update replacements.

No amount of testing is going to catch all such cases, and when a serious 
regression does slip through, the testing requirements actually HINDER a 
quick fix, increasing the window of exposure to the bug and therefore making 
it affect many more users for a longer time.

> In fact, Kevin, given a set of metrics we're both happy with, I'd be
> willing to stake my subscription to this list on it - for, say, 3
> months. Are you willing to do the same?

No. Metrics just encourage people to work to the metric and game the system, 
and any improvement you measure under the new process might just be due to 
chance or to factors we aren't considering at all. Plus, do we even have the 
historical data to compare against, given that everything older than F12 has 
been deleted from Bodhi?

        Kevin Kofler


