Updates Criteria Summary/Brainstorming

Kevin Kofler kevin.kofler at chello.at
Sun Nov 21 05:00:21 UTC 2010

Kevin Fenzi wrote:
> * Just drop all the requirements/go back to before we had any updates
>   criteria.

That's really the only way to go. The policy has failed; it's time to 
withdraw it. All the other proposed solutions require even more complexity 
in the software and the policies, for little to no gain.

> * Change FN-1 to just security and major bugfix
> This may be hard to enforce or figure out if something is a major bugfix.

Indeed, this obviously doesn't work.

> * allow packages with a %check section to go direct to stable

A %check section containing nothing but

echo 'Success!'

would satisfy that criterion. Is that OK? If not, what is? How much testing 
do you want to require?

And most importantly, it doesn't solve the problems for those many packages 
for which automated testing is not feasible and/or not useful.
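To make the objection concrete, here is an illustrative (made-up) spec 
fragment: it gives the package a %check section, so it would pass a 
"has a %check section" rule, while testing nothing at all:

```spec
%check
# Runs at build time and always exits 0, so the package formally
# "has tests" without any of them exercising the software.
echo 'Success!'
```

Any criterion keyed on the mere presence of %check can be gamed this 
cheaply; distinguishing meaningful test suites from placeholders would 
need human judgment anyway.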

> * setup a remote test env that people could use to test things.

That doesn't solve the time issue, only the "I don't have a Fedora n 
environment" issue, and not everything can be tested properly in such a 
setup (hardware-specific issues, latency-critical software, etc.).

> * require testing only for packages where people have signed up to be
> testers

The maintainers know best whether their packages have sufficient testers; 
just let them decide how much feedback to wait for before going stable! A 
boolean is not sufficient to describe the situation accurately: requiring a 
karma of 5 may make sense for something like the kernel, but not for a 
package with only 4 testers in total. The set of testers available for a 
given Fedora release also matters; that number changes over time, and you 
can't really rely on testers updating their data each time they upgrade 
their system(s).

> * Ask maintainers to provide test cases / test cases in wiki for each
> package?

There are many packages where that's just not feasible. (Good luck trying to 
provide an exhaustive set of test cases for e.g. kdebase-workspace!) It's 
also a lot of extra work for the maintainers.

> * have a way to get interested testers notified on bodhi updates for
> packages they care about.

That doesn't solve the problem of there not being interested testers in the 
first place.

> * reduced karma requirement on other releases when one has gone stable

In principle, that makes sense. It might solve part of the issues if the 
"reduced" karma requirement is zero. (Otherwise it's just useless, since we 
can already set it to 1.) But you'd have to allow the maintainer to tell 
Bodhi which two updates are the same: "same EVR minus disttags", as you 
propose, has both false positives and false negatives. And why not avoid all 
this complexity by just always letting the maintainer decide? They know best 
how much value to attribute to feedback from identical or similar updates 
for other releases in the specific case at hand.
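The false-positive/false-negative problem can be sketched in a few lines of 
hypothetical Python (this is illustrative, not actual Bodhi code; the 
matching rule and example values are made up to show the failure modes):

```python
import re

def strip_disttag(release: str) -> str:
    """Drop a trailing .fcNN disttag from an RPM Release field."""
    return re.sub(r'\.fc\d+$', '', release)

def same_update(evr_a, evr_b):
    """Compare (epoch, version, release) tuples with disttags removed."""
    ea, va, ra = evr_a
    eb, vb, rb = evr_b
    return (ea, va, strip_disttag(ra)) == (eb, vb, strip_disttag(rb))

# False positive: matches, yet the two builds need not be equivalent --
# e.g. the older branch's build may carry a release-specific patch.
print(same_update((0, "1.0", "2.fc13"), (0, "1.0", "2.fc14")))   # True

# False negative: does not match, even if the source is identical and
# only the Release was bumped on one branch (say, for a packaging typo).
print(same_update((0, "1.0", "2.fc13"), (0, "1.0", "3.fc14")))   # False
```

Any purely mechanical rule of this kind will misclassify some pairs in both 
directions, which is why leaving the judgment to the maintainer is simpler.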

In short, my proposal:
1. discontinue the current update acceptance enforcement (in particular, 
reenable direct stable pushes, and of course allow pushes from testing to 
stable at any moment at the maintainer's discretion),
2. drop the aggregated numeric karma score, which is devoid of any actual 
significance, and the autokarma (mis)feature that goes with it (keep only 
the +1/0/-1 emoticons on the individual comments),
3. write some recommendations which should GUIDE maintainers on how to 
handle updates, but NOT FORCE anything on them (experienced maintainers 
follow many unwritten rules; writing them down can certainly help guide 
less experienced maintainers towards doing the right thing),
4. TRUST maintainers to make the right decisions on when an update is stable 
enough to be pushed, also considering the impact of NOT pushing the update 
immediately. (This trust has worked very well, despite some claims to the 
contrary based on isolated incidents blown way out of proportion.)

        Kevin Kofler
