Updates Criteria Summary/Brainstorming

Henrik Nordström henrik at henriknordstrom.net
Mon Nov 22 08:57:40 UTC 2010


Sun 2010-11-21 at 11:00 +0100, Till Maas wrote:

> I guess this can be somehow automated. E.g. change Bodhi to drop the
> karma requirements for packages that had e.g. two subsequent updates
> without any Bodhi feedback and re-enable it if they get feedback.

That would be somewhat counterproductive to the goal of actually
having testers, as it discourages maintainers from looking for testers
as a way to improve their release process.


Imho the main concern about the updates criteria is actually confusion
between autokarma requirements and minimal karma requirements. At least
it was for me when last discussing the topic. The actual requirements
aren't very high or unreasonable.

What I'd like to see is:

* aggregated karma across the releases for the same package version.
Testing most updates on one of the active releases should be quite
sufficient.

* autoqa trapping dependency errors and other install failures,
disallowing the push of a completely borked package, including changes
in provides that break other packages.

* packages failing the above should only be pushed as a group once
their dependencies have been satisfied (done automatically once a push
has been requested for all of them), or after an exception has been
requested because a dependent package for some reason can't be updated.

* better integration of releases in the push process, giving package
maintainers a view of a package's status across the releases.

* automatic enforcement of release ordering, preventing a push, or at
minimum alerting the maintainer, if an earlier version would remain in
a later Fedora release (including rawhide).
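
The ordering check in the last bullet could be sketched roughly like
this. This is plain illustrative Python, not Bodhi code; a real
implementation would use rpm's own labelCompare on the full
epoch:version-release, while this simplified comparison ignores epochs:

```python
import re

def segments(s):
    # Split a version string into numeric and alphabetic runs,
    # similar in spirit to rpmvercmp.
    return re.findall(r"\d+|[a-zA-Z]+", s)

def vercmp(a, b):
    # Simplified rpm-style comparison: numeric segments compare as
    # integers, alphabetic segments lexically, numeric beats alphabetic,
    # and on a tie the version with more segments is newer.
    sa, sb = segments(a), segments(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def ordering_violations(builds):
    """builds: {release: version}, releases listed oldest -> newest
    (e.g. f13, f14, rawhide). Returns the releases whose version is
    newer than the version in the next, later release."""
    items = list(builds.items())
    return [older for (older, v_old), (_newer, v_new)
            in zip(items, items[1:]) if vercmp(v_old, v_new) > 0]
```

For example, ordering_violations({"f13": "1.2", "f14": "1.1",
"rawhide": "1.3"}) flags f13, since f14 would be left with an older
version than f13 and the upgrade path would break.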


In addition to this, the whole concept of how to enable actual users
to test packages in a reasonable manner needs quite a bit of love to
make testing scale. The model of enabling updates-testing and testing
everything is simply too high a threshold for most users and scares
most of them away, and manually searching for and applying selected
updates with yum does not scale. I think something along these lines
would help greatly in that area:

* Keep updates-testing repository model as-is

* Give users an easy option to get notified via PackageKit when there
are updates to selected packages available in updates-testing, letting
them choose "just packages x, y, z", selected package groups, or
everything among the packages they have installed, and whether testing
updates should be installed automatically as part of the update process
or only notified about, requiring manual selection each time.

* A new notification icon, reminding users when they have packages from
updates-testing installed for which they have not yet given feedback.
Packages should automatically disappear from this list when they have
been pushed to updates.

* Give users an easy way of downgrading to the current non-testing
release of a package, giving them confidence that they can easily
recover should they find or suspect that the updates-testing package
fails. This would also require blocking that package release from being
automatically updated from updates-testing again.
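
The opt-in and blocking behaviour from the bullets above amounts to a
small filtering policy. A purely hypothetical sketch (none of these
names exist in PackageKit; it only illustrates the proposed rules):

```python
def plan_testing_updates(available, watched, blocked, auto_install):
    """Decide what to do with updates-testing packages.

    available: {pkg: version} currently offered in updates-testing
    watched:   set of packages the user opted in to test
    blocked:   set of packages the user downgraded and pinned
    auto_install: True if the user chose automatic installation

    Returns (install, notify) lists of (pkg, version) pairs.
    """
    install, notify = [], []
    for pkg, ver in available.items():
        # Downgraded-and-pinned packages and packages the user never
        # opted in to test are skipped entirely.
        if pkg in blocked or pkg not in watched:
            continue
        (install if auto_install else notify).append((pkg, ver))
    return install, notify
```

With auto_install off, watched packages are only surfaced as
notifications, so the user makes a manual choice each time.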

> All of this could be combined. E.g. packages with enough testers get
> test cases and need to fulfill stronger criteria. Packages with not so
> many testers get test cases and only need to fulfil that similar
> updates need to receive good karma on one Fedora release.

Imho this should be based more on how critical the package is to
system operation and on the quality of past updates than on the number
or activity of testers.

I.e. if a borked update gets pushed out by the package maintainer,
that should increase the testing requirements on future updates of the
same package for a number of package pushes.
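
As a sketch of such a policy, with entirely made-up numbers (the base
requirement, the criticality bonus, and the cap are all assumptions,
not anything Bodhi implements):

```python
def required_karma(critical, recent_bad_pushes, base=2):
    """Hypothetical karma threshold for the next update of a package.

    critical:          True for packages critical to system operation
    recent_bad_pushes: borked updates pushed in the recent window
    base:              baseline karma requirement (assumed value)
    """
    need = base + (1 if critical else 0) + recent_bad_pushes
    # Cap the requirement so updates can still go out eventually.
    return min(need, 5)
```

So a non-critical package with a clean history needs the baseline
karma, while each recent bad push raises the bar, up to the cap.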

> Also it could be made easier for maintainers to identify problematic
> updates, e.g. by warning that the dependencies or provides of an update
> changed when the update is created.

+1

Regards
Henrik


