On Wed, Dec 01, 2010 at 02:17:32PM -0800, Adam Williamson wrote:
> The concept of having a policy requiring updates to be tested before
> they're issued is really no different. I think one point where we've
> fallen over is that it wasn't sufficiently well discussed / communicated
> in advance that this testing wasn't just going to 'get done' by some
> independent group and no-one else would have to worry about it, but
> would require a lot of people to chip in. In the same way that there
> isn't some separate independent group that does package reviews, it's
> just all maintainers chipping in when they can. I think perhaps those
> who supported and voted for the policy kind of assumed this would
> happen, and many others weren't actually aware of it.

I think this is the heart of the matter. Communication and buy-in.
The difference between package reviews/guidelines and testing is a matter of
history -- package reviews are an expectation of maintainer responsibility
from fedora.us days. Recruitment of testers is an increase in expectations
of maintainers that's happened in the last year. Without buy-in from
maintainers that they want to do this, you don't get maintainers actually
working on testing packages. Thinking back to fedora.us, testing of each
update actually was done there, but it was abandoned for lack of manpower.
However, the way we tested was a lot different -- each update went through
a new package review, not just a build being installed and rated as to
whether or not it worked.
The comparison to package reviews is also interesting in several ways.
There is an ad hoc group of a few package maintainers that do most of the
reviews. So this is similar to what you're currently seeing with the
testers. An ad hoc group of a (relatively) few package maintainers is
testing updates, while the majority of packagers do not participate.
The queue of packages often seems larger than the available manpower to
review them. Recently, though, the queue has been going down. I attribute a
large part of this to tibbs's efforts, where he's done a couple of things:
* Closing out old reviews where the package submitter no longer responds to
the review request.
* Actively seeking to put together domain-specific reviewers with packages
that fit those domains (hooking up active python-sig members with python
package reviews for instance).
One encouraged and somewhat popular method of getting packages reviewed is
to trade reviews with other packagers. We don't have that recommendation
for testing at the moment.

> I do think that for update testing to work well going forward we need to
> engage more groups with it and make it clear it's not something that
> some separate QA group is just going to do for everyone and no-one has
> to worry about it. We can get, and already have got, some enthusiastic
> people to sign up to run updates-testing and provide testing feedback
> for the packages they use anyway, but the concept of there being a
> hardcore group of dedicated testers who will go out of their way to
> install, configure and test software they wouldn't usually use is not
> one that's likely to fly, I don't think.
>
> When software is packaged it's reasonable to expect that someone,
> somewhere, uses it; if they don't, it probably shouldn't be packaged. We
> need to find those people and engage them in the testing process, and it
> seems to me that the maintainers of packages are as well placed as
> anyone to help find and engage their users in this process.

Allowing anonymous karma to count is something that I think targets this.

> In many cases it's easier than that; a lot of packages are maintained by
> more than one person. It's not only perfectly okay but more or less
> *what we want to happen* for co-maintainers to sign up as proven testers
> and test each other's updates. There's a bunch of people in the anaconda
> group, for instance; it's perfectly fine for you all to sign up as
> proven testers and test each other's code. The testing doesn't have to
> come from some impartial outside body; all we need is a sanity check.
>
> I don't really see any reason why *everyone* who's a packager shouldn't
> also have signed up to be a proven tester by now. I'd like to ask if
> anyone has a perception that it's a hard process to get involved in, or
> if they got the impression that they *shouldn't* get engaged in it, or
> something like that. Maybe we can improve the presentation to make it
> clear that this really ought to be a very wide-based process.

With that in mind, perhaps being added to the packager group should
automatically put you in the proventester group. If you turn out to be
a problem we can then remove you from the proventester group until you've
learned how you should be testing. (On the implementation side, we should
have this ability in FAS, since we do something similar to put people who
sign the CLA into the cla_done group automatically.)
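A minimal sketch of that idea, assuming nothing about FAS internals -- the group names are the real Fedora ones, but the function and data structures below are purely illustrative, not the actual FAS API:

```python
def sync_proventester(groups, removed):
    """Mirror 'packager' membership into 'proventester', skipping anyone
    who was explicitly removed for problematic testing (such people would
    have to be re-added by hand once they've learned how to test).

    groups: dict mapping group name -> set of account names (illustrative)
    removed: set of accounts previously pulled out of proventester
    """
    proventesters = groups.setdefault("proventester", set())
    for person in groups.get("packager", set()):
        if person not in removed:
            proventesters.add(person)
    return groups

# Example: bob was previously removed from proventester, so he stays out,
# while every other packager is added automatically.
groups = {"packager": {"alice", "bob"}, "proventester": {"carol"}}
sync_proventester(groups, removed={"bob"})
```

The key design point is the `removed` set: a plain mirror of packager would silently re-add anyone we'd kicked out, so the sync has to remember explicit removals.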
And in answer to your question -- my perception is that it's a separate
thing that I could join just as I've joined infrastructure as well as
packaging. So in the sense that it's not something that's automatically
there for me to do by virtue of being in packager, it is hard to get