openssh: no pre-release sanity check? [Re: ssh-to-rawhide hangs
awilliam at redhat.com
Mon Sep 12 18:48:22 UTC 2011
On Mon, 2011-09-12 at 11:17 +0100, Richard W.M. Jones wrote:
> On Mon, Sep 12, 2011 at 01:02:26AM -0700, Adam Williamson wrote:
> > On Mon, 2011-09-12 at 08:56 +0100, Richard W.M. Jones wrote:
> > > I thought AutoQA was going to do this, but it's been disappointing.
> > AutoQA is under active development, still. It's a complex project.
> If I may make a suggestion:
> Can we have it so that packagers can commit a file into Fedora git
> (e.g. "autoqa.sh"), and have that picked up by AutoQA and run whenever
> a new package appears in Rawhide?
> It could just return exit status zero / non-zero, or if someone is
> feeling ambitious:
That's more or less the ultimate goal, yes. It has to be a bit more
complex than that, though: there's a fairly wide range of pre-conditions
for running a test (when a new build is done? when an update is
submitted? whenever anything is committed to git?), and it makes sense
to let test contributors choose the condition they want.
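To make that concrete, one could imagine each test carrying a small
metadata file naming the event that triggers it. This is only a sketch;
the hook names below are illustrative placeholders, not AutoQA's actual
API:

```shell
# Hypothetical per-test metadata, showing how a contributor might pick
# the event that triggers the test. Hook names here are made up for
# illustration, not necessarily AutoQA's real ones.
TEST_NAME="openssh-sanity"
TRIGGER="post-koji-build"         # run when a new build lands in Koji
# Alternatives a contributor might choose instead:
#   TRIGGER="post-bodhi-update"   # run when an update is submitted
#   TRIGGER="post-git-commit"     # run on every push to Fedora git
```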
AutoQA tests are already more or less just 'any kind of executable that
returns either a pass or fail condition'.
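In that spirit, a minimal packager-supplied test along the lines of the
"autoqa.sh" suggestion above could be as simple as the following sketch,
where exit status 0 means pass and non-zero means fail (the specific
check is just a placeholder):

```shell
#!/bin/sh
# Hypothetical "autoqa.sh": the framework would run this and interpret
# exit status 0 as pass, anything else as fail. The check below is an
# arbitrary example, not a real AutoQA test.

# Sanity check: the system shell exists and is executable.
if [ -x /bin/sh ]; then
    echo "PASS: /bin/sh present and executable"
    exit 0
else
    echo "FAIL: /bin/sh missing"
    exit 1
fi
```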
So yeah, this is certainly where we're going, we're just not there yet.
As I wrote in the FESCo meeting today, AutoQA is currently in a slightly
unfortunate position: the team's goal is for AutoQA to be a generic
framework which they maintain and to which other people contribute
tests, i.e., they don't want it to be their job to write and maintain
the tests themselves. But at the same time, the framework isn't yet at
the point where it's actually easy for third parties to contribute
tests. So right now it's hard to contribute tests to AutoQA; or, to put
a more positive gloss on it, the framework itself is still what needs to
be worked on, and that's where we're concentrating our effort. (It's
pretty easy to help out with the AutoQA framework, but that's not what
most people are interested in contributing to.)
We're certainly trying to get to the point where all the bits are in
place for you to contribute a test, choose when it runs, and decide what
should happen in response to the results. It's just a lot of work to get
there, and we're not there yet, unfortunately.
Right now it's best to look at AutoQA as a half-finished framework with
a small, more or less fixed set of tests. Those tests are as much
examples that help us finish off the framework as they are tests that
generate important and useful results, although in practice they *do*
achieve that to some extent. Usually when a test is added to AutoQA at
this point, it's because the test exercises some bit of the framework we
need to be working on.
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora