Possible QA Devel Projects for GSoC 2014

Tim Flink tflink at redhat.com
Tue Mar 11 11:03:51 UTC 2014


On Tue, 11 Mar 2014 05:02:28 -0400 (EDT)
Kamil Paral <kparal at redhat.com> wrote:

> > > > ------------------------------------------------------------
> > > > Graphical Installation Testing
> > > > ------------------------------------------------------------
> > > > Continue the work that Jan started with his thesis, or look
> > > > into integrating something like openqa. The emphasis here is
> > > > on the graphical interface, since kickstart-based installation
> > > > testing could be covered by stuff already written for beaker.
> > > 
> > > After talking to Michal Hrusecky from OpenSUSE at DevConf, I'm
> > > pretty convinced we should collaborate with them on OpenQA. They
> > > have unleashed their whole Boosters team to work on it, and
> > > they're fixing many of the previous pain points (except for Perl,
> > > unfortunately). They're also trying to keep it pretty generic,
> > > without unneeded ties to OpenSUSE infrastructure (e.g. they've
> > > just implemented OpenID login), and they would really appreciate
> > > our collaboration.
> > 
> > We keep running into this and I really need to spend some time with
> > OpenQA again. When I looked at it a couple of years ago, there were
> > several things that I didn't like about how the framework actually
> > works (whole-screenshot comparison, forced keyboard-only
> > interaction, etc.) but it's possible that they've fixed those
> > issues.
> 
> Look here:
> https://www.google.cz/#q=openqa+site:http:%2F%2Flizards.opensuse.org
> 
> They use OpenCV instead of screenshot checksumming now. I'm not sure
> what you mean by keyboard interactions.

IIRC, they were already using OpenCV the last time I looked at openqa.
The image checksumming approach would have been even worse than the
bits I actually had concerns about, to be honest :)
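
To make that concrete, here's a rough sketch (not openQA's actual
code; the paths and threshold are made up) of whole-image checksum
matching versus the kind of fuzzy matching that OpenCV enables:

import hashlib
import cv2

# Checksumming: any single changed pixel (clock, cursor, font
# rendering) breaks the match.
def checksum_match(screenshot, reference):
    with open(screenshot, "rb") as a, open(reference, "rb") as b:
        return (hashlib.sha256(a.read()).digest() ==
                hashlib.sha256(b.read()).digest())

# Template matching: look for the reference region anywhere on the
# screen and accept anything above a similarity threshold.
def fuzzy_match(screenshot, needle, threshold=0.95):
    screen = cv2.imread(screenshot, cv2.IMREAD_GRAYSCALE)
    region = cv2.imread(needle, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, region, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(result)
    return best_score >= threshold

The fuzzy version tolerates exactly the rendering noise that makes
whole-screenshot comparison so fragile.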

What I mean by keyboard interactions is that you couldn't use the
mouse - tests were a strict script of keyboard actions. The runner sent
keypresses exactly as scripted and nothing more.
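
For illustration, the model was essentially this (a sketch using
QEMU's human monitor and its real "sendkey" command; the socket path
and key sequence here are made up):

import socket

# A fixed key sequence -- no mouse, no branching on what's actually
# on the screen.
KEY_SCRIPT = ["tab", "tab", "spc", "ret"]

mon = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
mon.connect("/tmp/qemu-monitor.sock")
for key in KEY_SCRIPT:
    mon.sendall(("sendkey %s\n" % key).encode())
mon.close()

Anything that needs a pointer-driven workflow just can't be expressed
that way.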

> One major drawback is that they still don't support task distribution
> (to test clients); everything is executed on a single machine. But
> they say they are able to run lots of test cases every single day,
> and we intend to run just a fraction of that, so performance-wise it
> shouldn't be a problem.

We'd still need to evaluate the system to see whether it can actually
do what we need it to do, how much integration work would be involved,
and what kind of patches we'd need to write and submit.

I'm really not itching to write our own system here, but at the same
time, I'm also not thrilled about the idea of jumping into a system we
have little to no control over just because it looks like it'd save us
time in the short term. As bad as NIH (not-invented-here) syndrome is,
shoehorning an existing library/system into a place where it isn't
going to work well, and may cause us just as many problems, is not a
good thing either.

> > > > ------------------------------------------------------------
> > > > Disposable Client Support
> > > > ------------------------------------------------------------
> > > > 
> > > > This is another of the big features that we'll be implementing
> > > > before too long. It's one of the reasons that we made the shift
> > > > from AutoQA to taskotron and is blocking features which folks
> > > > say they want to see (user-submitted tasks, mostly).
> > > > 
> > > > This would involve some investigation into whether OpenStack
> > > > would be practical, whether there is another provisioning
> > > > system we could use, or whether we'd be forced to roll our own
> > > > (which I'd rather avoid). There should be some tie-in with the
> > > > graphical installation support and possibly the gnome
> > > > integration tests.
> > > 
> > > As usual, we're still missing the required pieces the student
> > > should work with. But as a pilot and a way to discover and
> > > evaluate possible options, this could be interesting.
> > 
> > What are we missing that wouldn't be part of this project?
> 
> Well, do we know yet exactly how the client setup process will be
> hooked into taskotron or its underlying tools?

I'm not exactly sure how this will work, either. It's going to depend
on what we end up using for graphical testing, what OpenStack is
capable of, what cloud resources we have access to, and what the cloud
SIG ends up needing for their testing.
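
Whatever we pick, the lifecycle we need from a provisioning backend is
pretty simple. A minimal sketch, assuming OpenStack via
python-novaclient (the credentials, endpoint, image and flavor names
are all illustrative, not real infra):

import time
from novaclient import client

# Illustrative credentials/endpoint, not our real infrastructure.
nova = client.Client("2", "taskotron", "secret", "qa",
                     "http://openstack.example.com:5000/v2.0")

def provision_client(name="taskotron-disposable"):
    image = nova.images.find(name="Fedora-20-cloud")
    flavor = nova.flavors.find(name="m1.small")
    server = nova.servers.create(name, image, flavor)
    while nova.servers.get(server.id).status == "BUILD":
        time.sleep(5)  # wait until the instance leaves BUILD state
    return server

def run_task(server, taskname):
    pass  # ssh in, run the check, collect results -- elided

server = provision_client()
try:
    run_task(server, "rpmlint")
finally:
    server.delete()  # disposable: always torn down after one task

The investigation would mostly be about whether this model holds up
against our actual resource constraints and the graphical testing
requirements.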

> Are we committed to using buildbot, or might it change?

I don't really see how this is relevant. Can you elaborate on how using
buildbot or not would factor in here?

> > > > ------------------------------------------------------------
> > > > System for apparent results storage and modification
> > > > ------------------------------------------------------------
> > > > 
> > > > There has to be a better title for this, but it would be one of
> > > > the last major steps in enabling bodhi/koji to block
> > > > builds/updates on check failures. The idea would be to provide
> > > > an interface which can decide whether a build/update is OK
> > > > based on which checks passed or failed. It would have a
> > > > mechanism for manual overrides and algorithmic overrides (i.e.,
> > > > we know that foo has problem X and are working on it, so ignore
> > > > failures for now) so that we don't upset packagers more than we
> > > > need to.
> > > > 
> > > > When Josef and I last talked about this, we weren't sure that
> > > > putting this functionality into our results storage mechanism
> > > > was wise. It's a different concern that has the potential to
> > > > make a mess out of the results storage.
> > > 
> > > This is one of the more self-contained projects, I think. It
> > > still depends on some ResultsDB bits that aren't ready yet, but
> > > it doesn't depend on our test infra that much. I agree that we
> > > will need something like this. IIUIC, this would be an
> > > API-accessible tool with some web frontend. My only question is
> > > whether we want to have it completely separate or somehow
> > > integrated into the ResultsDB web frontend, for example. It might
> > > be weird to have two similar systems, one for browsing the true
> > > results and one for browsing the effective results (e.g. waived,
> > > combined per update, etc.).
> > 
> > Having a single web frontend makes sense to me. I'm still not sure
> > how the two systems would be integrated but I pretty much agree
> > with Josef that the two systems need to be somewhat separated.
> > Handling overrides and test cases inside the results storage system
> > is also messy, just a different kind of messy :)
> 
> So, two different systems (i.e. two different databases) displayed in
> a single web frontend, right? I guess it makes sense.

Yeah, that's what I had in mind, anyways.
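
To sketch the split in code (hypothetical names, not the actual
ResultsDB API): the results store keeps the raw outcomes, and the
middleware layers overrides on top to produce the effective, gating
decision:

def effective_result(raw_results, waivers):
    """raw_results maps check name -> "PASSED"/"FAILED" as stored in
    the results store; waivers is the set of checks with an active
    manual or algorithmic override ("foo has known problem X, ignore
    failures for now")."""
    for check, outcome in raw_results.items():
        if outcome == "FAILED" and check not in waivers:
            return "BLOCKED"  # at least one unwaived failure
    return "OK"               # everything passed or was waived

# e.g. depcheck failed but is waived, so the update isn't blocked:
effective_result({"rpmlint": "PASSED", "depcheck": "FAILED"},
                 waivers={"depcheck"})  # -> "OK"

The frontend would then just render both views, raw and effective,
from the two stores.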

Since student registration has started, I'd like to get our proposed
ideas into the wiki soon. The question remains whether any of these
projects would be worth distracting folks from other dev/testing work -
any thoughts on that front?

It sounds like the results middleware project, the graphical
installation project, the gnome-continuous project and _maybe_ the
disposable client project are the best candidates. Any thoughts on the
value of those?

Tim

