Possible QA Devel Projects for GSoC 2014

Kamil Paral kparal at redhat.com
Tue Mar 11 09:02:28 UTC 2014


> > > ------------------------------------------------------------
> > > Graphical Installation Testing
> > > ------------------------------------------------------------
> > > Continue the work that Jan started with his thesis, or look into
> > > integrating something like OpenQA. The emphasis here is on the
> > > graphical interface, since ks-based installation testing could be
> > > covered by what's already written for Beaker.
> > 
> > After talking to Michal Hrusecky from openSUSE at DevConf, I'm pretty
> > convinced we should collaborate with them on OpenQA. They have
> > unleashed their whole Boosters team to work on it, and they're fixing
> > many of the previous pain points (except for Perl, unfortunately).
> > They also try to have it pretty generic, without unneeded ties to
> > OpenSUSE infrastructure (e.g. they've just implemented OpenID login),
> > and they would really appreciate our collaboration.
> 
> We keep running into this and I really need to spend some time with
> OpenQA again. When I looked at it a couple years ago, there were several
> things that I didn't like about how the framework actually works
> (whole-screenshot comparison, forced keyboard interactions, etc.), but
> it's possible that they've fixed those issues.

Look here:
https://www.google.cz/#q=openqa+site:http:%2F%2Flizards.opensuse.org

They use OpenCV instead of screenshot checksumming now. I'm not sure what you mean by keyboard interactions.

One major drawback is that they still don't support task distribution (to test clients); everything is executed on a single machine. But they say they're able to run lots of test cases every single day, and we intend to run just a fraction of that volume, so performance-wise it shouldn't be a problem.
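
For illustration, a minimal sketch of why OpenCV matching is more robust
than checksumming. This is not OpenQA's actual code; the function name and
threshold are made up, it just shows the template-matching idea:

    import cv2

    def needle_matches(screenshot_path, needle_path, threshold=0.95):
        """Return True if the needle image appears in the screenshot.

        Unlike a checksum comparison, template matching tolerates
        differences outside the matched region (clock, cursor,
        rendering noise), so a test doesn't break on every stray pixel.
        """
        screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
        needle = cv2.imread(needle_path, cv2.IMREAD_GRAYSCALE)
        scores = cv2.matchTemplate(screen, needle, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, _ = cv2.minMaxLoc(scores)
        return best_score >= threshold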

> > > ------------------------------------------------------------
> > > Disposable Client Support
> > > ------------------------------------------------------------
> > > 
> > > This is another of the big features that we'll be implementing
> > > before too long. It's one of the reasons that we made the shift
> > > from AutoQA to taskotron and is blocking features which folks say
> > > they want to see (user-submitted tasks, mostly).
> > > 
> > > This would involve some investigation into whether OpenStack would
> > > be practical, if there is another provisioning system we could use
> > > or if we'll be forced to roll our own (which I'd rather avoid).
> > > There should be some tie-in with the graphical installation support
> > > and possibly the gnome integration tests.
> > 
> > As usual, we're still missing the required pieces the student would
> > work with. But as a pilot, and as a way to discover and evaluate
> > possible options, this could be interesting.
> 
> What are we missing that wouldn't be part of this project?

Well, do we know yet exactly how the client setup process will be hooked into taskotron or its underlying tools? Are we committed to using buildbot, or might that change?
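
Just to make the lifecycle concrete, here is a rough sketch of a
disposable client using the libvirt Python bindings. It's only one of the
options we'd need to evaluate (OpenStack being another), the helper names
are hypothetical, and the hook into taskotron/buildbot is exactly the open
question above:

    import libvirt

    def run_in_disposable_client(domain_xml, run_task):
        """Boot a transient VM, run one task in it, then throw it away."""
        conn = libvirt.open('qemu:///system')
        try:
            # createXML() with no flags starts a transient domain: it is
            # never defined persistently, so destroying it leaves nothing
            # behind and the next task always gets a pristine client
            dom = conn.createXML(domain_xml, 0)
            try:
                run_task(dom)  # e.g. ssh in, run the check, fetch results
            finally:
                dom.destroy()
        finally:
            conn.close()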

> > > ------------------------------------------------------------
> > > System for apparent results storage and modification
> > > ------------------------------------------------------------
> > > 
> > > There has to be a better title for this, but it would be one of the
> > > last major steps in enabling bodhi/koji to block builds/updates on
> > > check failures. The idea would be to provide an interface which can
> > > decide whether a build/update is OK based on which checks
> > > passed/failed. It would have a mechanism for manual overrides and
> > > algorithmic overrides (i.e., we know that foo has problem X and are
> > > working on it, ignore failures for now) so that we don't upset
> > > packagers more than we need to.
> > > 
> > > When Josef and I last talked about this, we weren't sure that
> > > putting this functionality into our results storage mechanism was
> > > wise. It's a different concern that has the potential to make a
> > > mess out of the results storage.
> > 
> > This is one of the more self-contained projects, I think. It still
> > depends on some ResultsDB bits that aren't ready yet, but it doesn't
> > depend on our test infra that much. I agree that we will need
> > something like this. IIUIC, this would be an API-accessible tool with
> > some web frontend. My only question is whether we want to have it
> > completely separate, or somehow integrated into the ResultsDB web
> > frontend, for example. It might be weird to have two similar systems,
> > one for browsing the true results and one for browsing the effective
> > results (e.g. waived, combined per update, etc.).
> 
> Having a single web frontend makes sense to me. I'm still not sure how
> the two systems would be integrated, but I pretty much agree with Josef
> that the two systems need to be somewhat separate. Handling overrides
> and test cases inside the results storage system is also messy, just a
> different kind of messy :)

So, two different systems (i.e. two different databases) displayed in a single web frontend, right? I guess it makes sense.
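
To sketch where that split would land, here's a toy version of what the
"effective results" layer might compute on top of the raw ResultsDB
outcomes, with the waivers kept in the separate system (all names here are
hypothetical, not an actual API):

    def effective_outcome(raw_results, waivers):
        """Combine raw check results with overrides into one verdict.

        raw_results: {check_name: 'PASSED' or 'FAILED'}, the true results
        waivers: check names whose failures are ignored for now, covering
        both the manual and the algorithmic overrides mentioned above
        """
        for check, outcome in raw_results.items():
            if outcome == 'FAILED' and check not in waivers:
                return 'BLOCKED'
        return 'OK'

    # "we know that foo has problem X, ignore failures for now":
    effective_outcome({'depcheck': 'PASSED', 'foo': 'FAILED'}, {'foo'})
    # -> 'OK', so bodhi/koji would not block the update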


