Possible QA Devel Projects for GSoC 2014

Kamil Paral kparal at redhat.com
Mon Mar 10 12:23:16 UTC 2014


> Fedora has been accepted as a mentoring org for GSoC 2014 and I'm
> planning to sign up as a mentor again this year. I'm trying to think of
> good projects that we could put on the list of suggestions of things
> that we'd like students to work on and figured it was a good topic for
> wider discussion.
> 
> I'd like to avoid any blockerbugs projects this year so that we can
> focus on taskotron and keeping forward momentum. I've made a quick list
> of the possible projects that I can think of. Please comment on any of
> them that you think would benefit from having a dedicated intern over
> the summer or add to the list if you can think of other projects.
> 
> Ideally, the projects would be self-contained enough for the student to
> demonstrate their progress over the summer but not so isolated that
> they wouldn't be interacting with the community. Projects should be far
> enough out that we wouldn't be critically blocked on them but close
> enough that the effects of their work are visible before the end of
> GSoC.

Thanks for thinking about this. Personally, I'm really bad at coming up with project topics like these.

One of the major concerns I have is whether it is efficient to mentor someone, or whether it would be better to have your (and/or someone else's) time fully devoted to the project itself. We would have to come up with topics that don't require much mentoring from our side and that are not blocked on our future actions (we don't want the student waiting until we implement feature X).

> 
> Tim
> 
> 
> ------------------------------------------------------------
> Graphical Installation Testing
> ------------------------------------------------------------
> Continue the work that Jan started with his thesis or look into
> integrating something like openqa. The emphasis here is on the
> graphical interface since ks-based installation testing could be
> covered by stuff already written for beaker

After talking to Michal Hrusecky from openSUSE at DevConf, I'm pretty convinced we should collaborate with them on openQA. They have unleashed their whole Boosters team to work on it, and they're fixing many of the previous pain points (except for Perl, unfortunately). They also try to keep it fairly generic, without unneeded ties to openSUSE infrastructure (e.g. they've just implemented OpenID login), and they would really appreciate our collaboration.

I'm just not sure about the timing. If the task is to integrate it into our test infrastructure, we need to have some infrastructure to begin with :-)

I don't know about all of them, but a large part of the Boosters team is located in the CZ timezone, so a Europe-based student would be a better fit for this, in order to talk to them on their IRC channel.

> 
> 
> ------------------------------------------------------------
> Beaker Integration
> ------------------------------------------------------------
> This is on our roadmap and is certainly something that would be useful.
> It would require a bit of infrastructure work and likely the
> cooperation of the beaker devs but seems like it could be a good
> project even if it isn't the most exciting thing ever.
> 
> On the other hand, this could end up being rather critical and may not
> be something that we want to mostly hand off to a student.

I'm not sure this is a good project for a student, because it's likely to involve a lot of communication with internal teams. And again, I don't think our test infra is ready yet.

> 
> 
> ------------------------------------------------------------
> Gnome Integration Test Support
> ------------------------------------------------------------
> 
> An over-simplification of this would be to say "take the stuff that's
> run as part of gnome continuous [1] and run it on fedora packages". The
> goal would be to have gnome's integration test suites running with any
> new gnome builds.
> 
> [1] https://wiki.gnome.org/action/show/Projects/GnomeContinuous
> 
> 
> ------------------------------------------------------------
> Disposable Client Support
> ------------------------------------------------------------
> 
> This is another of the big features that we'll be implementing before
> too long. It's one of the reasons that we made the shift from AutoQA to
> taskotron and is blocking features which folks say they want to see
> (user-submitted tasks, mostly).
> 
> This would involve some investigation into whether OpenStack would be
> practical, if there is another provisioning system we could use or if
> we'll be forced to roll our own (which I'd rather avoid). There should
> be some tie-in with the graphical installation support and possibly the
> gnome integration tests.

As usual, we're still missing the required pieces the student would work with. But as a pilot, and as a way to discover and evaluate the possible options, this could be interesting.


> 
> 
> ------------------------------------------------------------
> RPM-OSTree Support/Integration
> ------------------------------------------------------------
> 
> I haven't used rpm-ostree enough yet to figure out how good of a fit
> it'd be with taskotron but from the description of the project and the
> discussions I've had with cwalters, it sounds like it could be a good
> fit as part of our provisioning system for disposable clients.
> 
> If we're serious about proposing this as a GSoC project, we should
> probably explore it a bit more to be certain that we'd want it now but
> I figured it was worth putting on the list.
> 
> [2] https://github.com/cgwalters/rpm-ostree
> 
> 
> ------------------------------------------------------------
> System for apparent results storage and modification
> ------------------------------------------------------------
> 
> There has to be a better title for this but it would be one of the last
> major steps in enabling bodhi/koji to block builds/updates on check
> failures. The idea would be to provide an interface which can decide
> whether a build/update is OK based on what checks were passed/failed.
> It would have a mechanism for manual overrides and algorithmic
> overrides (ie, we know that foo has problem X and are working on it,
> ignore failures for now) so that we don't upset packagers more than we
> need to.
> 
> When Josef and I last talked about this, we weren't sure that putting
> this functionality into our results storage mechanism was wise. It's a
> different concern that has the potential to make a mess out of the
> results storage.

This is one of the more self-contained projects, I think. It still depends on some ResultsDB bits that aren't ready yet, but it doesn't depend on our test infrastructure that much. I agree that we will need something like this. IIUIC, this would be an API-accessible tool with some web frontend. My only question is whether we want to have it completely separate, or somehow integrated into the ResultsDB web frontend, for example. It might be weird to have two similar systems, one for browsing the true results and one for browsing the effective results (e.g. waived, combined per update, etc.).
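To make the idea concrete, here is a rough sketch of what the "effective result" decision could look like: raw check results combined with waivers to decide whether an update is OK. All the names here (CheckResult, Waiver, is_update_ok) are made up for illustration; this is just my mental picture, not an actual design:

```python
# Hypothetical sketch of gating an update on check results plus waivers.
# None of these names exist in ResultsDB; they only illustrate the idea.
from collections import namedtuple

CheckResult = namedtuple("CheckResult", "check_name outcome")  # outcome: "PASSED" or "FAILED"
Waiver = namedtuple("Waiver", "check_name comment")            # manual/algorithmic override

def is_update_ok(results, waivers):
    """Return True if every check passed, or its failure was waived."""
    waived = {w.check_name for w in waivers}
    return all(r.outcome == "PASSED" or r.check_name in waived
               for r in results)

results = [CheckResult("depcheck", "PASSED"),
           CheckResult("upgradepath", "FAILED")]
waivers = [Waiver("upgradepath", "known problem X, being worked on")]
print(is_update_ok(results, waivers))  # True: the only failure is waived
```

The point being that the waiving logic is a separate concern layered on top of the stored results, which is why keeping it out of the results storage itself makes sense to me.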
