Hi, all. Just before I left the Raleigh office after my week for
orientation and getting to know the rest of the QA team, we had a
meeting to try and set some goals for Fedora QA for this year.
'We' is myself, James Laska, and Will Woods. In the spirit of community
that I am supposed to be bringing to the team, I wanted to throw these
topics open to the list and get your feedback on them.
Off the top, we should be honest and open: Red Hat pays Will, James and
now myself to work on Fedora QA full time. In my case, what RH want from
me is quite purely and simply to try and help the community - external
contributors - to improve the quality of Fedora as a project. In Will's
and James's cases, though, things are slightly different. Part of their
value is to produce tools and work on processes that, as well as
contributing to Fedora QA, also contribute to the process that
ultimately improves the quality of Red Hat's other products. In addition
to that, they're real engineers who know how to write stuff, and all
engineers like to come up with ideas and implement them. So, given that,
there are always going to be things that the internal QA guys are
working on that are decided by RH or by themselves. No matter how open
and community-friendly we are, it will never be the case that the work
and goals of those paid by RH are entirely determined by the community.
However, bearing that in mind, it's still very valuable to RH, Fedora,
and the Fedora QA community to hopefully get all your opinions on these
issues, and it can certainly help to set my goals and help everyone
involved in this project think about what they're doing, what they'd
like to do, and how we can all work together towards the ultimate goal
of making Fedora an even better project.
So, to the topics! We set ourselves three simple questions:
1. What does Fedora QA do?
2. What should it do?
3. What should it do first?
We'd like all of your input on these questions - just let us know what
you think. Broadly, we identified several things we - as a project - do:
* Exploratory testing: simply using Fedora, updates-testing, or Rawhide,
and complaining when stuff breaks. We agreed that this is at least as
important as anything else QA does, but sometimes isn't treated as such.
We agreed we should always emphasize that this is important and
valuable, try to help ensure it can be done as effectively as possible
(through things like the Bugzappers project), and try to always
communicate to everyone that simply doing this kind of testing is an
important and valued contribution to the QA project.
* Structured testing: this is the more in-depth testing we do, such as
regular testing of specific functions based on test plans, the use of
automated tools such as Beaker and Nitrate (once they're ready), and
test days. It also covers the case where another group contacts us with
a request to do specific testing on a certain function. A lot of
discussion here covered the Beaker and Nitrate projects; my take on
this, as a new guy, is that they sound like really great tools that will
help us a lot when they're ready.
* Bug zapping: all the great work done by the Bugzappers team, mainly
triaging and following up bugs to ensure they're properly handled from
report through to released update.
* Tooling: this is the work done, particularly by Will, to write the
tools that allow structured testing to take place. We agreed that it
would be good to get more contributions in this area, and that it's
important to publicize the tools we have and will have available so they
can be used to their full potential. This applies to things like Bodhi -
we'd like to make sure that kind of system is more widely used.
We also had discussions on things arising from the above, like the
importance of Rawhide, and meta-tasks like documentation and community
relations, which are important in attracting and enabling people to do
the actual tasks.
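The lifecycle the Bugzappers shepherd a report through - from filing to
released update - can be pictured as a small state machine. The sketch
below is illustrative only: the state names loosely follow Bugzilla's,
but the exact set of states and allowed transitions here are my
assumptions, not Fedora's actual triage policy.

```python
# Illustrative sketch only: a simplified bug lifecycle loosely modeled
# on Bugzilla states. The real triage workflow has more states and
# rules than shown here.

ALLOWED = {
    "NEW": {"ASSIGNED", "CLOSED"},       # triaged, or closed as dup/invalid
    "ASSIGNED": {"MODIFIED", "CLOSED"},  # developer posts a fix
    "MODIFIED": {"ON_QA", "CLOSED"},     # fix lands in an update
    "ON_QA": {"CLOSED", "ASSIGNED"},     # verified, or bounced back
    "CLOSED": set(),
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "NEW"
        self.history = ["NEW"]

    def move(self, new_state):
        """Advance the bug, rejecting transitions the model doesn't allow."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"can't go {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

bug = Bug("sound broken in rawhide")
for s in ("ASSIGNED", "MODIFIED", "ON_QA", "CLOSED"):
    bug.move(s)
print(bug.history)
```

The point of the model is the follow-up work: a triager's job is to keep
each bug moving along one of the allowed paths rather than stalling.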
We also identified some specific goals. See also the Wiki page I created
on these.
1. Increase participation in Rawhide: it provides a huge benefit in
terms of identifying issues and having them fixed quickly and early in
the development cycle. I am going to work on communication and
documentation issues around
that, and Will is going to work on producing a tool which simply tests,
every day, whether you can a) install Rawhide fresh and b) update from
latest stable+updates to Rawhide. This serves two purposes: it lets you
know whether it's worth attempting to install Rawhide on a given day,
and if we track the results over time, it
provides an incentive to the developers to improve the reliability of
Rawhide.
2. Make release testing more accessible: this encompasses many
sub-tasks:
* defining what role QA serves in the release process
* defining what QA can do during the release process
* how can the community get involved?
* who tests what, when, and how?
This is what we're doing at present with the Wiki cleanup: the purpose
of this is to make it easier for people to get involved and know what we
do.
3. Strengthen the QA tools portfolio: aim to have Nitrate available in
prototype form by June, as we believe it will be really useful both in
improving the amount and quality of testing we're able to do, and
providing a fun and easy-to-use system that will get more people
involved with QA. There's also Beaker, which may take longer. This is
what wwoods is mainly working on.
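The daily Rawhide check described under goal 1 could be tracked along
these lines. This is a hypothetical sketch: the `RawhideLog` class, its
methods, and the pass/fail bookkeeping are all invented for
illustration; they are not the actual tool Will is writing, whose design
isn't specified here.

```python
import datetime

# Hypothetical sketch of the daily-Rawhide-check idea: record whether a
# fresh install and a stable->Rawhide upgrade worked each day, then
# summarize reliability over a recent window. Names and interface are
# invented for illustration.

class RawhideLog:
    def __init__(self):
        # date -> (install_ok, upgrade_ok)
        self.results = {}

    def record(self, date, install_ok, upgrade_ok):
        self.results[date] = (install_ok, upgrade_ok)

    def is_worth_trying(self, date):
        """Did both the fresh-install and upgrade checks pass that day?"""
        install_ok, upgrade_ok = self.results.get(date, (False, False))
        return install_ok and upgrade_ok

    def success_rate(self, days=7):
        """Fraction of the last `days` logged days where both checks passed."""
        recent = sorted(self.results)[-days:]
        if not recent:
            return 0.0
        good = sum(1 for d in recent if self.is_worth_trying(d))
        return good / len(recent)

log = RawhideLog()
log.record(datetime.date(2009, 1, 19), True, True)
log.record(datetime.date(2009, 1, 20), True, False)  # upgrade broke that day
log.record(datetime.date(2009, 1, 21), True, True)
print(log.success_rate(7))  # two of three logged days passed both checks
```

Tracking the rate over time is what turns a daily yes/no answer into the
incentive for developers mentioned above.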
So, what's your opinion on the above? Do you have anything to add to the
lists of what QA does, or should be doing? Do you feel there are any
specific problems stopping us from performing at our full potential that
should be solved as soon as possible? Do you have any ideas on making QA
more accessible and getting more people involved? Please let us know.
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org