Tim saw my Beaker talk proposal for Flock and asked me to get involved
earlier than that, since he's been experimenting with Taskbot and
doesn't want to wait until August to discuss things. That sounds
perfectly reasonable to me, so here I am :)
The short version is that I think Beaker can slot fairly cleanly into
Tim's Taskbot vision as the task execution engine, as well as providing
a results repository.
But wait, you say, doesn't Beaker always provision systems from scratch?
Doesn't it only support the arcane task definition syntax we inherited
from RHTS? Good questions, and we do have answers for them :)
= Defining tasks =
The interface to the native test harness (beah) is one we inherited from
RHTS, and it has historically been quite poorly documented. The upcoming
Beaker 0.13 release includes much improved documentation for anyone who
wants to write a native Beaker task:
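To give a flavour of what that looks like (a sketch from memory, not
copied from the new docs): a native task is a directory containing a
Makefile carrying the task metadata, a PURPOSE file describing the test,
and a runtest.sh entry point, typically written against beakerlib:

```shell
#!/bin/bash
# runtest.sh - minimal beakerlib-based Beaker task (illustrative sketch;
# the package check below is just a placeholder test step).
# The harness runs this script and collects the results it reports.
. /usr/share/beakerlib/beakerlib.sh || exit 1

rlJournalStart
    rlPhaseStartSetup
        rlRun "TmpDir=\$(mktemp -d)" 0 "Create a scratch directory"
    rlPhaseEnd
    rlPhaseStartTest
        rlRun "rpm -q bash" 0 "Check an example package is installed"
    rlPhaseEnd
    rlPhaseStartCleanup
        rlRun "rm -rf $TmpDir" 0 "Remove the scratch directory"
    rlPhaseEnd
rlJournalEnd
rlJournalPrintText
```

Each rlRun line both executes a command and reports a pass/fail result
for it, so the journal the harness uploads ends up as a structured record
of the whole run.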
However, above and beyond that, we're working with the autotest
developers to start supporting autotest as a first class environment for
execution of tasks in Beaker, by providing a stable API on the lab
controllers for harnesses to talk to (see
That alternate harness API is also our avenue for bypassing the task
library in the future - we're working with the autotest developers to
ensure that the details of the tests to be executed can be retrieved
directly from git rather than having to be registered as RPMs in the
Beaker task library.
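To illustrate the difference (hypothetical names - this is a sketch of
the idea, not Beaker's actual code or schema): a git-backed task lets the
harness construct a clone command straight from the task definition,
instead of installing a pre-registered RPM from the task library:

```python
# Sketch of resolving a task's source: git-backed tasks vs the RPM task
# library. The dict keys ("git_url", "ref", "rpm_name") are illustrative,
# not Beaker's actual schema.

def task_fetch_command(task):
    """Return the command a harness would run to obtain the task's files."""
    if "git_url" in task:
        # Git-backed task: clone the requested ref directly, no packaging
        # or registration step required.
        return ["git", "clone", "--branch", task.get("ref", "master"),
                task["git_url"]]
    # Task-library task: install the RPM that was registered with Beaker.
    return ["yum", "-y", "install", task["rpm_name"]]
```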
Even once we get the autotest support on par with the existing beah
support, the task library will likely still be useful for solving
problems that can otherwise be painful (like Kerberos and AMQP testing -
we have some Beaker provided tasks in development for spinning up a KDC
or a qpid message broker to test against as part of a multi-host test).
= Provisioning systems =
Beaker *does* currently always provision systems from scratch - it's the
only way to support full installer testing as well as kernel integration
testing on a wide range of hardware. However, we're also aware that this
*doesn't make sense* for a whole lot of testing that could just as
easily be run in a VM.
Our first step down the road to fixing this has been to support dynamic
provisioning of virtual machines for task execution. The initial attempt
relied on oVirt, and this turned out to be a really bad fit - oVirt
isn't designed for fast provisioning of ephemeral instances; it's built
for stable provisioning of long-running core services. We also explored
oVirt's support for dynamic image-based provisioning, and the short
answer is "not supported".
However, the rest of the dynamic provisioning support is still in place,
so our current plans involve tweaking that system to use OpenStack
instead (although, if we can, we'll probably use the EC2 compatible APIs
for broader compatibility). OpenStack already includes a *lot* of the
stuff we want (fast image based provisioning, a cross platform
post-install configuration system, etc) so it makes sense to us to try
to re-use it rather than writing our own (the development resources
being poured into OpenStack by prospective vendors don't hurt, either).
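The basic cycle we're after looks something like this (a minimal sketch
only - `client` stands in for whatever EC2-compatible binding ends up
being used, and all the method names here are illustrative, not a real
API):

```python
import time

def run_task_on_ephemeral_vm(client, image_id, task_cmd,
                             poll_interval=1, timeout=300):
    """Provision a throwaway VM from an image, run one task on it, then
    tear it down again regardless of the outcome."""
    instance = client.launch(image_id)  # fast image-based provisioning
    try:
        deadline = time.time() + timeout
        # Wait for the instance to come up before dispatching the task.
        while client.state(instance) != "running":
            if time.time() > deadline:
                raise RuntimeError("instance never reached 'running'")
            time.sleep(poll_interval)
        return client.execute(instance, task_cmd)
    finally:
        # Ephemeral by design: the instance never outlives its task.
        client.terminate(instance)
```

The try/finally is the important part: unlike Beaker's bare-metal
systems, these instances are disposable, so cleanup happens even when
the task itself blows up.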
= What's in it for Fedora QA? =
You don't have to reinvent solutions to problems that Beaker already solved.
You also get a task execution engine with several full time engineers
assigned to it (in addition to whatever resources others can spare),
that was specifically built for the task of testing an integrated Linux
distribution rather closely related to Fedora ;)
= What's in it for Beaker? =
We get Fedora QA's assistance in solving the problems that we haven't
solved yet either (like fast image based provisioning).
We also get a *public* instance we can reference from our docs, rather
than having to be somewhat vague and hand-wavy about how all this works,
because all the other current instances are behind various corporate
firewalls.
Red Hat Infrastructure Engineering & Development, Brisbane
Test Automation Team Lead
Beaker Development Lead (http://beaker-project.org/)
PulpDist Development Lead (http://pulpdist.readthedocs.org)