My name is Ilgiz Islamgulov and I'm a student at Ufa State Aviation
Technical University in Ufa, Russia.
I'll be working on Blocker Tracking Application [blockerbugs]. The
Fedora Blocker Bug Tracking App is a web application designed
to track release-blocking bugs and related updates in Fedora releases
currently under development. While the app itself already exists,
there are many features which I would like to implement.
I'll be posting updates on my [blog], and my progress will be tracked on
[reviewboard]; you may reach me directly on #fedora-qa as ilgiz or
at this email address. Feel free to contact me!
I noticed during a yum upgrade today that gcc-4.8.1 hit stable, but a
rebuild of llvm/clang wasn't available yet so my upgrade failed. I went
and looked at the bodhi update for 4.8.1: it passed with +3 karma and
got auto-pushed. I noticed that the depcheck test passed, so I clicked
on it and saw tons of errors from the yum depsolver. It really looks
like this test should not have passed given how many issues were present.
Does depcheck just look at the dependencies for the update in question?
i.e., in this update gcc-4.8.1's dependencies are all present, so things
pass? If that's the case, I can see why autoqa says it was OK, but I'd
argue that it still should have failed since so much other stuff depends
on the older version of gcc, and that other stuff would break as a
result of the update request.
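To illustrate the distinction I'm guessing at, here's a toy sketch. None of this is actual depcheck or yum code; the package names, the tiny dependency graph, and both check functions are made up purely to show how a "check only the update's own deps" test can pass while reverse dependencies break:

```python
# Hypothetical model: a repo maps package names (with versions) to the
# exact versions of other packages they require.
repo = {
    "gcc-4.7": {},                 # old gcc, no deps of its own
    "llvm-3.2": {"gcc": "4.7"},    # llvm was built against gcc 4.7
}

update = {"name": "gcc", "version": "4.8.1", "requires": {}}

def forward_check(update, repo):
    """Pass if every dependency of the update itself is satisfiable
    (this is what I suspect depcheck effectively did here)."""
    available = {n.rsplit("-", 1)[0]: n.rsplit("-", 1)[1] for n in repo}
    return all(available.get(dep) == ver
               for dep, ver in update["requires"].items())

def reverse_check(update, repo):
    """Fail if any existing package still requires the version
    being replaced by the update."""
    for pkg, requires in repo.items():
        needed = requires.get(update["name"])
        if needed is not None and needed != update["version"]:
            return False
    return True

print(forward_check(update, repo))  # True  - gcc-4.8.1's own deps are fine
print(reverse_check(update, repo))  # False - llvm still needs gcc 4.7
```

So an update can be self-consistent while still breaking everything built against the old version, which matches what I saw during the upgrade.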
Anyway, I just wanted to bring it to your attention and see what you think.
I've been working to update the blockerbugs docs over the last day or
so and just pushed my initial work.
You can see the rendered output at:
The updated docs are in the feature/documentation branch of the git
repository. I'll merge it into develop and master before too long.
On a side note, we need to find a better place for the rendered docs to
live. My fedorapeople space works for now, but it doesn't seem like
the most natural choice.
Tim saw my Beaker talk proposal for Flock and asked me to get involved
earlier than that, since he's been experimenting with Taskbot and
doesn't want to wait until August to discuss things. That sounds
perfectly reasonable to me, so here I am :)
The short version is that I think Beaker can slot fairly cleanly into
Tim's Taskbot vision as the task execution engine, as well as providing
a results repository.
But wait, you say, doesn't Beaker always provision systems from scratch?
Doesn't it only support the arcane task definition syntax we inherited
from RHTS? Good questions, and we do have answers for them :)
= Defining tasks =
The interface to the native test harness (beah) is one we inherited from
RHTS, and it has historically been quite poorly documented. The upcoming
Beaker 0.13 release includes much-improved documentation for anyone who
wants to write a native Beaker task:
However, above and beyond that, we're working with the autotest
developers to start supporting autotest as a first class environment for
execution of tasks in Beaker, by providing a stable API on the lab
controllers for harnesses to talk to (see
That alternate harness API is also our avenue for bypassing the task
library in the future - we're working with the autotest developers to
ensure that the details of the tests to be executed can be retrieved
directly from git rather than having to be registered as RPMs in the
Beaker task library.
Even once we get the autotest support on par with the existing beah
support, the task library will likely still be useful for solving
problems that can otherwise be painful (like Kerberos and AMQP testing -
we have some Beaker provided tasks in development for spinning up a KDC
or a qpid message broker to test against as part of a multi-host test).
= Provisioning systems =
Beaker *does* currently always provision systems from scratch - it's the
only way to support full installer testing as well as kernel integration
testing on a wide range of hardware. However, we're also aware that this
*doesn't make sense* for a whole lot of testing that could just as
easily be run in a VM.
Our first step down the road to fixing this has been to support dynamic
provisioning of virtual machines for task execution. The initial attempt
relied on oVirt, and this turned out to be a really bad fit - oVirt
isn't designed for fast provisioning of ephemeral instances, it's built
for stable provisioning of long-running core services. We also explored
oVirt's support for dynamic image based provisioning, and the short
answer was "not supported".
However, the rest of the dynamic provisioning support is still in place,
so our current plans involve tweaking that system to use OpenStack
instead (although, if we can, we'll probably use the EC2 compatible APIs
for broader compatibility). OpenStack already includes a *lot* of the
stuff we want (fast image based provisioning, a cross platform
post-install configuration system, etc) so it makes sense to us to try
to re-use it rather than writing our own (the development resources
being poured into OpenStack by prospective vendors don't hurt, either).
= What's in it for Fedora QA? =
You don't have to reinvent solutions to problems that Beaker already solved.
You also get a task execution engine with several full time engineers
assigned to it (in addition to whatever resources others can spare),
that was specifically built for the task of testing an integrated Linux
distribution rather closely related to Fedora ;)
= What's in it for Beaker? =
We get Fedora QA's assistance in solving the problems that we haven't
solved yet either (like fast image based provisioning).
We also get a *public* instance we can reference from our docs rather
than having to be somewhat vague and hand-wavey about how all this works
because all the other current instances are behind various corporate firewalls.
Red Hat Infrastructure Engineering & Development, Brisbane
Test Automation Team Lead
Beaker Development Lead (http://beaker-project.org/)
PulpDist Development Lead (http://pulpdist.readthedocs.org)