I've been wanting to try out Sikuli (http://www.sikuli.org) for a while
now and finally got around to making it build and work on Fedora.
The basic concept is to create GUI test cases based on images instead of
absolute (x,y) coordinates (like Selenium), scripted key sequences
(like openQA) or an accessibility layer (like Dogtail).
I've put together a tarball with source and build scripts if anyone
else wants to try it out:
The distribution method is a really dirty hack but I've been able to
build on both F15 and F16 after installing a huge number of -devel
dependencies and building all of Sikuli's direct dependencies from
source. Xpresser (https://wiki.ubuntu.com/Xpresser) might be another
option - it's a re-implementation of the concepts from Sikuli. It seems
to be missing some of Sikuli's features but would likely be quite a bit
easier to build and integrate.
One of the things I have in mind is to attempt automating at least some
of the installation test cases because a lot of them pass most of the
time and they can be a bit mind-numbing to do over and over again. I'm
of the opinion that our human resources could be put to better use than
monkey-button-pushing their way through simple test cases.
A non-trivial amount of work would be needed before we could fully
utilize this (fixing the code so that it is packageable, better
integration with VNC, etc.) but I'm wondering if it could be a good
candidate for automating some of the installation test cases.
I've written two test cases for Sikuli thus far: a simple DVD graphical
install and graphical firstboot. I had to work around some timing issues
due to occasional lag in the VM's VNC interface but they seem to be
working well with F17 Alpha RC2 for now.
F17 DVD Basic Graphical Install:
F17 Graphical Firstboot:
- NOTE: This only works some of the time due to rhbz#750527 which was
closed with no bug comment by the developers. I'm going to bug them
about this but will likely update the test case to work with the
unpredictable order of the firstboot screens.
To run those test cases, extract them somewhere and point the Sikuli
IDE at the extracted directories.
I'm going to be running my basic graphical install test case for
different F17 releases to get an initial feel for how fragile the test
cases are and how much effort would be required to maintain these kinds
of test cases over a release.
Anyhow, long email over. Any thoughts on whether this might be worth
pursuing?
I guess that the autoqa-results list is still causing some problems.
I'm honestly not sure what the problem is with our current usage of the
list - I didn't really get a clear answer to that, just that they want
to understand more about our usage in case there is a better solution
than what we're currently using.
The old archives are going to be deleted but I've downloaded a
compressed copy of the old archives in case we ever decide to parse
them later.
I've been asked for the following information:
- What do we do with the list?
- What do we plan to do with the list?
With the recent changes that Kamil made, the only thing that I can
think of is "notification of failed or questionable test runs". Is
there anything that I'm missing?
IMPORTANT: The results of this thread will affect 3rd parties developing
on top of autotest in the near future.
Due to the packaging requirements that we've been working on during the
past several months, I decided to finally tackle the challenge of doing
some major cleanup on the autotest API structure. I have a patchset ready.
It's a massive patchset, most of it auto-generated, plus a bunch of
fixes that I've been working on over the past week. With this, everyone
developing code on top of autotest will have to make changes to their
code, so consider this a heads up.
I've tested the patches over the past few weeks, and I'm at the point
where I see no more (at least no obvious) problems with them. Bottom
line, the patches change the autotest structure in the following ways:
-> No more autotest/client/bin dir
-> API: autotest_lib -> autotest
-> API: autotest.client.common_lib -> autotest.client.shared
As an example, here's how the imports in client/bin/job.py look today:
from autotest_lib.client.bin import client_logging_config
from autotest_lib.client.bin import utils, parallel, kernel, xen
from autotest_lib.client.bin import profilers, boottool, harness
from autotest_lib.client.bin import config, sysinfo, test, local_host
from autotest_lib.client.bin import partition as partition_lib
from autotest_lib.client.common_lib import base_job
from autotest_lib.client.common_lib import error, barrier, log, logging_manager
from autotest_lib.client.common_lib import base_packages, packages
from autotest_lib.client.common_lib import global_config
from autotest_lib.client.tools import html_report
And how they're going to look after the patchset is applied:
from autotest.client import client_logging_config
from autotest.client import utils, parallel, kernel, xen
from autotest.client import profilers, boottool, harness
from autotest.client import config, sysinfo, test, local_host
from autotest.client import partition as partition_lib
from autotest.client.shared import base_job
from autotest.client.shared import error, barrier, log, logging_manager
from autotest.client.shared import base_packages, packages
from autotest.client.shared import global_config
from autotest.client.tools import html_report
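Since the renames are mechanical, dependent projects could migrate with a simple rewrite pass. A sketch (my own helper, not an official migration tool shipped with the patchset):

```python
import re

# Order matters: rewrite the most specific paths first so that the
# generic autotest_lib -> autotest rename doesn't clobber them.
RENAMES = [
    (re.compile(r'\bautotest_lib\.client\.common_lib\b'), 'autotest.client.shared'),
    (re.compile(r'\bautotest_lib\.client\.bin\b'), 'autotest.client'),
    (re.compile(r'\bautotest_lib\b'), 'autotest'),
]

def migrate_source(text):
    """Rewrite old-style autotest import paths to the new layout."""
    for pattern, replacement in RENAMES:
        text = pattern.sub(replacement, text)
    return text
```

Running something like this over each .py file in a dependent project should cover the bulk of the churn; anything constructing module paths as strings would still need a manual pass.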
I know this will cause some inconvenience, but I believe it'll be well
worth the effort. Please feel free to review the changes and give us
your feedback.
Because ResultsDB instances work fine, I decided to lower the burden on
Fedora Mailman and our mailboxes and disabled the "ok" test result
notifications coming from the autoqa production server to the
autoqa-results ML. This is now set in autoqa.conf on both production and
staging:
result_email_blacklist = ^.*: (PASSED|FAILED|INFO);.*$
That means autoqa-results will receive only CRASHED, ABORTED and
NEEDS_INSPECTION emails. ResultsDB doesn't support any notification
system yet, so these notifications are still useful. All the rest can be
found in the ResultsDB instances.
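To see how the blacklist pattern behaves, here is a quick sketch that applies it to some made-up subject lines (the subjects are illustrative, not real autoqa output):

```python
import re

# The blacklist pattern from autoqa.conf: any result line whose status
# field is PASSED, FAILED or INFO is filtered out before mailing.
RESULT_EMAIL_BLACKLIST = re.compile(r'^.*: (PASSED|FAILED|INFO);.*$')

def should_email(subject):
    """Return True if a result notification should still be mailed."""
    return RESULT_EMAIL_BLACKLIST.match(subject) is None
```

Note that per this pattern FAILED results are also suppressed; only CRASHED, ABORTED and NEEDS_INSPECTION fall through to the list.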
#418: rpmguard: IndexError: list index out of range
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: minor | Milestone: Hot issues
Component: tests | Keywords:
Blocked By: | Blocking:
N: Comparing openchange-devel-0.11-4.fc17 and openchange-devel-1.0-1.fc17
(archs: x86_64) ...
W: requirement-added libmapiserver.so.0()(64bit)
W: requirement-added openchange-server = 1.0-1.fc17
Traceback (most recent call last):
File "/usr/share/autotest/tests/rpmguard/rpmguard", line 371, in
File "/usr/share/autotest/tests/rpmguard/rpmguard", line 112, in main
printPRCOchanges(output, 'CONFLICTS', 'conflict')
File "/usr/share/autotest/tests/rpmguard/rpmguard", line 317, in
rem_names = [line.split() for line in removed]
IndexError: list index out of range
Ticket URL: <https://fedorahosted.org/autoqa/ticket/418>
Automated QA project
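Judging from the traceback, the failing line is most likely indexing into the result of line.split() on input that contains a blank line (the quoted source line looks truncated in the ticket email, so this is a guess). A defensive sketch of that kind of parsing, not the actual rpmguard code:

```python
def first_fields(lines):
    """Extract the first whitespace-separated field from each line,
    skipping blank lines instead of raising IndexError.

    A plain [line.split()[0] for line in lines] blows up on empty
    or whitespace-only lines, which would match the traceback above.
    """
    names = []
    for line in lines:
        fields = line.split()
        if fields:  # guard against blank/whitespace-only lines
            names.append(fields[0])
    return names
```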
This is a highly-relevant discussion from our perspective. Linking here.
----- Forwarded Message -----
From: "Stephen Gallagher" <sgallagh(a)redhat.com>
Sent: Monday, March 26, 2012 9:53:09 PM
Subject: Dependencies on Bodhi Updates
As requested during the FESCo meeting, I am going to try to summarize
some of the issues inherent in the way that Bodhi updates currently
work.
First, I'll try to explain the goals and constraints:
1) The stable 'fedora-updates' yum repository should NEVER exist in a
state where any package has dependency issues. In other words, it should
never be possible for an update to be pushed to stable that prevents any
other package from updating cleanly.
2) Updates must be possible and (ideally) timely.
3) Packages pushed to the stable 'fedora-updates' yum repository should
(ideally) not introduce regressions in packages that depend on them.
4) New features in "superpackages" such as Firefox, GNOME or FreeIPA
that have many and varied dependencies may require new features in
packages they depend on in order to enhance or fix the superpackage.
In the trivial example, a package (let's say libtalloc) needs to make an
update to fix a bug. This package requires nothing new from its
dependencies and is a self-contained fix. For this example, it is simple
to just build libtalloc in koji and then create a Bodhi update, pass
it through "updates-testing", get karma and *poof* off to stable.
Now let's extend the example. Suppose that we have another package
libtevent that has libtalloc as a dependency. Libtevent's maintainer
wants to add a new feature to libtevent, but the patch from upstream
depends on the bug in libtalloc having been fixed in order for the new
feature to work properly. In this situation, the maintainer of libtevent
would build libtevent with an explicit Requires: libtalloc >= <version>
in the specfile (possibly pulling libtalloc into the BuildRoot overrides
if necessary) and then test it locally to see that it works.
So now we have our first updates dependency issue. If we submit
libtevent as its own update, it is possible that it will achieve its
karma requirement before libtalloc does. It would then be pushed to the
"fedora-updates" repository and then introduce a dependency issue in the
stable repository (because users trying to update libtevent would be
unable to update libtalloc without enabling the updates-testing
repository).
The current recommended approach is to bundle the two updates into a
single one carrying multiple packages. The first problem with this is
that you must have commit privilege on all packages that you are
bundling into an update. If you do not, then you need to track down a
provenpackager to do it for you.
Now let's make the problem even more fun. Consider that the update to
libtevent might be coming in because it is necessary for a new feature
in libldb, which is in turn providing new functionality necessary for
SSSD. So now we have four packages all sitting in the same update. The
problem with this is that the tendency will be to only test the most
user-visible package(s) in the set. In this particular case, that might
be SSSD. So people would likely test SSSD and, if nothing went wrong,
consider the entire update stable.
But wait! SSSD isn't the only package that depends on libldb, libtevent
and libtalloc. So too does the samba package. Suppose that the bugfix in
libtalloc, after resolving the original issue, results in exposing
another more serious bug in samba? Now we need to pull a samba update
into this same update series.
A contrived example, you say? That would never happen, bugfixes aren't
likely to do that. Well, for one example:
https://admin.fedoraproject.org/updates/FEDORA-2011-11845 In this
particular example, we knew up-front that it was going to necessitate a
rebuild of several dependent packages and we coordinated a single
release to address them. So in this case, the proper approach was to
bundle them together in a single update. This worked because we
specifically knew that the libtevent change was going to break other
packages.
But what about when we don't know that? Let's take another example:
In this case, there was a security bug reported against Firefox. Such
things are serious, and acted on quickly. However, the bug was actually
fixed in the nss package, and Firefox, Xulrunner and friends were
rebuilt against that nss package. The problem was this: the fix made to
the nss package introduced regressions in every other package that
depended on it. However, because the default install of Firefox
contained no issues, it rapidly received the necessary karma points and
the whole update was pushed to stable. It then broke nearly every
application in Fedora that relied on cryptography.
The problem here was sociological, not technological. The only package
that received testing was Firefox. It's hard to say without evidence
whether the problem would have been averted by having nss go through its
own update, but I strongly suspect that what we would have seen was
greater testing on actual nss features for that specific update.
Of course, we now have the same potential for an issue that I described
above: if we had separate updates for nss and for Firefox, it is highly
likely that Firefox would be pushed to stable via karma points rapidly,
whereas nss (which requires much more careful testing) might be left
behind in updates-testing.
So I really see two options for improving these situations:
1) https://fedorahosted.org/bodhi/ticket/663 I opened this ticket two
months ago (to silence). The idea would be to add the ability for bodhi
updates to mark other updates as a dependency, so that in the example
above, Firefox could have been marked as ready for stable, but not
pushed until the nss update was also marked as ready for stable. This to
me seems like the best long-term solution. I'd also like to mention that
Ubuntu's Launchpad system has this capability.
2) We could continue on the "single update for multiple packages"
approach, but revamp the karma system so that each SRPM gets its own
karma, rather than the update as a whole. Then, the whole update would
not be pushed via autokarma until all of the dependent packages had
sufficient karma (or the owner of the update could push them after the
stable wait period, of course).
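The gating rule in option 1 is simple to state precisely. A toy sketch of the idea (update names and the dict shape are mine, purely illustrative — Bodhi has no such API today):

```python
def can_push(update, updates):
    """An update marked ready for stable is only pushable once every
    update it depends on is also ready or already stable."""
    if not update['ready']:
        return False
    return all(updates[dep]['ready'] or updates[dep]['stable']
               for dep in update['depends_on'])

# The Firefox/nss scenario from above: Firefox is ready, but must wait
# until the nss update it depends on is also marked ready.
updates = {
    'firefox': {'ready': True, 'stable': False, 'depends_on': ['nss']},
    'nss': {'ready': False, 'stable': False, 'depends_on': []},
}
```

With this rule, Firefox's karma can accumulate as fast as it likes without dragging an undertested nss into stable alongside it.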