(mostly) Out of Office until July 22
by Will Woods
Hey folks,
I'm going to be out of the office (going to NYC for HOPE[1]) until July
22. I know we have a lot of work in progress:
* helloworld test / autoqa-args-in-a-dict / multihook tests
* autoqa labels / control.autoqa
* AutoQATest base class
* depcheck improvements
* automation installation work
* new test ideas coming in, watcher updates, etc..
So! I'll try to check in as much as I can, and I'll be happy to answer
questions anyone sends by email - just know it might take me a day or
two to respond[2] - and if I can get time I'll still review patches sent
to the list. But if something REALLY needs to get reviewed/merged to
autoqa master immediately, well: if Kamil and Josef both think it's a
good idea, go for it.
Okay. See you next week!
-w
[1] http://thenexthope.org/
[2] "How's that different from any other day, Will?" Hm. Good point.
[PATCH] add helloworld test
by Vojtěch Aschenbrenner
Hello,
I've created a new test for AutoQA named 'helloworld' (ticket #195). It's
a simple test that only prints back the parameters passed in from the
control file. Its main purpose is to show how to write new tests for
AutoQA. Patch included.
---
From 78a996deb74201984a42d3d3783c1a47e213396f Mon Sep 17 00:00:00 2001
From: vaschenb <vaschenb(a)redhat.com>
Date: Tue, 13 Jul 2010 15:37:26 +0200
Subject: [PATCH] 195: Add helloworld test for AutoQA
---
tests/helloworld/control | 18 +++++++++++++++++
tests/helloworld/helloworld.py | 41 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 59 insertions(+), 0 deletions(-)
create mode 100644 tests/helloworld/control
create mode 100644 tests/helloworld/helloworld.py
diff --git a/tests/helloworld/control b/tests/helloworld/control
new file mode 100644
index 0000000..af19252
--- /dev/null
+++ b/tests/helloworld/control
@@ -0,0 +1,18 @@
+# vim: set syntax=python
+TIME="SHORT"
+AUTHOR = "Vojtech Aschenbrenner <vaschenb(a)redhat.com>"
+DOC = """
+This test runs helloworld. It only prints the params it receives.
+Its main purpose is to show how to write tests.
+"""
+NAME = 'helloworld'
+TEST_TYPE = 'CLIENT' # SERVER can be used for tests that need multiple machines
+TEST_CLASS = 'General'
+TEST_CATEGORY = 'Functional'
+
+# post-koji-build tests can expect the following variables from autoqa:
+# envr: package NVR (required, epoch can be skipped)
+# name: package name
+# kojitag: koji tag applied to this package
+
+job.run_test('helloworld', name=name, envr=envr, kojitag=kojitag, config=autoqa_conf)
diff --git a/tests/helloworld/helloworld.py b/tests/helloworld/helloworld.py
new file mode 100644
index 0000000..6d1bdf8
--- /dev/null
+++ b/tests/helloworld/helloworld.py
@@ -0,0 +1,41 @@
+#
+# Copyright 2010, Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Author: Vojtech Aschenbrenner <vaschenb(a)redhat.com>
+
+from autotest_lib.client.bin import test
+from autotest_lib.client.bin.test_config import config_loader
+import autoqa.util
+import os
+
+class helloworld(test.test):
+ version = 1 # increment if setup() changes
+
+ def initialize(self, envr, config):
+ self.config = config_loader(config, self.tmpdir)
+ self.autotest_url = autoqa.util.make_autotest_url(self.config)
+
+ def setup(self):
+ pass
+
+ def run_once(self, **kwargs):
+ self.arguments = kwargs
+ print "===Printing passed params==="
+ for self.name, self.arg in self.arguments.iteritems():
+ print "%s = %r" % (self.name, self.arg)
+ print "%s = %r" % ("autotest_url", self.autotest_url)
+ print "===End of passed params==="
--
1.7.1.1
[PATCH] Create a common base class for autoqa tests
by Josef Skladanka
Hello gang,
while I was playing around with ResultsDB, I ran into some issues that
bothered me enough that I wrote a fix :)
1) Common base class for the tests
-----------------------------------
[patch 0001 - lib/python/test.py]
While rewriting the tests so they store their results in ResultsDB, I
found myself doing repetitive work - I needed to add the same code to
__all__ the test files, a boring job I do not want to do ever again :)
Fortunately this can be solved by creating a common parent class (let's
call it BaseTest) for all the tests. This class will then implement all
the repetitive code (e.g. sending the email, loading config...), and
only local "specialities" need to be handled in the test.
The main thought behind BaseTest is that if we set up a few common
variables (result, summary, highlights, outputs) as instance variables,
we can then use the postprocess function (which is called after
run_once ends successfully [note the 'successfully' for later]).
[note]
This is also a pre-step to switching to ResultsDB, as all the changes
required to add basic resultsdb storage are about 7 lines in the BaseTest.
[/note]
[patch 0002]
To keep email working in the current 'send email to the results list'
model, the postprocess function implements the email sending: summary is
the subject, and highlights + outputs form the body. The self.mail_to list
is filled in by the tests which have the "inform package owners"
functionality.
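To make the BaseTest idea concrete, here is a minimal sketch in modern Python syntax. The attribute names (result, summary, highlights, outputs, mail_to) follow the email; the autotest base class and the actual mailer are stubbed out, so this is an illustration of the design, not the real autoqa code:

```python
# Minimal sketch of a common BaseTest parent class. The shared state and
# postprocess() live here exactly once; each test only fills them in.

class BaseTest(object):
    """Common parent for autoqa tests: shared state + postprocess."""

    def initialize(self, **kwargs):
        # Common variables every test can fill in during run_once().
        self.result = None        # e.g. 'PASSED' / 'FAILED'
        self.summary = ''         # one-line summary (email subject)
        self.highlights = []      # important findings (email body)
        self.outputs = []         # raw output lines (email body)
        self.mail_to = []         # filled by tests that inform pkg owners

    def postprocess(self):
        # Called after run_once(); implements the repetitive work
        # (here: composing the result email) in one place.
        subject = self.summary or '%s: %s' % (type(self).__name__, self.result)
        body = '\n'.join(self.highlights + self.outputs)
        self.send_mail(subject, body)

    def send_mail(self, subject, body):
        # Stub: the real class would mail the results list + self.mail_to.
        print('Subject: %s' % subject)
        print(body)

class helloworld(BaseTest):
    # A test now only handles its local "specialities".
    def run_once(self, **kwargs):
        self.result = 'PASSED'
        self.summary = 'helloworld PASSED'
        self.outputs = ['%s = %r' % kv for kv in sorted(kwargs.items())]
```

With this in place, switching the storage backend (e.g. to ResultsDB) touches only postprocess() in the base class.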
2) ExceptionCatcher decorator
-----------------------------
[patch 0001 - lib/python/decorators.py]
What the heck is this?!? Well, you remember that the 'postprocess'
function is called only if run_once ends well (i.e. no exception is
thrown from run_once). This, of course, is absolutely not OK if one
wants to have some common "out" point - e.g. calling the postprocess
function.
This decorator watches over the decorated function, and if an
exception is raised, the decorator stores it, calls the
'exception_happened' function (which is given as a parameter to the
decorator, see the docstring for more detail), and then re-raises the
exception.
This practically gives us the opportunity to call the postprocess
method. In fact, I call the "FOOBAR_failed" method, which potentially sets
the result & summary (if not already set) and then calls postprocess.
Using the decorator also has a small drawback - **kwargs needs to be set
as the last parameter of the decorated function. This is because autotest
does some nasty magic with introspection to find out which parameters
the function (initialize, run_once) has, and that fails on the decorated
function, so it just blindly uses all the parameters specified in the
control file.
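A sketch of the decorator pattern described above: on an exception, call a named handler method on the instance and then re-raise so autotest still sees the failure. The names (ExceptionCatcher, run_once_failed) follow the email's description, but this is an illustration, not the actual lib/python/decorators.py:

```python
# Sketch of an ExceptionCatcher decorator: gives the test a common
# "out" point (the <name>_failed handler) even when run_once blows up.

import functools

def ExceptionCatcher(handler_name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                return func(self, *args, **kwargs)
            except Exception as exc:
                # Call the handler first, then re-raise so the failure
                # still propagates to the caller (autotest).
                handler = getattr(self, handler_name, None)
                if handler is not None:
                    handler(exc)
                raise
        return wrapper
    return decorator

class DemoTest(object):
    def __init__(self):
        self.result = None

    def run_once_failed(self, exc):
        # Would set result/summary if unset, then call postprocess.
        if self.result is None:
            self.result = 'FAILED: %s' % exc

    @ExceptionCatcher('run_once_failed')
    def run_once(self, **kwargs):   # **kwargs last, as the email explains
        raise RuntimeError('boom')
```

The handler runs, the result is recorded, and the original exception still reaches the caller.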
3) Tests & templates rewritten
------------------------------
[both patches]
I also rewrote the tests & templates so they implement this
functionality. I did not have time to test it yet, so it's likely
there are some typos and such, but I think it's important
to show you how the new style of test writing will work once this
patch is pushed.
Please comment, ask questions, share ideas - I believe this is an
important issue and would like all of you to be happy with it.
joza
Re: [PATCH] add autotest labels support
by Kamil Paral
Thanks for your remarks, they were really valuable. Comments below.
----- "Will Woods" <wwoods(a)redhat.com> wrote:
> On Mon, 2010-07-12 at 09:18 -0400, Kamil Paral wrote:
>
> I like what this patch is doing overall; a couple comments below.
>
> > +def get_aq_vars(controlfile, extradata):
> > + '''Extract AutoQA specific test variables from control file
> and
> > + return them in a dictionary. All these variables start with
> prefix 'aq_'.
> > + Arguments:
> > + * controlfile - control file
> > + * extradata - dictionary with extra variables received from
> hook options
> > + Returns: dictionary with AutoQA specific test variables
> > + '''
> > + lines = [line for line in open(controlfile).readlines() if
> line.startswith('aq_')]
> > + vars = extradata.copy()
> > + for line in lines:
> > + try:
> > + exec line in vars
>
> Interesting - rather than actually interpreting the entire control
> file,
> you're executing *only* those lines that start with 'aq_'.
>
> Which means:
> 1) that variable needs to be *literal* - this wouldn't work:
> release_ver = figure_out_release_ver()
> aq_labels = [release_ver]
> Nor this:
> if rawhide:
> aq_labels=['rawhide']
> else:
> aq_labels=['f13']
Ah, you got me, I didn't imagine that. I wanted to evaluate the whole
control file, but that would mean I'd also run the "job.run_test(...)"
line, which I don't want. So I just parsed out the aq_* variables.
Alright, possible solutions:
1. Make a separate file for the AutoQA variables, like control.autoqa. We
can then easily evaluate the whole file. But it's another file for
every test... I don't like many files :)
2. We can find the "job.run_test(...)" line in the control file, cut it
out/comment it out, and evaluate the rest of the file. That's easy to do;
on the other hand, more complex stuff may appear in control files in
the future and some problems may arise.
Which approach is better? Personally I would go with the second one. We
can always redesign it in the future. Or do you have a better idea?
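Option 2 above can be sketched in a few lines of modern Python: comment out the job.run_test(...) call, exec the rest of the control file, and keep only the aq_* variables. This is an illustration of the idea, not the actual autoqa implementation (the helper name and control-file content here are made up):

```python
# Sketch of "comment out job.run_test, evaluate the rest": unlike the
# aq_-lines-only approach, non-literal values now work.

def get_aq_vars_v2(control_text, extradata):
    lines = []
    for line in control_text.splitlines():
        if line.lstrip().startswith('job.run_test('):
            line = '# ' + line          # cut out the test invocation
        lines.append(line)
    namespace = dict(extradata)
    exec('\n'.join(lines), namespace)   # run the whole (edited) file
    # keep only the aq_* keys
    return dict((k, v) for k, v in namespace.items()
                if k.startswith('aq_'))

control = '''\
release = 'fc13'
aq_labels = [release]               # non-literal value, now supported
job.run_test('helloworld')
'''
```

Here `get_aq_vars_v2(control, {})` yields `{'aq_labels': ['fc13']}` without ever invoking the test.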
>
> 2) Any test can execute any python code inside the 'autoqa' script -
> which usually runs as root. This makes me a little nervous, although
> I'm
> not sure what user the control file normally runs as, and we're
> making
> no guarantees about security anyway.
I understand. Currently only the autoqa script and the watcher scripts
run on the autotest server machine; now parts of the tests will run
there too. But there are these two points:
1. I think we will always review any test before it is accepted and
deployed to our production servers. It's very easy to spot mischief
in a control file, so I don't think it's likely that malicious code
could be put in there without us seeing it.
2. I have the feeling we can't do without it. We need to use autotest
labels, right? They must be defined per test, not per hook. We also
need a dynamic definition ("because test XX runs on a YY.fc13 package,
it needs the 'fc13' autotest label" - you can hardly define this
statically). A dynamic definition means some scripting language to
provide the logic; you can't get by with just a plain text file or
something. And that means some Python script in the end.
To be fair, in the aforementioned example we could get by with
a static mapping of ENVR suffixes -> autotest labels. But I expect
more complex needs will arise in the future.
So, my impression is that we can't really do much about it; we need
it. And I don't really see it as a big security problem.
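For reference, the static fallback mentioned above would be a trivial lookup table. The suffixes and label names here are illustrative (based on the fc13 example in the email), not an actual autoqa mapping:

```python
# Sketch of a static ENVR-suffix -> autotest-label mapping, the
# non-dynamic alternative for the simple dist-tag case.

SUFFIX_LABELS = {
    '.fc12': ['fc12'],
    '.fc13': ['fc13'],
}

def labels_for_envr(envr, default=('rawhide',)):
    """Map a package ENVR to autotest labels by its dist suffix."""
    for suffix, labels in SUFFIX_LABELS.items():
        if envr.endswith(suffix):
            return list(labels)
    return list(default)   # no known suffix: assume rawhide
```

This covers "run on the release the package was built for", but nothing more complex, which is why the dynamic per-test definition is still needed.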
>
> So anyway, I'm not sure how I feel about this. At the very least it
> seems confusing to have lines in the control file that get
> *partially*
> evaluated by autoqa, although not as you might expect.
>
> How does autotest evaluate the control files as a whole?
Yes, I believe autotest executes the whole control file.
>
> > + except:
> > + print "Evaluating of AutoQA variables from control file
> failed:"
> > + print " Line: %s" % line.strip()
> > + print " Control file: %s" % controlfile
> > + return {}
> > + # let's leave only aq_* keys present
> > + for key in vars.copy():
>
> Why vars.copy() instead of just 'for key in vars:'?
Because you can't delete keys from a collection while you're traversing
it. There might be some... side effects :)
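Those "side effects" are concrete: mutating a dict while iterating over it raises a RuntimeError (in modern Python, "dictionary changed size during iteration"), which is exactly what iterating over a copy avoids. A quick illustration with made-up control-file variables:

```python
# Why the loop iterates over vars.copy(): deleting keys from a dict
# while iterating over the dict itself raises RuntimeError.

vars = {'aq_labels': ['fc13'], 'TIME': 'SHORT', 'NAME': 'helloworld'}

mutated_during_iteration = False
try:
    for key in vars:                 # iterating the dict itself...
        if not key.startswith('aq_'):
            del vars[key]            # ...while mutating it
except RuntimeError:
    mutated_during_iteration = True  # blows up mid-loop

safe = {'aq_labels': ['fc13'], 'TIME': 'SHORT'}
for key in safe.copy():              # iterate a snapshot instead
    if not key.startswith('aq_'):
        del safe[key]                # safe: only 'aq_*' keys remain
```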
>
> > + if not key.startswith('aq_'):
> > + del vars[key]
> > + return vars
> > +
>
> The rest seems fine - I pushed a commit to clean up some trailing
> whitespace.
I should finally define some vim shortcuts to fix this up for me...
>
> Anyway, if we can figure out how to make the partial evaluation of
> the
> control files a bit saner (or you can help me understand why this is
> actually sane and I'm just confused) I'll be happy to merge this into
> master.
>
> -w
>
> _______________________________________________
> autoqa-devel mailing list
> autoqa-devel(a)lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/autoqa-devel
[PATCH] add autotest labels support
by Kamil Paral
Hello,
This is a patch that adds autotest labels support to AutoQA. I have used
the labels defined at https://fedoraproject.org/wiki/Managing_autotest_labels.
The patch is tested and should work OK.
The only tests that currently need to use this are initscripts and
virt_install, I believe. If I have missed something, let me know.
Initscripts are now forced to run in a virtual machine, and they run on
the Fedora release the package was built for. Rats_install now requires
a machine that is virt_capable. In the future, package_sanity will also
require this patch's functionality.
I have also added two lines of code that disable the machine reboot
before and after each test run (it should be a separate patch, right,
but... :)).
[PATCH] replace repoinfo.conf on upgrade
by Kamil Paral
repoinfo.conf (as opposed to autoqa.conf) is not really a configuration
file where the user could set anything useful. It's more of a
definition file for AutoQA's internal use only. Therefore we move
it from %config(noreplace) behavior to %config behavior. That will
save us some trouble when upgrading autoqa (you often forget to
update repoinfo.conf, and then it throws weird error messages, at
best).
---
Makefile | 2 +-
autoqa.spec | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Makefile b/Makefile
index 86c7105..0931a69 100644
--- a/Makefile
+++ b/Makefile
@@ -24,7 +24,7 @@ install: build
[ -f $(PREFIX)/etc/cron.d/autoqa ] || install -m 0644 autoqa.cron $(PREFIX)/etc/cron.d/autoqa
install -d $(PREFIX)/etc/autoqa
[ -f $(PREFIX)/etc/autoqa/autoqa.conf ] || install -m 0644 autoqa.conf $(PREFIX)/etc/autoqa/
- [ -f $(PREFIX)/etc/autoqa/repoinfo.conf ] || install -m 0644 repoinfo.conf $(PREFIX)/etc/autoqa/
+ install -m 0644 repoinfo.conf $(PREFIX)/etc/autoqa/
install -d $(PREFIX)$(HOOK_DIR)
for h in hooks/*; do cp -a $$h $(PREFIX)$(HOOK_DIR); done
install -d $(PREFIX)$(TEST_DIR)
diff --git a/autoqa.spec b/autoqa.spec
index 540fb00..8e60be8 100644
--- a/autoqa.spec
+++ b/autoqa.spec
@@ -76,7 +76,7 @@ rm -rf $RPM_BUILD_ROOT
%doc README LICENSE TODO
%config(noreplace) %{_sysconfdir}/cron.d/autoqa
%config(noreplace) %{_sysconfdir}/autoqa/autoqa.conf
-%config(noreplace) %{_sysconfdir}/autoqa/repoinfo.conf
+%config %{_sysconfdir}/autoqa/repoinfo.conf
%config(noreplace) %{testdir}/rats_sanity/irb.cfg
%dir %attr(0775,root,autotest) %{_localstatedir}/cache/autoqa
%{_sysconfdir}/autoqa
--
1.7.1.1
Re: Priority discussion / PUATP "sprint"
by Kamil Paral
----- "Will Woods" <wwoods(a)redhat.com> wrote:
>
>
> The remaining milestones and all the other work we're doing is
> definitely all still important. But I think it's time that we sat
> down
> and really tried to finish off the PUATP work - or at least get a
> version 1.0 up and running.
From the last prioritization meeting I got the idea that we wanted
to prioritize ResultsDB first...? I understood that we would just finish
some half-finished parts - testing depcheck, implementing autotest
labels - and then focus on getting ResultsDB up and running. As
for myself, I'm finishing my part now and am prepared to jump
onto the ResultsDB train very soon.
But we can surely discuss other possibilities of prioritization.
The only important thing, I think, is that we move along together,
so it progresses faster. We can have it as a Monday topic or discuss
it here.
Priority discussion / PUATP "sprint"
by Will Woods
Hi, all.
So you probably noticed me mucking about with milestones and tickets in
AutoQA trac. I'm trying to rearrange things to give us a clearer idea of
what needs to be done to get the Package Update Acceptance Test Plan
automation up and running - especially the depcheck test, since we keep
hitting situations it should have prevented.
I've broken it down into a few milestones, the key ones being:
* Multi-hook tests
This is a milestone for work to allow tests that run for multiple hooks
(e.g. rpmguard/rpmlint). I think we might need to have this working in
order to have all the tests we want as part of the Package Update
Acceptance Test Plan (see below).
* Package Update Acceptance Test Plan
This is the main milestone for the PUATP work as a whole. Some further
sub-milestones follow.
* Package Update Acceptance Test Plan - depcheck
This milestone is for the depcheck test, which is possibly the most
complex part of the PUATP.
* Package Update Acceptance Test Plan - package sanity tests
This milestone tracks the progress of the package sanity tests, which
are another important part of the PUATP.
* Packaging, Review, & Deployment
This tracks the status of the packaging and deployment of
AutoQA/autotest in the Fedora infrastructure - we need this to be
finished for the above tests to be truly useful, since we need the test
logs to go somewhere public.
The remaining milestones and all the other work we're doing is
definitely all still important. But I think it's time that we sat down
and really tried to finish off the PUATP work - or at least get a
version 1.0 up and running.
So if possible, I'd like to use part of Monday's Fedora QA meeting to
discuss what other work/tickets might be needed, and try to divide up
the tickets and get us all working together to make the Package Update
Acceptance Test Plan fully automated, as soon as possible.
Does that work for everyone? Does anyone see any obvious missing pieces?
Does anyone want to volunteer to take charge of any of the open tickets?
Please let me know - here or in the meeting on Monday.
Thanks!
-w