AutoQATest
by Josef Skladanka
Hi,
Check out the jskladan branch in the autoqa git (I deleted it and created
it again today, so you may need to check it out from git anew if you
were already tracking it).
I changed all the tests so that they take advantage of the AutoQATest -
some of the changes are a bit rough, so please check your tests ;-)
These changes are untested, but as I'll be biking in the Swiss Alps next
week, I thought it might be a good idea to post the changeset, so you
guys can have a look at it (and maybe test it ;-) ) during my absence.
joza
13 years, 8 months
[PATCH] control.autoqa
by Kamil Paral
Hey, I feel like I've rewritten half of the autoqa harness. Oh, I did!
Damn. :)
So, here's the patch that adds control.autoqa files for our tests. This
means:
1. multihook support
2. autotest labels support
3. the tests themselves choosing when to run instead of hooks forcing them to run
And some other goodies:
4. 'noarch' arch support
5. --autotest_server option fix
6. improved documentation, templates put into doc/ directory
I've tried to put all those things into separate commits (as much as
possible), but I haven't always succeeded, so bear with me. The patch
is too large for the ML, so please do:
$ git log origin/master..origin/control.autoqa
and inspect the commits, or
$ git diff origin/master..origin/control.autoqa
and see all the changes.
Below I attach just the diff of the autoqa harness itself.
I've tried to document the changes in commit logs, but I'm sure there
will be many questions. Shoot.
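To make point 3 concrete, here is a hypothetical control.autoqa body for an initscripts-style test, together with a toy simulation of what the eval_test_vars() function in the patch below does with it. The test name, hook names and label are made up for illustration; only the variable names (hook, archs, labels, execute, autoqa_args) match what the harness actually exports:

```python
# hypothetical control.autoqa body for an 'initscripts'-style test
control_autoqa = """
# only relevant when triggered by these hooks
if hook not in ('post-koji-build', 'post-repo-update'):
    execute = False
archs = ['noarch']    # the test is arch-independent
labels = ['fc13']     # extra autotest label to schedule against
"""

# simulate eval_test_vars(): exec the file with the input variables
# predefined, then read the (possibly modified) values back
test_vars = {'hook': 'post-tree-compose', 'archs': ['x86_64'],
             'labels': [], 'execute': True, 'autoqa_args': {}}
exec(control_autoqa, test_vars)
print(test_vars['execute'])  # False: the test opts out of this hook
```

The real harness additionally deep-copies the input dict and strips keys the control.autoqa file created, as the patch shows.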
======
diff --git a/autoqa b/autoqa
index d0473f8..4144f31 100755
--- a/autoqa
+++ b/autoqa
@@ -28,6 +28,8 @@ import tempfile
import StringIO
import urlgrabber
import socket
+import copy
+import fnmatch
from ConfigParser import *
from subprocess import call
@@ -89,6 +91,30 @@ def prep_controlfile(controlfile, extradata):
os.close(fd)
return name
+def eval_test_vars(test, test_vars):
+ '''Take a test's control.autoqa file, have the test_vars argument as the
+ input data, evaluate the file, and return the modified test_vars.
+ Arguments:
+ * test - name of the test for which control.autoqa should be executed
+ * test_vars - dictionary with test variables that will be used for input
+ Returns: dictionary with test variables that have been evaluated (and
+ probably modified) by test's control.autoqa file
+ '''
+ cfile = open(os.path.join(conf['testdir'], test, 'control.autoqa'))
+ # we need to deepcopy test_vars, so we don't change the input argument at all
+ # (there are nested structures)
+ vars = copy.deepcopy(test_vars)
+ # execute the file
+ exec cfile in vars
+ cfile.close()
+ # leave only those keys that were defined before, delete all other keys
+ # (there could have appeared some new ones we don't need, like builtins
+ # or temporary variables)
+ for key in vars.keys():
+ if not key in test_vars.keys():
+ del vars[key]
+ return vars
+
def maybe_call(cmdlist, dryrun=False, verbose=False):
if dryrun or verbose:
print ' '.join(cmdlist)
@@ -100,22 +126,24 @@ def maybe_call(cmdlist, dryrun=False, verbose=False):
print "Command failed: %s" % str(e)
return None
-# TODO add info about required distro
-def schedule_job(controlfile, required_arch=None, email=None, name=None, dryrun=False):
+def schedule_job(controlfile, required_arch=None, email=None, name=None, dryrun=False, labels=[]):
cmd = ['/usr/bin/atest', 'job', 'create']
if email:
cmd += ['-e', email]
# some hooks/tests may require special machines
- if required_arch:
- cmd += ['-m', '*%s' % required_arch]
- # NOTE this doesn't work without -m, so we need to pick something..
- # currently we require an --arch flag though, so this will never happen
- else:
- # FIXME schedule against distro label with e.g. '-b fc11'
- cmd += ['-m', '*x86_64']
+ # autotest currently doesn't support 'noarch' tests, so execute them on x86_64
+ if not required_arch or required_arch == 'noarch':
+ required_arch = 'x86_64'
+ # for 'i[3-6]86' arch we have 'i386' autotest label, let's convert it
+ if fnmatch.fnmatch(required_arch, 'i?86'):
+ required_arch = 'i386'
+ cmd += ['-m', '*%s' % required_arch]
+ # schedule against additional labels, like distro label ('fc13')
+ if labels:
+ cmd += ['-d', ','.join(labels)]
+ cmd += ['-f', controlfile]
cmd.append(name) # job name
- thiscmd = cmd + ['-f', controlfile]
- return maybe_call(thiscmd, dryrun)
+ return maybe_call(cmd, dryrun)
def run_test_locally(controlfile, name=None, dryrun=False):
cmd = ['/usr/share/autotest/client/bin/autotest', '--verbose']
@@ -124,13 +152,31 @@ def run_test_locally(controlfile, name=None, dryrun=False):
cmd.append(controlfile)
return maybe_call(cmd, dryrun)
+def prep_test_vars(hook, archs, autoqa_args):
+ '''Prepare variables that should be redirected to a test's control.autoqa
+ file, so the test can then decide whether to run and how.
+ Arguments:
+ hook - name of the hook that calls this test
+ archs - list of architectures to be executed on
+ autoqa_args - dictionary of autoqa arguments that will be given to
+ test's control file
+ Returns a dictionary of all the test variables, including autoqa_args
+ as one of its items.
+ '''
+ test_vars = {}
+ test_vars['autoqa_args'] = autoqa_args
+ test_vars['hook'] = hook
+ test_vars['archs'] = archs
+ test_vars['labels'] = []
+ test_vars['execute'] = True
+ return test_vars
+
# Sanity check our installation
if not os.path.isdir(conf['hookdir']):
print "Can't find hooks in %s. Check your installation." % conf['hookdir']
sys.exit(1)
-# known hooks = dirs in hookdir that have a 'testlist' file
-known_hooks = [d for d in os.listdir(conf['hookdir']) \
- if os.path.exists(os.path.join(conf['hookdir'],d,'testlist'))]
+# known hooks = dirs in hookdir
+known_hooks = [d for d in os.listdir(conf['hookdir'])]
# Set up the option parser
parser = optparse.OptionParser(usage="%prog HOOKNAME [options] ...",
@@ -139,23 +185,25 @@ parser.add_option('-h', '--help', action='help',
help='show this help message (or hook help message if HOOKNAME given) and \
exit')
parser.add_option('-a', '--arch', action='append', default=[],
- help='arch to run the test(s) on. can be used multiple times')
-# XXX TODO '-d', '--distro', help='distro label to schedule this test against'
+ help="arch to run the test(s) on; can be used multiple times; by default \
+'noarch' arch is used")
parser.add_option('-t', '--test', action='append',
- help='run only the given test(s). can be used multiple times')
+ help="run only the given test(s) instead of all relevant ones; can be used \
+multiple times; if you specify a test that wouldn't be run by default it will \
+be forced to run")
# XXX --skiptest/--exclude?
parser.add_option('--keep-control-file', action='store_true',
- help='Do not delete generated control files')
+ help='do not delete generated control files')
parser.add_option('--dryrun', '--dry-run', action='store_true', dest='dryrun',
- help='Do not actually execute commands, just show what would be done \
+ help='do not actually execute commands, just show what would be done \
(implies --keep-control-file)')
parser.add_option('--local', action='store_true', dest='local',
- help='Do not schedule jobs - run test(s) directly on the local machine')
+ help='do not schedule jobs - run test(s) directly on the local machine')
parser.add_option('-l', '--list-tests', action='store_true', dest='listtests',
help='list the tests for the given hookname - do not run any tests')
parser.add_option('--autotest-server', action='store', default=None,
- help='Sets the autotest-server hostname. Used for creating URLs to results.\
-Hostname of the local machine is used by default.')
+ help='sets the autotest-server hostname used for creating URLs to results;\
+hostname of the local machine is used by default')
# Read and validate the hookname
# Check for no args, or just -h/--help
if len(sys.argv) == 1 or sys.argv[1] in ('-h', '--help'):
@@ -175,61 +223,66 @@ hook.extend_parser(parser)
(opts, args) = parser.parse_args()
args.pop(0) # dump hookname
+# Bail out if we didn't get at least one argument
+if not args:
+ parser.error('No test argument given - nothing to test!')
+
# Run the tests locally, or schedule them through autotest?
run_local = (opts.local or (conf['local'].lower() == 'true'))
-# Determine list of architectures
-if run_local:
- opts.arch = [os.uname()[4]] # not really important that we get this right
-
-# Get the initial testlist
-# TODO try/except
-testlist = []
-testlist_file = open(os.path.join(hookdir,'testlist'))
-testlist = [t for t in shlex.shlex(testlist_file)]
-testlist_file.close()
-controlfiles = [os.path.join(conf['testdir'], t, 'control') for t in testlist]
-# Use the hook-specific code to filter the testlist
-testlist = hook.process_testlist(opts, args, testlist)
-# Allow user overrides
+if not opts.arch or run_local:
+ opts.arch = ['noarch']
+# it doesn't make sense to have 'noarch' and some other arch specified, but
+# some watcher may still provide us with such combination
+# if this is the case, just delete the 'noarch' item
+while 'noarch' in opts.arch and len(opts.arch) > 1:
+ opts.arch.remove('noarch')
+
+# Override autotest_server if required
+if opts.autotest_server:
+ conf['autotest_server'] = opts.autotest_server
+
+# Ask hook to determine all arguments needed for tests. These variables will
+# be then written into the control file as autoqa_args dictionary.
+autoqa_args = hook.process_testdata(opts, args)
+
+# Evaluate control.autoqa file for every test to get a list of tests to execute
+tests = [test for test in os.listdir(conf['testdir'])]
+default_test_vars = prep_test_vars(hookname, opts.arch, autoqa_args)
+test_vars = {} # dict of test->its test vars
+for test in tests[:]:
+ try:
+ test_vars[test] = eval_test_vars(test, default_test_vars)
+ except IOError as e:
+ print "Error: Can't evaluate test '%s': %s" % (test, e)
+ tests.remove(test)
+testlist = [test for test,vars in test_vars.iteritems() if vars['execute'] == True]
+
+# Allow testlist user override
+# User may force some test to run even though it wouldn't be run by default.
+# This is useful for example for helloworld test.
if opts.test:
for t in opts.test:
- if t not in testlist:
- parser.error('Unknown test %s' % t)
+ if t not in tests:
+ parser.error('Unknown test: %s' % t)
testlist = opts.test
+
# Print testlist, if requested
if opts.listtests:
print ' '.join(testlist)
sys.exit(0)
-# Bail out if we didn't get at least one argument
-if not args:
- parser.error('No test argument given - nothing to test!')
-# XXX TODO allow no arch if we have a specified distro?
-if not (opts.arch or run_local):
- parser.error('No arch specified')
-
# We're ready to run/queue tests now.
-for arch in opts.arch:
- # N.B. process_testdata may grow new keyword arguments if we add new autoqa
- # args that add another loop here..
- testdata = hook.process_testdata(opts, args, arch=arch)
- if not 'autotest_server' in testdata.keys():
- if opts.autotest_server is not None:
- testdata['autotest_server'] = opts.autotest_server
- else:
- testdata['autotest_server'] = conf['autotest_server']
- # XXX FIXME: tests need to be able to indicate that they do not require
- # any specific arch (e.g. rpmlint can run on any arch)
- for test in testlist:
- try:
- template = os.path.join(conf['testdir'], test, 'control')
- control = prep_controlfile(template, testdata)
- except IOError, e:
- print "WARNING: could not process control file for %s: %s" % (test,
- str(e))
- continue
+for test in testlist:
+ try:
+ template = os.path.join(conf['testdir'], test, 'control')
+ control = prep_controlfile(template, test_vars[test]['autoqa_args'])
+ except IOError, e:
+ print "WARNING: could not process control file for %s: %s" % (test,
+ str(e))
+ continue
+ for arch in test_vars[test]['archs']:
testname='%s:%s.%s' % (hookname, test, arch)
email = conf['notification_email']
@@ -237,14 +290,13 @@ for arch in opts.arch:
retval = run_test_locally(control, name=testname,
dryrun=opts.dryrun)
else:
- # XXX FIXME add required_distro, set arch=None for noarch tests
retval = schedule_job(control, email=email, name=testname,
- required_arch=arch,
- dryrun=opts.dryrun)
+ required_arch=arch, dryrun=opts.dryrun,
+ labels=test_vars[test]['labels'])
if retval != 0:
print "ERROR: failed to schedule job %s" % testname
- if opts.keep_control_file or opts.dryrun:
- print "keeping %s at user request" % control
- else:
- os.remove(control)
+ if opts.keep_control_file or opts.dryrun:
+ print "keeping %s at user request" % control
+ else:
+ os.remove(control)
control.autoqa merged
by Will Woods
As the subject says - I've merged the control.autoqa branch into master.
If we were keeping score based on how many 'FIXME' and 'TODO' comments
were fixed by a patch, that one would have been worth a LOT of points.
Awesome work. Thanks a million.
Since the helloworld test was already merged I think the other major
part we wanted to get in for autoqa 0.4 was Josef's ExceptionCatcher and
AutoQATest patches - I'm hoping to get them reviewed and merged by the
end of the week.
Thanks again, guys!
-w
Re: [PATCH] ticket#207 - Use correct list of tags when looking for koji builds
by Kamil Paral
----- "James Laska" <jlaska(a)redhat.com> wrote:
> ---
> hooks/post-koji-build/watch-koji-builds.py | 12 +++++++++++-
> 1 files changed, 11 insertions(+), 1 deletions(-)
>
> diff --git a/hooks/post-koji-build/watch-koji-builds.py
> b/hooks/post-koji-build/watch-koji-builds.py
> index 1b3a17b..3a8ad23 100755
> --- a/hooks/post-koji-build/watch-koji-builds.py
> +++ b/hooks/post-koji-build/watch-koji-builds.py
> @@ -111,7 +111,12 @@ tags for new builds and kick off tests when new
> builds/packages are found.')
> # Using repoinfo, establish the set of tags to look for
> taglist = set()
> for repo in repoinfo.repos():
> - taglist.add(repoinfo.get(repo, 'tag'))
> + # include tag the development branch (aka rawhide)
> + if repoinfo.get(repo, 'collection_name') == "devel":
> + taglist.add(repoinfo.get(repo, 'tag'))
> + # otherwise, find all *-updates-candidate tags to include
> + elif len(repoinfo.getparents(repo)) == 0:
> + taglist.add(repoinfo.get(repo, 'tag') +
> '-updates-candidate')
Oh, I like this. It is a very simple solution, yet I hadn't thought of
it. Nice.
A review from our autoqa guru would surely be beneficial, but I have tested
it and it seems to be working OK - I think it's ready for push.
>
> if opts.verbose:
> print "Looking up builds since %s (%s)" % (opts.prevtime,
> time.ctime(opts.prevtime))
> @@ -145,6 +150,11 @@ tags for new builds and kick off tests when new
> builds/packages are found.')
> testarches.discard('noarch')
> testarches.add('x86_64')
>
> + # Account for mismatched 32-bit repo and package
> architecture
> + # name ('i386' vs 'i686')
> + if 'i386' in repoarches and 'i686' in arches:
> + testarches.add('i686')
> +
I don't know whether this is required at present. But I'm sure it won't
be necessary once the control.autoqa patchset is pushed. I have modified
the autoqa script to handle all these conversions centrally in one place
(right before job scheduling).
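The centralized conversion being referred to (see the schedule_job hunk in the control.autoqa patch earlier in this thread) boils down to something like the following standalone sketch. This is an illustration of the mapping, not the exact harness code:

```python
import fnmatch

def normalize_arch(required_arch):
    """Map a package arch to the autotest machine label to schedule on."""
    # autotest has no 'noarch' machines, so fall back to x86_64
    if not required_arch or required_arch == 'noarch':
        return 'x86_64'
    # i386/i486/i586/i686 packages all map to the 'i386' autotest label
    if fnmatch.fnmatch(required_arch, 'i?86'):
        return 'i386'
    return required_arch

print(normalize_arch('i686'))    # i386
print(normalize_arch('noarch'))  # x86_64
```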
> for arch in testarches:
> harnesscall += ['--arch', arch]
> if b['epoch']:
> --
> 1.7.2
>
> _______________________________________________
> autoqa-devel mailing list
> autoqa-devel(a)lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/autoqa-devel
Re: [PATCH] control.autoqa
by Kamil Paral
----- "Will Woods" <wwoods(a)redhat.com> wrote:
> On Fri, 2010-07-23 at 10:44 -0400, Kamil Paral wrote:
> > Hey, I feel like I've rewritten half of the autoqa harness. Oh, I
> did!
> > Damn. :)
>
> Ha! Yeah it's a big patch:
> $ git diff --shortstat master..origin/control.autoqa
> 47 files changed, 417 insertions(+), 420 deletions(-)
> but it passes the first criterion - more deletes than inserts. So
> things are looking good so far. Heh.
>
> I'm back in action now and I'll be reviewing this (and Josef's)
> patches
> as quick as I can.
>
> Can I ask what testing this code has gotten? It'll help me get
> through
> it quicker if we're already sure it works as expected.
Yeah, I wanted to say it's tested pretty well. But then I realized
I hadn't merged jskladan's patch with the url->baseurl rename, so
some tests were broken. I have merged it now.
I also added a new patch that runs the initscripts test only
when the package is actually supported by it, and skips it otherwise.
This is a little controversial; jskladan is still not sure we want
to do it this way. In any case, I think it serves as a nice proof
of concept of what can be done with control.autoqa files.
I have tried the post-koji-build and post-repo-update hooks and tests,
and they seem to work OK. The post-tree-compose watcher doesn't produce
any output for me; I'm not sure whether it is supposed to work, or how.
Everything is pushed to origin/control.autoqa, so please update.
[AutoQA] #210: "url" not accepted as argument by autotest
by fedora-badges
#210: "url" not accepted as argument by autotest
---------------------+------------------------------------------------------
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: major | Milestone: Hot issues
Component: harness | Version: 1.0
Keywords: |
---------------------+------------------------------------------------------
So, I've found another problem with commit
afcc8d4e748fa1f7a4dd9529e6c67147cc7b4695 (and the previous ones). It seems
there is some problem in autotest: it doesn't want to accept "url"
as a test argument. When I use --dryrun to create the config file, change
"url" to "burl" and run the test by hand, it goes fine.
I have some other stuff in progress now, so I'm creating a ticket. Feel free
to grab it if you like; otherwise I'll dig into it soon.
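The Python error in the log below is easy to reproduce in isolation. This is a minimal, hypothetical sketch of the collision - it does not claim to show where autotest actually injects 'url', only why the call blows up when a wrapper binds a keyword that the caller's **kwargs also contains:

```python
def run_test(testname, url=None, **dargs):
    """Stand-in for autotest's job.run_test; note it binds 'url' itself."""
    return url

def wrapped(*args, **dargs):
    # hypothetical wrapper that also supplies url, the way some layer
    # of autotest's plumbing apparently did
    return run_test(*args, url='internal-default', **dargs)

try:
    wrapped('helloworld', url='http://example.com/repo')
except TypeError as e:
    # run_test() got multiple values for keyword argument 'url'
    print(e)
```

Renaming the caller's argument (the "burl" workaround above) sidesteps the collision, which is consistent with this explanation.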
{{{
# autoqa post-repo-update --name f13 --arch x86_64
http://download.fedoraproject.org/pub/fedora/linux/releases/13/Everything...
-t helloworld
10:49:44 INFO | Writing results to /usr/share/autotest/client/results
/post-repo-update:helloworld.noarch
10:49:44 INFO | Initializing the state engine
10:49:44 DEBUG| Persistent state variable __steps now set to []
10:49:44 INFO | Symlinking init scripts
10:49:44 DEBUG| Running 'grep :initdefault: /etc/inittab'
10:49:44 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest
/etc/init.d/autotest'
10:49:44 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest
/etc/rc3.d/S99autotest'
10:49:44 DEBUG| Dropping caches
10:49:44 DEBUG| Running 'sync'
10:49:44 DEBUG| Running 'echo 3 > /proc/sys/vm/drop_caches'
10:49:45 DEBUG| Running 'rpm -qa'
10:49:45 INFO | START ---- ---- timestamp=1279874985
localtime=Jul 23 10:49:45
10:49:45 DEBUG| Persistent state variable __group_level now set to 1
10:49:45 DEBUG| Dropping caches
10:49:45 DEBUG| Running 'sync'
10:49:45 DEBUG| Running 'echo 3 > /proc/sys/vm/drop_caches'
10:49:45 ERROR| JOB ERROR: Unhandled TypeError: run_test() got multiple
values for keyword argument 'url'
Traceback (most recent call last):
File "/usr/share/autotest/client/bin/job.py", line 1102, in step_engine
execfile(self.control, global_control_vars, global_control_vars)
File "/tmp/autoqa-control.N8vgao", line 35, in <module>
job.run_test('helloworld', config=autoqa_conf, **autoqa_args)
File "/usr/share/autotest/client/bin/job.py", line 43, in wrapped
return f(self, *args, **dargs)
TypeError: run_test() got multiple values for keyword argument 'url'
10:49:45 DEBUG| Persistent state variable __group_level now set to 0
10:49:45 INFO | END ABORT ---- ---- timestamp=1279874985
localtime=Jul 23 10:49:45 Unhandled TypeError: run_test() got
multiple values for keyword argument 'url'
Traceback (most recent call last):
File "/usr/share/autotest/client/bin/job.py", line 1102, in
step_engine
execfile(self.control, global_control_vars, global_control_vars)
File "/tmp/autoqa-control.N8vgao", line 35, in <module>
job.run_test('helloworld', config=autoqa_conf, **autoqa_args)
File "/usr/share/autotest/client/bin/job.py", line 43, in wrapped
return f(self, *args, **dargs)
TypeError: run_test() got multiple values for keyword argument 'url'
10:49:45 DEBUG| Logging subprocess finished
10:49:45 DEBUG| Logging subprocess finished
}}}
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/210>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
"Hot issues" milestone
by Kamil Paral
Hi,
I've created a "Hot issues" milestone which we can use to mark really
serious defects/regressions we have found. The point of this milestone
is to encompass all issues that should be fixed as soon as possible.
The reason I did this is that I'm starting to get a little lost in all those
tickets, and sorting by importance is not enough, so hopefully this will
help.
Happy Friday,
Kamil
[AutoQA] #107: writing a python test (modeled similarly to install.py)
by fedora-badges
#107: writing a python test (modeled similarly to install.py)
-------------------+--------------------------------------------------------
Reporter: liam | Owner:
Type: task | Status: new
Priority: major | Milestone:
Component: tests | Version: 1.0
Keywords: |
-------------------+--------------------------------------------------------
write a python test (modeled similarly to install.py) that takes as input:
* a URL to a kickstart file (URL can be local (e.g. file://) or
remote (e.g. http://, ftp://, nfs:// ...) ... but start with easy case
first.
* a URL for the install media (again, keep this simple for now and
assume file:///var/lib/libvirt/images/Fedora-12-x86_64-DVD.iso)
* a URL to a configuration file that describes the environment -
again, perhaps optional for now. But eventually we'll need
something that tells the test to create a guest with 4 NICs vs 1
NIC, 3 SCSI drives etc... Don't worry about being fancy at
first ... just take the defaults. This is just where I might
see it headed in 6+ months. Copy from the kvm autotest project
if you like.
At the beginning, we focus on the basics and on something that gets things
far enough along that we can review, adjust and repeat.
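As a starting point for the easy case, the input handling could be sketched like this. All names here are hypothetical (the ticket doesn't prescribe an interface); the real work of driving the install would follow after validation:

```python
from urllib.parse import urlparse

# schemes the ticket mentions; start with file:// and grow from there
SUPPORTED_SCHEMES = {'file', 'http', 'ftp', 'nfs'}

def check_inputs(ks_url, media_url, config_url=None):
    """Validate the kickstart, install-media and optional
    environment-config URLs before doing anything expensive."""
    urls = [ks_url, media_url] + ([config_url] if config_url else [])
    for url in urls:
        scheme = urlparse(url).scheme
        if scheme not in SUPPORTED_SCHEMES:
            raise ValueError('unsupported URL scheme: %r' % scheme)
    return True

check_inputs('file:///tmp/minimal.ks',
             'file:///var/lib/libvirt/images/Fedora-12-x86_64-DVD.iso')
```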
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/107>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Re: Upgrade Path
by Kamil Paral
----- "James Laska" <jlaska(a)redhat.com> wrote:
> On Wed, 2010-07-21 at 10:06 -0400, Josef Skladanka wrote:
> > Yes, technically, it is breaking the upgrade path, the question was,
>
> > if we want to support users who basically broke the path for
> themselves
> > by removing one of the repos before updating.
>
> This test is likely to be initiated by the post-bodhi-update watcher
> right? That watcher provides information about what target the
> update
> is for. Can we keep it simple and assume for now ...
>
> * if the update is intended for a *-updates-testing repository,
> include all -updates-testing tags
> * if the update is intended for *-updates repository, include
> only
> *-updates tags
>
> This seems like it would capture the most common scenarios and we can
> handle exception scenarios at later date (or as an informational
> result)?
I'll answer for Josef, because he won't return until Friday.
The basic scenario:
F12 stable -> F13 stable
is crystal clear - no upgrade path problems are allowed to occur there.
With updates-testing, however, issues appear. One of the issues was described
by Josef pretty comprehensively. Another issue is that updates may be
withdrawn from updates-testing. So you can push the same version to F12
and F13, and then withdraw the one from F13. Kaboom. AutoQA is not able to
catch this; the withdrawal happens only after the primary AutoQA check is
finished.
So the question basically is: how complete do we want our upgrade
path test to be? It can check just the common stuff, as James suggested. Or
we can try to cover all the corner cases - but some of them will probably
need strict policies to be put in place, like the one described by Josef
(pushing to F12 updates-testing only after the update has landed in F13 updates).
It seems to me that the easiest solution is to make AutoQA check just the
common scenarios and offload some of the corner cases to Bodhi, because
it can prevent some things much more easily than we can (like the
aforementioned package withdrawal). Of course, that means the logic of this
test is then split between AutoQA and Bodhi; it's not in a single place.
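The invariant behind the basic scenario fits in a few lines. This toy sketch uses a naive dotted-version compare and made-up data; a real check would use rpm's version comparison (epoch/version/release aware) instead:

```python
def version_tuple(v):
    """Naive dotted-version parse; real code would use rpm labelCompare."""
    return tuple(int(x) for x in v.split('.'))

def upgrade_path_ok(versions):
    """versions: list of (release, version) pairs ordered oldest->newest.
    The upgrade path holds if no newer release ships an older version."""
    ordered = [version_tuple(v) for _, v in versions]
    return all(a <= b for a, b in zip(ordered, ordered[1:]))

print(upgrade_path_ok([('f12', '1.2.3'), ('f13', '1.2.4')]))  # True
print(upgrade_path_ok([('f12', '1.3.0'), ('f13', '1.2.4')]))  # False
```

The updates-testing complications above are exactly the cases where the input pairs themselves change after the check has run, which no invariant of this form can catch.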