PATCH proposal - new Koji watcher
by Josef Skladanka
Hi gang,
according to #228 <https://fedorahosted.org/autoqa/ticket/228>,
and discussions with kparal, I've rewritten the koji watcher
so it can also check the -pending tags in koji.
For the -pending tags, we're using quite a different querying model,
based on parsing the tagHistory (e.g.
# koji list-tag-history --tag='dist-f14-updates-pending'
)
The benefit of this solution is that we can catch situations
like this:
1) package Foo is built at date XYZ. It gains tag dist-f14-updates-pending
2) the koji watcher finds out "ha, new package, let's test it"
3) tests are OK
4) package Foo gains dist-f14-updates-testing-pending tag
5) we'd like to run tests like depcheck on it, but because the 'built at'
date, which the current watcher checks, has not changed, we miss the event
By parsing the tag history instead, we only check for new 'events' in the tag
history (i.e. 'package foo got the -pending tag a few minutes ago'), independently
of the build time -> win :)
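The idea can be sketched roughly like this (the dict keys below are an assumption modeled on what koji's tag-history output contains; this is not the actual watcher code):

```python
def new_pending_events(history, last_check_ts):
    # Keep only tag events newer than the previous watcher run,
    # regardless of when the package itself was built.
    return [e for e in history if e['create_ts'] > last_check_ts]

# toy tag-history data; real entries would come from koji
history = [
    {'name': 'foo', 'version': '1.0', 'release': '1.fc14', 'create_ts': 100.0},
    {'name': 'bar', 'version': '2.0', 'release': '1.fc14', 'create_ts': 500.0},
]

# only 'bar' was tagged after our last check at ts=200
print([e['name'] for e in new_pending_events(history, 200.0)])
```

So a build tagged long after its build date still shows up, because we key off the tagging event, not the build time.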
The drawback is that, at the moment, querying for the tag history takes time,
because koji sends the whole tag history (a lot of data). I'm discussing with
jkeating a minor change which would allow us to specify "give me the history
which is newer than date XYZ" (as is currently done for fetching new builds).
Because of this, the non-pending repos are still handled the 'old' way.
....
We also found ourselves in need of 'batch' scheduling - e.g. we don't want to run
depcheck for every package built; we'd just like to inform autoqa
"hey, there is new stuff in the dist-f14-updates-pending tag, run stuff".
So there is a new watch-koji-builds-batch watcher.
The thing is that it is not really a watcher as such, only the hook.py
file. The reason for this is that querying koji is a time-consuming operation,
and there is no need to do it twice to get the same data. So there are two
different 'schedule jobs' methods in the watch-koji-build hook: one
schedules 'per package' jobs, and the other does it as a 'batch'.
(Note that the batch one is not yet tested - I just wanted to post the
patch so you can see the new 'concept'. The per-package part is tested
and considered to be working.)
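The two 'schedule jobs' modes could be sketched like this (hypothetical helper names, not the actual hook code):

```python
def schedule_per_package(builds, schedule_job):
    # one job per new build (e.g. for rpmlint, rpmguard)
    for build in builds:
        schedule_job([build])

def schedule_batch(builds, schedule_job):
    # a single job covering every new build in the tag (e.g. for depcheck)
    if builds:
        schedule_job(builds)

jobs = []
schedule_per_package(['foo-1.0', 'bar-2.0'], jobs.append)
schedule_batch(['foo-1.0', 'bar-2.0'], jobs.append)
print(len(jobs))  # 3: two per-package jobs plus one batch job
```

Both modes consume the same koji query result, which is the whole point of keeping them in one hook.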
Comments are more than welcome - looking forward to hearing from you!
Joza
13 years, 4 months
bodhi staging server
by Kamil Paral
Hi,
just a note for anyone interested. As mentioned before, we can use the
Bodhi staging server [1] for AutoQA development purposes. Email
notifications should be disabled on that staging server, so we don't have to
worry about flooding people with messages.
Currently, when you try to use it, you get the following error:
# /usr/share/autoqa/post-bodhi-update/watch-bodhi-requests.py --dryrun
Traceback (most recent call last):
...
pycurl.error: (51, "SSL: certificate subject name '*.fedoraproject.org' does not match target host name 'admin.stg.fedoraproject.org'")
Luke Macken promised to find someone who can fix that. In the meantime,
it's enough to pass insecure=True to the BodhiClient() constructor.
That means you create the object like this:
fedora.client.BodhiClient(username=user, password=pswd, base_url='https://admin.stg.fedoraproject.org/updates/', insecure=True)
(The base_url will be configurable from autoqa.conf shortly.)
And it really works, I have proof [2] :-)
But please note that the contents of the Bodhi staging instance and of the
Bodhi production instance are not the same. Therefore we can't listen for
production events and send comments to the staging instance. We also can't listen
for staging events, because currently nothing happens out there. But it is
still useful for manual testing of Bodhi integration patches.
Regards,
Kamil
[1] https://admin.stg.fedoraproject.org/updates/
[2] https://admin.stg.fedoraproject.org/updates/polkit-qt-0.96.1-3.fc14
[PATCH] Added a support for sending comments into Bodhi
by Martin Krizek
Hi all,
this patch (https://fedorahosted.org/autoqa/ticket/205) allows AutoQA to send a test result as a comment to bodhi. You can turn the support on/off by setting 'send_bodhi_comments' in /etc/autoqa/autoqa.conf to true/false. Once a test is completed, it will call the bodhi_post_testresult function from lib/python/bodhi_utils.py. For instance, the usage in the upgradepath test would be:
> bodhi_post_testresult(kwargs['name'], self.__class__.__name__, self.result, self.autotest_url, self.config),
note that kwargs['name'] must contain the title of the update, for now. Thanks to Luke Macken, there will be support for posting comments by UPDATEID in the next release of bodhi.
The comment will be posted only if the support is turned on and if the same comment is NOT already posted. The only exception is FAILED results. Those will be sent again to remind the developer about the issue. However, we obviously do not want to send them a FAILED result every time the test fails. So there is a variable BODHI_POSTING_COMMENT_SPAN in lib/python/bodhi_utils.py which tells the script how long it should wait before posting the same comment again. Note that a new result could be the same (FAILED), but the test could have failed for a different reason than it did last time. That's why the comment contains a URL to the test result, so developers can check it. The format of the comments is as follows:
> AutoQA: *test_name* test *result* on *arch*. The result can be found at: *url*
for example,
> AutoQA: upgradepath test PASSED on noarch. The result can be found at: http://server.com/results/14-root/client.com/
FAS (Fedora Accounts System) credentials for logging into bodhi and sending comments are stored in /etc/autoqa/fas.conf.
If you have any questions, please do ask.
---
diff --git a/Makefile b/Makefile
index ac4ddeb..f3a14f9 100644
--- a/Makefile
+++ b/Makefile
@@ -22,6 +22,7 @@ install: build
install autoqa $(PREFIX)/usr/bin/
install -d $(PREFIX)/etc/autoqa
[ -f $(PREFIX)/etc/autoqa/autoqa.conf ] || install -m 0644 autoqa.conf $(PREFIX)/etc/autoqa/
+ [ -f $(PREFIX)/etc/autoqa/fas.conf ] || install -m 0640 -g autotest fas.conf $(PREFIX)/etc/autoqa
install -m 0644 repoinfo.conf $(PREFIX)/etc/autoqa/
install -d $(PREFIX)$(HOOK_DIR)
for h in hooks/*; do cp -a $$h $(PREFIX)$(HOOK_DIR); done
diff --git a/autoqa.conf b/autoqa.conf
index ed8662a..7fd8f33 100644
--- a/autoqa.conf
+++ b/autoqa.conf
@@ -23,3 +23,7 @@ result_email =
mail_from = autoqa(a)fedoraproject.org
# hostname or hostname:port of smtp server / mailhub to use for sending email
smtpserver = localhost
+# If "true", test results (for tests utilizing this feature) will be sent
+# as comments to Fedora Update System (Bodhi). This requires that you have
+# Bodhi credentials filled in in fas.conf.
+send_bodhi_comments = false
diff --git a/autoqa.spec b/autoqa.spec
index 261b20a..84b9efe 100644
--- a/autoqa.spec
+++ b/autoqa.spec
@@ -62,6 +62,7 @@ make build PYTHON=%{__python}
rm -rf $RPM_BUILD_ROOT
make install PREFIX=$RPM_BUILD_ROOT TEST_DIR=%{testdir} HOOK_DIR=%{hookdir} PYTHON=%{__python}
install -m 644 autoqa.conf repoinfo.conf $RPM_BUILD_ROOT%{_sysconfdir}/autoqa/
+install -m 640 -g autotest fas.conf $RPM_BUILD_ROOT%{_sysconfdir}/autoqa/
# front-ends/israwhidebroken
mv %{buildroot}%{_bindir}/start-israwhidebroken %{buildroot}%{_sbindir}/
mv %{buildroot}%{_bindir}/israwhidebroken.wsgi %{buildroot}%{_sbindir}/
@@ -78,6 +79,7 @@ rm -rf $RPM_BUILD_ROOT
%doc README LICENSE TODO autoqa.cron
%dir %{_sysconfdir}/autoqa
%config(noreplace) %{_sysconfdir}/autoqa/autoqa.conf
+%config(noreplace) %{_sysconfdir}/autoqa/fas.conf
%config %{_sysconfdir}/autoqa/repoinfo.conf
%config(noreplace) %{testdir}/rats_sanity/irb.cfg
%dir %attr(0775,root,autotest) %{_localstatedir}/cache/autoqa
diff --git a/doc/test_class.py.template b/doc/test_class.py.template
index 590bd16..c19074a 100644
--- a/doc/test_class.py.template
+++ b/doc/test_class.py.template
@@ -32,7 +32,7 @@ from autotest_lib.client.bin import utils
# Your class name must match file name (without .py) and also run_test line in
# its control file.
-class testclassname(AutoQATest): # <-- UPDATE Classname
+class testclassname(AutoQATest): # <-- UPDATE class name
version = 1 # increment this if setup() changes
# All methods below may receive arbitrary number of arguments that you
@@ -53,7 +53,7 @@ class testclassname(AutoQATest): # <-- UPDATE Classname
# method - if you don't need to initialize anything, delete this block.
#@ExceptionCatcher()
#def initialize(self, config, **kwargs): #**kwargs needs to stay
- # super(testclassname, self).initialize(config) # <-- UPDATE Classname
+ # super(testclassname, self).initialize(config) # <-- UPDATE class name
# #your extra initialization code goes here
# This is where the test code actually gets run. It's the only required
@@ -65,6 +65,7 @@ class testclassname(AutoQATest): # <-- UPDATE Classname
# self.highlights: important lines to notice (string or list of strings)
@ExceptionCatcher()
def run_once(self, some_params, **kwargs): #**kwargs needs to stay
+ super(testclassname, self).run_once() # <-- UPDATE class name
cmd = 'test_binary --param %s' % some_params
self.outputs = utils.system_output(cmd, retain_output=True)
diff --git a/fas.conf b/fas.conf
new file mode 100644
index 0000000..5fc322e
--- /dev/null
+++ b/fas.conf
@@ -0,0 +1,6 @@
+# FAS (Fedora Accounts System) credentials
+# These credentials are used when reporting results in the name of AutoQA,
+# i.e. posting a comment into Bodhi
+[fas]
+username =
+password =
diff --git a/lib/python/bodhi_utils.py b/lib/python/bodhi_utils.py
index 9c0adbc..00d4154 100644
--- a/lib/python/bodhi_utils.py
+++ b/lib/python/bodhi_utils.py
@@ -18,9 +18,18 @@
#
# Authors:
# Will Woods <wwoods(a)redhat.com>
+# Martin Krizek <mkrizek(a)redhat.com>
import fedora.client
import time
+import sys
+import re
+from datetime import datetime
+from ConfigParser import *
+from util import get_cfg
+
+# how long should we wait before posting the same comment to bodhi
+BODHI_POSTING_COMMENT_SPAN = 3*24*60 # in minutes
def bodhitime(timestamp):
'''Convert timestamp (seconds since Epoch, assumed to be local time) to a
@@ -46,3 +55,176 @@ def bodhi_list(params, limit=100):
updates += r['updates']
params['tg_paginate_no'] += 1
return updates
+
+def _bodhi_already_commented(update, user, testname, arch):
+ '''Check if the comment is already posted.
+
+ Args:
+ update -- The *title* of the update
+ user -- username that posted the comment
+ testname -- the name of the test
+ arch -- tested architecture
+
+ Returns:
+ Tuple containing the old result and the time when the last comment
+ was posted; if no comment has been posted yet, both are empty strings.
+ '''
+ bodhi = fedora.client.BodhiClient()
+ res = bodhi.query(package=update)
+ comment_re = r'AutoQA:[\s]+%s[\s]+test[\s]+(\w+)[\s]+on[\s]+%s' % (testname, arch)
+ old_result = ''
+ comment_time = ''
+
+ for update in res['updates']:
+ for comment in update['comments']:
+ if comment['author'] == user:
+ m = re.match(comment_re, comment['text'])
+ if m == None:
+ continue
+ old_result = m.group(1)
+ comment_time = comment['timestamp']
+
+ return (old_result, comment_time)
+
+def _is_bodhi_testresult_needed(old_result, comment_time, result):
+ '''Check if the comment is meant to be posted.
+
+ Args:
+ old_result -- the result of the last test
+ comment_time -- the comment time of the last test
+ result -- the result of the test
+
+ Returns:
+ True if the comment will be posted, False otherwise.
+ '''
+ # the first comment or a comment with different result, post it
+ if not old_result or old_result != result:
+ return True
+
+ # If we got here, it means that the comment with the same result has been
+ # already posted, we now need to determine whether we can post the
+ # comment again or not.
+ # If the previous result is *not* 'FAILED', we won't post it in order not to
+ # spam developers.
+ # If the previous result *is* 'FAILED', we will need to check whether given
+ # time span expired, if so, we will post the same comment again to remind
+ # a developer about the issue.
+
+ if result != 'FAILED':
+ return False
+
+ posted_datetime = datetime.strptime(comment_time, '%Y-%m-%d %H:%M:%S')
+ if (datetime.now() - posted_datetime).days*24*60 < BODHI_POSTING_COMMENT_SPAN:
+ return False
+
+ return True
+
+def bodhi_post_testresult(update, testname, result, url, config, arch = 'noarch', karma = 0):
+ '''Post comment and karma to bodhi
+
+ Args:
+ update -- the *title* of the update to comment on
+ testname -- the name of the test
+ result -- the result of the test
+ url -- url of the result of the test
+ config -- autoqa config
+ arch -- tested architecture (default 'noarch')
+ karma -- karma points (default 0)
+
+ Returns:
+ True if comment was posted successfully or comment wasn't meant to be
+ posted (either posting is turned off or comment was already posted),
+ False otherwise.
+ '''
+ err_msg = 'Could not post a comment to bodhi'
+
+ try:
+ try:
+ fas = get_cfg('fas.conf', 'fas')
+ except IOError:
+ sys.stderr.write('fas.conf is not present in the current directory. Using /etc/autoqa/fas.conf instead.')
+ try:
+ fas = get_cfg('/etc/autoqa/fas.conf', 'fas')
+ except IOError:
+ return False
+ except (NoSectionError, DuplicateSectionError, MissingSectionHeaderError):
+ return False
+
+
+ if not update or not testname or not result or url == None:
+ sys.stderr.write('Incomplete arguments!\n%s\n' % err_msg)
+ return False
+
+ try:
+ if config.get('test', 'send_bodhi_comments').lower() != 'true':
+ print 'Sending bodhi comments is turned off. Test result will NOT be sent.'
+ return True
+ except KeyError:
+ print 'Sending bodhi comments is turned off. Test result will NOT be sent.'
+ # option missing -> it's false, do not send it (but return True since
+ # it's intentional, not an error)
+ return True
+
+ try:
+ user = fas['username']
+ pswd = fas['password']
+ except KeyError:
+ sys.stderr.write('Conf file containing FAS credentials is incomplete!\n%s\n' % err_msg)
+ return False
+
+ comment = 'AutoQA: %s test %s on %s. The result can be found at: %s.' \
+ % (testname, result, arch, url)
+ try:
+ (old_result, comment_time) = _bodhi_already_commented(update, user, testname, arch)
+
+ if not _is_bodhi_testresult_needed(old_result, comment_time, result):
+ print 'The test result already posted to bodhi.'
+ return True
+
+ bodhi = fedora.client.BodhiClient(username=user, password=pswd)
+
+ if not bodhi.comment(update, comment, karma):
+ sys.stderr.write('%s\n' % err_msg)
+ return False
+
+ print 'The test result was sent to bodhi successfully.'
+ except Exception, e:
+ sys.stderr.write('An error occurred: %s' % e)
+ sys.stderr.write('Could not connect to bodhi!\n%s\n' % err_msg)
+ return False
+
+ return True
+
+def _self_test():
+ '''
+ Simple self test.
+ '''
+ from datetime import timedelta
+ try:
+ print '1. Test:',
+ assert _is_bodhi_testresult_needed('PASSED', datetime.now, 'PASSED') == False
+ print 'Passed'
+ print '2. Test:',
+ assert _is_bodhi_testresult_needed('FAILED', datetime.now, 'PASSED') == True
+ print 'Passed'
+ print '3. Test:',
+ assert _is_bodhi_testresult_needed('PASSED', datetime.now, 'FAILED') == True
+ print 'Passed'
+ print '4. Test:',
+ date = (datetime.now() - timedelta(minutes=BODHI_POSTING_COMMENT_SPAN)).\
+ strftime('%Y-%m-%d %H:%M:%S')
+ assert _is_bodhi_testresult_needed('FAILED', date, 'FAILED') == True
+ print 'Passed'
+ print '5. Test:',
+ date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
+ assert _is_bodhi_testresult_needed('FAILED', date, 'FAILED') == False
+ print 'Passed'
+ print '6. Test:',
+ assert _is_bodhi_testresult_needed('', '', 'FAILED') == True
+ print 'Passed'
+ except AssertionError:
+ print 'Failed [!!!]'
+
+if __name__ == '__main__':
+ _self_test()
+
diff --git a/lib/python/test.py b/lib/python/test.py
index d59d765..29bb96c 100644
--- a/lib/python/test.py
+++ b/lib/python/test.py
@@ -46,7 +46,7 @@ class AutoQATest(test.test, object):
@ExceptionCatcher()
def run_once(self, **kwargs):
- pass
+ os.chdir(self.bindir) # easiest way for tests to find their test scripts, config files, etc
def process_exception(self, exc):
self._convert_list_variables()
diff --git a/lib/python/util.py b/lib/python/util.py
index b6d0013..66cf673 100644
--- a/lib/python/util.py
+++ b/lib/python/util.py
@@ -18,6 +18,7 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Author: Will Woods <wwoods(a)redhat.com>
+# Martin Krizek <mkrizek(a)redhat.com>
import os
import sys
@@ -29,6 +30,7 @@ import urlgrabber.progress
import config
import urllib
import socket
+from ConfigParser import *
def timestamp_to_compose_id(timestamp=None, serial=1):
if not timestamp:
@@ -180,4 +182,34 @@ def make_autotest_url(config):
autotest_url = "http://%s/results/%s/" % (autotest_server, jobtag)
return(autotest_url)
+def get_cfg(cfgfile, section, default_conf = {}):
+ '''Get data from config file
+
+ Args:
+ cfgfile -- config file name
+ section -- section of the config to be retrieved
+ default_conf -- default configuration values
+
+ Returns:
+ Dictionary containing retrieved data on success.
+ '''
+ conf = default_conf
+ cfg_parser = SafeConfigParser()
+ try:
+ if cfg_parser.read(cfgfile) == []:
+ raise IOError
+ # override defaults with configfile values
+ for k, v in cfg_parser.items(section):
+ conf[k] = v
+ except IOError, e: # no config file
+ if not conf:
+ sys.stderr.write('ERROR: %s: %s' % (cfgfile, str(e)))
+ raise e
+ # using defaults
+ except (NoSectionError, DuplicateSectionError, MissingSectionHeaderError), e:
+ sys.stderr.write('ERROR: Could not parse %s: %s' % (cfgfile, str(e)))
+ raise e
+
+ return conf
+
diff --git a/tests/conflicts/conflicts.py b/tests/conflicts/conflicts.py
index 5a93a2b..c9a4667 100644
--- a/tests/conflicts/conflicts.py
+++ b/tests/conflicts/conflicts.py
@@ -34,11 +34,11 @@ class conflicts(AutoQATest):
@ExceptionCatcher()
def run_once(self, baseurl, parents, name, **kwargs):
+ super(conflicts, self).run_once()
if name:
name = "%s-%s" % (name, autoqa.util.get_basearch())
else:
name = baseurl
- os.chdir(self.bindir)
cmd = './potential_conflict.py --tempcache --newest'
cmd += ' --repofrompath=target,%s --repoid=target' % baseurl
count = 1
diff --git a/tests/helloworld/helloworld.py b/tests/helloworld/helloworld.py
index a0bd1d4..1a96ef5 100644
--- a/tests/helloworld/helloworld.py
+++ b/tests/helloworld/helloworld.py
@@ -25,6 +25,7 @@ class helloworld(AutoQATest):
@ExceptionCatcher()
def run_once(self, *args, **kwargs):
+ super(helloworld, self).run_once()
self.summary = 'Hello, World!'
self.outputs = "===Printing passed params===\n"
for arg in args:
diff --git a/tests/initscripts/initscripts.py b/tests/initscripts/initscripts.py
index e0f95bb..8e35e35 100644
--- a/tests/initscripts/initscripts.py
+++ b/tests/initscripts/initscripts.py
@@ -101,6 +101,7 @@ class initscripts(AutoQATest):
@ExceptionCatcher()
def run_once(self, kojitag, **kwargs):
+ super(initscripts, self).run_once()
if kwargs['hook'] == 'post-koji-build':
envrs = [kwargs['envr']]
update_id = kwargs['envr']
diff --git a/tests/rats_install/rats_install.py b/tests/rats_install/rats_install.py
index 552381a..23c2015 100644
--- a/tests/rats_install/rats_install.py
+++ b/tests/rats_install/rats_install.py
@@ -52,11 +52,11 @@ class rats_install(AutoQATest):
@ExceptionCatcher()
def run_once(self, baseurl, name, image_url="", boot_args="", **kwargs):
+ super(rats_install, self).run_once()
if name:
name = "%s-%s" % (name, util.get_basearch())
else:
name = baseurl
- os.chdir(self.bindir)
cmd = "./install.py -s %s -l %s" % (self.tmpdir, self.resultsdir)
if image_url != "":
cmd += " -i %s" % image_url
diff --git a/tests/rats_sanity/rats_sanity.py b/tests/rats_sanity/rats_sanity.py
index d6a6357..19116ef 100644
--- a/tests/rats_sanity/rats_sanity.py
+++ b/tests/rats_sanity/rats_sanity.py
@@ -40,11 +40,11 @@ class rats_sanity(AutoQATest):
@ExceptionCatcher()
def run_once(self, baseurl, parents, name, **kwargs):
+ super(rats_sanity, self).run_once()
if name:
name = "%s-%s" % (name, util.get_basearch())
else:
name = baseurl
- os.chdir(self.bindir)
cmd = "./sanity.py -s %s -l %s" % (self.tmpdir, self.resultsdir)
cmd += " %s" % baseurl
self.result = None
diff --git a/tests/repoclosure/repoclosure.py b/tests/repoclosure/repoclosure.py
index b27b684..f167722 100644
--- a/tests/repoclosure/repoclosure.py
+++ b/tests/repoclosure/repoclosure.py
@@ -31,6 +31,7 @@ class repoclosure(AutoQATest):
@ExceptionCatcher()
def run_once(self, baseurl, parents='', name='', **kwargs):
+ super(repoclosure, self).run_once()
if name:
name = "%s-%s" % (name, autoqa.util.get_basearch())
else:
diff --git a/tests/rpmguard/rpmguard.py b/tests/rpmguard/rpmguard.py
index 588a58f..bf973f0 100644
--- a/tests/rpmguard/rpmguard.py
+++ b/tests/rpmguard/rpmguard.py
@@ -43,6 +43,7 @@ class rpmguard(AutoQATest):
@ExceptionCatcher()
def run_once(self, kojitag, **kwargs):
+ super(rpmguard, self).run_once()
if kwargs['hook'] == 'post-koji-build':
envrs = [kwargs['envr']]
update_id = kwargs['envr']
diff --git a/tests/rpmlint/rpmlint.py b/tests/rpmlint/rpmlint.py
index 4771f13..4091b3e 100644
--- a/tests/rpmlint/rpmlint.py
+++ b/tests/rpmlint/rpmlint.py
@@ -44,6 +44,7 @@ class rpmlint(AutoQATest):
@ExceptionCatcher()
def run_once(self, kojitag, **kwargs):
+ super(rpmlint, self).run_once()
if kwargs['hook'] == 'post-koji-build':
envrs = [kwargs['envr']]
update_id = kwargs['envr']
diff --git a/tests/upgradepath/upgradepath.py b/tests/upgradepath/upgradepath.py
index c53285f..a99ac2f 100755
--- a/tests/upgradepath/upgradepath.py
+++ b/tests/upgradepath/upgradepath.py
@@ -86,6 +86,7 @@ class upgradepath(AutoQATest):
@ExceptionCatcher()
def run_once(self, envrs, kojitag, **kwargs):
+ super(upgradepath, self).run_once()
update_id = kwargs['name'] or kwargs['id']
# Get a list of all repos we monitor (currently not -testing)
---
Martin
Re: Virtualization support
by Kamil Paral
----- "Scott M Ferguson" <smferguson(a)gmail.com> wrote:
>
> Hey all. I have a rough proof-of-concept going for the client/server,
> but in chatting it over with a friend a novel idea came up. What if
> the server just opened 2 ssh sessions for each client. 1 to run the
> tests, 1 to tail the log (grab the result). Then we wouldn't need a
> client daemon. I'm not sure this is desirable since other projects
> use
> a standard client/server model, but thought it was worth bringing up.
>
> Best,
> Scott
Hello Scott,
I'm not exactly sure I get the idea. The autotest server already
handles starting the task and collecting results (arrows 2. and 3. in
the picture I added to
https://fedorahosted.org/autoqa/ticket/183#comment:description). Ticket
#183 mainly concerns arrow 4., signaling to the host (of that autotest
client VM). We can use it for reverting the VM to the previous state
(I think that's the main reason we need all of this). In the Virtualization
milestone there are other tickets that cover the remaining parts of that
picture.
So, the problem of ticket #183 is not getting the results (autotest handles
that for us), but telling the host (of that VM) "do something" right after
completing the test.
Are we on the same page? Maybe I have missed something. Tell me.
Thanks,
Kamil
Re: Re: Virtualization support
by Scott M Ferguson
> Message: 3
> Date: Tue, 9 Nov 2010 08:29:27 -0500 (EST)
> From: Kamil Paral <kparal(a)redhat.com>
> Subject: Re: Virtualization support
> To: AutoQA development <autoqa-devel(a)lists.fedorahosted.org>
> Message-ID:
> <1532790094.383731289309367710.JavaMail.root(a)zmail03.collab.prod.int.phx2.redhat.com>
>
> Content-Type: text/plain; charset=utf-8
>
> ----- "Scott M Ferguson" <smferguson(a)gmail.com> wrote:
>
>> Hey all,
>>
>> Joza was kind enough to let me have a go at part of the
>> virtualization support: enabling 2-way communication between a
>> virt-host and virt-guests and I'm starting to dig into it. Per his
>> suggestion I'm working on a generic api that can be tuned based on
>> the
>> needs of the project and I wanted to bring it up to the list. A few
>> initial thoughts from Joza regarding communication were:
>>
>> Host->Guest - this needs to be able to select a certain virt guest
>> 1) Is a test actually running?
>> 2) Disconnect autotest-enabled eth
>> 3) run a command
>>
>> Guest->Host
>> 1) Test finished
>> 2) Test crashed
>> 3) Test running too long
>> 4) Destructive test finished, revert me to 'safe' snapshot
>>
>> He also noted a management interface on the server might be
>> interesting and James noted that it will need to play nice with
>> autotest.
>>
>> I know this is a long-term goal, but I'd love to hear everyone's
>> thoughts when time permits.
>
> Hey Scott,
>
> it's absolutely great that you want to help us with this one. We really
> appreciate it. I don't know how many details jskladan provided to you,
> but here is some additional information about it:
>
> https://fedorahosted.org/pipermail/autoqa-devel/2010-June/000642.html
> https://fedorahosted.org/autoqa/milestone/Virtualization
>
> Don't hesitate to ask and consult anything related in here.
>
> Thanks,
> Kamil
>
Thanks for the additional info. I'll definitely keep the list updated
and likely have lots of questions.
Best,
Scott
Re: plan for merging clumens branch onto master
by Kamil Paral
----- "Chris Lumens" <clumens(a)redhat.com> wrote:
> > I won't of course study all the code in great
> > detail, but I'll provide some feedback for the test object (and
> other
> > AutoQA-specific files) soon.
>
> These files should hopefully be very small.
Yes, which is great. I have just one comment below:
> @ExceptionCatcher()
> def run_once(self, *args, **kwargs):
> # This is not so good. We need to get the directory containing framework/
> # into the PYTHONPATH for the next import to work, but we don't know
> # which directory that'll be without crazy digging or install-time
> # manipulation. I prefer the digging.
> f = sys.modules[self.__class__.__module__].__file__
> sys.path.append(os.path.dirname(f))
> from framework import StorageTestFramework
The test directory (containing the framework/ subdir) is stored as the self.bindir
variable. It should be enough to put that into PYTHONPATH. Also, a
forthcoming patch will ensure that the test has its CWD set to self.bindir
(so local directory imports should work out of the box). Does that solve
your problem, or did I misunderstand it?
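In toy form (FakeAutoQATest is a made-up stand-in; the real class gets bindir from autotest), the digging could shrink to:

```python
import os
import sys

class FakeAutoQATest(object):
    """Toy stand-in: autotest stores the test's own directory in self.bindir."""
    def __init__(self, bindir):
        self.bindir = bindir

    def run_once(self):
        # instead of digging the module's file out of sys.modules,
        # just put the test directory on the import path
        if self.bindir not in sys.path:
            sys.path.append(self.bindir)
        return self.bindir in sys.path

print(FakeAutoQATest(os.getcwd()).run_once())  # True
```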
Apart from this, everything looks great (at least on my end). We can merge it
when you feel it's ready.
Re: [PATCH] Added a support for sending comments into Bodhi
by Kamil Paral
----- "James Laska" <jlaska(a)redhat.com> wrote:
> Merging my response for both mails into one. I hope that's not too
> confusing.
Great set of comments! Replying below.
> > 3. BODHI_POSTING_COMMENT_SPAN (comment duplication protection) is
> > currently set to 3 days
>
> Does it make sense to store/access this value in autoqa.conf in a
> [bodhi] section?
Yes, it may well be a configurable option. That would allow us to easily
change it any time we need. Great idea.
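For example, the section might look like this (a sketch only - the [bodhi] section and option name are hypothetical, nothing in the patch defines them yet):

```ini
# hypothetical addition to autoqa.conf
[bodhi]
# minutes to wait before re-posting an identical FAILED comment (3 days)
posting_comment_span = 4320
```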
> Also, now that I think of it (see below), shouldn't
> the bodhi server URL be a configuration option. Unrelated to this
> patchset, but this also applies to the koji server URL?
Interesting. That would certainly help the people I spoke with at FUDCon
Zurich. They were interested in AutoQA and were running their own
Koji instance.
I'll add this to the "add support for staging server" ticket; it's somewhat
related.
> > +def get_cfg(cfgfile, section, default_conf = {}):
> > + '''Get data from config file
> > +
> > + Args:
> > + cfgfile -- config file name
> > + section -- section of the config to be retrieved
> > + default_conf -- default configuration values
> > +
> > + Returns:
> > + Dictionary containing retrieved data on success.
> > + '''
>
> Can we make get_cfg() accept a list of cfg files to try, not just a
> single file (see similar example in lib/python/repoinfo.py)? I've
> been
> meaning to do this for the 'autoqa' script next time I'm in there.
> Mainly because it makes running tests/watchers directly from a git
> check-out difficult if the code only looks in
> '/etc/autoqa/autoqa.conf'.
> With repoinfo, it looks for both
> ['repoinfo.conf', '/etc/autoqa/repoinfo.conf'].
Everything fits together. Yes, that's certainly a good idea. Thanks to my
latest patch, config files are now transferred together with the test.
We can access repoinfo (and other conf files) in the test directory; we
don't need to maintain the /etc/autoqa files anymore.
I'll create a patch for repoinfo handling once this patch is in master, but
Martin may prepare the library changes even in this patch. Good idea.
> Speaking of, should the 'autoqa' script use the new get_cfg() method
> to access its conf?
Forwarding to Martin.
>
> > + if not _is_bodhi_testresult_needed(old_result,
> comment_time, result):
> > + print 'The test result already posted to bodhi.'
> > + return True
>
> Does that 'print' statement show up in our autoqa logs so we know
> when
> it decided not to post feedback into bodhi?
Yes, as with all other test output, this is accessible in the autotest client
logs (stored on the autotest server).
> > def run_once(self, **kwargs):
> > - pass
> > + os.chdir(self.bindir) # easiest way for tests to find their
> test scripts, config files, etc
> >
>
> It looks like there are some changes to lib/python/test.py
> (os.chdir(self.bindir)) and the test templates. Are those changes
> related to the bodhi comment support? Does changing the PWD for all
> our run_once() tests alter their outcome?
We should probably have mentioned this, it's a notable change. We now change
CWD to self.bindir before any test starts. The reason is fas.conf. It's
located in the test's directory (self.bindir), but autotest sets CWD to
self.outputdir by default. In order for our libraries to be able to read
fas.conf (and other conf files), we need to either
1. give them the full path
2. pass on the self object, so they can extract self.bindir
3. change CWD to self.bindir
We decided on option 3), because it seemed best. Martin did some testing
and changing CWD did not introduce any problems. Moreover, several tests
already did this internally (inside run_once()). The only drawback is that
we now require calling the superclass's run_once() at the beginning of the
run_once() method (same as for initialize() and setup()).
(Could that be simplified by using a decorator function, Josef?)
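Such a decorator could look roughly like this (a sketch with a fake test class; wiring it into the real AutoQATest would differ):

```python
import os
import tempfile
from functools import wraps

def in_bindir(method):
    """Hypothetical decorator: chdir to self.bindir before the method runs."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        os.chdir(self.bindir)
        return method(self, *args, **kwargs)
    return wrapper

class FakeTest(object):
    def __init__(self, bindir):
        self.bindir = bindir

    @in_bindir
    def run_once(self):
        # no explicit superclass run_once() call needed here
        return os.getcwd()

d = tempfile.mkdtemp()
print(os.path.samefile(FakeTest(d).run_once(), d))  # True
```

It would remove the need for every test to remember the super call, at the cost of one more decorator on each run_once().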
>
> > @ExceptionCatcher()
> > def run_once(self, baseurl, parents, name, **kwargs):
> > + super(conflicts, self).run_once()
> > if name:
>
> Same as above, is this related/required by the bodhi comment support?
> Also, if this is needed, we'll want to update the documentation?
We adjusted doc/test_class.py.template and we will probably also adjust
the wiki documentation, yes. This ticket will be re-assigned to the 'docs'
component once the patch has been committed.
>
> > 5. We will need to test this feature more before final release.
> Martin
> > did a few manual tests, but that's not enough. My best idea is
> that
> > we could use our development server, deploy the code there,
> deactivate
> > actual bodhi sending code and just intercept the calls and log
> them.
> > We would have it running for a week and only then we can be
> quite
> > sure it will really work once we make a new release and use it
> for
> > production machine.
>
> lmacken informs me that we can use a staging bodhi instance for
> testing
> comments (https://admin.stg.fedoraproject.org/updates). Email
> notifications are disabled, so we shouldn't have angry maintainers
> hunting us down as we test this new support :)
Oh, that sounds great. The only problem is that the staging server
doesn't seem to share content with the production server; the package
information is quite outdated. That means:
1. We probably won't get any new package update notifications.
2. If we listen to the production server and try to send comments to the
staging server, we probably won't find the matching updates.
So we probably can't hook it up permanently to our AutoQA staging
instance (in the future). But we can certainly use it for
semi-automatically sending a few dozen comments, for different use cases
etc. Great.
Re: [PATCH] Added a support for sending comments into Bodhi
by Kamil Paral
----- "Martin Krizek" <mkrizek(a)redhat.com> wrote:
> Hi all,
>
> this patch (https://fedorahosted.org/autoqa/ticket/205) allows AutoQA
> to send a test result as a comment to bodhi. You can turn on/off the
> support by setting 'send_bodhi_comments' in /etc/autoqa/autoqa.conf to
> true/false. Once the test is completed it will call the
> bodhi_post_testresult function from lib/python/bodhi_utils.py. For
> instance, the usage in upgradepath test would be:
>
> > bodhi_post_testresult(kwargs['name'], self.__class__.__name__,
> self.result, self.autotest_url, self.config),
>
> note that kwargs['name'] must contain the title of the update, for now.
> Thanks to Luke Macken, there will be support for posting comments by
> UPDATEID in the next release of bodhi.
>
> The comment will be posted only if the support is turned on and if the
> same comment is NOT already posted. The only exceptions are FAILED
> results. Those will be sent again to remind the developer about the
> issue. However, we obviously do not want to send them FAILED results
> every time the test fails. So there is a variable
> BODHI_POSTING_COMMENT_SPAN in lib/python/bodhi_utils.py which tells
> the script how long it should wait before posting the same comment
> again. Note that the result could be the same (FAILED), but the test
> could fail for a different reason than it did last time. That's why
> the comment contains a URL to the test result, so developers can
> check it. The format of the comments is as follows:
>
> > AutoQA: *test_name* test *result* on *arch*. The result can be found
> at: *url*
>
> for example,
> > AutoQA: upgradepath test PASSED on noarch. The result can be found
> at: http://server.com/results/14-root/client.com/
>
> FAS (Fedora Accounts System) credentials for logging into bodhi and
> sending comments are stored in /etc/autoqa/fas.conf.
>
> If you have any questions, please do ask.
Thanks, Martin, for this patch. Just a few more remarks:
1. You can find the same code in the mkrizek branch.
2. This patch adds *support* for sending bodhi comments, but it does
not enable it yet for any of our tests.
3. BODHI_POSTING_COMMENT_SPAN (comment duplication protection) is
currently set to 3 days
4. I have reviewed this patch with Martin throughout its development,
and it has an ACK from me. But please point out deficiencies if you see
any.
5. We will need to test this feature more before the final release. Martin
did a few manual tests, but that's not enough. My best idea is that
we could use our development server, deploy the code there, deactivate
the actual bodhi sending code and just intercept the calls and log them.
We would have it running for a week, and only then can we be quite
sure it will really work once we make a new release and use it on the
production machine.
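The duplicate-protection rule Martin describes could be sketched roughly
like this. Function names and data shapes here are illustrative only; the
real helpers live in lib/python/bodhi_utils.py and may look different.

```python
import datetime

# Comment duplication protection span (currently 3 days, per point 3).
BODHI_POSTING_COMMENT_SPAN = datetime.timedelta(days=3)

def format_comment(test_name, result, arch, url):
    # Mirrors the comment format quoted in Martin's mail.
    return ("AutoQA: %s test %s on %s. The result can be found at: %s"
            % (test_name, result, arch, url))

def should_post(comment, previous_comments, now):
    """previous_comments: iterable of (text, posted_at) pairs already
    attached to the bodhi update."""
    for text, posted_at in previous_comments:
        if text != comment:
            continue
        # An identical comment is already there: repost only FAILED
        # results, and only once the protection span has elapsed.
        if "FAILED" not in comment or now - posted_at < BODHI_POSTING_COMMENT_SPAN:
            return False
    return True
```

So a repeated PASSED comment is always suppressed, while a repeated FAILED
comment goes through again once it is older than the span, matching the
"remind the developer" behaviour described above.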
Re: plan for merging clumens branch onto master
by Kamil Paral
----- "Chris Lumens" <clumens(a)redhat.com> wrote:
> Hey everyone, as you may have noticed I have been working on tests for
> anaconda's storage code. With jlaska's help, I've got it in shape now
> where it runs just like I want. It does all the testing inside a VM and
> then communicates results to the outside world. My existing tests should
> cover the whole partitioning part of the test matrix, aside from
> resizing since we can't express that with kickstart.
>
> All of the code is on the clumens branch in tests/anaconda_storage/.
Wow, very complex test. Of course I won't study all the code in great
detail, but I'll provide some feedback for the test object (and other
AutoQA-specific files) soon.
> Eventually, I'd like to move it into tests/anaconda/storage/ instead
> since we are adding some different anaconda tests on the branch as
> well.
We can do such changes, but I can't guarantee it will be included in
the next release. But, your test works independently of those changes,
right? If you prefer, we can include your test in the next release
and change the path slightly later. I don't really see a problem in that.
>
> Anyway I would like to get this stuff merged to master soon (after I
> reorganize though) so it can be running and reporting results. What do
> I need to do to get this on the plan?
I think you just did it :) If we can cleanly include it in our current
code (it seems we can), there's no reason why it should not be part of
the next release (0.4.4). Unfortunately I don't have a spare bare metal
machine to test your code. (I should inquire how to use RHTS). How long
does the test run take? Are there any issues we should be aware of?
Thanks,
Kamil
plan for merging clumens branch onto master
by Chris Lumens
Hey everyone, as you may have noticed I have been working on tests for
anaconda's storage code. With jlaska's help, I've got it in shape now
where it runs just like I want. It does all the testing inside a VM and
then communicates results to the outside world. My existing tests should
cover the whole partitioning part of the test matrix, aside from
resizing since we can't express that with kickstart.
All of the code is on the clumens branch in tests/anaconda_storage/.
Eventually, I'd like to move it into tests/anaconda/storage/ instead
since we are adding some different anaconda tests on the branch as well.
Anyway I would like to get this stuff merged to master soon (after I
reorganize though) so it can be running and reporting results. What do
I need to do to get this on the plan?
- Chris