Re: [PATCH] move config files to /etc/autoqa
by Kamil Paral
----- "Kamil Páral" <kparal(a)redhat.com> wrote:
> This is a patch to move config files to /etc/autoqa, a single point
> for program configuration files. I have included autoqa.conf and
> repoinfo.conf, but not irb.conf (for the reasons mentioned in my
> previous patch). I ask for a careful review especially of autoqa.spec,
> as I don't have any expertise with that. I tried running some watchers
> and some tests, and everything seems to work fine. I also tried
> building an RPM, which worked too.
> I don't know whether some transition support from the old conffile
> location to the new one is required (with regard to RPM upgrades)?
>
> Kamil
Hello Will,
just a reminder: please don't forget to review this tiny patch,
and also the initscripts patch (the big one).
Thanks,
Kamil
14 years, 1 month
Re: Job number/job tag visibility into client
by Josef Skladanka
Hello,
Thanks for such a quick answer.
So, if I understand you correctly, if I have a control file containing this:
TIME = "SHORT"
AUTHOR = "Will Woods <wwoods(a)redhat.com>"
DOC = """
This test runs rpmlint to catch common packaging problems which may have
crept into the package since the last build.
"""
NAME = 'rpmlint'
TEST_TYPE = 'CLIENT'  # SERVER can be used for tests that need multiple machines
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'
# post-koji-build tests can expect the following variables from autoqa:
# envr: package NVR (required, epoch can be skipped)
# name: package name
# kojitag: koji tag applied to this package
job.run_test('rpmlint', name=name, envr=envr, kojitag=kojitag,
             config=autoqa_conf)
I could pass job.tag as another argument to job.run_test, like:
job.run_test('rpmlint', name=name, envr=envr, kojitag=kojitag,
             config=autoqa_conf, job_tag=job.tag)
Is that right?
Joza
On 03/30/2010 07:19 AM, Lucas Meneghel Rodrigues wrote:
> Hi guys, after researching a bit, here's what I found: the autotest
> scheduler will call autoserv instances and pass them a job tag (flag
> -P):
>
> 03/24 09:51:35 INFO |drone_mana:0497| command = ['nice', '-n', '10', '/usr/local/autotest/server/autoserv', '-p', '-m', u'virtlab105.virt.bos.redhat.com', '-r', u'/usr/local/autotest/results/541-debug_user/virtlab105.virt.bos.redhat.com', '-u', u'debug_user', '-l', u'Development (RHEL 6)', '-P', u'541-debug_user/virtlab105.virt.bos.redhat.com', '-n', '/usr/local/autotest/results/drone_tmp/attach.1', '-c']
>
> In this case, the job tag is
>
> 541-debug_user/virtlab105.virt.bos.redhat.com
>
> The job number goes into that job tag. The tag is generated by the
> execution_tag method, which combines the job id, the user and the
> execution subdir.
>
> The job tag is passed to autoserv, which will execute autotest on a
> client. The job tag can be accessed from inside a control file: every
> autotest control file has a job object, and the tag is available as
> job.tag.
>
> So, if I understood correctly what Josef explained to me, the solution
> is fairly straightforward:
>
> 1) In the control file that autoqa generates, we could have code that
> writes the job tag in, say, the job keyval:
>
> utils.write_keyval(job.resultsdir, {"job_tag": job.tag})
>
> So this information has permanent storage.
>
> 2) The job tag is used when placing the results on the autotest
> server, which makes linking to the job results a piece of cake:
>
> http://autotest.virt.bos.redhat.com/results/556-debug_user/virtlab104.vir...
>
> It's basically os.path.join(results_url, job_tag)
>
> I *think* this is a plan of action. I might have not explained myself
> very well, but I did try :) If some of this, or nothing at all, makes
> sense, ping me on irc.
>
> Lucas
>
>
Fwd: Job number/job tag visibility into client
by Josef Skladanka
-------- Original Message --------
Subject: Job number/job tag visibility into client
Date: Tue, 30 Mar 2010 02:19:41 -0300
From: Lucas Meneghel Rodrigues <lmr(a)redhat.com>
To: James Laska <jlaska(a)redhat.com>, jskladan(a)redhat.com
Hi guys, after researching a bit, here's what I found: the autotest
scheduler will call autoserv instances and pass them a job tag (flag
-P):
03/24 09:51:35 INFO |drone_mana:0497| command = ['nice', '-n', '10',
'/usr/local/autotest/server/autoserv', '-p', '-m',
u'virtlab105.virt.bos.redhat.com', '-r',
u'/usr/local/autotest/results/541-debug_user/virtlab105.virt.bos.redhat.com',
'-u', u'debug_user', '-l', u'Development (RHEL 6)', '-P',
u'541-debug_user/virtlab105.virt.bos.redhat.com', '-n',
'/usr/local/autotest/results/drone_tmp/attach.1', '-c']
In this case, the job tag is
541-debug_user/virtlab105.virt.bos.redhat.com
The job number goes into that job tag. The tag is generated by the
execution_tag method, which combines the job id, the user and the
execution subdir.
The job tag is passed to autoserv, which will execute autotest on a
client. The job tag can be accessed from inside a control file: every
autotest control file has a job object, and the tag is available as
job.tag.
So, if I understood correctly what Josef explained to me, the solution
is fairly straightforward:
1) In the control file that autoqa generates, we could have code that
writes the job tag in, say, the job keyval:
utils.write_keyval(job.resultsdir, {"job_tag": job.tag})
So this information has permanent storage.
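For illustration, here is a minimal stand-in for that helper, assuming the plain key=value keyval file format (utils.write_keyval is autotest's own function; this only sketches its effect, the hostname in the tag is taken from the log above):

```python
import os
import tempfile

def write_keyval(dirname, keyvals):
    # Stand-in sketch of autotest's utils.write_keyval: append one
    # "key=value" line per pair to a 'keyval' file in dirname.
    with open(os.path.join(dirname, "keyval"), "a") as f:
        for key, value in keyvals.items():
            f.write("%s=%s\n" % (key, value))

# Usage, mirroring the control-file snippet above:
resultsdir = tempfile.mkdtemp()
write_keyval(resultsdir, {"job_tag": "541-debug_user/virtlab105.virt.bos.redhat.com"})
print(open(os.path.join(resultsdir, "keyval")).read().strip())
```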
2) The job tag is used when placing the results on the autotest
server, which makes linking to the job results a piece of cake:
http://autotest.virt.bos.redhat.com/results/556-debug_user/virtlab104.vir...
It's basically os.path.join(results_url, job_tag)
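For example, with a hypothetical results_url (posixpath.join is used here so the separator is always '/', as a URL needs, regardless of platform):

```python
import posixpath

# Hypothetical base URL standing in for the real autotest results URL.
results_url = "http://autotest.example.com/results"
job_tag = "541-debug_user/virtlab105.virt.bos.redhat.com"

# Joining the base URL with the job tag yields the link to the results.
link = posixpath.join(results_url, job_tag)
print(link)
# → http://autotest.example.com/results/541-debug_user/virtlab105.virt.bos.redhat.com
```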
I *think* this is a plan of action. I might have not explained myself
very well, but I did try :) If some of this, or nothing at all, makes
sense, ping me on irc.
Lucas
2010-03-26 - AutoQA resultsdb meeting recap
by James Laska
# AutoQA ResultsDB Discussion
# Date: 2010-03-26
# Time: 14:00 UTC (10:00 EST, 15:00 CET)
# Participants - wwoods, kparal, jskladan, jlaska
= Links =
* <https://fedoraproject.org/wiki/AutoQA_resultsdb_schema>
* Schema #1 - includes the TCMS concept for notification of results
* Schema #2 - schema without TCMS
* Schema #3 - schema defining a general Metadata table, not specific
tables for specific test classes
* <https://fedoraproject.org/wiki/AutoQA_resultsdb_use_cases>
= Terminology =
* Test_program - The code which performs the testing
* Test - Description/metadata (Name, Owner, Purpose, Tested
Package ...) ~ Table Test
* Testrun - Result of running the actual Test_program ~ Table Testrun
* Test class - Package tests, Installation tests, Repo tests ...
= Agenda =
* Test
* What information do we need to store? (Name, Version,
Description ...)
* Identifying a Test from a Test_program
* While running a Test_program it's vital to know which Test it
performs, so the result of the Testrun can be correctly connected to
the Test
* What is the best way to identify it from inside the Test_program -
is Test.name + Test.version sufficient?
-> Can the wiki be used as a "cheap" test case management system for
now (maps test_case -> test_plan)?
* Test classes
* What test classes do we have?
* What information do we need to store for each test class?
-> The different dashboards/views are responsible for listing the
key/value pairs they will include
* Storing specific values for different Test classes:
* What is the best way to do this?
* Specific table for each test class
* Generic table for key/value pairs
* Mediawiki-like tags (which IMHO equal generic key/value pairs)
* How to give users a way to extend test classes with minimal
(ideally no) db-admin interference? Do we want that?
* IMHO this could easily be done via key/value pairs
* do we want to define required keys for each test class?
-> To comply with some test class, the test should provide a few
basic common keys
-> Don't force requirements on people; if they don't comply, the
result will just not show up in front-ends etc. (but they can write
their own front-end/listener)
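The "required keys, but no hard rejection" idea above could be sketched like this (the class names and keys are invented for illustration; the actual common keys per class were still to be defined):

```python
# Illustrative only: these test classes and keys are hypothetical.
REQUIRED_KEYS = {
    "package": {"name", "envr"},
    "repo": {"reponame"},
}

def complies(test_class, keyvals):
    # A result complies with a test class if it provides the basic
    # common keys; non-complying results are not rejected, they just
    # won't show up in that class's front-end views.
    return REQUIRED_KEYS.get(test_class, set()) <= set(keyvals)

print(complies("package", {"name": "bash", "envr": "bash-4.1.2-1.fc13"}))  # True
print(complies("package", {"name": "bash"}))  # False
```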
* Tagging
* What is the purpose of tagging?
* How do we want to use the tags?
* can jlaska give more detail on the discussion with Dave Lawrence?
* performance issues (millions of rows in the table to be searched) -
caching?
* Testplans
* Hard-coded Test_program with multiple 'phases' (like RATS)
* How to represent testplans in the database? (Idea1 vs Idea3)
* one Test for each 'phase', so we can easily report progress
* additional information in Test about "how many phases it has", so
we can report progress
-> Decided to use the wiki for an interim test plan system. The wiki
will provide information about the test cases in a test plan. Perhaps
a specific format is needed?
* resultdb interface
= Ideas =
* Fedora Message Bus - It will be necessary to communicate with
external subjects (sending notifications to Bodhi, etc.). It can even
be used for communication with the front-end.
* TCMS - We are reluctant to implement this stuff inside ResultDB; we
should leave it up to Nitrate or a similar project. (right?)
* Number of phases in a test case - It can be provided by the wiki
(referenced by the test's URL) and parsed from there, so it doesn't
have to be stored in ResultDB (and we get rid of the TCMS part of the
ResultDB schema)
= Open Questions =
* How long to store old results? Archive them after end-of-release?
* How can we access the test result details (aka log files, stderr
etc...)? Autotest doesn't yet provide an easy way to find the job id.
The job id is needed to locate the log directory for results
* Once the message bus is set up, determine what notifications are
needed (for our test results views/dashboards, and for other projects
such as Fedora Community, bodhi, koji etc.)
= Action items =
* [jskladan+kparal] Create updated DB schema according to the discussion
* [wwoods?] Study MediaWiki RPC mechanism
* [kparal] Define default (common) key-values for basic test classes
(installation tests, repo tests, package tests)
* [jskladan] Communicate with Autotest developers how to get Job ID in
autotest client
* [jlaska] Try to set up a qpid instance and send some test data through
it (from provider to listener)
(jkeating (Oxf13) is a good person to talk to about this!)
= Next Meeting =
* Check-in on mailing list next week
* Schedule follow-up as needed
Watch the branched repo for updated install images
by James Laska
Greetings,
I've been using the following changes on the internal production instance for the past few weeks. Since install images are available for the 'branched' repository, I'd like to have post-tree-compose look there for installable images as well. The following patches:
* update the default repoinfo.conf to add a 'branched' repo definition
* update the post-tree-compose hook to monitor the 'branched' repo for new install images
Alternative approaches:
* I could have just used the existing 'f13' repo definition in repoinfo.conf, but I'd rather define the 'branched' repo itself, and not have to change any hook code when we change from f13 -> f14 (branched).
* Since we no longer provide install images for 'rawhide', we could change the hook to watch only the 'branched' repo instead of both 'branched' and 'rawhide'. I'd be okay with this, but my initial preference was to monitor both.
Questions/concerns?
Thanks,
James
2010-03-22 - AutoQA resultdb ML check-in
by Josef Skladanka
Hello gang,
last week, I made a simple proof of concept using the resultdb database
schema <https://fedoraproject.org/wiki/AutoQA_resultsdb_schema> and the
xmlrpc interface <http://rajcze.homelinux.net/resultdb/xmlrpc.py>. It
can only start/stop a testrun (you can try it like this
<http://rajcze.homelinux.net/resultdb/example.txt> - you can invent any
test name/version combination; if it's not already in the
database<http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_tests>,
a new test will be created. Watch the state here
<http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_tes...>),
but it made me realize some things I'd like to share:
Tests and Testruns:
===================
1) Even though it's not strictly required for storing the results, we
certainly need to store some metadata to be able to show the results in
a reasonable way (table Test in the schema). For basic usage, I suggest
the fields Name, Version, Tested Package and Description. These should
make it possible to search the tests in a useful way.
2) We need a way to identify which test is actually executed in the
testrun. For now, I use identification based on the
$test_name/$test_version scheme, which is converted to a UUID5 [1] in
the URL namespace. I'm not sure whether the UUID isn't duplicate
information (since it's derived from two other known values in the
database), but it seems reasonable at least as a unique identifier in
the database.
For now, my API uses name/version parameters for identification;
maybe we would like to store the UUID inside the test source (even
though I'm not a big fan of this solution) and use it directly. (hope
this is not too confusing :) )
Testplans and Jobs
==================
My starting idea was that we would have a number of standalone Tests
(one Test equals one Testrun), and Testplans would be just a set of
these Tests, run in a specified order. One would basically create the
testplan 'on the fly' from existing Tests (and/or Testplans) using the
TCMS-like-thingie, and the rest would be taken care of automatically.
As you can imagine, this could be quite hard to implement using AutoQA,
so I talked with wwoods about it, and I believe we agreed that we
would love to have this functionality, but it's not a problem to solve
*now*.
So how could Testplans work *now*
---------------------------------
1) Testplans will be hand-written and 'hard-coded', using the resultdb
only as metadata/results storage.
2) From the AutoQA point of view, a Testplan is just an ordinary test,
which will subsequently run each required Test and report the results
to the resultdb.
3) At the beginning of executing a Testplan, it will create a new
record in the Job table, and will add a record to the _Job-Testrun
table for each executed Testrun (aka Test). This way, we'll be able to
show overall progress (as James had in his mockup), and we will use
this information in the frontends as well - for example, one could
want to compare subsequent executions of a given Testplan.
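Point 3 might look roughly like this in-memory sketch (the function names are illustrative, not the actual resultdb API; the dicts stand in for rows in the Job, Testrun and _Job-Testrun tables):

```python
# Stand-ins for the Job, Testrun and _Job-Testrun tables.
jobs = []
testruns = []
job_testruns = []  # link table: which Testruns belong to which Job

def start_job(testplan_name):
    # Create one Job record at the start of a Testplan execution.
    jobs.append({"id": len(jobs) + 1, "testplan": testplan_name})
    return jobs[-1]["id"]

def record_testrun(job_id, test_name):
    # Record one Testrun and link it to its Job.
    testruns.append({"id": len(testruns) + 1, "test": test_name})
    job_testruns.append({"job_id": job_id, "testrun_id": testruns[-1]["id"]})

# A Testplan run creates one Job record, then one link per executed Test:
job_id = start_job("package-sanity")
for test in ("rpmlint", "rpmguard"):
    record_testrun(job_id, test)

# Overall progress for a frontend is then a simple count per job:
done = sum(1 for jt in job_testruns if jt["job_id"] == job_id)
print("%d testruns recorded for job %d" % (done, job_id))  # 2 testruns recorded for job 1
```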
Questions
=========
1) Are there any tests we would like to use in more than one Testplan?
I.e. is there a need to tell a Test apart from a Testplan? (for me,
it's certainly a good thing)
2) What do you think about the UUID identification? I'm sure we need
to have some way to tell the tests apart (at least to be able to
automatically store the results :-D), but is a UUID generated from
name/version better than a "random" UUID or not? (for me, it's better
to have name/version, since one could almost automatically re-use the
metadata in a simple way when only the test version changes, and
generating the UUID from human-readable values makes more sense to me)
Links
=====
[1] - <http://docs.python.org/library/uuid.html#uuid.uuid5>,
<http://tools.ietf.org/html/rfc4122.html>