----- "James Laska" <jlaska(a)redhat.com> wrote:
> Nice work Kamil, this looks great. I don't mind the output format to
> be honest, but I think the discussion you and Will had during the
> recent QA meeting [1] about collapsing results from different
> architectures together makes sense. We probably want to err on the
> side of displaying less information for developer review than scaring
> them away with tons of repeated information. Besides, we can scare
> them later by just providing the results against their builds :)
It's already filed as a ticket:
https://fedorahosted.org/autoqa/ticket/94
I want to have it implemented this week. Good idea, Will.
> I don't think this is specific to the rpmguard test, but it's a little
> confusing for me (not knowing the internals) to see two different
> scripts 'rpmguard' and 'rpmguard.py' in the test directory. Looking
> further, I believe rpmguard.py provides the class that autotest calls
> from the provided control file and is intended to be imported only.
> Is that correct?
Actually, I think it is rather specific to the rpmguard test. In other
tests you usually use some binary from the Fedora repos located in
system paths (you just do 'yum install foo' in setup), so we can afford
to name the test the same as the binary (e.g. the rpmlint binary and the
rpmlint test).

For rpmguard it got a little more complicated, because I develop
rpmguard itself in the same directory (and we don't have it packaged
yet). And because the wrapper must be called <test>.py, I renamed
rpmguard.py to rpmguard and created the wrapper under the old name.

There are a few solutions:
1. change the test naming conventions
2. rename the rpmguard test
3. move rpmguard upstream to another folder
4. move rpmguard upstream to its own project (overkill?)

The third option is probably the easiest. What do you think? Just tell
me where you would prefer to put it, so we don't make a mess of the
project structure.
> Some other thoughts ...
>   * I like how you're using the setup() method and checking for
>     specific versions of required software. I imagine this might
>     become a common thing; I wonder if in the future we could offer
>     a common require_version() method in the autoqa base package
>   * In the initialize() method ... should we look into using mktemp
>     (or similar) here instead of os.makedirs? Could multiple runs
>     of the test pick up stale data?
I have to admit to cheating here: I just copied that stuff from Will's
rpmlint test :) I haven't checked whether we have any conventions about
where to save data, so maybe Will can answer that better. I certainly
don't need the data to be persistent in this case, so mktemp works fine
for me. Multiple runs should work OK in the current implementation,
because urlgrabber seems to overwrite existing files.
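For the record, switching to tempfile.mkdtemp() could look roughly like
this. It is a sketch only: the class is hypothetical and the method
names merely mirror autotest's initialize()/cleanup() convention, this
is not actual rpmguard code:

```python
import shutil
import tempfile

class ScratchDirMixin:
    """Sketch: give each test run its own throwaway directory instead
    of a fixed os.makedirs() path, so stale downloads can never leak
    between runs. Hypothetical class; method names only mirror
    autotest's initialize()/cleanup() convention."""

    def initialize(self):
        # mkdtemp() creates a fresh, uniquely named directory each run
        self.tmpdir = tempfile.mkdtemp(prefix='rpmguard-')

    def cleanup(self):
        # Remove the scratch directory and everything fetched into it
        shutil.rmtree(self.tmpdir, ignore_errors=True)
```

Since every run gets a unique directory, the stale-data question goes
away entirely, at the cost of having to clean up after ourselves.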
>   * In run_once(), I like that you are displaying errors on stdout
>     as well as in the log. I wonder if we could rely on the python
>     logging module (or a common autoqa logging subclass to provide
>     proper format and loglevels) to send results to multiple places?
>     The trick I think will be writing this such that it can still
>     run in stand-alone mode. Looks like Lucas and Michael are doing
>     similar things with integrating the KVM tests into upstream
>     autotest [2]. Is this something we could make use of as well?
>
> [1] https://fedoraproject.org/wiki/QA/Meetings/20091214#rpmguard_integration
> [2] http://patchwork.kernel.org/patch/40190/
Some kind of logging would be nice; I was also thinking about this when
I printed the messages both to stdout and to the results. Now that I
want to reformat the rpmguard test output to eliminate duplication, I am
not sure I would use it myself (the results output will probably differ
from stdout), but for most tests this could simplify things.
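Just to make the idea concrete, a shared logger writing to both stdout
and a results file could be set up roughly like this. A sketch only: the
logger name, levels and format below are my assumptions, not existing
autoqa code:

```python
import logging
import sys

def make_logger(logfile):
    """Sketch of one logger that sends every message both to stdout and
    to a results file, so tests stop printing the same text twice.
    Logger name, levels and format are assumptions, not autoqa API."""
    log = logging.getLogger('autoqa.rpmguard')
    log.setLevel(logging.DEBUG)
    fmt = logging.Formatter('%(levelname)s: %(message)s')

    console = logging.StreamHandler(sys.stdout)  # terse console output
    console.setLevel(logging.INFO)
    results = logging.FileHandler(logfile)       # full detail on disk
    results.setLevel(logging.DEBUG)

    for handler in (console, results):
        handler.setFormatter(fmt)
        log.addHandler(handler)
    return log
```

Stand-alone mode would not need anything special: the same function is
just called with a local file path instead of an autotest results path.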
I hope our one and only autoqa guru wwoods can tell us more about this
topic :)
_______________________________________________
autoqa-devel mailing list
autoqa-devel(a)lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/autoqa-devel