New plugin API
by Mikolaj Izdebski
Hi all,
First of all, thank you for working on Fedora-Review, which has been really useful
for me -- it saved me a lot of time and probably improved the quality of my reviews.
I would like to share my opinion on what the Fedora-Review plugin API should look
like, from the perspective of a user who has done a number of reviews and who doesn't
know the internal API.
PREAMBLE
The normal F-R workflow (simplified a lot): first you build the package and run a
number of automated tests (as part of F-R and with external tools). Then you go through
the review template and perform manual checks. In most cases you can't open and
read every source file, but you'll probably end up running tools like find
and grep, looking for package-specific patterns.
After doing your n-th review you realize which parts are boring and brainless
-- you find candidates for test automation. At this point you could probably
write a custom script or an F-R plugin (using the Python or JSON API).
Writing a custom plugin takes a lot of time (I know from my own experience),
so I think the first option is definitely better. After all, you remember
well all the commands you had to run; you just need to write them down in the
form of a script. You can even copy various parts directly from your shell
history. But running these scripts manually again and again can be
annoying. What you need is glue between F-R and the scripts -- the API I am
proposing.
SOME ASSUMPTIONS
The API should:
1) Be easy to learn.
If you need to spend several (or more) hours to learn the F-R API and the API of
some JSON library, then creating a custom test (plugin) may not be worth
the effort. However, if you somehow know most of the API before you even start
thinking about plugins (more about that below), creating test scripts will be
straightforward.
2) Be multilingual.
You may want to write the simplest tests in just Bourne shell, and maybe more
complicated ones in other scripting languages. Also, the Perl SIG may want to write
tests in Perl, the Python SIG in Python, the Java SIG in... anything but Java :) You can't
expect Perl proponents to write Python code only to create a plugin for F-R.
3) Be backwards compatible.
If you spend some time developing a set of plugins you don't want to rewrite
them because of an F-R upgrade. During the upgrade from 0.1.x to 0.2.x I had to
rewrite my plugin. Luckily I had only one, and Stano helped me with the
upgrade.
4) Be callback-less.
The current JSON API gives plugins only some information about the package being
reviewed and requires them to call F-R back to obtain more info (like the sections
of the spec file). This means formatting a request, then reading and parsing
(semi-manually!) the reply. But this information could have been automatically
provided by F-R from the beginning! F-R should give plugins all the info it has
about the package without the plugins having to ask for it.
MY PROPOSAL
Basic information about the package, like name, version and type (C, Java, R, ...), is
passed in exported environment variables, whose names should be compatible
with the names from `rpm --querytags` where possible. More detailed information is
passed on disk in the form of files. This includes things like the spec file, unpacked
sources, rpmlint output, symlinks to the builddir, unpacked binary rpms and so on.
Users should already know basic RPM macros like %{name} or %{epoch}, so using
the corresponding environment variables ($name and $epoch) should be straightforward. People
who have previously used F-R will be familiar with the structure of directories created
by F-R (srpm-unpacked, upstream, BUILD, and so on). Test results are simply
printed on stdout. The exit code decides whether the test passed, failed or was inconclusive
(exit codes for that are already standardized). Attachments can be created by
writing attachment files somewhere in the filesystem.
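To make this concrete, here is a minimal Python sketch of the glue layer I have in
mind. It is not the actual Fedora-Review API: the run_external_check() helper, the
pkg_info dictionary and the exit-code mapping (0 = pass, 77 = not applicable,
anything else = fail, roughly the automake test convention) are all assumptions of
mine:

import os
import subprocess

def run_external_check(script, review_dir, pkg_info):
    """Run one user-supplied test script with package data in its environment."""
    env = dict(os.environ)
    env.update(pkg_info)            # e.g. {'name': 'foo', 'version': '1.0', 'epoch': '0'}
    proc = subprocess.Popen([script], cwd=review_dir, env=env,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, _ = proc.communicate()
    if proc.returncode == 0:
        result = 'pass'             # test passed
    elif proc.returncode == 77:
        result = 'not_applicable'   # the script decided it does not apply
    else:
        result = 'fail'             # anything else counts as a failure
    return result, output           # stdout becomes the review comment

A check script itself could then be a few lines of shell copied almost verbatim from
the reviewer's history, run from the review directory with srpm-unpacked, upstream,
BUILD and friends already in place.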
I believe that implementing something similar to my approach would greatly
enhance Fedora Review. It could be implemented as a JSON plugin (which is what I
actually did) that then has a separate API for communicating with the different
scripts, or implemented directly in Python. It's up to you, Fedora-Review
developers, to decide whether you want to support my idea or not. If so, I
would be glad to help. In any case I have my implementation working now,
so I'm happy either way :)
Keep up the good work!
Mikolaj Izdebski
Redefining the python tests interfaces
by Alec Leamas
Stanislaw and I have had a long discussion on IRC today. We have agreed
on a new interface to the test (a.k.a. check) object: a bunch of
properties and a single run() method. Compared to today, it means that
the is_applicable() function is no longer part of the interface. Instead,
run() returns 'not_applicable' as needed.
Unfortunately, we have designed two ways to implement this in the Python
context: one based on inheritance, and one based on decorators. Neither of
us sees it as a major problem to pick either solution, but we must
decide, and we need some input.
The first method is based on inheritance.
-------------------------------------------------
class CheckBase(object):

    def run(self):
        if self.is_applicable():
            self.run_if_applicable()
        else:
            self.set_passed('not_applicable')


class JavaCheckBase(CheckBase):

    def is_applicable(self):
        if self.has_files("*.jar") or self.has_files("*.pom"):
            return True
        else:
            return False
A standard test overrides .run_if_applicable(), which is only called if
is_applicable() is true:
    def run_if_applicable(self):
        name = self.get_files_by_pattern("/usr/share/javadoc/%s"
                                         % self.spec.name)
        ....
        self.set_passed('pass')
A test which needs to specify whether it is applicable can do so by
overriding is_applicable() as today or, more straightforwardly, by overriding run():
    def run(self):
        if not self.has_files("*.pom"):
            self.set_passed('not_applicable')
            return
        ...
        self.set_passed('pass')
The other way is based on decorators:
-----------------------------------------------------
class CheckBase(object):

    def run(self):
        self.set_passed('inconclusive')


class JavaCheckBase(CheckBase):

    @staticmethod
    def if_javapackage(run_f):
        def wrapper(self, *args, **kwargs):
            if self.has_files("*.jar") or self.has_files("*.pom"):
                return run_f(self, *args, **kwargs)
            else:
                self.set_passed('not_applicable')
        return wrapper
A standard test uses the decorator to run only when applicable:
    @JavaCheckBase.if_javapackage
    def run(self):
        files = self.get_files_by_pattern("/usr/share/javadoc/%s/*.html"
                                          % self.spec.name)
        ....
        self.set_passed(...)
but can instead choose to handle applicability itself:
    def run(self):
        if not self.spec.find(self.skip_regex):
            self.set_passed('not_applicable')
            return
        ......
        self.set_passed(....)
------------------------------------------------------------------
Both methods have their pros and cons. We really need some input to
select a way of doing this. Anyone out there?
On behalf of Stan
--alec
Feature Branches or How To Come to a Conclusion
by Alec Leamas
Back from holidays, I have five feature branches. This is somewhat
insane, and I need to come to a conclusion on whether to trash, merge or update
each of them. Here we go:
koji: Use koji scratch builds. This is just a manpage update, and a
separate script to download koji scratch builds. I could commit this
myself, but I hesitate since the corresponding issue #17 is assigned to
sochotni. As long as no one says otherwise, I intend to commit this one.
no-must: Remove most MUST/SHOULD from the output. This is discussed in the
related bug #68. We need to finish this discussion, preferably in the
bug. In particular, I need sochotni to reply to my last attempt to find
common ground.
rm-bugz: This was discussed on the list, see e.g.
https://lists.fedorahosted.org/pipermail/fedorareview/2012-July/000006.html.
My conclusion is that removing the bugzilla options is acceptable if
(and only if) a separate tool is provided with these options. Unless
there are other opinions, I might merge such a solution. I said 'might',
didn't I?
vers-info: Keep the devel branch ready to produce post-release rpms (which
means some fixes in the release branches). I think sochotni and I agreed on
this, but it was a long time ago. Once again, as long as there are no other
opinions, I plan to merge this.
So, you have been warned. I guess this is the only way to get attention
--alec
[PATCH] Fix non-specific clean message for EPEL5 %clean
by Richard Marko
According to guidelines, this is only required for EPEL5:
http://fedoraproject.org/wiki/Packaging/Guidelines#.25clean
Signed-off-by: Richard Marko <rmarko(a)redhat.com>
---
src/FedoraReview/checks/generic.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/FedoraReview/checks/generic.py b/src/FedoraReview/checks/generic.py
index 8a4a88d..31a25be 100644
--- a/src/FedoraReview/checks/generic.py
+++ b/src/FedoraReview/checks/generic.py
@@ -185,7 +185,7 @@ class CheckClean(CheckBase):
def run(self):
passed = True
- msg = 'Clean would be needed if support for EPEL is required'
+ msg = 'Clean would be needed if support for EPEL5 is required'
sec_clean = self.spec.get_section('%clean')
for sec in sec_clean:
sec_lines = sec_clean[sec]
@@ -195,7 +195,7 @@ class CheckClean(CheckBase):
for line in sec_lines:
if regex.search(line):
passed = False
- msg = 'Clean is needed only if supporting EPEL'
+ msg = 'Clean is needed only if supporting EPEL5'
break
self.set_passed(passed, msg)
--
1.7.11.2
Re: fedora-review JSON API
by Mikolaj Izdebski
Hi,
> Let me know what you think...
First of all, let me begin with my motives. I think that Fedora Review
can be useful not only for doing initial package reviews but also for
performing regular QA checks of existing packages. It's especially useful
if you are maintaining many packages of the same type (like Java or
Perl packages) for which you can write many automated tests.
complex API so that any user could write their own checks. I think that
simple shell script is good enough in most of the cases. Some
variables like pkg name, version etc would be exported as
environmental variables, more complex things would be extracted on
disk (as F-R does now, but more things could be extracted too). So my
goal was creating a layer that could integrate those shell scripts
with Fedora Review. Some proof of concept will be available soon.
I wrote a JSON plugin instead of native python plugin only because I
don't know Python, but I talked to Stano and we considered
reimplementing that in Python to overcome some limitations of JSON
API.
> One question is how to create a reasonable unit test. You seem to have
> done some testing... and I more or less rewrote the existing test suite
> as part of the 0.2.0 release. Maybe we could achieve something together?
If you don't mind having non-Python test cases, then I could write some
basic tests. Moreover, a working plugin framework (which I am developing
as free software) can serve as a kind of test too, especially if there are
not that many other users of this API (if there are any at all).
Mikolaj
PS. <rant>
I also think that more tests could be automated, and it doesn't
have to be a "pass or fail" type of automation. Let's take "Large
documentation files must go in a -doc subpackage" as an example. AFAIR
for now it's not implemented at all. But you could check the size: if it's
less than 1 kB pass, more than 1 MB fail, in between -- inconclusive.
The advantage is that both worst cases are avoided. Case
1) There is huge documentation and the reviewer didn't notice it -- users
suffer. That simple test detects this; and if the result was merely inconclusive,
users won't notice much (what's 1 MB today?). And case 2) -- there is
no documentation, or only a tiny 5-line README, and the reviewer spent time
looking for doc files which are absent. It's obvious that 1 kB is OK.
That's only an example, but you get what I mean.
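In code, the heuristic could be as simple as the following rough sketch. The
1 kB / 1 MB thresholds are just the numbers from the example above, the list of
documentation files would have to be supplied by F-R, and check_doc_size() is a
hypothetical helper, not an existing check:

import os

def check_doc_size(doc_files, small=1024, large=1024 * 1024):
    """Return 'pass', 'fail' or 'inconclusive' from the total size of %doc files."""
    total = sum(os.path.getsize(f) for f in doc_files)
    if total < small:
        return 'pass'            # clearly no need for a -doc subpackage
    if total > large:
        return 'fail'            # large docs should really go into -doc
    return 'inconclusive'        # leave it to the reviewer's judgement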
Also, some test results imply others. For example, "The package
successfully compiles and builds into binary rpms on at least one
primary architecture" together with "Package supports only 1 architecture"
implies "The package successfully compiles and builds into binary rpms
on all supported architectures." This rule would apply to many packages.
</rant>