On Tue, 11 May 2010, James Laska wrote:
What interested me about this exercise was how a new test might be
added to rpmguard. The power of rpmguard is in abstracting the details of
locating the previous packages and providing a common framework for
comparisons against the two package sets. Presently, the tests exist in
the main driver.
How do we want to extend this in the future to make it easier/faster to
support new comparison tests? Should tests exist in stand-alone python
scripts, where they all accept a common set of arguments? It seems like
rpmlint is structured this way [4], is this good/bad ... does this make
user-contributed comparative tests easier?
Something Kamil proposed a while back, should rpmguard be a stand-alone
tool (much like rpmlint)? Instead of being bundled inside autoqa,
anyone could do: `yum install rpmguard`
Apologies for the ranty nature of the email, I'm still thinking this
through. My main objective is to get a sense of where rpmguard should go.
Hopefully these thoughts can lead to a wiki or TRAC roadmap, and
something we could implement once our immediate objectives are behind us.
okay - so this might be my own imagination but I thought there was a goal
of autoqa to do the following:
1. show issues with pkgs
2. show significant/dangerous CHANGES to pkgs
3. provide a way for a packager/the-powers-that-be to stop a package if it
doesn't get at least a score of N
That last one might be my own wishful-thinking.
But I was sorta thinking that rpmguard (or something else above it) could
have a dir of python scripts - like yum plugins.
Each one of the [enabled] scripts would be passed a 'conduit' that maybe
looks like:
conduit.get_old_package_file()
conduit.get_new_package_file()
conduit.get_old_package_hdr()
conduit.get_new_package_hdr()
# and this is just my own wishfulness
conduit.get_old_package_object()  # using yum's package objects
conduit.get_new_package_object()  # using yum's package objects
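A rough sketch of how that plugin dir plus conduit idea could hang together (all names here are hypothetical; the header loader is injected so the sketch stays self-contained, but in rpmguard itself it would presumably use the python-rpm bindings, e.g. rpm.TransactionSet().hdrFromFdno(...)):

```python
import importlib.util
import os

class Conduit:
    """Hypothetical conduit handed to each rpmguard plugin script,
    mirroring the accessors sketched above."""

    def __init__(self, old_path, new_path, hdr_loader):
        self._old = old_path
        self._new = new_path
        self._load = hdr_loader  # stand-in for reading a real rpm header

    def get_old_package_file(self):
        return self._old

    def get_new_package_file(self):
        return self._new

    def get_old_package_hdr(self):
        return self._load(self._old)

    def get_new_package_hdr(self):
        return self._load(self._new)

def run_plugins(plugin_dir, conduit):
    """Load every .py file in plugin_dir (like yum's plugin dir),
    call its run(conduit) entry point, and collect the results."""
    results = []
    for name in sorted(os.listdir(plugin_dir)):
        if not name.endswith('.py'):
            continue
        path = os.path.join(plugin_dir, name)
        spec = importlib.util.spec_from_file_location(name[:-3], path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        results.append(module.run(conduit))
    return results
```

A plugin would then just define run(conduit) and never worry about where the old package came from, which is exactly the abstraction rpmguard already provides in its main driver.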
Then the scripts could do whatever they need to do and maybe feed back to
the calling code a result object of some kind:
for example:
test_result.code = RPMGUARD_PASS
test_result.output = "a lot of strings here"
test_result.score = 23
We then collect and compile those to pitch back to the
user/builder/resultsdb.
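The result object and the collect-and-compile step might look something like this minimal sketch (the codes, the higher-is-worse ordering, and compile_results are all my own assumptions, not anything rpmguard defines today):

```python
# Hypothetical severity codes, mirroring RPMGUARD_PASS above;
# higher means worse, so max() picks the overall verdict.
RPMGUARD_PASS, RPMGUARD_WARN, RPMGUARD_FAIL = range(3)

class TestResult:
    """Hypothetical per-test result object, as sketched above."""
    def __init__(self, code, output='', score=0):
        self.code = code
        self.output = output
        self.score = score

def compile_results(results):
    """Collect every plugin's TestResult into one summary to pitch
    back to the user/builder/resultsdb."""
    worst = max((r.code for r in results), default=RPMGUARD_PASS)
    total = sum(r.score for r in results)
    report = '\n'.join(r.output for r in results if r.output)
    return worst, total, report
```

The summed score is what would let the-powers-that-be gate a package that doesn't reach at least N, per point 3 above.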
Does that make sense?
-sv