On Wed, 12 May 2010, James Laska wrote:
>> okay - so this might be my own imagination but I thought there was a goal
>> of autoqa to do the following:
>>
>> 1. show issues with pkgs
>> 2. show significant/dangerous CHANGES to pkgs
>> 3. provide a way for a packager/the-powers-that-be to stop a package if it
>>    doesn't get at least N score for a pkg
> Yeah, in a nutshell. AutoQA allows for the above to occur. Perhaps the
> steps above might be more the focus for specific tests like rpmlint,
> package_sanity, depcheck and rpmguard. Same result in the end.
right - a series of tests.
So maybe the solution is to make it simpler to create more package tests?
I'm not sure where the abstraction needs to be really. If we do it at
rpmguard then you get the results back from rpmguard. If we do it at the
rpmlint layer then it comes back from those tests. It doesn't REALLY
matter where it happens as long as it happens.
> Love the plugin idea! It's always fun to think of code in this regard,
> but I always forget that yum plugins didn't just appear out of thin air.
> It took time for the need to develop and the API to mature.
sorta - Menno Smits did some great work with that infrastructure.
> What's a conduit? Is that a way for each test to gain access to the old
> and new packages etc..., or something else?
It's the thing that gives a plugin its access to the info in the main
code.
>> conduit.get_old_package_file()
>> conduit.get_new_package_file()
>> conduit.get_old_package_hdr()
>> conduit.get_new_package_hdr()
>> # (and this is just my own wishfulness)
>> conduit.get_old_package_object()  # using yum's package objects
>> conduit.get_new_package_object()  # using yum's package objects
>>
>>
>> Then the scripts could do whatever they need to do and maybe feed back
>> to the code a result object of some kind:
>>
>> for example:
>> test_result.code = RPMGUARD_PASS
>> test_result.output = "a lot of strings here"
>> test_result.score = 23
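A minimal sketch of how the conduit and result objects proposed above could
hang together. All class names, the constants, and the run_test() hook are
hypothetical illustrations of the proposal, not an existing rpmguard API:

```python
# Hypothetical constants and classes following the proposal in this
# thread; none of this is existing rpmguard code.
RPMGUARD_PASS, RPMGUARD_FAIL = "PASS", "FAIL"

class TestResult:
    """Standard object each test hands back to the main code."""
    def __init__(self, code, output="", score=0):
        self.code = code      # e.g. RPMGUARD_PASS or RPMGUARD_FAIL
        self.output = output  # human-readable details
        self.score = score    # points awarded by this test

class Conduit:
    """Gives a plugin its access to the info held by the main code."""
    def __init__(self, old_pkg_file, new_pkg_file):
        self._old_pkg_file = old_pkg_file
        self._new_pkg_file = new_pkg_file

    def get_old_package_file(self):
        return self._old_pkg_file

    def get_new_package_file(self):
        return self._new_pkg_file

# A toy test plugin using the conduit:
def run_test(conduit):
    if conduit.get_new_package_file() != conduit.get_old_package_file():
        return TestResult(RPMGUARD_PASS, "old and new files differ", score=1)
    return TestResult(RPMGUARD_FAIL, "old and new are the same file", score=-1)
```

The plugin never touches the main code's internals directly - everything
goes through the conduit, which is what makes the yum-style approach easy
to version later.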
> How were you thinking score would be used here?
Generally I was thinking it could be: you get positive points for each
thing you pass and zero or negative points for things you fail. Then we
could set an arbitrary number that says if you don't get at least this
many points then you don't pass overall.
Or maybe just as a simple - the last pkg got a score of 4000, this one
has a score of 400. We ran the same number of tests, what the hell
happened?
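The threshold idea could be sketched like this (the function name and the
point values are made up for illustration):

```python
def overall_pass(scores, threshold):
    """Sum per-test points (positive for passes, zero or negative for
    failures) and pass overall only if the total reaches the threshold."""
    return sum(scores) >= threshold

# three tests worth 10 points each passed, one failed at -5:
overall_pass([10, 10, 10, -5], threshold=20)  # -> True
```

The same per-test scores also give you the "last pkg got 4000, this one
got 400" comparison for free - just keep the totals around per build.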
> Yeah, certainly does. At this high level, it seems like it would
> provide a nice structure to allow for future test additions and
> dynamically enabling or disabling any subset of tests.
right - so doing it like the yum or mock plugins is relatively easy to
implement in rpmguard. It takes only a little bit of infrastructure. I
figure just a directory of these files, each one referenced by name in
some config for rpmguard to enable/disable them? Since the test is just
a one-shot we don't even really need multiple hooks at this point - just
the one call out and expect back data in a standard way.
anyone object to mandating that all tests be in python?
-sv