On Wed, 2010-05-12 at 15:24 -0400, Seth Vidal wrote:
> On Wed, 12 May 2010, James Laska wrote:
>> okay - so this might be my own imagination but I thought there was a goal
>> of autoqa to do the following:
>>
>> 1. show issues with pkgs
>> 2. show significant/dangerous CHANGES to pkgs
>> 3. provide a way for a packager/the-powers-that-be to stop a package if
>>    it doesn't get at least N score
>
> Yeah, in a nutshell. AutoQA allows for the above to occur. Perhaps the
> steps above are more the focus of specific tests like rpmlint,
> package_sanity, depcheck and rpmguard. Same result in the end.
right - a series of tests.
So maybe the solution is to make it simpler to create more package tests?
I'm not sure where the abstraction really needs to live. If we do it at
the rpmguard layer then you get the results back from rpmguard; if we do
it at the rpmlint layer then they come back from those tests. It doesn't
REALLY matter where it happens as long as it happens.
You're right, the specific short-term goal is to implement the package
update acceptance criteria. My initial thoughts here were around what
might be needed to adjust rpmguard to allow for easier comparative test
contributions in the future.
> Love the plugin idea! It's always fun to think of code in this regard,
> but I always forget that yum plugins didn't just appear out of thin air.
> It took time for the need to develop and the API to mature.
sorta - Menno Smits did some great work with that infrastructure.
>
> What's a conduit? Is that a way for each test to gain access to the old
> and new packages etc., or something else?
It's the thing that gives a plugin its access to the info in the main
code.
Gotcha, thx.
>> conduit.get_old_package_file()
>> conduit.get_new_package_file()
>> conduit.get_old_package_hdr()
>> conduit.get_new_package_hdr()
>> # (and this is just my own wishfulness)
>> conduit.get_old_package_object()  # using yum's package objects
>> conduit.get_new_package_object()  # using yum's package objects
>>
>>
>> Then the scripts could do whatever they need to do and maybe feed back
>> to the code a result object of some kind:
>>
>> for example:
>> test_result.code = RPMGUARD_PASS
>> test_result.output = "a lot of strings here"
>> test_result.score = 23
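
To make that concrete, here is a rough sketch of what the plugin side
could look like. Every name below is invented (nothing like this exists
in rpmguard today); it only illustrates the shape of the conduit and
result object from the quoted proposal:

    # Hypothetical sketch only -- none of these names exist in rpmguard yet.
    RPMGUARD_PASS, RPMGUARD_FAIL = 0, 1

    class TestResult:
        def __init__(self, code, output='', score=0):
            self.code = code      # e.g. RPMGUARD_PASS or RPMGUARD_FAIL
            self.output = output  # human-readable detail
            self.score = score    # optional numeric weight

    class Conduit:
        # Hands a plugin the old/new packages without exposing
        # rpmguard internals.
        def __init__(self, old_pkg_path, new_pkg_path):
            self._old, self._new = old_pkg_path, new_pkg_path
        def get_old_package_file(self):
            return self._old
        def get_new_package_file(self):
            return self._new

    # A plugin module would then just define something like:
    def run_test(conduit):
        old = conduit.get_old_package_file()
        new = conduit.get_new_package_file()
        # ... compare the two packages here ...
        return TestResult(RPMGUARD_PASS, output='nothing alarming', score=10)
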
>
> How were you thinking score would be used here?
Generally I was thinking it could be:
you get positive points for each thing you pass and zero or negative
points for things you fail. Then we could set an arbitrary number that
says if you don't get at least this many points then you don't pass
overall.
Or maybe just as a simple check - the last pkg got a score of 4000, this
one has a score of 400. We ran the same number of tests, what the hell
happened?
Interesting concept; I can't say whether that's good or bad yet. I like
the approach we take now in the package update acceptance test plan of
stating things as fairly binary (pass/fail), mainly because it is easier
to explain, understand and code to. But I'm open to the concept
of [some] tests determining pass/fail by ensuring a score is within a
specified range.
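
If we ever do go down the score path, the aggregation itself could stay
trivial. A minimal sketch, assuming each test hands back a numeric score
as in the earlier sketch:

    # Hypothetical: sum the per-test scores and compare to a cutoff.
    def overall_verdict(results, minimum_score=0):
        # 'results' would be a list of TestResult objects
        total = sum(r.score for r in results)
        if total >= minimum_score:
            return ('PASS', total)
        return ('FAIL', total)
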
Is it crazy to have something like the existing "provides-added" test
Kamil wrote [1] continue providing an ADVISORY result listing any added
provides? The results of that test would then be available for other
tests to review (through the conduit). For example, we could have
future tests that say ...
      * the kernel package should never %provide kernel-drm (just making
        this up) -- so the new "kernel-no-provides-drm" test would
        examine the provides previously generated and fail if
        "kernel-drm" is now provided (see the sketch after the footnote)
      * or, another test could do some score checking based on the
        results of other tests -- so a new test "provides-score" would
        emit an INTROSPECTION result if there is a 30% increase/decrease
        in the provides count
Perhaps a future refinement.
[1]
https://fedorahosted.org/autoqa/wiki/RpmguardChecks#provision-added
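
Just to illustrate the "kernel-no-provides-drm" idea, here is a
completely made-up sketch. It assumes the conduit grows some
get_test_result() lookup for earlier results, which is pure invention on
my part (reusing the TestResult names from the earlier sketch):

    # Hypothetical: conduit.get_test_result() does not exist anywhere yet.
    def run_test(conduit):
        # fetch the earlier ADVISORY result from the provides-added test
        prior = conduit.get_test_result('provides-added')
        if prior is not None and 'kernel-drm' in prior.output:
            return TestResult(RPMGUARD_FAIL,
                              output='kernel must not provide kernel-drm')
        return TestResult(RPMGUARD_PASS, output='no kernel-drm provide found')
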
> Yeah, certainly does. At this high level, it seems like it would
> provide a nice structure to allow for future test additions and
> dynamically enabling or disabling any subset of tests.
right - so doing it like the yum or mock plugins is relatively easy to
implement in rpmguard. It takes only a little bit of infrastructure. I
figure just a directory of these files, each one referenced by name in
some config for rpmguard to enable/disable them? Since each test is just
a one-shot we don't even really need multiple hooks at this point - just
the one call out, expecting data back in a standard way.
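
The loading side really is small, too. A rough sketch of what I have in
mind, assuming one .py file per test in a plugin directory and an
enabled-test list coming from some rpmguard config (all names invented):

    import imp
    import os

    def load_enabled_plugins(plugin_dir, enabled_names):
        # Load each enabled plugin module from plugin_dir by file name.
        plugins = []
        for name in enabled_names:
            path = os.path.join(plugin_dir, name + '.py')
            if os.path.exists(path):
                plugins.append(imp.load_source(name, path))
        return plugins

    # Per package update, something like:
    #   conduit = Conduit(old_pkg, new_pkg)
    #   results = [p.run_test(conduit) for p in plugins]
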
anyone object to mandating that all tests be in Python?
That seems appropriate. The existing rpmguard tests are all in Python.
Kamil is the rpmguard author, so I value his opinion too.
Thanks,
James