On 08/09/2012 05:01 PM, Stanislav Ochotnicky wrote:
Quoting Alec Leamas (2012-08-09 16:16:02)
> On 08/09/2012 03:29 PM, Mikolaj Izdebski wrote:
>>> Examples are good, especially examples like these. To be fair, I hadn't
>>> realized how simple things could be. Have you implemented this using a
> json plugin handling the communication between "your" plugins and f-r?!
>> Yes, I have implemented this as a single JSON plugin which basically:
>> 1) asks F-R for all possible info (spec file sections etc.)
>> 2) extracts more stuff on disk
>> 3) reads metadata of all scripts
>> 4) orders scripts based on their dependencies
>> 5) executes individual scripts
>> 6) returns one reply to F-R containing results of all tests
>>
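Step 4 of the list above (ordering scripts by their dependencies) is just a topological sort. A minimal sketch in Python, with made-up script names and a hypothetical `deps` mapping:

```python
# Sketch of step 4: order external test scripts so that each script runs
# after everything it depends on (depth-first topological sort).
# Script names below are invented for illustration.
def order_scripts(deps):
    """deps maps script name -> list of scripts it depends on."""
    ordered, seen = [], set()

    def visit(script, stack=()):
        if script in stack:
            raise ValueError("dependency cycle at %s" % script)
        if script in seen:
            return
        for dep in deps.get(script, []):
            visit(dep, stack + (script,))
        seen.add(script)
        ordered.append(script)

    for script in sorted(deps):
        visit(script)
    return ordered

# check-build needs check-spec, which needs check-sources:
print(order_scripts({"check-build": ["check-spec"],
                     "check-spec": ["check-sources"],
                     "check-sources": []}))
# -> ['check-sources', 'check-spec', 'check-build']
```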
> I raised the issue to simply drop the json api in another message. One
> strategy might be to support either "advanced" plugins written in
> python or "simple" ones following your approach here. If we could
> reimplement your json plugin in python, the json interface could be
> dropped and we would have:
> - python plugins: multiple tests, attachment handling, access to
> internal python classes.
> - "simple" plugins: one test per file, simplified registration, no
I wanted to have external tests on par with internal (Python) ones so
no one would feel left out. But it is most probably true that covering
90% of the functionality in external tests in a simpler way would be
more productive (and attractive).
> In any case, the modelling of a test needs an overhaul both in python
> and to some extent also for the simple ones. Things like
> dependencies/execution order, access to other tests, selecting tests to
> run. The registration of plugin tests should really also be done by the
> python ones...
Indeed my biggest grief is that internal tests have no idea about
external ones (from a deprecation point of view). For this we'd need a
2-step plugin interaction (i.e. registration and then run).
So, an alternative approach: let's reimplement Mikolaj's interface in
Python, where we put stuff either in the environment or in special
predefined places (i.e. spec/main, spec/prep, spec/build, spec/install
etc.), and then get information about plugins from decorators (name,
description text etc.). Attachments can be put in a special subdirectory
(as Mikolaj suggested). Normally they are supposed to be linked to a
specific test; I'm not sure that's really needed. The result of a test
is based on its return value (0 - success, 1 - fail, etc.).
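To make the decorator idea concrete, here is a rough sketch. Everything here (registry name, decorator signature, the sample plugin) is hypothetical, but it shows metadata coming from a decorator and the pass/fail result coming from the return value, mirroring an external script's exit code:

```python
# Hypothetical sketch: plugin metadata (name, description) is attached
# by a decorator, and the result is derived from the return value
# (0 = success, 1 = fail), like an external script's exit status.
PLUGINS = []

def plugin(name, description):
    def wrap(func):
        func.name = name
        func.description = description
        PLUGINS.append(func)
        return func
    return wrap

@plugin("CheckSpecSections", "Verify expected spec sections are present")
def check_spec_sections(env):
    # env stands in for the predefined places (spec/main, spec/prep, ...)
    return 0 if "spec/main" in env else 1

results = {p.name: ("pass" if p({"spec/main": "..."}) == 0 else "fail")
           for p in PLUGINS}
print(results)  # {'CheckSpecSections': 'pass'}
```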
Sounds like a plan?
Yes, it sounds like a plan. And a good one.
But as you wrote earlier, the registration needs a two-phase approach:
first register all tests (internal and external), then run them. I don't
think we save time by cutting this corner - I think we need to do that
first. In such a model, internal and external tests should be equals
after registration.
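The two-phase model could look roughly like this. The class and method names are illustrative only; the point is that registration happens for all tests before any test runs, so a test can deprecate (or depend on) any other test, internal or external:

```python
# Sketch of a two-phase test model: phase 1 registers every test,
# internal or external, into one registry; phase 2 runs them.
# Because registration completes first, a test can deprecate another.
class Registry:
    def __init__(self):
        self.tests = {}

    def register(self, name, func, deprecates=None):
        self.tests[name] = {"run": func, "deprecates": deprecates or []}

    def run_all(self):
        # Skip tests deprecated by some other registered test.
        deprecated = {d for t in self.tests.values()
                      for d in t["deprecates"]}
        return {name: t["run"]()
                for name, t in self.tests.items() if name not in deprecated}

reg = Registry()
reg.register("old-license-check", lambda: "fail")
reg.register("new-license-check", lambda: "pass",
             deprecates=["old-license-check"])
print(reg.run_all())  # {'new-license-check': 'pass'}
```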
I'm still not happy with e.g. splitting the spec into sections, with a
file per section, as the only way. That path will eventually expand the
interface beyond sanity, since f-r would have to provide all possible
information in advance for every possible need. I still think the yat
approach - a tool which can extract info from the files on behalf of the
plugin - is a better one (combined with the raw data in the review
directory). Basically, yat would provide similar info to what Python
plugins can retrieve from their context. It could also easily filter the
output depending on settings, an otherwise complicated task.
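A toy sketch of what such a helper might look like. yat itself doesn't exist; the query syntax and the sample sections below are invented, but it shows the idea of a test pulling exactly the data it needs instead of f-r writing every section to a file up front:

```python
# Illustrative sketch of a yat-style helper: an external test asks for
# just the piece of the spec it needs, e.g. "yat spec section build".
# The query grammar and section contents here are invented.
SPEC_SECTIONS = {"prep": "%setup -q",
                 "build": "make",
                 "install": "make install"}

def yat(args):
    if args[:2] == ["spec", "section"] and len(args) == 3:
        return SPEC_SECTIONS.get(args[2], "")
    raise SystemExit("unknown query: %s" % " ".join(args))

# An external test would run something like:  yat spec section build
print(yat(["spec", "section", "build"]))  # make
```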
This is not black or white. Things which are needed "often" qualify for
a file; others are better accessed using yat. It's a trade-off between
simple interfaces and keeping them flexible and limited in size.
So, my two points:
- First create a sane model for tests, including registration,
dependencies, and access to results.
- Augment the files interface with a tool capable of more high-level
access.