Proposal for integration tests infrastructure
hhorak at redhat.com
Mon Nov 3 16:08:40 UTC 2014
On 10/28/2014 08:08 AM, Nick Coghlan wrote:
> On 10/22/2014 09:43 PM, Honza Horak wrote:
>> Fedora lacks integration testing (unit testing done during build is not
>> enough). Taskotron will be able to fill some gaps in the future, so
>> maintainers will be able to set-up various tasks after their component
>> is built. But even before this works we can benefit from having the
>> tests already available (and run them manually if needed).
>> With this thread I'd like to gather ideas and work out how and where
>> to keep the tests. A similar discussion already took place before, which
>> I'd like to continue in:
>> And some short discussion already took place here as well:
> It's worth clarifying your scope here, as "integration tests" means
> different things to different people, and the complexity varies wildly
> depending on *what* you're trying to test.
> If you're just looking at tests of individual packages beyond what folks
> have specified in their RPM %check macro, then this is exactly the case
> that Taskotron is designed to cover.
> If you're looking at more complex cases like multihost testing, bare
> metal testing across multiple architectures, or installer integration
> testing, then that's what Beaker was built to handle (and has already
> been handling for RHEL for several years).
> That level is where you start to cross the line into true system level
> acceptance tests and you often *want* those maintained independently of
> the individual components in order to catch regressions in behaviour
> other services are relying on.
Good point about defining the scope, thanks. From my POV, we should
start with the less complicated scenarios, so that we have something
ready to use in a reasonable time.
Let's say the common use case would be defining tests that verify
"components' basic functionality that cannot be verified during build".
This should cover simple installation scenarios, test suites that need to
run outside the build process, or tests that need to run against multiple
components at the same time (e.g. testing the basic functionality of a
LAMP stack). It should also cover issues with SELinux, systemd units,
etc., which cannot be tested during build and IMHO are often the cause
of problems.
I have no problem stating clearly, for now, that the tests cannot define
any hardware requirements, not even non-localhost networking. In other
words, the tests will run on one machine with any hardware and any (or
no) networking.
However, I'd rather see tests that are not tied to a particular
component, since even a simple test might cover two or three of them,
and it wouldn't be correct to tie it to all of them, nor to only one.
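To make that concrete, such a test could carry a small metadata file of
its own, listing every component it covers rather than a single owner.
Everything below (file name, keys, values) is invented purely for
illustration, not an existing convention:

```yaml
# Hypothetical per-test metadata file, e.g. tests/lamp-basic/test.yml
name: lamp-basic
description: Start httpd + mariadb, fetch a page that queries the DB.
components:          # the test belongs to all of these, not just one
  - httpd
  - mariadb
  - php
requires:            # packages the executor should install first
  - httpd
  - mariadb-server
  - php-mysqlnd
```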
>> How to deliver tests?
>> a/ just use them directly from git (we need to keep some metadata for
>> dependencies anyway)
>> b/ package them as RPMs (we can keep metadata there; e.g. Taskotron will
>> run only tests that have "Provides: ci-tests(mariadb)" after mariadb is
>> built; we also might automate packaging tests to RPMs)
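For option b/, the metadata could live in the test package itself. A
rough sketch of what such a spec might contain — the `ci-tests(...)`
virtual provide and the package name are hypothetical conventions here,
nothing that exists today:

```spec
# Hypothetical spec fragment for a packaged integration test
Name:           ci-tests-mariadb-basic
Version:        1.0
Release:        1%{?dist}
Summary:        Basic post-build functionality tests for mariadb
License:        MIT
BuildArch:      noarch

# Virtual provide an executor (e.g. Taskotron) could query to find
# tests relevant to a freshly built component:
Provides:       ci-tests(mariadb)
# A test covering several components would simply provide more:
Provides:       ci-tests(httpd)

Requires:       mariadb-server
Requires:       httpd

%description
Installs test scripts under /usr/share/ci-tests/mariadb-basic.
```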
> Our experience with Beaker suggests that you want to support both -
> running directly from Git tends to be better for test development, while
> using RPMs tends to be better for dependency management and sharing test
> infrastructure code.
>> Which framework to use?
>> People have no time to learn new things, so we should let them write
>> the tests in any language and just define some conventions for how to
>> run them.
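Such a convention could be as small as "each test is a directory with an
executable runtest.sh that exits 0 on success". Here is a sketch of a
runner for that — the directory layout and the script name are my own
assumptions, not an agreed convention:

```shell
# Sketch of a language-agnostic runner: each subdirectory of the given
# tests directory that holds an executable runtest.sh is one test;
# exit status 0 means pass, anything else means fail.
run_tests() {
    tests_dir=$1
    failed=0
    for script in "$tests_dir"/*/runtest.sh; do
        [ -x "$script" ] || continue
        name=$(dirname "$script")
        if "$script" > "$name/output.log" 2>&1; then
            echo "PASS: $name"
        else
            echo "FAIL: $name"
            failed=$((failed + 1))
        fi
    done
    return "$failed"
}
```

Any language works for the tests themselves, since the only contract is
the exit status.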
> Taskotron already covers this pretty well (even if invoking Beaker
> tests, it would make more sense to do that via Taskotron rather than
Right, Taskotron involvement seems like the best bet now, but the tests
should not be tied to it -- in case Taskotron is replaced by some other
tool for executing tasks in the future, we cannot lose the tests
themselves.
That's actually why I don't like the idea of keeping the tests in
Taskotron's git repo -- that could easily end up relying on some
Taskotron-specific features, and a potential move to another system, or
running them as standalone tests, would then be problematic.