Proposal for integration tests infrastructure
tflink at redhat.com
Mon Nov 3 18:10:12 UTC 2014
On Mon, 03 Nov 2014 17:08:40 +0100
Honza Horak <hhorak at redhat.com> wrote:
> On 10/28/2014 08:08 AM, Nick Coghlan wrote:
> > On 10/22/2014 09:43 PM, Honza Horak wrote:
> >> Fedora lacks integration testing (unit testing done during build
> >> is not enough). Taskotron will be able to fill some gaps in the
> >> future, so maintainers will be able to set-up various tasks after
> >> their component is built. But even before this works we can
> >> benefit from having the tests already available (and run them
> >> manually if needed).
> >> Hereby, I'd like to get ideas and figure out answers for how and
> >> where to keep the tests. A similar discussion already took place
> >> before, which I'd like to continue in:
> >> https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
> >> And some short discussion already took place here as well:
> >> https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html
> > It's worth clarifying your scope here, as "integration tests" means
> > different things to different people, and the complexity varies
> > wildly depending on *what* you're trying to test.
> > If you're just looking at tests of individual packages beyond what
> > folks have specified in their RPM %check macro, then this is
> > exactly the case that Taskotron is designed to cover.
> > If you're looking at more complex cases like multihost testing, bare
> > metal testing across multiple architectures, or installer
> > integration testing, then that's what Beaker was built to handle
> > (and has already been handling for RHEL for several years).
> > That level is where you start to cross the line into true system
> > level acceptance tests and you often *want* those maintained
> > independently of the individual components in order to catch
> > regressions in behaviour other services are relying on.
> Good point about defining the scope, thanks. From my POV, we should
> rather start with some less complicated scenarios, so we can have
> something ready to use in reasonable time.
> Let's say the common use case would be defining tests that verify
> "components' basic functionality that cannot be run during build".
> This should cover simple installation scenarios, running test-suites
> that need to be run outside of build process, or tests that need to
> be run for multiple components at the same time (e.g. testing basic
> functionality of a LAMP stack). This should also cover issues with
> SELinux, systemd units, etc. that cannot be tested during build and
> IMHO are often the cause of issues.
> I have no problem with stating clearly, for now, that the tests
> cannot define any hardware requirements, not even non-localhost
> networking. In other words, the tests will be run on one machine
> with any hardware and any (or no) network.
> However, I'd rather see tests not tied to a particular component,
> since even a simple test might cover two or three of them, and it
> wouldn't be correct to tie it to all of them, nor to only one.
Yeah, I think that package-specific checks are a similar but slightly
different kettle of fish than we're discussing here.
We'd have to figure out how the integration tests would be scheduled
(nightly, on change in a set of packages corresponding to each check,
etc.) but that can wait until we've refined what we're looking to do a
bit more.
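To make the scheduling question a bit more concrete: one option would be
for each check to declare its own triggers in a small metadata file. The
format and field names below are entirely invented for illustration --
this is not an existing Taskotron format:

```yaml
# Hypothetical trigger metadata for one integration check.
# All field names here are illustrative, not a real schema.
name: lamp-stack-basic
description: basic end-to-end check of a LAMP stack
triggers:
  - type: package-change      # run when any watched package is rebuilt
    packages: [httpd, mariadb, php]
  - type: periodic
    schedule: nightly
```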
> >> How to deliver tests?
> >> a/ just use them directly from git (we need to keep some metadata
> >> for dependencies anyway)
> >> b/ package them as RPMs (we can keep metadata there; e.g.
> >> Taskotron will run only tests that have "Provides:
> >> ci-tests(mariadb)" after mariadb is built; we also might automate
> >> packaging tests to RPMs)
> > Our experience with Beaker suggests that you want to support both -
> > running directly from Git tends to be better for test development,
> > while using RPMs tends to be better for dependency management and
> > sharing test infrastructure code.
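As a sketch of what option b/ could look like on the packaging side: the
tests could live in a subpackage that carries a virtual Provides the
scheduler can query. The ci-tests() namespace comes from Honza's example
above; the rest of this spec fragment is hypothetical:

```
# Hypothetical spec fragment for a packaged test suite.
%package tests
Summary:  Integration tests for mariadb
Requires: mariadb = %{version}-%{release}
# Virtual provide that a runner could query after a mariadb build:
Provides: ci-tests(mariadb)

%files tests
%{_datadir}/mariadb-tests/
```

A runner could then map a freshly built component to its tests with
something like `repoquery --whatprovides 'ci-tests(mariadb)'`.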
> >> Which framework to use?
> >> People have no time to learn new things, so we should let them
> >> write the tests in any language and just define some conventions
> >> for how to run them.
> > Taskotron already covers this pretty well (even if invoking Beaker
> > tests, it would make more sense to do that via Taskotron rather than
> > directly).
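For what it's worth, the convention could be as simple as "anything
executable named runtest* in the test directory gets run; exit status 0
means pass". A rough sketch of such a runner (the naming convention is
invented here, not something Taskotron defines):

```python
import os
import subprocess


def run_tests(testdir):
    """Run every executable runtest* file in testdir.

    Returns a dict mapping file name to True (exit 0) or False.
    The "runtest*" naming convention is purely illustrative.
    """
    results = {}
    for name in sorted(os.listdir(testdir)):
        if not name.startswith("runtest"):
            continue
        path = os.path.join(testdir, name)
        if not os.access(path, os.X_OK):
            continue
        # Language-agnostic: the test can be shell, Python, anything,
        # as long as it is executable and uses the exit status.
        rc = subprocess.call([path])
        results[name] = (rc == 0)
    return results
```

The nice part of an exit-status convention is that it costs maintainers
nothing: most existing test suites already behave this way.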
> Right, Taskotron involvement seems like the best bet now, but the
> tests should not be tied to it -- in case Taskotron is replaced by
> some other tool for executing tasks in the future, we cannot lose
> the tests themselves.
While I don't see Taskotron going away anytime soon, I agree that we
should avoid tight coupling where it makes sense to avoid it.
With my "captain obvious" hat on, the trick is figuring out where the
point of diminishing returns is - too much independence can be just as
problematic as not enough.
> That's actually why I don't like the idea of keeping the tests in
> Taskotron's git repo -- that could easily end up relying on some
> Taskotron-specific features, and a potential move to another system,
> or running the tests standalone, would then be problematic.
There are a couple of things here that I want to address:
Taskotron _is designed_ to facilitate the local execution use case
without crazy setup/install requirements. libtaskotron is a mostly
self-contained, user installable package that contains the Taskotron
runner. With the exception of non-localizable resource requirements
(infra specific systems and reporting mostly), the way that tasks are
executed in production Taskotron is the same as if it were to be
If I'm understanding everything correctly, Taskotron would need some
new features to handle the kinds of checking that you're talking about
here (for either the local or the central execution case).
I'm not sure what you mean by Taskotron's git repo - from chatting on
IRC for a few minutes, it sounds like you mean using repos similar to
what we're currently using for the generic package checks (rpmlint,
depcheck, upgradepath). Taskotron isn't tied to any specific git repo
location and I figured we were talking about something that would keep
package-specific checks as close to the package git repos as it makes
sense to do. For any checks that are higher-level, we can figure out a
place to put them but there is no hard requirement for everything to be
in a single repo.