Proposal for integration tests infrastructure
tflink at redhat.com
Fri Oct 24 20:10:23 UTC 2014
On Wednesday, October 22, 2014 01:43:57 PM you wrote:
> Fedora lacks integration testing (unit testing done during build is
> not enough). Taskotron will be able to fill some gaps in the future,
> so maintainers will be able to set-up various tasks after their
> component is built. But even before this works we can benefit from
> having the tests already available (and run them manually if needed).
> Hereby, I'd like to get ideas and figure out answers for how and where
> to keep the tests. A similar discussion already took place before,
> which I'd like to continue in:
> And some short discussion already took place here as well:
Instead of cross-posting to several lists, I'm going to just reply here
instead of copying/fragmenting the conversation more.
> Some high level requirements:
> * tests will be written by maintainers or broader community, not a
> dedicated team
> * tests will be easy to run on anybody's computer (but might be
> potentially destructive; some secure environment will not be part of
> * tests will be run automatically after related components get built
> (probably by Taskotron)
Just to make sure I understand what you're talking about here, you're
talking about mechanical checks, right? Something that is run by a
program and returns a limited-state result (i.e. PASS/FAIL/UNKNOWN)?
I think that you've hit on a lot of what we have in mind for Taskotron,
to be honest.
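Just to illustrate what I mean by a mechanical check (the names here are mine, purely for the example, not any agreed interface), it boils down to something like:

```python
# Illustrative only: a "mechanical check" is a function that inspects
# something and returns one of a few well-defined outcomes instead of
# free-form output. Names are made up for this sketch.

PASS, FAIL, UNKNOWN = "PASS", "FAIL", "UNKNOWN"

def check_service_config(config_text):
    """Return a limited-state result for a hypothetical config check."""
    if not config_text:
        return UNKNOWN          # nothing to inspect -> can't judge
    if "bind-address" in config_text:
        return PASS
    return FAIL

print(check_service_config("bind-address = 127.0.0.1"))  # PASS
print(check_service_config(""))                          # UNKNOWN
```

The point is the contract (a small, enumerable set of outcomes), not the particular check.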
The tasks in Taskotron are run by libtaskotron and, outside of things
like posting results or having access to secrets, do not require any of
the other infrastructure components that make up an entire Taskotron
deployment. The parts of Taskotron outside of libtaskotron are
responsible for scheduling, reporting and managing the execution of
tasks.
Anyone can install libtaskotron, clone a task's git repository and
start running tasks. If this doesn't work in all reasonable cases, then
we have violated one of the core design principles of Taskotron and it
will be fixed.
By designing for git-repo-contained tasks, a set of people with proper
permissions can change tasks in pretty much the same way that a group
of developers change source code.
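To make that concrete: a task repo is basically a git repository holding a small formula plus whatever scripts it needs. From memory, a formula looks roughly like this (exact keys may differ between libtaskotron versions, so treat it as a sketch) and is executed with the `runtask` CLI:

```yaml
# Roughly what a libtaskotron task formula looks like -- from memory,
# exact keys may differ between versions.
name: rpmlint
desc: run rpmlint on a koji build
maintainer: someuser

input:
    args:
        - koji_build

actions:
    - name: run rpmlint on the downloaded rpms
      python:
          file: run_rpmlint.py
          callable: run
      export: rpmlint_output
```

Anyone with libtaskotron installed can clone the repo and point `runtask` at the formula.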
> Where to keep tests?
> a/ in current dist-git for related components (problem with sharing
> parts of code, problem where to keep tests related to more
> components)
> b/ in separate git with similar functionality as dist-git (needs new
> infrastructure, components are not directly connected with tests,
> won't make mess in current dist-git)
> c/ in current dist-git but as ordinary components (no new
> infrastructure needed but components are not directly connected with
> tests)
I'm leaning towards a somewhat separate dist-git-ish solution right
now. By keeping it separate, we can't make a mess of the package ACLs,
don't need to worry about giving non-packagers access to the dist-git
repos and aren't adding a bunch of stuff to an already working system.
I'd also like to see the tasks be easily accessible from checked out
dist-git repos. I'm not sure that submodules or subtrees are good
answers here but having the tasks appear as a subdirectory of dist-git
repos sounds like a good way to integrate things to me.
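Something like the following, where the layout and names are purely illustrative:

```
mariadb/            <- normal dist-git checkout
|-- mariadb.spec
|-- sources
`-- tests/          <- the separate task repo, surfaced as a subdirectory
    |-- task.yml    <- task formula consumed by libtaskotron
    `-- run_check.py
```

Whether `tests/` is a submodule, a subtree or something a tool checks out for you is exactly the open question.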
> How to deliver tests?
> a/ just use them directly from git (we need to keep some metadata for
> dependencies anyway)
> b/ package them as RPMs (we can keep metadata there; e.g. Taskotron
> will run only tests that have "Provides: ci-tests(mariadb)" after
> mariadb is built; we also might automate packaging tests to RPMs)
I'm of the opinion that keeping stuff in plain git is the best choice.
For this particular use case, I'm not aware of any advantage to
packaging checks as long as we're smart about updating git repos prior
to task execution, and packaging is additional overhead - especially if
we want to have non-packagers involved in task creation and
maintenance.
> Structure for tests?
> a/ similar to what components use (branches for Fedora versions)
> b/ only one branch
> Test maintainers should be allowed to behave the same as package
> maintainers do -- one person likes keeping branches the same and uses
> "%if %fedora" macros, someone else likes specs clean and would rather
> maintain several different branches -- we won't find one structure
> that would fit all, so allowing both ways seems better.
I think that restricting stuff to a single branch is going to be too
complicated and messy. The method of branching that is used in dist-git
seems to be pretty well accepted and IMHO it's a logical approach to
allowing per-version check differences without introducing a bunch of
mess and complexity to the tasks to be run.
> Which framework to use?
> People have no time to learn new things, so we should let them write
> the tests in any language and just define some conventions for how
> to run them.
Specifying a single framework to use in all cases would be a mistake,
IMHO. No matter what's chosen, there is going to be a set of folks who
don't like the decision and there are going to be some cases for which
that single choice would not work well, if at all.
Instead of choosing a single framework, I prefer the approach of having
a default option which is easy to start with but reports in a
well-defined and relatively universal format. By structuring a system
like this, if someone doesn't want to use our default framework, they
can use something different as long as it returns results in our
expected format.
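As a sketch of what "results in a well-defined format" could mean (the field names here are illustrative, not a finalized Taskotron format):

```python
# Sketch of a limited-state result serialized in a well-defined way.
# Field names are illustrative, not a finalized Taskotron format.
import json

def make_result(item, checkname, outcome, note=""):
    """Build one result record; outcome is restricted to known states."""
    assert outcome in ("PASSED", "FAILED", "UNKNOWN")
    return {"item": item, "checkname": checkname,
            "outcome": outcome, "note": note}

result = make_result("mariadb-10.0.14-1.fc21", "mariadb.integration",
                     "PASSED")
print(json.dumps(result, sort_keys=True))
```

Any framework (or hand-rolled script) that can emit records shaped like this would plug in, which is the whole point of standardizing the output rather than the framework.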
If/when we go forward with something like this, it's important to keep
everything as transparent and natural as possible for packagers and
task maintainers. It's somewhat obvious, but the more barriers there
are between a contributor and running checks or getting useful results,
the less likely anyone is to use whatever system is put forth.