On 06.06.2017 16:46, Tim Flink wrote:
> On Wed, 24 May 2017 20:00:00 +0200
> Stef Walter <stefw(a)redhat.com> wrote:
>> On 24.05.2017 13:50, Tim Flink wrote:
>>> As we're working more with the test invocation spec [1] and the
>>> ansible implementation [2], there are a few things which aren't 100%
>>> clear to me and I'm hoping to get some clarification on.
>>>
>>> Why is "ansible_connection=local" a requirement for the testing
>>> system?
>>
>> The initial invocation of Ansible is meant to act locally. The Ansible
>> scripts are meant to stage their own containers or virtual machines
>> (e.g. with qemu or libvirt) and then perform further tasks inside them.
> Sure, but I'm wondering why this is a requirement. If we were using
> shell scripts, this would make a lot of sense. As we're using Ansible,
> there isn't a whole lot of difference between executing locally vs.
> remotely so long as ssh keys are set up properly.
>
> In a way, it feels like we've decided to use the autopkgtest paradigm
> but dropped Ansible in as "bash++". I'm having a hard time seeing how
> the current setup is that much better than bash and something like
> beakerlib.
>
> Don't get me wrong, I'm all for doing automation with Ansible - it's
> the restrictions placed around that automation that I'm questioning.
It's easy to describe Ansible as "ssh in a bash for loop", but we're
trying to use it as Ansible.

Requirement: it is the job of the test (and its roles and/or framework)
to launch any containers and virtual machines ... and run the actual
tests inside of them.

The use of ansible_connection=local is a way of enshrining that
requirement. But if there's a better way to enshrine it, then let's do
it. Perhaps one will come to mind?
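
To make that concrete, here is a rough sketch of the shape such a test
playbook could take. Everything in it is illustrative - the qemu
arguments, the 'subject' variable, and the run-tests path are made-up
stand-ins, not anything the spec defines:

# tests.yml -- invoked by the testing system with ansible_connection=local,
# assuming a qcow2 image as the test subject.
- hosts: localhost
  connection: local
  tasks:
    # The test (not the CI system) stages its own guest from the
    # test subject it was handed.
    - name: Boot a transient VM from the test subject
      command: >
        qemu-system-x86_64 -m 1024 -snapshot -display none -daemonize
        -netdev user,id=n0,hostfwd=tcp:127.0.0.1:2222-:22
        -device virtio-net-pci,netdev=n0
        {{ subject }}

    - name: Wait for the guest's SSH port to come up
      wait_for:
        host: 127.0.0.1
        port: 2222

    # Make the guest addressable so later plays can run inside it.
    - name: Add the guest to the in-memory inventory
      add_host:
        name: testvm
        ansible_host: 127.0.0.1
        ansible_port: 2222
        ansible_user: root

- hosts: testvm
  tasks:
    - name: Run the actual tests inside the guest
      command: /usr/local/bin/run-tests   # hypothetical test entry point
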
>>> Why is obtaining the test subject solely the responsibility of the
>>> testing system? This seems like something that could potentially
>>> have different requirements for different use cases. For example,
>>> something like abicheck needs to grab previous builds in addition
>>> to the latest build of a given package.
>>
>> It seems like those previous builds are not the test subject. They are
>> part of the fixtures of the test or test suite. In CI terms, the test
>> subject is the thing that has changed. The fixtures are things it is
>> tested against. We didn't include the term 'fixtures' in the
>> terminology section, but its usage here is the usual one within
>> testing.
>>
>>> I don't understand what we're trying to accomplish with all the
>>> test_<subject>.yml. Is the idea that we'd run test_oci.yml for every
>>> package used in building a container against that container?
>>
>> When a test is written in such a way that it makes sense to test it as
>> an OCI image, then we expect that test_oci.yml is present. In some
>> cases this file will be a boilerplate of a few lines.
> When would it make sense to test an RPM as an OCI image? Are we
> expecting all the test_<subject>.yml files to be in every repo?
No, only those with tests that make sense to be run on that subject. So
for example, if the mariadb dist-git repo has tests that should be run
in an OCI image, it would have a test_oci.yml. In some cases it will be
the same or similar tests, run on a different subject.

One of the key things about CI detecting failures is running tests
multiple times, varying certain minor things about them on each run.
Often the thing being varied is the test subject.
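
As a sketch, such a boilerplate test_oci.yml might be little more than
this (the role name and parameters below are hypothetical, just to show
the shape shared boilerplate could take):

# test_oci.yml -- run the repo's tests with the OCI image as the subject.
- hosts: localhost
  connection: local
  roles:
    # Hypothetical shared role that starts a container from the image
    # and runs the same tests inside it.
    - role: standard-test-oci
      image: "{{ subject }}"
      tests: "{{ playbook_dir }}/tests"
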
>>> What is the point in having test_local.yml and test_rpm.yml? I
>>> remember the idea of in-situ tests being part of the discussion but
>>> I thought that we were pretty limited on what we could do with
>>> those since we're not going to be modifying rpm like Debian was
>>> able to get changes made for .deb packages.
>>
>> test_rpm.yml installs RPMs and then tests against them. test_local.yml
>> assumes that everything to be tested has already been installed.
> I don't understand why we're taking the stance of "we don't plan on
> letting you install the RPMs under test but we do expect you to come
> up with your own method of booting VMs". It seems very strange but I
> could be missing something here.
That's not the case. You can come up with your own method ... but most
tests can just use:

https://admin.fedoraproject.org/pkgdb/package/rpms/standard-test-roles/

One of the great things about using Ansible here is sharing roles for
common cases, such as starting a VM.

The point is that it's not the job of the CI system to launch the test
subject as a VM or container with a specific IP address, architecture,
disk mounts, etc. The point is to decouple the tests from the CI
system. The test accomplishes that, and usually does so with standard
stuff ^^.
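
So a test that needs a VM could reduce to something like the following.
Again illustrative only - the role name and its parameters are
stand-ins, not the actual interface of standard-test-roles:

- hosts: localhost
  connection: local
  roles:
    # Hypothetical shared role that boots the qcow2 subject and
    # registers the resulting guest in the inventory as 'testvm'.
    - role: start-qcow2-vm
      image: "{{ subject }}"

- hosts: testvm
  tasks:
    - name: The test itself, decoupled from how the VM was launched
      command: /usr/bin/mytest   # hypothetical test binary
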
> I understand the need to allow folks to set up and tear down their own
> set of VMs for testing but I'm still not understanding why this is the
> only method that we'll support. I would have thought that the ability
> to specify the type(s) and number of VMs needed would be an easier
> route for many tests. Is there something I'm missing?
Because it locks the tests into a certain CI system, thus creating
exactly the same fiasco we have with beakerlib tests being tied to the
method of invocation.
> Another concern I have with installing RPMs for people is what to do
> about builds which depend on each other and ordering. Are we assuming
> that test case dependencies will declare all the dependencies needed
> to install a new build? How will we be handling cases where multiple
> builds are needed to test against each other but are not built at the
> same time?
For module, repo, OCI, and QCow2 test subjects this is obviously
already solved. If the compose succeeded, then that works.

For RPMs as test subjects it is up to the CI system to put all these
test subjects into a batch to be tested together. We could copy
Debian/Ubuntu's mechanism here ... or come up with our own.
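
For instance, the CI system could hand the whole batch to the playbook
as a list and let it be installed in one transaction, so interdependent
builds resolve together. The 'subjects' list variable here is
illustrative, not something the spec defines:

- hosts: localhost
  connection: local
  tasks:
    - name: Install every RPM in the batch in a single transaction
      become: true
      dnf:
        name: "{{ subjects }}"
        state: present
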
See these clauses in the spec:
"The testing system SHOULD stage the tests on Fedora operating system
appropriate for the branch name of the dist-git repository containing
the tests."
https://fedoraproject.org/wiki/Changes/InvokingTestsAnsible
Not everything here has been worked out. If you have possible
solutions, let's talk about them and, if necessary, add them to the
spec.
>> Both test_rpm.yml and test_local.yml operate in-situ, but one
>> performs an additional action.
>>
>> Again, as with test_oci.yml, if test_rpm.yml is commonly the same
>> boilerplate, we could find a simple way to reduce that. Do you have
>> ideas? It does seem to be necessary for packages to be able to
>> deviate from the default behavior in some cases. So such a solution
>> should not be a blanket thing.
> I agree that there will be and should be some deviation from the
> default behavior.
>
> I don't have any good ideas to reduce the boilerplate but I'm still
> struggling to understand the details of what we're aiming to do with
> all of this. Are we planning to support more than just per-build or
> per-container testing with this interface? If I wanted to write a
> test for FreeIPA that used multiple hosts to create and poke at an AD
> setup (for example), would I be writing that using this interface or
> is that out of scope for what we're trying to do?
That's in scope. This is exactly one of the reasons the tests (more
likely shared roles or a framework) launch the VMs themselves: it's
what enables a test like that.
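
Sketched out, such a multi-host test could stage both machines itself
and then run plays against each one. Everything below is illustrative -
the role, variable, and command names are made up:

- hosts: localhost
  connection: local
  roles:
    # Hypothetical shared role that boots one VM per entry in 'guests'
    # and adds each to the inventory under its own name.
    - role: start-vms
      guests: [ipaserver, ipaclient]

- hosts: ipaserver
  tasks:
    - name: Set up the server side of the test
      command: /usr/bin/true   # stand-in for real FreeIPA/AD setup

- hosts: ipaclient
  tasks:
    - name: Exercise the client against the server
      command: /usr/bin/true   # stand-in for the real test
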
https://github.com/cockpit-project/cockpituous/tree/master/tests
It's something we do all the time with the Cockpit integration tests. We
use local qemu/libvirt VMs for this that start in about 10 seconds. If
we want, once we're underway, we could contribute these tests to Fedora.
Cheers,
Stef