On Wed, 24 May 2017 20:00:00 +0200
Stef Walter <stefw@redhat.com> wrote:
> On 24.05.2017 13:50, Tim Flink wrote:
> > As we're working more with the test invocation spec [1] and the
> > ansible implementation [2], there are a few things which aren't 100%
> > clear to me and I'm hoping to get some clarification on.
> >
> > Why is "ansible_connection=local" a requirement for the testing
> > system?
> The initial invocation of Ansible is meant to act locally. The ansible
> scripts are meant to stage their own containers or virtual machines
> (e.g. with qemu or libvirt) and then perform further tasks in them.
Sure, but I'm wondering why this is a requirement. If we were using
shell scripts, this would make a lot of sense. As we're using ansible,
there isn't a whole lot of difference between executing locally vs.
remotely so long as ssh keys are set up properly.
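To illustrate with made-up hostnames, the only real difference on the
invocation side is one line in the inventory:

  # local execution, as the spec currently requires
  [subjects]
  localhost ansible_connection=local

  # remote execution over ssh, which ansible handles just as easily
  [subjects]
  some-test-vm.example.com ansible_user=root
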
In a way, it feels like we've decided to use the autopkgtest paradigm
but dropped ansible in as "bash++". I'm having a hard time seeing how
the current setup is that much better than bash and something like
beakerlib.
Don't get me wrong, I'm all for doing automation with Ansible - it's
the restrictions placed around that automation that I'm questioning.
> > Why is obtaining the test subject solely the responsibility of the
> > testing system? This seems like something that could potentially
> > have different requirements for different use cases. For example,
> > something like abicheck needs to grab previous builds in addition
> > to the latest build of a given package.
> It seems like those previous builds are not the test subject. They are
> part of the fixtures of the test or test suite. In CI terms, the test
> subject is the thing that has changed. The fixtures are the things it
> is tested against. We didn't include the term 'fixtures' in the
> terminology section, but its usage here is the usual one in testing.
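If that's the model, then I assume something like abicheck would fetch
its own fixtures in its playbook with tasks along these lines (a
sketch; previous_nvr and the paths are invented for illustration):

  - name: create a place for fixtures
    file:
      path: /tmp/fixtures
      state: directory

  - name: download the previous build to compare against
    command: koji download-build --arch=x86_64 {{ previous_nvr }}
    args:
      chdir: /tmp/fixtures
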
> > I don't understand what we're trying to accomplish with all the
> > test_<subject>.yml files. Is the idea that we'd run test_oci.yml for
> > every package used in building a container against that container?
> When a test is written in such a way that it makes sense to test it as
> an OCI image, then we expect that test_oci.yml is present. In some
> cases this file will be a boilerplate of a few lines.
When would it make sense to test an RPM as an OCI image? Are we
expecting all the test_<subject>.yml files to be in every repo?
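For reference, the boilerplate I'm picturing for the OCI case is
something like the following. This is pure guesswork on my part, and
tests.yml is just a name I made up for the shared checks:

  - hosts: localhost
    tasks:
      - name: start a container from the image under test
        command: docker run -d --name subject {{ subject }}

      - name: make the running container addressable by ansible
        add_host:
          name: subject
          ansible_connection: docker

  - hosts: subject
    tasks:
      - name: run the same checks that test_local.yml would run
        include: tests.yml
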
> > What is the point in having test_local.yml and test_rpm.yml? I
> > remember the idea of in-situ tests being part of the discussion but
> > I thought that we were pretty limited in what we could do with
> > those, since we're not going to be modifying rpm the way Debian was
> > able to get changes made for .deb packages.
> test_rpm.yml installs RPMs and then tests against them. test_local.yml
> assumes that everything to be tested has already been installed.
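So, if I'm reading that right, the difference amounts to roughly this
(a sketch; the group, variable, and file names are mine):

  # test_rpm.yml: install the subjects first, then run the shared checks
  - hosts: subjects
    tasks:
      - name: install the RPMs under test
        dnf:
          name: "{{ rpms_under_test }}"
          state: present

      - include: tests.yml

  # test_local.yml: assume the subjects are already installed
  - hosts: subjects
    tasks:
      - include: tests.yml
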
I don't understand why we're taking the stance of "we don't plan on
letting you install the RPMs under test but we do expect you to come up
with your own method of booting VMs". It seems very strange but I could
be missing something here.
I understand the need to allow folks to set up and tear down their own
set of VMs for testing but I'm still not understanding why this is the
only method that we'll support. I would have thought that the ability
to specify the type(s) and number of VMs needed would be an easier route
for many tests. Is there something I'm missing?
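Something along these lines is what I had in mind, i.e. the test
declares what it needs and the execution system does the provisioning
(entirely hypothetical, nothing like this is in the spec today):

  required_machines:
    - role: server
      distro: fedora-26
      count: 1
    - role: client
      distro: fedora-26
      count: 2
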
Another concern I have with installing RPMs for people is what to do
about builds which depend on each other and ordering. Are we assuming
that test case dependencies will declare all the dependencies needed to
install a new build? How will we handle cases where multiple builds
need to be tested against each other but are not built at the same
time?
> Both test_rpm.yml and test_local.yml operate in-situ, but one performs
> an additional action.
> Again, as with test_oci.yml, if test_rpm.yml is commonly the same
> boilerplate, we could find a simple way to reduce that. Do you have
> ideas? It does seem to be necessary for packages to be able to deviate
> from the default behavior in some cases. So such a solution should not
> be a blanket thing.
I agree that there will be and should be some deviation from the
default behavior.
I don't have any good ideas to reduce the boilerplate but I'm still
struggling to understand the details of what we're aiming to do with
all of this. Are we planning to support more than just per
build/container testing with this interface? If I wanted to write a
test for FreeIPA that used multiple hosts to create and poke at an AD
setup (for example), would I be writing that using this interface or is
that out of scope for what we're trying to do?
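To make that question concrete, the kind of inventory such a test
would need looks roughly like this (hostnames invented):

  [ipa_server]
  master.ipa.test

  [ad_dc]
  dc.ad.test

  [ipa_clients]
  client1.ipa.test
  client2.ipa.test
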
Tim