Hi Kamil,
Thank you for the reply.
On Wed, Apr 13, 2016 at 7:34 PM, Kamil Paral <kparal(a)redhat.com> wrote:
>
> Hello Sinny,
> thanks for your work and sorry for the late response. I'll review your
> taskotron task and let you know if there's something that should be changed
> or not. Afterwards, we will start mirroring your git repo on our taskotron
> servers, and patch our taskotron-trigger to know about task-libabigail. We
> can then execute it on every new Koji build (or Bodhi update, your choice).
> Your task will probably be the first one that we will execute regularly
> while not being written and maintained directly by us, so if there are any
> rough edges in the process, I apologize in advance :-)
>
Okay, sounds good to me.
>
> You'll need to have two branches in your git:
> master - this will be used on our production server
> https://taskotron.fedoraproject.org/
> develop - this will be used on our dev and staging server
> http://taskotron-dev.fedoraproject.org/ and
> https://taskotron.stg.fedoraproject.org/
Okay, works for me.
> You need to decide whether it is better to run libabigail against every new
> Koji build, or just against every new Bodhi update. From a quick look, I
> think it makes more sense to run libabigail on every new Koji build, so
> that people can see the results even before creating the update (that
> requires looking into ResultsDB manually at the moment). If we run it on
> every Koji build, the results will still show up in Bodhi - Bodhi should
> query ResultsDB and show the results for those particular builds present in
> the update. (We might need to teach Bodhi about libabigail existence, I'm
> not sure). Ultimately it's your choice, what makes more sense for your
> check.
>
I believe that when we say every new Koji build, we are talking about
non-scratch builds, which don't include scratch builds done by anyone. If my
assumption is right, then yes, running the libabigail task on each Koji build
will be good. It is possible to do that with the current implementation, since
the libabigail task looks up a Koji build ID to download the corresponding RPMs.
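For illustration, here is a rough sketch of that step in Python (not the
actual task code: the NVRs, file names and helper names are made up, and it
assumes the koji and abipkgdiff command-line tools are installed):

    import os
    import subprocess

    def download_build(nvr, destdir):
        # Fetch all RPMs (including debuginfo) of a Koji build into destdir.
        os.makedirs(destdir)
        subprocess.check_call(['koji', 'download-build', '--debuginfo', nvr],
                              cwd=destdir)

    def abi_check(old_rpm, new_rpm, old_debuginfo, new_debuginfo):
        # abipkgdiff exits non-zero when it detects ABI differences.
        ret = subprocess.call(['abipkgdiff', '--d1', old_debuginfo,
                               '--d2', new_debuginfo, old_rpm, new_rpm])
        return 'PASSED' if ret == 0 else 'FAILED'

    # Hypothetical usage: compare an update candidate against the stable build.
    download_build('foo-1.0-1.fc24', 'old')
    download_build('foo-1.0-2.fc24', 'new')
    print(abi_check('old/foo-1.0-1.fc24.x86_64.rpm',
                    'new/foo-1.0-2.fc24.x86_64.rpm',
                    'old/foo-debuginfo-1.0-1.fc24.x86_64.rpm',
                    'new/foo-debuginfo-1.0-2.fc24.x86_64.rpm'))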
> Please also create a wiki page at
> https://fedoraproject.org/wiki/Taskotron/Tasks/libabigail similar to these
> https://fedoraproject.org/wiki/Taskotron/Tasks/depcheck
> https://fedoraproject.org/wiki/Taskotron/Tasks/upgradepath
> linked from https://fedoraproject.org/wiki/Taskotron/Tasks .
>
Sure, I will create one.
> We try to have at least some basic documentation and FAQ for our checks in
> there. Currently it's not very discoverable (we should link to it at least
> from ResultsDB, which we currently don't) and the location can change, but
> at least it's a link we can give to people when they ask basic questions
> about one of our tasks. Also, since you're going to maintain the task and
> not us, please include a "Contact" section saying where to post feedback or
> report bugs (e.g. the github issues page). If people ask us about the task and
> we don't know the answer, we will point them to that wiki page.
>
Will add a contact section to the wiki page.
I wonder if it would be better to have this included with the other Fedora QA
tasks?
> Can we please continue this discussion in the qa-devel [1] mailing list? We
> can discuss more implementation details in there, and I'll post my review
> findings in there as well.
>
done!
--
http://sinny.io/
# Fedora QA Devel Meeting
# Date: 2016-04-25
# Time: 14:00 UTC
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net
This will likely be a shorter meeting; there's not a whole lot to cover this
week.
Please put announcements and information under the "Announcements and
Information" section of the wiki page for this meeting:
https://phab.qadevel.cloud.fedoraproject.org/w/meetings/20160425-fedoraqade…
Tim
Proposed Agenda
===============
Announcements and Information
-----------------------------
- Please list announcements or significant information items below so
the meeting goes faster
Tasking
-------
- Does anyone need tasks to do?
Potential Other Topics
----------------------
- Docker testing
- abi checking
Open Floor
----------
- TBD
#436: SSH access to systems in Beaker lab
--------------------------------------+---------------------
Reporter: atodorov | Owner: tflink
Type: defect | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
Currently, systems in the Beaker lab can be accessed only through bastion.fp.o,
which is not as convenient as direct SSH into the system.
There's also the question whether or not to open the systems directly to
the Internet.
This needs to be discussed with infra. Filing here so it doesn't get lost.
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/436>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
As a heads up, I'm planning to run a few tests on the staging
phabricator instance later today.
As part of this, you may receive some duplicate emails, but if that
happens, there shouldn't be many.
Tim
# Fedora QA Devel Meeting
# Date: 2016-04-18
# Time: 14:00 UTC
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net
It's been a few weeks since we had our last QA devel meeting and I'm
sure that everyone is chomping at the bit to get back to them.
Please put announcements and information under the "Announcements and
Information" section of the wiki page for this meeting:
https://phab.qadevel.cloud.fedoraproject.org/w/meetings/20160404-fedoraqade…
Tim
Proposed Agenda
===============
Announcements and Information
-----------------------------
- Please list announcements or significant information items below so
the meeting goes faster
Tasking
-------
- Does anyone need tasks to do?
Potential Other Topics
----------------------
- Docker testing
- taskotron-ansible
Open Floor
----------
- TBD
Hey folks, just a quick openQA status update: both staging and prod are
now running a recent git master build of openQA and os-autoinst (the
-24 and -7 builds in the recently-submitted update). Also, qa14.qa has
been added to prod as an extra worker box; it's currently hosting 10
workers, just over doubling our test capacity (wahay). It was used for
the 20160415 Rawhide tests and they seemed to go fine; we'll keep an
eye on things for the next few days.
Tim is hoping to free up another of those boxes for use as a worker
soon; we could give both 'big iron' boxes to prod (=20 workers) and
give all three old worker boxes to stg (=6 workers), or split it up
some other way (e.g. give both servers one 'big iron' box and give all
the old boxes to prod, which would be a 16/10 split). (It *is* possible
to have one box act as a worker host for two openQA servers, but it
could get tricky if the tests get out of sync between the two hosts, as
sometimes happens, since the worker box has just one mount point for the tests,
so it's probably best to stick with each worker host box being
dedicated to a server).
We could also look at going to single-CPU VMs; this is how SUSE runs,
apparently. We ought to be able to up the worker count considerably if
we do that.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
> Hi all,
> Last year, discussion happened around "Checking the ABI of packages submitted
> to the updates-testing Fedora repository" on Fedora devel ML [1].
> We felt that taskotron[2] would be the best place to run automatic ABI checks
> for a new package update pushed to bodhi for testing, against the latest stable
> package. If any ABI incompatibility is detected, we will give the package
> maintainer feedback to review the ABI changes, similar to how rpmlint feedback
> is given. This will help ship a stable ABI for Fedora packages and reduce
> build failures seen later in packages that depend on them.
> I have created a libabigail [3] taskotron task which can be integrated into
> taskotron to perform automatic ABI checks for new bodhi updates. Right now,
> this task downloads all rpms with debuginfo for the specified package update
> and the corresponding latest stable package available for that release. It
> then runs libabigail's abipkgdiff tool for the ABI checks. It reports PASSED
> if no ABI change occurred, otherwise FAILED, and the ABI changes log can be
> reviewed to decide whether the changes are OK to go.
> Source code of libabigail taskotron task can be accessed from github[4].
> Some sample output of libabigail task run on my local machine:
> * http://fpaste.org/349740/
> * http://fpaste.org/349741/
> * http://fpaste.org/349761/
> It would be great if someone could review the libabigail task and provide
> feedback. Also, I would love to know the further steps required to integrate
> it with taskotron.
> Thanks,
> Sinny
> [1]
> https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org…
> [2] http://libtaskotron.readthedocs.org/en/develop/
> [3] https://sourceware.org/libabigail/
> [4] https://github.com/sinnykumari/task-libabigail
Hello Sinny,
thanks for your work and sorry for the late response. I'll review your taskotron task and let you know if there's something that should be changed or not. Afterwards, we will start mirroring your git repo on our taskotron servers, and patch our taskotron-trigger to know about task-libabigail. We can then execute it on every new Koji build (or Bodhi update, your choice). Your task will probably be the first one that we will execute regularly while not being written and maintained directly by us, so if there are any rough edges in the process, I apologize in advance :-)
You'll need to have two branches in your git:
master - this will be used on our production server https://taskotron.fedoraproject.org/
develop - this will be used on our dev and staging server http://taskotron-dev.fedoraproject.org/ and https://taskotron.stg.fedoraproject.org/
We will regularly pull your repo, probably with a cron job for now. The cron job is not yet written and the periodicity is not yet decided, but you can track it here:
https://phab.qadevel.cloud.fedoraproject.org/T767
You need to decide whether it is better to run libabigail against every new Koji build, or just against every new Bodhi update. From a quick look, I think it makes more sense to run libabigail on every new Koji build, so that people can see the results even before creating the update (that requires looking into ResultsDB manually at the moment). If we run it on every Koji build, the results will still show up in Bodhi - Bodhi should query ResultsDB and show the results for those particular builds present in the update. (We might need to teach Bodhi about libabigail existence, I'm not sure). Ultimately it's your choice, what makes more sense for your check.
Please also create a wiki page at https://fedoraproject.org/wiki/Taskotron/Tasks/libabigail similar to these
https://fedoraproject.org/wiki/Taskotron/Tasks/depcheck
https://fedoraproject.org/wiki/Taskotron/Tasks/upgradepath
linked from https://fedoraproject.org/wiki/Taskotron/Tasks .
We try to have at least some basic documentation and FAQ for our checks in there. Currently it's not very discoverable (we should link to it at least from ResultsDB, which we currently don't) and the location can change, but at least it's a link we can give to people when they ask basic questions about one of our tasks. Also, since you're going to maintain the task and not us, please include a "Contact" section saying where to post feedback or report bugs (e.g. the github issues page). If people ask us about the task and we don't know the answer, we will point them to that wiki page.
Can we please continue this discussion in the qa-devel [1] mailing list? We can discuss more implementation details in there, and I'll post my review findings in there as well.
Thanks,
Kamil
[1] https://lists.fedoraproject.org/archives/list/qa-devel@lists.fedoraproject.…
# Fedora QA Devel Meeting
# Date: 2016-04-04
# Time: 14:00 UTC
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net
It's been a few weeks since we had our last QA devel meeting and I'm
sure that everyone is chomping at the bit to get back to them.
Please put announcements and information under the "Announcements and
Information" section of the wiki page for this meeting:
https://phab.qadevel.cloud.fedoraproject.org/w/meetings/20160411-fedoraqade…
Proposed Agenda
===============
Announcements and Information
-----------------------------
- Please list announcements or significant information items below so
the meeting goes faster
Tasking
-------
- Does anyone need tasks to do?
Open Floor
----------
- TBD
Hey, folks. Just wanted to let everyone know I've done scratch builds
of current git master os-autoinst and openQA and deployed them to the
staging instance; prod is still on 4.3-22. The set of patches
backported from git was getting pretty unwieldy and we were starting to
have to manually rediff a lot of them, which sucks. So I figured we
could just bump up the packages and see how it goes; I plan to run them
on staging for a week or so, see what upstream thinks, and if it all
looks OK we can bump prod too. So far it seems to be running fine.
Note I made sure to use a commit after Jan's 'expand downloaded
compressed assets' stuff landed, for both packages, so he should be
able to work on the ARM stuff on staging now.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
Resending to keep things public...
tl;dr; of the tl;dr;
https://www.youtube.com/watch?v=3DWB7CBdvXU
tl;dr;
Tim and I/lbrabec shared the same worry: that by being overly accommodating
to docker, we might go down the spiral of doom, where in the end we'd have
to build and support specific tooling for all of the projects.
Using pytest for docker testing seemed silly to me, since I thought it meant
we would be adding a layer of docker-specific convenience code, thus going
down the spiral of doom, instead of using some pre-existing docker
convenience (tutum).
Tim was worried that using Tutum (a docker-specific tool) would send us down
the same spiral.
In the end, we agreed that Taskotron should be first and foremost a
universal runner. Tim mentioned pytest (AFAIK) because we will need to be
able to consume some more "standard" output format than result yaml (and
return code), and pytest might be a good source of this, while at the same
time providing reasonably OK testsuite-like behavior.
For the problem at hand (but this is universal for the future problems
too), we say:
1) we understand these output formats (result yaml, return code, in the
future probably junit); a minimal result yaml sketch is further below
2) write the test any way you want (bash to compiled C), and as long as
you provide one of the options from #1 as output, we're fine
3) if we did the tests, we'd do it this way: [insert foobar test using
docker containers], but #1 and #2 are still valid
Where #3 is the "reference implementation" aka "simple piece of code, that
shows how we'd do it, but is by no means binding".
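To make #1 a bit more concrete, here is a minimal sketch (assuming PyYAML;
the field names follow what I believe libtaskotron's result yaml looks like,
so please double-check them against the docs) of a task writing its result:

    import yaml

    # One entry per checked item; outcome is e.g. PASSED or FAILED.
    results = {'results': [{
        'item': 'foo-1.0-1.fc24',        # hypothetical Koji build NVR
        'type': 'koji_build',
        'outcome': 'PASSED',
        'checkname': 'libabigail',
        'note': 'no ABI changes detected',
    }]}

    with open('result.yaml', 'w') as out:
        yaml.safe_dump(results, out, default_flow_style=False)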
We do not know, nor say, that pytest is _the_ tool for docker testing, and
we realistically expect that most of the tests will be just bash scripts
that do what's necessary; we (lbrabec) will just try to provide
_some_ reference implementation for a docker test in taskotron.
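And purely as a flavour of what such a reference implementation (#3) could
look like - the image name, port and plain docker CLI calls below are my
assumptions, not an agreed design:

    import subprocess
    import sys
    import time
    import urllib.request

    IMAGE = 'httpd'  # hypothetical image under test

    # Start the container, give the service a moment, then poke it over HTTP.
    cid = subprocess.check_output(
        ['docker', 'run', '-d', '-p', '8080:80', IMAGE]).decode().strip()
    ok = False
    try:
        time.sleep(5)  # naive; a real test would poll until the port is up
        ok = urllib.request.urlopen('http://localhost:8080/').getcode() == 200
    except Exception:
        ok = False
    finally:
        subprocess.call(['docker', 'rm', '-f', cid])

    # Report via the return code, i.e. output format #1.
    sys.exit(0 if ok else 1)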
Joza
Tim, if I left something out or messed up, please correct me. I'm going to
have a beer now...
(04:51:29 PM) lbrabec: welcome guys :)
(04:51:45 PM) tflink: is this the place where all the docker things are
figured out?
(04:52:04 PM) jskladan: and where we burn the witches tooo
(04:52:16 PM) tflink: this is an acceptable solution to docker
(04:52:38 PM) tflink: how do we test docker? we burn witches!
(04:54:46 PM) jskladan: so ad pytest & docker - I don't really see the
profit of making the image maintainers write tests in pytest - although I
get the "it is a testsuite" argument, docker is mostly interfaced with via
command line, and that is one of the worst things to do in Python.
(04:54:46 PM) jskladan: On top of that - why force a choice on the
maintainers, instead of just allowing them to "write a test(suite)" any way
they want (heck, even using pytest, if that's their choice), and just
running it as a regular task via taskotron?
(04:55:09 PM) tflink: who's forcing anyone to do anything?
(04:55:50 PM) jskladan: then I'm misunderstanding, and still don't
understand why should _we_ use pytest in any way
(04:55:52 PM) tflink: eh, it can be wrapped to be much less painful
(04:56:22 PM) tflink: because it offers a default option so there's not so
much overwhelming choice
(04:57:30 PM) tflink: if there's an easy(ish) default, that's what many
people will end up using
(04:57:45 PM) jskladan: from my POV, I don't see why we should treat Docker
testing any different than, ie. package-specific tests
(04:58:13 PM) tflink: which is why i was suggesting that we look into
something more generic
(04:58:34 PM) jskladan: more generic than what?
(04:58:45 PM) tflink: not something specific to docker
(04:59:11 PM) tflink: something that allows grouping of commands/actions
into test cases and makes reporting results easy for users
(05:00:32 PM) tflink: why would having everyone come up with their own
solution for that use case be better?
(05:00:42 PM) lbrabec: i always thought that the generic thing is
taskotron, and we are going to provide docker directive, that runs the
actual tests
(05:00:55 PM) tflink: we can
(05:00:59 PM) jskladan: ok, let me ask it in a different way - are we going
to "remove the overwhelming choice" for the package-specific tests too?
(05:01:31 PM) tflink: but there's a limit to what we're going to be able to
support if we do "this is for docker, this is for kubernetes, this is for
modules ..."
(05:01:38 PM) tflink: jskladan: I'd like to, yes
(05:02:05 PM) jskladan: so who's going to tell all the devs "well, what you
have now is nice, but you should really rewrite it to pytest"
(05:02:05 PM) jskladan: ?
(05:02:31 PM) tflink: but note that the base thing in my mind is "this is a
default which will likely make your lives easier. if there's a better tool,
please use it - if it returns something we can understand, it doesn't
matter but we won't be able to help as much if you run into issues"
(05:02:51 PM) tflink: nobody?
(05:03:33 PM) tflink: the target folks here are people writing tests for
things beyond what's already upstream
(05:04:03 PM) tflink: there is no way that we're going to be able to
mandate a single framework/runner/etc.
(05:04:08 PM) tflink: well, we could
(05:04:23 PM) tflink: but i don't think it would work and I'd consider that
a form of failure
(05:05:55 PM) tflink: my emphasis is that people can use pretty much
whatever they want. so long as it can be run on the command line and
returns some form of input we can parse (xunit xml, resultsyaml, tap ...),
I don't much care
(05:06:26 PM) jskladan: so, I think that on some fundamental level, we do
agree :) I'm just not sure where (if anywhere) we diverge
(05:07:58 PM) jskladan: since what I thought was the solution was to just
say "write a script, we'll run it", and that's it. What we wanted the tutum
for was to remove the burden of us making the tooling to interface with
docker
(05:08:27 PM) jskladan: like, even though it does not really make much
sense to me, you might want to run a command inside the docker container
(05:08:47 PM) tflink: if it works really well, using tutum for that is an
option
(05:08:58 PM) jskladan: so instead of us trying to put together the code
for it, we'd just use that code that exists
(05:09:15 PM) jskladan: whether we should make the tooling around it is
another question
(05:09:16 PM) tflink: which is always a balancing act, in my opinion
(05:09:38 PM) jskladan: but on the fundamental level, my idea of what
"Testing docker images" means for us is functional testing, basically
(05:09:50 PM) jskladan: you'd run the container, and then try to connect to
it
(05:09:54 PM) tflink: yep
(05:10:09 PM) tflink: maybe even getting into "here are 3 containers, make
sure they work together"
(05:10:58 PM) tflink: the docker ecosystem worries me more than most others
due to its tendency to change drastically
(05:11:39 PM) tflink: which tempts me to say that it might be wise to write
our own interfaces and keep them as simple as possible
(05:12:08 PM) jskladan: yup - it's like - I would not want to write a code
for them to set up the three containers, and connect them, and so on - that
would mean that we are in need of a declarative language, to say what's to
be done
(05:12:15 PM) jskladan: and the fact is - docker already has that
(05:12:17 PM) jskladan: the compose files
(05:12:58 PM) tflink: ok
(05:13:31 PM) jskladan: tflink: ad "write our own interfaces and keep them
as simple as possible" - that's what we already do, IMHO, via the formulae
(05:13:40 PM) jskladan: and result yaml
(05:13:59 PM) jskladan: input: any executable, output: result yaml
(05:14:00 PM) jskladan: done :)
(05:14:23 PM) tflink: and I'm not arguing against that
(05:14:30 PM) jskladan: and I understand that adding "something else,
hopefully standard" to output is a good idea
(05:14:44 PM) tflink: to output?
(05:14:53 PM) jskladan: to the output standard
(05:15:12 PM) tflink: yeah, we're going to have to do that eventually
(05:15:14 PM) jskladan: so it's not result yaml, but also ie. pytest's
output
(05:15:31 PM) jskladan: which is what I see as probably the gain you feel
there
(05:16:10 PM) tflink: but you don't think that the idea of having a
"default choice" for writing tests is a good idea?
(05:16:13 PM) jskladan: and by pytest's output, I really do mean junit, or
however the "de facto standard" is called
(05:16:40 PM) tflink: i figure we'll find a few options and specify them as
"stuff we understand"
(05:17:12 PM) *garretraziel left the room (quit: Quit: garretraziel).*
(05:18:13 PM) jskladan: well, I think that we should really just define the
interface, not provide the "default choice", but the border between the two
is probably blurred
(05:18:20 PM) ***tflink figured that having a "default" with a bunch of
examples would be helpful overall so long as there's no "first class
citizen" stuff involved
(05:18:51 PM) tflink: why are those two things mutually exclusive?
(05:19:15 PM) jskladan: I'm not saying they are
(05:19:56 PM) tflink: hrm, let me try rephrasing things
(05:20:10 PM) tflink: would you have the same objection if the "default
option" wasn't part of libtaskotron?
(05:21:35 PM) jskladan: *shrugs* I probably don't understand the question
(05:23:16 PM) tflink: is your concern with libtaskotron growing too many
arms? ie, trying to be too many things and that the paradigm of "we set up,
you execute" could be hurt by having too many different things?
(05:23:25 PM) jskladan: but we have diverged a bit, I guess so when you
said "usiepytest to test docker" - as I understand it now, you did not mean
that we should write a codebase/library to make testing docker stuff using
pytest easier (like pre-coding the fixtures _we_ think are useful, or
something like that) - because that would go against 'but there's a limit
to what we're going to be able to support if we do "this is for docker,
this is for kubernetes, this is for modules ..."'
(05:23:26 PM) tflink: that's an odd question, i'll rephrase
(05:24:01 PM) jskladan: tflink: yup, I guess it could be said like that
(05:24:51 PM) jskladan: I think that we basically share the same worries -
I don't want to get to the point where we over-facilitate for all of the
people
(05:25:11 PM) tflink: which makes sense. would you have the same concern if
the pytest thingy was separate from libtaskotron?
(05:25:37 PM) tflink: so it was a "here's an example of something that
works with libtaskotron, you can use it or just make sure your tool behaves
close enough to it"
(05:25:49 PM) jskladan: that does not mean, though, that I don't see value
in package-specific testing, or what you discussed with kamil lately (the
CI for our repos thing),
(05:26:08 PM) tflink: yeah, I see these as somewhat separate concerns
(05:26:18 PM) jskladan: exactly
(05:27:50 PM) jskladan: ad "here's example of something..." - I really
don't have a problem with that. My concern with "pytest for docker" was
that we are to bend pytest to make docker testing easier (than something
else, undefined) to be honest
(05:28:09 PM) jskladan: and if I understand your problem with tutum
correctly now, we basically share the same worries
(05:28:31 PM) jskladan: that by using something too specific, we are going
into the spiral of doom
(05:28:36 PM) jskladan: of specific things
(05:28:57 PM) tflink: yeah, it was less bending pytest to work better for
docker than wondering if the docker-specific things are worth dealing with
or if we should just have a default that works well enough
(05:29:00 PM) tflink: yep
(05:29:26 PM) jskladan: it was just that I thought that you want to add
(docker specific) convenience layer over pytest, and I wanted to use
something that's there (tutum) instead of us being responsible for the
convenience layer
(05:29:32 PM) jskladan: especially since docker changes day to day
(05:29:59 PM) tflink: maybe in starting up images/instances
(05:30:06 PM) jskladan: and this was the concern lbrabec voiced quite a lot
- that we should not have our own convenience code, since docker changes
all the time
(05:30:34 PM) tflink: ie, make it reasonable for libtaskotron to start
images and hand off stuff to whichever task is running
(05:30:52 PM) tflink: the more I'm thinking about it, the more I like the
idea of having a reference implementation of a testing tool
(05:31:46 PM) tflink: "here's the tool, here's how it interacts with
libtaskotron. you're welcome to use it, extend it or use something
different that behaves close enough to it"
(05:32:33 PM) tflink: but to be a bit more clear (even if I am topic
jumping a bit) - I agree with trying to avoid the "spiral of death" that
you mentioned earlier
(05:32:43 PM) tflink: trying to be all things to all people is a recipe for
disaster and failure
(05:33:18 PM) jskladan: agreed
(05:35:15 PM) tflink: what are your thoughts on the reference
implementation idea?
(05:36:11 PM) jskladan: so to sum it up, for docker (not only, but it's the
problem at hand), we could say:
(05:36:11 PM) jskladan: 1) we understand these output formats (result yaml,
return code, in the future probably maybe junit)
(05:36:11 PM) jskladan: 2) write the test any way you want (bash to
compiled C), and as long as you provide one of the options from #1 as
output, we're fine
(05:36:11 PM) jskladan: 3) if we did the tests, we'd do it this way:
[insert foobar test using docker containers], but #1 and #2 are still valid
(05:37:14 PM) jskladan: where #3 is the "reference implementation" aka
"proof of concept code" aka "magic land full of unicorns"
(05:37:26 PM) jskladan: for the problem at hand
(05:38:08 PM) tflink: yeah, that sounds good
(05:38:30 PM) tflink: i suspect that it would be wise to keep 3 as dumb as
possible, though
(05:38:30 PM) jskladan: I'm still not convinced that pytest is the way to
go for docker specifically, but hey, that's an implementation issue now
(05:38:49 PM) tflink: pytest was just an example, I have no idea if it
would be a good choice for docker
(05:38:52 PM) jskladan: yup
(05:38:55 PM) jskladan: superb!
(05:39:08 PM) tflink: I really don't know that much about docker
(05:39:20 PM) tflink: so if i'm suggesting silly things, I assume you all
will let me know :)
(05:39:36 PM) jskladan: yeah, and I get what you meant by that now, and I
understand the concerns you have/had
(05:40:11 PM) tflink: it was really a "i don't think anyone knows what
'docker testing' means right now, what can we use for a generic,
cover-our-asses solution so we're not caught unprepared?"
(05:40:13 PM) jskladan: I was just having an aggro on "using pytest for
docker testing is silly", and did not understand where you're coming from
(05:40:21 PM) tflink: no worries
(05:40:30 PM) jskladan: sure thing
(05:40:57 PM) tflink: lbrabec: does this make more sense for you as well?
(05:41:17 PM) jskladan: and I believe we all shared the same idea - that was
the reason for Lukas going the tutum way - since Docker "bought" it, and it
removed the "Docker is changing all the time, and we are responsible" issue
(05:41:56 PM) lbrabec: lbrabec: yep x)
(05:42:17 PM) jskladan: lbrabec: compass making you talk to yourself?
(05:42:18 PM) jskladan: :D
(05:42:28 PM) lbrabec: kind of :D
(05:42:33 PM) tflink: that sounds about right
(05:42:52 PM) tflink: compass causing folks to talk to themselves
(05:43:34 PM) tflink: but docker or arm could do that, too :)
(05:44:29 PM) tflink: in case it's not clear - there's a decent number of
things going on which are pretty hazy
(05:44:41 PM) tflink: if something doesn't make sense, speak up, please
(05:45:11 PM) tflink: too many meetings can cause questionable ideas to
seem OK
(05:49:07 PM) jskladan: :)
(05:50:09 PM) jskladan: well I'm glad this is sorted out, and to be honest
I/(us in the office) needed an hour or so of heated discussion just to
wrap our heads around the "docker testing" as a whole
(05:50:26 PM) jskladan: and I'
(05:50:59 PM) tflink: if you've wrapped your head around docker testing,
you may be ahead of me :)
(05:51:27 PM) jskladan: hehe, I would not go that far, but we at least
tried to define what we expect of it, and what we think the use-cases might
be
(05:51:51 PM) jskladan: like, that running some testsuite _inside_ the
container, is probably not the way to go
(05:52:04 PM) jskladan: and that it should be more of an
integration/functional testing
(05:52:15 PM) tflink: yeah, i can see use cases for that but I'm not very
interested in those
(05:52:24 PM) jskladan: in the "run the container, check that the service
[http, postgres] works fine"
(05:52:50 PM) jskladan: or maybe "run these two containers, connect them,
and make sure it works ok"
(05:52:58 PM) tflink: yep
(05:53:08 PM) jskladan: but that we should keep this on the test
developers, since they know what to do
(05:53:24 PM) jskladan: and I think most of the "run and connect stuff"
will just be bash scripts
(05:53:29 PM) jskladan: based on docker behaves
(05:53:30 PM) tflink: exactly
(05:53:43 PM) jskladan: *how docker behaves
(05:53:43 PM) tflink: I don't pretend to be an expert in docker testing,
kernel testing or much else
(05:53:47 PM) jskladan: yeah
(05:54:03 PM) jskladan: ok, superb, I'm really happy we got to the same page
(05:54:07 PM) tflink: I'm more interested in providing a way to make
running automated tests and getting results easy
(05:54:16 PM) tflink: yeah, same here
(05:54:22 PM) tflink: sorry for not explaining things better
(05:54:42 PM) jskladan: no problem, really, it's the problem with written
conversation
(05:55:32 PM) tflink: and not being in the same office
(05:55:43 PM) tflink: which is pretty much the same problem :)
(05:56:03 PM) jskladan: do you have anything other we should discuss,
related to this? If not, I'll be leaving the office soon to get some
dinner, and beer. A lot of beer :D
(05:56:17 PM) tflink: nothing for today
(05:56:23 PM) tflink: beer sounds like a good plan to me
(05:56:34 PM) tflink: a natural consequence of working with docker :)
(05:56:46 PM) tflink: or test automation, come to think of it
(05:58:59 PM) jskladan: :)