Hello,
This is my first attempt to integrate my small test suite (currently server-oriented) into the AutoQA framework. I haven't yet tried running the tests through AutoQA; I need to install some dependencies, so next week I'll try to check [1].
git repo: git://fedorapeople.org/~athmane/autoqa.git
tests are in: tests/pkg_tests/tests
[1] http://repos.fedorapeople.org/repos/fedora-qa/autoqa/
On Tue, 2011-06-14 at 01:43 +0100, Athmane Madjoudj wrote:
Hello,
This is my first attempt to integrate my small test suite (currently server-oriented) into the AutoQA framework. I haven't yet tried running the tests through AutoQA; I need to install some dependencies, so next week I'll try to check [1].
git repo: git://fedorapeople.org/~athmane/autoqa.git
tests are in: tests/pkg_tests/tests
Very nice, thank you for sharing! I'm still absorbing your code, but just a few quick comments...
Have you considered using a more standard (well-defined/documented) test format? We have a lot of folks with experience using beakerlib [1] (a bash test helper library), and I'm sure there are others. At the very least, you might want to consider a tests/pkg_tests/tests/README file explaining the test format, required files, required file permissions, and perhaps some sample hello-world type thing? However, I would suggest going with a well-documented format for ease in maintenance. We can explore some options if you are interested.
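To make this concrete, a hello-world style beakerlib test could look something like the sketch below (assuming the beakerlib package is installed; the httpd checks are only an illustration, not something from your suite):

#!/bin/bash
# Hello-world style beakerlib test (sketch): check that httpd serves
# its default page. Work is grouped into setup/test/cleanup phases.
. /usr/share/beakerlib/beakerlib.sh

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm httpd                  # bail out early if the rpm is missing
    rlPhaseEnd
    rlPhaseStartTest
        rlRun "service httpd start" 0 "Start the httpd service"
        rlRun "curl -s http://localhost/ >/dev/null" 0 "Fetch the default page"
    rlPhaseEnd
    rlPhaseStartCleanup
        rlRun "service httpd stop" 0 "Stop the httpd service"
    rlPhaseEnd
rlJournalEnd
rlJournalPrintText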
We discussed on #fedora-qa already, but I think we should mention that these tests are potentially disruptive to the system. We might want to explore doing a virt-install in your test wrapper so your tests are provided a disposable system. Alternatively, bonus points if you can figure out how to farm out the tests to a cloud service instead.
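For the record, the wrapper might do something along these lines (an untested sketch; the guest name, disk path, and the ks.cfg kickstart that would actually run the tests are all placeholders):

# Sketch: do a throwaway Fedora 15 guest install, with a kickstart
# driving the test run, then discard the disk image afterwards.
virt-install \
    --name autoqa-disposable \
    --ram 1024 \
    --disk path=/var/lib/libvirt/images/autoqa-disposable.img,size=8 \
    --location http://dl.fedoraproject.org/pub/fedora/linux/releases/15/Fedora/x86_64/os/ \
    --initrd-inject ks.cfg \
    --extra-args "ks=file:/ks.cfg console=ttyS0" \
    --graphics none \
    --noreboot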
In each of your test scripts, perhaps add some documentation explaining what the purpose of the test is? This likely would be covered by the test format above.
I think we need to explore execution a bit further. We have grand plans of defining a structure for package maintainers to define per-package tests in their own git space (likely alongside their pkgs.fedoraproject.org code). However, we aren't yet able to provide disposable test systems to support maintainer-contributed tests. I don't have a full answer for this yet, but in the meantime I think it's perfectly fine to explore this idea further. Some questions to help lead us to the answer:
1. Who is the expected audience for these test results? Meaning, if a test fails, who needs to know?
2. Who is expected to maintain and expand these tests?
3. When should these tests be run?
Thanks, James
On 06/14/2011 02:52 PM, James Laska wrote:
On Tue, 2011-06-14 at 01:43 +0100, Athmane Madjoudj wrote:
... <snip>
Have you considered using a more standard (well-defined/documented) test format? We have a lot of folks with experience using beakerlib [1] (a bash test helper library), and I'm sure there are others. At the very least, you might want to consider a tests/pkg_tests/tests/README file explaining the test format, required files, required file permissions, and perhaps some sample hello-world type thing? However, I would suggest going with a well-documented format for ease in maintenance. We can explore some options if you are interested.
Yes, I've looked at BeakerLib (it's already used by the initscripts test). I was very impressed by the quality of the code and the output (I never thought we could do that with Bash/sh), but I suppose tests can be written in different languages (Python/Ruby/Perl ...); only the exit return value matters.
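For example, a runner could be as simple as this sketch (the test directory and the PASS/FAIL reporting here are my assumptions, not existing AutoQA behavior):

#!/bin/bash
# Hypothetical runner sketch: every executable file in the test dir is
# a test, whatever language it's written in; exit status 0 means PASS.
testdir=tests/pkg_tests/tests
failed=0
for t in "$testdir"/*; do
    [ -x "$t" ] || continue            # skip READMEs, data files, etc.
    if "$t"; then
        echo "PASS: $t"
    else
        echo "FAIL: $t (exit $?)"
        failed=$((failed + 1))
    fi
done
exit $failed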
We discussed on #fedora-qa already, but I think we should mention that these tests are potentially disruptive to the system. We might want to explore doing a virt-install in your test wrapper so your tests are provided a disposable system. Alternatively, bonus points if you can figure out how to farm out the tests to a cloud service instead.
A cloud service is a very interesting idea.
In each of your test scripts, perhaps add some documentation explaining what the purpose of the test is? This likely would be covered by the test format above.
+1 for docs; this should be my next task.
I think we need to explore execution a bit further. We have grand plans
... <snip>
1. Who is the expected audience for these test results? Meaning, if a test fails, who needs to know?
Perhaps the package maintainer or QA, so we can file bugs after reproducing the failures manually.
2. Who is expected to maintain and expand these tests?
3. When should these tests be run?
Not sure; maybe during the validation tests, or after them.
Thanks.