On Mon, 2011-04-18 at 06:54 +0100, Athmane Madjoudj wrote:
> Hello,
> I've some tests written in Python (all under 50 lines), used to test
> the following software:
Wow, this is great! First, let me say thanks for being proactive and
attempting to create AutoQA tests for your packages. At present, AutoQA
is focused on tests that apply to all packages (e.g. rpmlint, depcheck,
upgradepath, etc.). At some point in the future, AutoQA will support
package-specific tests contributed by maintainers, but at this time we
are not able to offer that support.
The main roadblock I'm aware of is crafting a solution to provide
disposable test systems. With disposable test systems, each test would
get a fresh system to run on, and wouldn't need to worry about
destructive tests or test cleanup. Another roadblock is defining a
structure for where and how maintainers will write tests. I don't think
we need to expose all of the autotest/autoqa library to maintainers;
it's just too much information. We need to figure out what information
is needed, and provide a simple way for maintainers to store tests so
that AutoQA will run them at the desired event.
We could always use help building the infrastructure for
package-specific tests. In the meantime, though, the options for
creating and running your tests are probably:
1. Work with upstream to have your tests included in any upstream
test suite
2. Include your tests in the rpm and have them run by %check during
package build
3. Create a private [1] autoqa git branch that mirrors 'stable' and
manually run your tests. You would be responsible for running
the tests and communicating the results, but this would at least
help you identify issues with integrating tests, and perhaps
help expedite the process of package-specific tests.
[1]
https://fedoraproject.org/wiki/Infrastructure/fedorapeople.org#Creating_a...
> Dovecot : 1 test
> Apache HTTPD: 3 + 1 not yet finished
> vsFTPd : 1 test
> MySQL : 1 test
> SSHD : 1 not yet finished
> After reading about autotest (big picture) and 'Writing AutoQA Tests'
> [1], I need the following info:
> 1. Should I use TEST_TYPE = 'CLIENT', because the tests actually run
> on 'localhost', or do I need to modify the tests to be network-aware
> and use TEST_TYPE = 'SERVER'?
You'll want CLIENT, unless your tests require multiple systems
coordinating with each other using barriers.
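For reference, a minimal single-host control file might look something
like this (the test name, author, and metadata here are placeholders,
not an actual AutoQA test):

```python
# Hypothetical autotest control file for a single-host (CLIENT) test.
# 'job' is injected by autotest when it executes this file; the test
# name 'httpd_smoke' is made up for illustration.
AUTHOR = "Your Name <you@example.com>"
NAME = "httpd_smoke"
TEST_TYPE = "CLIENT"   # runs on one machine; SERVER is only needed for
                       # multi-host tests coordinated with barriers
TEST_CLASS = "General"
TIME = "SHORT"
DOC = """
Smoke test for the httpd package, run entirely on localhost.
"""

job.run_test('httpd_smoke')
```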
> 2. I'm a little confused about the directory structure: should I group
> all tests into one directory, or split them according to the service
> (tests/httpd, tests/httpd_vhost, tests/vsftpd, etc.)?
Since we don't yet support package-specific, maintainer-contributed
tests, the directory structure isn't as important. I'd probably suggest
a single directory for each src.rpm ... but that's subject to change and
just a best guess at this point.
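Concretely, a per-src.rpm layout along those lines might look like this
(directory and file names are just examples, not a settled convention):

```
tests/
    httpd/
        control        # autotest control file
        httpd.py       # test wrapper
    vsftpd/
        control
        vsftpd.py
```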
> 3. The tests use some non-standard Python modules like pexpect and
> 'tcpcat' (I wrote this module to emulate netcat, because the tests
> failed after hitting a netcat bug [2]). Is that OK?
The dependencies of a test can be installed using the setup() method in
the autotest wrapper. If your dependencies are not packaged as rpms,
they'll need to be, or they can be included in your test. The ideal
scenario, though, is to package them.
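As a rough sketch of the kind of logic setup() could run (this uses
plain subprocess calls rather than the real autotest utilities, and the
yum invocation is an assumption about the test host):

```python
import importlib
import subprocess

def ensure_module(module, package=None):
    """Import `module`; if it's missing, install `package` (defaulting
    to the module name) with yum and try importing again. A stand-in
    for what a setup() method might do for deps like pexpect."""
    try:
        return importlib.import_module(module)
    except ImportError:
        # Assumes a yum-based host and that the dependency is packaged
        # as an rpm.
        subprocess.check_call(['yum', '-y', 'install', package or module])
        return importlib.import_module(module)
```

In a real wrapper you'd call something like ensure_module('pexpect')
from setup(); if a module can't reasonably be packaged, shipping it
alongside the test files works too.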
Thanks,
James