I don't think this particular conversation ever made it very far and IIRC, hasn't been on the list yet but I want to get it started before FUDCon NA this weekend. I've cc'd David and Matthew because I've talked with them about Fedora test automation recently and they might have input.
In my opinion (I suspect that other people feel similarly), AutoQA in its current form is not capable of meeting the test automation needs for Fedora, mostly because we don't have a clear path towards external tests and it seems pretty clear that the current devs (myself included) don't have the bandwidth to add any more tests to the current set.
There has been some casual conversation about looking into switching over to using Beaker [1] so that we can leverage some of the tests currently being used by various groups within Red Hat instead of having to rewrite them for AutoQA/Autotest. However, I don't want this to sound like 'autotest is bad'. I sincerely doubt that there is a single framework/runner out there which will 100% satisfy all of our needs and I'm just looking to re-evaluate what we want from test automation before deciding how we get there.
Instead of getting into the minutiae of what we can do with beaker/autotest/robot/whatever at the moment, I want to get a better idea of what we actually want and need so that we don't end up coding ourselves into a corner in the future.
At the end of this email, I've listed requirements for our test automation from my perspective. I want to emphasize again, that I _don't_ want to get into specific frameworks/solutions yet - just what we want an ideal framework to do. We can get into the advantages/disadvantages of particular setups and other practicality issues later.
I'm planning to make this into a wiki page but figured that I would put it on the list first for some discussion.
Tim
[1] http://beaker-project.org/
For the sake of consistency:
- 'must' means that something is a requirement. maybe not on initial release, but it has to at least seem possible without too much effort or too many dirty hacks
- 'should' means that it would be preferred but not absolutely required
- 'would be nice' means that it would be cool, but nothing to lose much sleep over
=== Basic Requirements ===
* should be written mostly in Python
* must be package-able in fedora and EPEL repos
  - mostly a licensing thing; other packaging issues could be overlooked if upstream is at least interested in taking patches to fix any issues
* should have a friendly, responsive upstream
* must have an understandable codebase
* must be extendable without dirty hacks
=== Reporting ===
* must be able to coordinate with bodhi (1.0, 2.0)
* must be able to report some information via fedbus
* must support the ability to report to external systems
* must be clear about what test version, package-under-test version and fedora release correspond with the reports
* must be clear about the test system's state (package versions, installed packages etc.)
* should have some standardized reporting format
  - based on something standard like XML, json, yaml etc.
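To make that last point concrete, a standardized report could be as simple as a flat JSON document. The field names below are purely illustrative - this is not an existing AutoQA or bodhi schema, just a sketch of the kind of thing I mean:

```python
import json

# Sketch of a standardized result report. Every field name here is an
# assumption for illustration; nothing below is an existing AutoQA schema.
report = {
    "test": {"name": "depcheck", "version": "0.4.1"},
    "item": {"package": "bash-4.2.39-3.fc18", "fedora_release": "18"},
    "system": {"arch": "x86_64", "is_vm": True},
    "result": "PASSED",
}

# Serializing to JSON keeps the format language-neutral, so non-python
# tests could emit the same structure.
print(json.dumps(report, sort_keys=True))
```

The point being that anything which can print a blob like this - regardless of language - could report results.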
=== Automation Framework ===
* must be able to support spawning VMs in Fedora infra's cloud
  - or at least some other solution that supports rapid VM provisioning without the need to install from scratch for every test. installing from scratch every time is not an acceptable option.
* must be able to differentiate between fedora release numbers and package versions
* should be able to tell the difference between VMs and bare metal where appropriate
* would be nice to have the VM type used during tests as a variable when hooked up to a cloud-ish setup
* would be nice to support graphical installation testing
* would be nice to include support for grabbing new images from image builder when that's supported
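For what it's worth, the "no installing from scratch" requirement is roughly what copy-on-write images give you. A hypothetical sketch of how a framework might build a throwaway overlay per test run - the paths and helper name are mine, and qemu-img backing files are just one possible way to do rapid provisioning:

```python
import shlex

def clone_command(base_image, test_image):
    """Build a qemu-img invocation that layers a disposable test image on
    top of a pristine base, so each run starts from a known state in
    seconds instead of re-installing from scratch."""
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", base_image, "-F", "qcow2", test_image]

# The framework would exec this (and boot a VM off the overlay); here we
# just show the command that would be run.
cmd = clone_command("/var/lib/images/f18-base.qcow2", "/tmp/run-1234.qcow2")
print(shlex.join(cmd))
```

After the test, the overlay gets thrown away and the base image stays pristine.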
=== Library ===
* should be written mostly in python
* should not make writing tests in languages other than python more difficult than it needs to be
* would be nice to support basic reporting options in other languages
=== Test Runner ===
* must support any language (within reason)
* must be able to pull in new/updated tests independently from runner or framework updates
* must support version-specific tests (i.e. variants for different fedora releases)
* should be runnable outside the framework for testing and development purposes
* would be nice to be able to support multiple libraries (beakerlib, application-specific stuff, non-python support etc.)
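As a rough illustration of the version-specific tests requirement, a runner could resolve a release-suffixed variant and fall back to a generic test. The '-fNN' naming convention here is invented for the example, not something AutoQA does today:

```python
def pick_variant(test_name, release, available):
    """Prefer a release-specific variant (e.g. 'upgradepath-f18') over the
    generic test when one exists. The '-fNN' suffix convention is an
    assumption made for this sketch, not an existing AutoQA convention."""
    specific = "%s-f%s" % (test_name, release)
    return specific if specific in available else test_name

available = {"upgradepath", "upgradepath-f18"}
print(pick_variant("upgradepath", "18", available))  # upgradepath-f18
print(pick_variant("upgradepath", "17", available))  # upgradepath
```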
=== Tests ===
* must be decoupled from the automation framework
* must be able to run outside the automation framework
* must report sane results (looking at you, depcheck)
=== Test Repository ===
* must support non-python languages
* must enable a review process for new tests before they are accepted
* must allow for test updates without admin or dev intervention
* should be in an existing package format (python egg, rpm etc.)
On 01/15/2013 09:25 PM, Tim Flink wrote:
> I don't think this particular conversation ever made it very far and IIRC, hasn't been on the list yet but I want to get it started before FUDCon NA this weekend. I've cc'd David and Matthew because I've talked with them about Fedora test automation recently and they might have input.
>
> In my opinion (I suspect that other people feel similarly), AutoQA in its current form is not capable of meeting the test automation needs for Fedora, mostly because we don't have a clear path towards external tests and it seems pretty clear that the current devs (myself included) don't have the bandwidth to add any more tests to the current set.
>
> There has been some casual conversation about looking into switching over to using Beaker [1] so that we can leverage some of the tests currently being used by various groups within Red Hat instead of having to rewrite them for AutoQA/Autotest. However, I don't want this to sound like 'autotest is bad'. I sincerely doubt that there is a single framework/runner out there which will 100% satisfy all of our needs and I'm just looking to re-evaluate what we want from test automation before deciding how we get there.
As one would expect, I'll jump right into the discussion.
One thing about beaker is that the test harness (Beah) has some limitations that drive people to look into autotest frequently (I talk to people inside the company interested in it). The folks at beaker made some experiments using the autotest client as one of the possible harnesses inside their framework, but I haven't heard much from them after the initial patches.
It is true that we have scarce resources. My team has in fact 4 people (not all of us allocated full time) and we spend much of our time writing and reviewing virtualization tests, which is our main responsibility. So it's understandable that you want to change.
I'm not sure whether I was actually called into this discussion, so I'll refrain from commenting on the requirements. I still think that autotest provides at least part of what is required.
On Wed, 16 Jan 2013 00:06:49 -0200 Lucas Meneghel Rodrigues lmr@redhat.com wrote:
> On 01/15/2013 09:25 PM, Tim Flink wrote:
> > I don't think this particular conversation ever made it very far and IIRC, hasn't been on the list yet but I want to get it started before FUDCon NA this weekend. I've cc'd David and Matthew because I've talked with them about Fedora test automation recently and they might have input.
> >
> > In my opinion (I suspect that other people feel similarly), AutoQA in its current form is not capable of meeting the test automation needs for Fedora, mostly because we don't have a clear path towards external tests and it seems pretty clear that the current devs (myself included) don't have the bandwidth to add any more tests to the current set.
> >
> > There has been some casual conversation about looking into switching over to using Beaker [1] so that we can leverage some of the tests currently being used by various groups within Red Hat instead of having to rewrite them for AutoQA/Autotest. However, I don't want this to sound like 'autotest is bad'. I sincerely doubt that there is a single framework/runner out there which will 100% satisfy all of our needs and I'm just looking to re-evaluate what we want from test automation before deciding how we get there.
> As one would expect, I'll jump right into the discussion.
Despite the fact that I said, twice, in the part of the email you didn't quote, that I wanted to _not_ get into framework and implementation specifics in this thread. Oh well, I got an email from a beaker dev, too, so I can't fault you too much :-P
I just wanted to start the conversation with what we wanted to have for Fedora test automation so that there was a basis for comparison before we dove into 'autotest does X', 'beaker does Y' and so on.
> One thing about beaker is that the test harness (Beah) has some limitations that drive people to look into autotest frequently (I talk to people inside the company interested in it). The folks at beaker made some experiments using the autotest client as one of the possible harnesses inside their framework, but I haven't heard much from them after the initial patches.
>
> It is true that we have scarce resources. My team has in fact 4 people (not all of us allocated full time) and we spend much of our time writing and reviewing virtualization tests, which is our main responsibility. So it's understandable that you want to change.
In my mind, it's not so much "autotest won't work for us" as "we're not using much of what autotest can do and I'm not sure our needs are all that compatible without quite a bit of work". I sincerely doubt that we're going to get all of what we want/need in fedora without at least submitting code to an existing framework, though, so the desired direction of any upstream would certainly be a factor.
> I'm not sure whether I was actually called into this discussion, so I'll refrain from commenting on the requirements. I still think that autotest provides at least part of what is required.
Yeah, I was hoping to talk to you about autotest this weekend since we're both going to be at FUDCon NA. I don't pretend to understand all of what autotest can do and figured you would have more insight.
Tim
On Tue, Jan 15, 2013 at 04:25:06PM -0700, Tim Flink wrote:
> I don't think this particular conversation ever made it very far and IIRC, hasn't been on the list yet but I want to get it started before FUDCon NA this weekend. I've cc'd David and Matthew because I've talked with them about Fedora test automation recently and they might have input.
Thanks. This looks great.
> - must be able to support spawning VMs in Fedora infra's cloud
>   - or at least some other solution that supports rapid VM provisioning without the need to install from scratch for every test. installing from scratch every time is not an acceptable option.
And I think it needs to be easy to automatically update the VM image that's used. It'd be nice to be able to test against an automatic nightly image spin without needing to redefine that manually. This is probably different from a lot of test frameworks, where the assumption is that you're testing a certain bit of software in a known environment. (We probably want that _too_.)
> - would be nice to include support for grabbing new images from image builder when that's supported
Which might be what you're saying with this point. :)
On Wed, 16 Jan 2013 13:44:23 -0500 Matthew Miller mattdm@fedoraproject.org wrote:
> On Tue, Jan 15, 2013 at 04:25:06PM -0700, Tim Flink wrote:
> > I don't think this particular conversation ever made it very far and IIRC, hasn't been on the list yet but I want to get it started before FUDCon NA this weekend. I've cc'd David and Matthew because I've talked with them about Fedora test automation recently and they might have input.
> Thanks. This looks great.
> > - must be able to support spawning VMs in Fedora infra's cloud
> >   - or at least some other solution that supports rapid VM provisioning without the need to install from scratch for every test. installing from scratch every time is not an acceptable option.
> And I think it needs to be easy to automatically update the VM image that's used. It'd be nice to be able to test against an automatic nightly image spin without needing to redefine that manually. This is probably different from a lot of test frameworks, where the assumption is that you're testing a certain bit of software in a known environment. (We probably want that _too_.)
> > - would be nice to include support for grabbing new images from image builder when that's supported
> Which might be what you're saying with this point. :)
Kind of, but cloud images were a little farther down the priority list for image builder - the initial targets are DVDs and lives for install testing. Now that you mention it, that might be an interesting thing to look into, but that's a somewhat different discussion :)
I was trying to fit your idea about running tests on new cloud images without getting too much into the specifics of how that would actually work - I was assuming that you guys already had the image building part mostly figured out.
Tim
autoqa-devel@lists.fedorahosted.org