As I've been working on the email reduction feature, I've been rather frustrated with how difficult it is to test AutoQA as a framework. I've written scripts to make testing depcheck (minus commenting and build gathering) easier, but I have yet to find a reasonable way to exercise the whole AutoQA stack other than running jobs on my dev system with comments disabled and hoping that they finish before production does.
Before I go further, I realize that what I'm discussing would be a non-trivial amount of work and I'm kind of thinking that it would be a little too much for 0.5.0. Then again, more testing could help improve the release. Thoughts on that part would be welcome.
I can think of two methods that we could use to make testing easier and start down the road of retiring our mascot:
- Refactor bodhi_utils and koji_utils so that they can be stubbed out in testing
- Find an alternative to the production instances of koji and bodhi
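To make the first option concrete, here's a minimal sketch of what "stubbed out in testing" could look like. Everything here is hypothetical: FakeBodhi, query_updates, and pending_update_titles are illustrative names, not the real AutoQA or bodhi_utils API; the real modules would first need a seam like this (an injectable helper object) to be testable this way.

```python
# Sketch: stubbing a bodhi_utils-style helper so tests need no network.
# All class/function names below are hypothetical, not the real AutoQA API.

class FakeBodhi:
    """Stands in for bodhi_utils; returns canned data instead of querying bodhi."""

    def __init__(self, updates):
        self._updates = updates

    def query_updates(self, status):
        # Filter the canned updates the way a real query would.
        return [u for u in self._updates if u["status"] == status]


def pending_update_titles(bodhi):
    """Example AutoQA-side code that only talks to the injected helper."""
    return [u["title"] for u in bodhi.query_updates(status="pending")]


# A test can now feed in exactly the conditions it wants to exercise:
fake = FakeBodhi([
    {"title": "foo-1.0-1.fc15", "status": "pending"},
    {"title": "bar-2.3-1.fc15", "status": "stable"},
])
assert pending_update_titles(fake) == ["foo-1.0-1.fc15"]
```

The point of the refactor would just be getting the production code to accept such an object instead of importing the real helpers directly.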
Either way, setting up the conditions for self-tests (making mock rpms, setting up the metadata, creating repos) is going to be a decent amount of work and will likely end up with fragile, high-maintenance tests if we go beyond basic smoke tests.
There were concerns over the refactoring I proposed for koji_utils a while back so I assume that isn't the way we want to proceed for now.
I looked into setting up test instances of bodhi and koji but I'm rather intimidated by the amount of maintenance and hacking that would be required to induce the conditions that we would need for more complete testing on a regular basis.
With this in mind, I started hacking at a mockup for replacing bodhi and koji for testing purposes that does nothing more than implement the parts of those interfaces that we're using in AutoQA. It took a little while to reverse engineer the interfaces to bodhi and koji but I do have some code that is able to fool the koji client with hard-coded results (haven't tried it with AutoQA yet nor have I gotten very far on mocking up bodhi). I can send out code if anyone wants to see it (kind of ugly ATM, though).
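For anyone wondering what "fooling the koji client with hard-coded results" can look like: the koji hub speaks XML-RPC, so a few lines of stdlib server can answer a hub method name like getBuild with canned data. This is just a sketch under that assumption; the build fields returned here are fabricated, and a plain xmlrpc client stands in for the real koji client library (which needs more than this, e.g. session handling).

```python
# Sketch: a throwaway XML-RPC "hub" returning hard-coded results.
# getBuild is the hub call being faked; the returned dict is made up.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_build(nvr):
    # Hard-coded answer regardless of which build is asked about.
    return {"nvr": nvr, "state": 1, "task_id": 12345}

# Port 0 lets the OS pick a free port, so tests can run in parallel.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(get_build, "getBuild")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A plain XML-RPC client stands in for the koji client library here.
hub = ServerProxy("http://localhost:%d/" % port)
build = hub.getBuild("foo-1.0-1.fc15")
assert build["state"] == 1
server.shutdown()
```

The real mockup has to answer whichever subset of hub calls AutoQA actually makes, but each one is just another registered function returning whatever the test scenario needs.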
While finishing the mockup would also be non-trivial, I think that it will be more manageable and flexible in the long run, and better suited to our needs, since we can add testability features as needed without having to bug other teams about features that other people may never use.
Anyhow, thoughts on the concepts? On the timing (try to have some for 0.5.0 or not)? Anything that I didn't consider?
Tim
Tim, that sounds great, thank you for this work. I agree that replacing parts of the koji and bodhi libraries to serve our testing purposes seems a good way to go. I also considered using custom Koji/Bodhi instances in the past (or using the Bodhi staging instance instead), but it always looked like a huge maintenance burden.
I expect all this testing/stubbing/mocking to require considerable effort and time. I may be wrong, but I don't currently see it as reasonable to try to put some of it into the upcoming 0.5.0. I would rather make a release targeted specifically at enabling testing, or at least half of a release: I can imagine two people working on enabling testing and two people working on ResultsDB, or similar. Should it be the very next release (0.6.0)? I don't know; let's discuss and plan that once we release 0.5.0.
On 06/03/2011 01:43 AM, Kamil Paral wrote:
Depending on the direction we choose to go, I was thinking of the mock instances as somewhat orthogonal to AutoQA. If we're talking about modifying the AutoQA code for better testing support, that's different though.
I was thinking slightly differently but coming to a similar conclusion. I agree that this would be a non-trivial amount of work but was thinking that it might not be done in time to make a difference for the 0.5.0 release. Even if it could be, I'm not sure it would be wise to distract from getting 0.5.0 done.
Either way, it sounds like we're thinking along similar lines here. We'll revisit the issue once we get 0.5.0 out and working.
Tim
autoqa-devel@lists.fedorahosted.org