----- "Will Woods" wwoods@redhat.com wrote:
> On Thu, 2010-08-19 at 14:58 -0400, James Laska wrote:
> > I was interested in testing out just the upgrade-path portion of the test (not the autotest/autoqa integration). How would you feel about moving the test to a stand-alone script, and then changing upgradepath.py to call the stand-alone script and process output?
No, don't say this, Vojta will try to kill me! That is exactly how he started to implement it, and I talked him out of it :)
> > I think this would make it a lot easier for folks without a full autoqa and autotest-server setup to run the test at home.
> I agree with James here - it's best if we can run the tests outside of AutoQA, both for debugging purposes and so we can extend/reuse test code elsewhere.
There are a few reasons why I recommended Vojta write it as a single AutoQA script:
1. This task is very short and simple, so we can easily join the test logic and the AutoQA test object stuff.
2. This task requires access to almost the whole repoinfo.conf. Using it as a standalone script would require keeping a copy of repoinfo.conf next to it or passing all that information with command line options (all those repo URLs and such -- that wouldn't be pretty).
3. This task requires a lot of functionality that we already implemented in our autoqa python library, like querying koji for the latest package with some tag, comparing EVRs, or easily working with repoinfo. A standalone script would still require the autoqa library to be installed (and therefore effectively the whole of autoqa), or Vojta would have to re-implement all that stuff from scratch.
Of course it is possible to have this as a standalone script, but it seemed extremely cumbersome to me, so I discouraged Vojta from doing it. The code would be twice as long, handling all that option passing and re-implementing some autoqa code.
But maybe I'm just looking at it all wrong. Tell me what you think.
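For readers following along without the autoqa library: the heart of the upgradepath check Kamil describes in point 3 is comparing EVRs of the latest build per dist tag, oldest release to newest. A minimal sketch of that idea is below -- the function names are made up for illustration, the version compare is deliberately simplified (no epochs or tildes), and a real test would use rpm.labelCompare() plus koji queries instead:

```python
import re

def vercmp(a, b):
    """Very simplified RPM-style version compare: split each string into
    numeric/alpha segments and compare them pairwise. Real code should use
    rpm.labelCompare(), which also handles epochs, tildes, etc."""
    def seg(s):
        return re.findall(r'\d+|[a-zA-Z]+', s)
    for x, y in zip(seg(a), seg(b)):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # rpm treats a numeric segment as newer than an alphabetic one
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    # all shared segments equal: the version with more segments wins
    return (len(seg(a)) > len(seg(b))) - (len(seg(a)) < len(seg(b)))

def upgrade_path_ok(evrs):
    """evrs: (version, release) tuples ordered from oldest dist to newest.
    The upgrade path is sane if each successive EVR is >= the previous."""
    for (v1, r1), (v2, r2) in zip(evrs, evrs[1:]):
        c = vercmp(v1, v2)
        if c > 0 or (c == 0 and vercmp(r1, r2) > 0):
            return False
    return True
```

For example, upgrade_path_ok([("1.0", "1"), ("1.0", "2"), ("1.2", "1")]) holds, while a newer EVR in an older release breaks the path.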
On Fri, 2010-08-20 at 06:13 -0400, Kamil Paral wrote:
----- "Will Woods" wwoods@redhat.com wrote:
On Thu, 2010-08-19 at 14:58 -0400, James Laska wrote:
I was interested in testing out just the upgrade-path portion of the test (not the autotest/autoqa integration). How would you feel
about
moving the test to a stand-alone script, and then changing upgradepath.py to call the stand-alone script and process output?
No, don't say this, Vojta will try to kill me! That is exactly how he started to implement that and I talked him out of it :)
Doh! See below, I think we might have multiple definitions of 'stand-alone'. I agree ... writing the test without the help of any autoqa or autoqa config files would be insane.
> > > I think this would make it a lot easier for folks without a full autoqa and autotest-server setup to run the test at home.
> > I agree with James here - it's best if we can run the tests outside of AutoQA, both for debugging purposes and so we can extend/reuse test code elsewhere.
> There are a few reasons why I recommended Vojta write it as a single AutoQA script:
> 1. This task is very short and simple, so we can easily join the test logic and the AutoQA test object stuff.
> 2. This task requires access to almost the whole repoinfo.conf. Using it as a standalone script would require keeping a copy of repoinfo.conf next to it or passing all that information with command line options (all those repo URLs and such -- that wouldn't be pretty).
> 3. This task requires a lot of functionality that we already implemented in our autoqa python library, like querying koji for the latest package with some tag, comparing EVRs, or easily working with repoinfo. A standalone script would still require the autoqa library to be installed (and therefore effectively the whole of autoqa), or Vojta would have to re-implement all that stuff from scratch.
> Of course it is possible to have this as a standalone script, but it seemed extremely cumbersome to me, so I discouraged Vojta from doing it. The code would be twice as long, handling all that option passing and re-implementing some autoqa code.
> But maybe I'm just looking at it all wrong. Tell me what you think.
All good points, it's certainly worth evaluating the trade-offs. Your thoughts raise something that perhaps isn't well defined: what does it mean for a test to be stand-alone?

The 'working' definition I had in mind wasn't so much that the test script had to use only base python modules (not autotest and not autoqa); the test could still use autoqa modules (and configs). For me, stand-alone means:
1. not having to install autotest-server
2. not having to use the autotest local scheduler (might be unavoidable for some class of tests ... unclear)
3. being able to run the test from the command-line, passing the same arguments that the control file provides
There are some packaging wrinkles that we have in place that make this a bit more complicated ... since autoqa %requires the autotest base package. However, that's more for integrating the tests with autotest, and not so much for the tests themselves.
I think it's completely fine to use autoqa libraries in the stand-alone tests. I know the rats_* stuff uses autoqa.* libraries already. As far as the repoinfo.conf stuff ... that's completely fine imo as well.
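Since repoinfo.conf keeps coming up: it is a plain INI-style file, so a stand-alone test could at least load it with the stdlib alone. A hedged sketch follows -- the section and option names here are invented for illustration and are not the actual repoinfo.conf schema:

```python
from configparser import ConfigParser
from io import StringIO

# Illustrative content only -- the real repoinfo.conf ships with autoqa,
# and its actual section/option names may differ from this sketch.
SAMPLE = """\
[f14]
url = http://example.com/releases/14/Everything/x86_64/os/
tag = dist-f14
parent = f13
"""

def load_repoinfo(fileobj):
    """Parse a repoinfo-style INI file into {section: {option: value}}."""
    cp = ConfigParser()
    cp.read_file(fileobj)
    return {name: dict(cp.items(name)) for name in cp.sections()}

repos = load_repoinfo(StringIO(SAMPLE))
```

In practice you would pass open("/etc/autoqa/repoinfo.conf") rather than a StringIO, and the autoqa repoinfo helpers add logic (parent-chain walking, etc.) on top of the raw parse.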
Does this change things much?
All said, if the other aspects of the patch are okay, I don't know if this should block getting the test accepted. I trust your judgement for that. It would just make it harder for non-autoqa-savvy folks to use in the meantime.
Thanks, James
On Fri, 2010-08-20 at 08:00 -0400, James Laska wrote:
> [snip earlier quoted discussion, unchanged from the messages above]
> All good points, it's certainly worth evaluating the trade-offs. Your thoughts raise something that perhaps isn't well defined: what does it mean for a test to be stand-alone?
> The 'working' definition I had in mind wasn't so much that the test script had to use only base python modules (not autotest and not autoqa); the test could still use autoqa modules (and configs). For me, stand-alone means:
> 1. not having to install autotest-server
> 2. not having to use the autotest local scheduler (might be unavoidable for some class of tests ... unclear)
> 3. being able to run the test from the command-line, passing the same arguments that the control file provides
> There are some packaging wrinkles that we have in place that make this a bit more complicated ... since autoqa %requires the autotest base package. However, that's more for integrating the tests with autotest, and not so much for the tests themselves.
> I think it's completely fine to use autoqa libraries in the stand-alone tests. I know the rats_* stuff uses autoqa.* libraries already. As far as the repoinfo.conf stuff ... that's completely fine imo as well.
> Does this change things much?
> All said, if the other aspects of the patch are okay, I don't know if this should block getting the test accepted. I trust your judgement for that. It would just make it harder for non-autoqa-savvy folks to use in the meantime.
See attached patch to convey the stand-alone idea noted above.
NOTE:
* The autotest portion of this is untested and probably not correct.
* The stand-alone portion appears to work when tested with various correct and incorrect inputs. I just shuffled existing code and added argparse. I'm sure it would still need adjustment to coordinate the results/exit_code with the autotest portion.
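The stand-alone entry point roughly follows the pattern below (a sketch of the idea, not the attached patch itself -- the argument name and test body are placeholders):

```python
import argparse
import sys

def run_upgradepath(kojitag):
    """Placeholder for the shared test logic; in the real patch this would
    be the same code the autotest control file drives via the test object."""
    print("checking upgrade path for koji tag: %s" % kojitag)
    return 0  # 0 = PASS, nonzero = FAIL (mirrors a shell exit code)

def main(argv=None):
    # Accept the same arguments the control file would normally supply,
    # so the identical code path runs with or without an autotest server.
    parser = argparse.ArgumentParser(
        description="run the upgradepath test outside autotest/autoqa")
    parser.add_argument("kojitag",
                        help="koji tag to check, e.g. dist-f14-updates")
    args = parser.parse_args(argv)
    return run_upgradepath(args.kojitag)

if __name__ == "__main__":
    sys.exit(main())
```

The autotest test object can then import and call run_upgradepath() directly, while humans run the script from a shell and read the exit code.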
Thanks, James
autoqa-devel@lists.fedorahosted.org