Forking this (delayed) conversation out of #276 - Support for independent test development (https://fedorahosted.org/autoqa/ticket/276) since it isn't directly related to the ticket.
This would be more of a long term idea, but it might be interesting to start putting tests into a pypi-ish (http://pypi.python.org/pypi) repository.
There might be some impedance mismatch, but there are already tools out there to pull packages from pypi. Assuming that we could use packages with non-python code wrapped in python, it could provide a mechanism for people to develop and update tests outside of the autoqa RPM, to grab and run stand-alone tests, and to keep them updated on automated clients.
Not sure how that would work with rpm packaging, though.
Thanks for the suggestion! Anything pypi is well outside my current knowledge, but I'm definitely open to understanding how it can solve this problem.
The idea would be to have a repository of tests that could be downloaded to any test client.
PyPI is a repository of python packages. It can keep documentation, versions and dependencies inside its metadata. Tools like pip (http://pypi.python.org/pypi/pip) or easy_install (http://pypi.python.org/pypi/setuptools or http://pypi.python.org/pypi/distribute depending on your preferred flavor) can pull packages from PyPI and install them locally.
I'm not proposing that we try to use the official PyPI repositories, but there are PyPI implementations out there that we could use to set up our own repository. I haven't looked into whether PyPI could support everything we want to do, but at the moment I'm more interested in the concept.
Instead of having to deal with numerous git repositories that we have absolutely no control over, this would allow us to have all of our tests in a single location. Test clients could check for the most recent version of a test prior to running and other users could install any individual test on their system to run without having to install ALL of the tests.
As an example: Something like 'pip install -i http://repositoryoftests.fp.org/ magical_pony_test' would install the magical_pony_test test bundle on a system.
As another part of this, I think it would be wise to set up some sort of CI system to pull in tests from various version control systems (fedorahosted git/hg/svn, github, bitbucket etc.), package them and update the repository. That way, we wouldn't have to worry about interfacing directly with any one version control system, much less with multiple version control systems (assuming that we didn't force just one), and we could have fewer requirements around the tests.
I don't think that we have enough tests at the moment to justify this level of infrastructure, but I sincerely hope that we do have enough to justify something like this in the near future.
Tim
On Wed, 2011-03-16 at 09:01 -0600, Tim Flink wrote:
Forking this (delayed) conversation out of #276 - Support for independent test development (https://fedorahosted.org/autoqa/ticket/276) since it isn't directly related to the ticket.
Thanks for getting this started.
This would be more of a long term idea, but it might be interesting to start putting tests into a pypi-ish (http://pypi.python.org/pypi) repository.
There might be some impedance mismatch, but there are already tools out there to pull packages from pypi. Assuming that we could use packages with non-python code wrapped in python, it could provide a mechanism for people to develop and update tests outside of the autoqa RPM, to grab and run stand-alone tests, and to keep them updated on automated clients.
Not sure how that would work with rpm packaging, though.
Thanks for the suggestion! Anything pypi is well outside my current knowledge, but I'm definitely open to understanding how it can solve this problem.
The idea would be to have a repository of tests that could be downloaded to any test client.
PyPI is a repository of python packages. It can keep documentation, versions and dependencies inside its metadata. Tools like pip (http://pypi.python.org/pypi/pip) or easy_install (http://pypi.python.org/pypi/setuptools or http://pypi.python.org/pypi/distribute depending on your preferred flavor) can pull packages from PyPI and install them locally.
I'm not proposing that we try to use the official PyPI repositories, but there are PyPI implementations out there that we could use to set up our own repository. I haven't looked into whether PyPI could support everything we want to do, but at the moment I'm more interested in the concept.
Does this put requirements on what format the maintainer contributed tests must be? Meaning, with this approach, do they have to be python scripts? Can they be ruby/perl/shell/binaries etc... ?
Instead of having to deal with numerous git repositories that we have absolutely no control over, this would allow us to have all of our tests in a single location. Test clients could check for the most recent version of a test prior to running and other users could install any individual test on their system to run without having to install ALL of the tests.
I like the sound of it. Note, I don't mind having tests in multiple repositories, since I look forward to maintainer contributed tests that we don't have control over. We just can't scale to assert control over all tests. My impression (we'd need to verify) is that we'll want to allow maintainers to include tests alongside their packaging "code". This translates to dist-git [1]. Doing yet-another-repository for them seems annoying (imo as a "maintainer" of several packages). So the idea with this approach is, along with the following existing dist-git content ...
dist-git/anaconda/common
dist-git/anaconda/devel
dist-git/anaconda/F-15
dist-git/anaconda/F-14
dist-git/anaconda/F-13
Maintainers could also provide ... dist-git/anaconda/tests
I know you can play games with CVS and aliasing so that we can provide a common check-out for *all* maintainer contributed tests, while also allowing maintainers to have their own test space. I'm not sure whether similar trickery can be used with git </implementation_detail>. Either way, I'm not horrified by having multiple test ... whatever allows us to integrate with dist-git.
As an example: Something like 'pip install -i http://repositoryoftests.fp.org/ magical_pony_test' would install the magical_pony_test test bundle on a system.
As another part of this, I think it would be wise to set up some sort of CI system to pull in tests from various version control systems (fedorahosted git/hg/svn, github, bitbucket etc.), package them and update the repository. That way, we wouldn't have to worry about interfacing directly with any one version control system, much less with multiple version control systems (assuming that we didn't force just one), and we could have fewer requirements around the tests.
With the theorized dist-git setup, maintainers could definitely add a test wrapper that would run whatever upstream CI tests they want.
However, I think you are saying that we'd have a new "event" to monitor upstream development and kick off any upstream CI-type tests? That does sound cool, but I'm not sure if that specific test space is a priority for maintainers. It feels a little weird running CI for upstream ... I think as a distribution, we'd want to repackage the upstream code, apply our distro-specific patches, then run any upstream tests to validate whether our packaging introduces any failures.
I don't think that we have enough tests at the moment to justify this level of infrastructure, but I sincerely hope that we do have enough to justify something like this in the near future.
Likely
Thanks, James
On 03/17/2011 11:05 AM, James Laska wrote:
On Wed, 2011-03-16 at 09:01 -0600, Tim Flink wrote:
Forking this (delayed) conversation out of #276 - Support for independent test development (https://fedorahosted.org/autoqa/ticket/276) since it isn't directly related to the ticket.
Thanks for getting this started.
This would be more of a long term idea, but it might be interesting to start putting tests into a pypi-ish (http://pypi.python.org/pypi) repository.
There might be some impedance mismatch, but there are already tools out there to pull packages from pypi. Assuming that we could use packages with non-python code wrapped in python, it could provide a mechanism for people to develop and update tests outside of the autoqa RPM, to grab and run stand-alone tests, and to keep them updated on automated clients.
Not sure how that would work with rpm packaging, though.
Thanks for the suggestion! Anything pypi is well outside my current knowledge, but I'm definitely open to understanding how it can solve this problem.
The idea would be to have a repository of tests that could be downloaded to any test client.
PyPI is a repository of python packages. It can keep documentation, versions and dependencies inside its metadata. Tools like pip (http://pypi.python.org/pypi/pip) or easy_install (http://pypi.python.org/pypi/setuptools or http://pypi.python.org/pypi/distribute depending on your preferred flavor) can pull packages from PyPI and install them locally.
I'm not proposing that we try to use the official PyPI repositories, but there are PyPI implementations out there that we could use to set up our own repository. I haven't looked into whether PyPI could support everything we want to do, but at the moment I'm more interested in the concept.
Does this put requirements on what format the maintainer contributed tests must be? Meaning, with this approach, do they have to be python scripts? Can they be ruby/perl/shell/binaries etc... ?
As far as what PyPI currently supports, I'm not sure. Whatever we end up with would have to support tests in languages other than python, though.
If we're interested in this, we would have to do some investigation as to whether or not a given system (in this case, PyPI) would work for our uses. While I hope that we could use PyPI (or something similar) since it would decrease the amount of custom code we need, I'm just using it as an example at this point.
I'm thinking that the interface/wrapper code for tests would be written in python to create a clean, uniform interface for autoqa. I'm not sure I see the value in supporting wrappers written in arbitrary languages.
Other languages for the tests themselves? Definitely. I agree 100% that we shouldn't restrict test development to a single language.
So an autoqa test in this case would consist of metadata, wrapper code and the tests themselves. All of that would be packaged and uploaded to a repository. Once uploaded to the repository, test clients can grab the tests and run them.
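To make the bundle idea concrete, here is a minimal sketch of what the python wrapper layer could look like. All the names here (AutoQATest, run, the metadata fields) are hypothetical illustrations, not existing autoqa API; the point is just that a thin python shim can present a uniform interface over a test written in any language.

```python
#!/usr/bin/env python
"""Hypothetical sketch of a python wrapper around a non-python test.

AutoQATest and its fields are illustrative assumptions, not real
autoqa API.
"""
import subprocess


class AutoQATest(object):
    """Uniform python interface that autoqa could call, regardless of
    the language the underlying test is written in."""

    # metadata that would ship in the packaged test bundle
    name = 'magical_pony_test'
    version = '0.1'

    def run(self, command):
        """Execute the wrapped test (here: any shell command) and
        report pass/fail based on its exit status."""
        status = subprocess.call(command, shell=True)
        return 'PASS' if status == 0 else 'FAIL'


if __name__ == '__main__':
    test = AutoQATest()
    # 'true' stands in for a real test script written in any language
    print(test.run('true'))
```

The wrapper's job is just translation: the bundled test could be ruby, perl, shell or a binary, as long as the wrapper can invoke it and interpret its result.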
Instead of having to deal with numerous git repositories that we have absolutely no control over, this would allow us to have all of our tests in a single location. Test clients could check for the most recent version of a test prior to running and other users could install any individual test on their system to run without having to install ALL of the tests.
I like the sound of it. Note, I don't mind having tests in multiple repositories, since I look forward to maintainer contributed tests that we don't have control over. We just can't scale to assert control over all tests. My impression (we'd need to verify) is that we'll want to allow maintainers to include tests alongside their packaging "code". This translates to dist-git [1]. Doing yet-another-repository for them seems annoying (imo as a "maintainer" of several packages). So the idea with this approach is, along with the following existing dist-git content ...
dist-git/anaconda/common
dist-git/anaconda/devel
dist-git/anaconda/F-15
dist-git/anaconda/F-14
dist-git/anaconda/F-13
Maintainers could also provide ... dist-git/anaconda/tests
I know you can play games with CVS and aliasing so that we can provide a common check-out for *all* maintainer contributed tests, while also allowing maintainers to have their own test space. I'm not sure whether similar trickery can be used with git </implementation_detail>. Either way, I'm not horrified by having multiple test ... whatever allows us to integrate with dist-git.
I'm not against the idea of integrating with dist-git but I'm wondering if that would just end up being more complicated and chaotic in the end. Assuming that I'm understanding you correctly, dist-git would effectively limit maintainers to using fedorahosted git and while I imagine most of the tests would be there anyways, I can also see other non-package-specific tests being stored elsewhere or in non-git VCS.
With the system that I'm thinking of, test maintainers could keep their tests wherever they wanted. We wouldn't have control over their tests but we would have a more flexible interface layer between the external code for autoqa tests and running those tests in autoqa.
This way, we don't have to worry about writing code to interface with a polyglot of VCS systems and we aren't restricting maintainers to host their code in fedorahosted repos (encourage, maybe. restrict, no).
We would still have almost all the advantages of allowing test maintainers to keep track of their own tests but we wouldn't be tightly coupled to any VCS system.
- Test updates are not tied to autoqa releases
- Maintainers don't have to do special work to update tests
- Test clients don't have to be manually updated
- Our work on creating/maintaining custom code is minimized
As an example: Something like 'pip install -i http://repositoryoftests.fp.org/ magical_pony_test' would install the magical_pony_test test bundle on a system.
As another part of this, I think it would be wise to set up some sort of CI system to pull in tests from various version control systems (fedorahosted git/hg/svn, github, bitbucket etc.), package them and update the repository. That way, we wouldn't have to worry about interfacing directly with any one version control system, much less with multiple version control systems (assuming that we didn't force just one), and we could have fewer requirements around the tests.
With the theorized dist-git setup, maintainers could definitely add a test wrapper that would run whatever upstream CI tests they want.
However, I think you are saying that we'd have a new "event" to monitor upstream development and kick off any upstream CI-type tests? That does sound cool, but I'm not sure if that specific test space is a priority for maintainers. It feels a little weird running CI for upstream ... I think as a distribution, we'd want to repackage the upstream code, apply our distro-specific patches, then run any upstream tests to validate whether our packaging introduces any failures.
Hmm, I don't think that I'm doing a very good job explaining my thoughts here. This is completely separate from the recent proposal to build and test upstream code with a CI system.
In this case, the CI system would only be used to grab the code for whatever tests we're running in autoqa from remote repositories, "build" them and put them in the test repositories. We wouldn't be testing or building any of the upstream product code in the CI system, just the tests that we're using in autoqa.
We could always implement a custom system for doing this but I think that using an existing CI package would be easier. Either way, it wouldn't be tightly coupled with autoqa and would only be a supporting utility to automate the process of getting code from maintainers into a form that is easily distributed to autoqa test clients.
Here are some example workflows as I see them. There are a couple of holes in this regarding who is responsible for what and how exactly this would work, but I think that the detail is enough for this discussion.
An example update workflow:
- maintainer updates test already in the system
- CI detects the change (either polling or git hooks)
- CI pulls in ONLY the code for the test
- CI builds the test
- CI pushes to the test repository
- test clients pull down the new tests and use them
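As a rough sketch of the "CI detects the change" step in the update workflow: the CI system only needs to remember the last revision it packaged for each test and compare that against the current tip of the test's repository. The function and field names below are made up for illustration; they don't correspond to any particular CI tool.

```python
"""Sketch of change detection for the test-packaging CI.

All names here are hypothetical; a real system would ask git/hg/svn
for the current revision instead of a plain callback.
"""


def needs_rebuild(last_built_rev, current_rev):
    """True when the test's repo has moved past what CI last packaged."""
    return current_rev is not None and current_rev != last_built_rev


def poll(registry, fetch_current_rev):
    """Yield names of registered tests whose repositories changed.

    registry          -- maps test name -> last revision CI built
    fetch_current_rev -- callback asking the VCS for the current revision
    """
    for name, built_rev in sorted(registry.items()):
        if needs_rebuild(built_rev, fetch_current_rev(name)):
            yield name
```

A changed test would then flow through the remaining steps (pull, build, push to the repository) exactly as listed above.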
An example new test workflow:
- maintainer writes the test and puts it in some public VCS
- maintainer goes to website (or does this via email)
  - Creates a new test name
  - Adds details about the test VCS
  - (optional) adds VCS commit/push hook
  - (optional) sets polling frequency
- test is reviewed by QA (not sure where this responsibility would lie)
- (optional) test is updated based on QA comments
- test is approved and added to CI
- CI is updated with the new test
- CI pulls in code, builds, pushes to repository
- test clients can now use the new test
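The registration step in the new test workflow could boil down to a small metadata record that the website (or email gateway) validates. A minimal sketch, with field names that are purely illustrative assumptions:

```python
"""Sketch of the record a maintainer would submit to register a test.

Field names (vcs_type, poll_minutes, ...) are invented for
illustration, not taken from autoqa.
"""

REQUIRED = ('name', 'vcs_type', 'vcs_url')
OPTIONAL_DEFAULTS = {'push_hook': None, 'poll_minutes': 60}


def register_test(**fields):
    """Validate a new-test submission and fill in optional defaults."""
    missing = [f for f in REQUIRED if f not in fields]
    if missing:
        raise ValueError('missing fields: %s' % ', '.join(missing))
    record = dict(OPTIONAL_DEFAULTS)
    record.update(fields)
    record['approved'] = False  # flips to True only after QA review
    return record
```

The `approved` flag models the QA review gate above: CI would only pull in and build tests whose records have been approved.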
I don't think that we have enough tests at the moment to justify this level of infrastructure, but I sincerely hope that we do have enough to justify something like this in the near future.
Likely
Thanks, James
I'm hoping that this makes a bit more sense than my last email. I realize that it's a bit ambitious, but I also think that it would provide us with a better, more flexible system in the end. This could and should be broken up into more manageable chunks if we're interested, but for now, I'm just thinking about the end goal.
Tim
On Thu, 2011-03-17 at 12:01 -0600, Tim Flink wrote:
On 03/17/2011 11:05 AM, James Laska wrote:
On Wed, 2011-03-16 at 09:01 -0600, Tim Flink wrote:
Forking this (delayed) conversation out of #276 - Support for independent test development (https://fedorahosted.org/autoqa/ticket/276) since it isn't directly related to the ticket.
Thanks for getting this started.
This would be more of a long term idea, but it might be interesting to start putting tests into a pypi-ish (http://pypi.python.org/pypi) repository.
There might be some impedance mismatch, but there are already tools out there to pull packages from pypi. Assuming that we could use packages with non-python code wrapped in python, it could provide a mechanism for people to develop and update tests outside of the autoqa RPM, to grab and run stand-alone tests, and to keep them updated on automated clients.
Not sure how that would work with rpm packaging, though.
Thanks for the suggestion! Anything pypi is well outside my current knowledge, but I'm definitely open to understanding how it can solve this problem.
The idea would be to have a repository of tests that could be downloaded to any test client.
PyPI is a repository of python packages. It can keep documentation, versions and dependencies inside its metadata. Tools like pip (http://pypi.python.org/pypi/pip) or easy_install (http://pypi.python.org/pypi/setuptools or http://pypi.python.org/pypi/distribute depending on your preferred flavor) can pull packages from PyPI and install them locally.
I'm not proposing that we try to use the official PyPI repositories, but there are PyPI implementations out there that we could use to set up our own repository. I haven't looked into whether PyPI could support everything we want to do, but at the moment I'm more interested in the concept.
Does this put requirements on what format the maintainer contributed tests must be? Meaning, with this approach, do they have to be python scripts? Can they be ruby/perl/shell/binaries etc... ?
As far as what PyPI currently supports, I'm not sure. Whatever we end up with would have to support tests in languages other than python, though.
If we're interested in this, we would have to do some investigation as to whether or not a given system (in this case, PyPI) would work for our uses. While I hope that we could use PyPI (or something similar) since it would decrease the amount of custom code we need, I'm just using it as an example at this point.
I'm thinking that the interface/wrapper code for tests would be written in python to create a clean, uniform interface for autoqa. I'm not sure I see the value in supporting wrappers written in arbitrary languages.
Other languages for the tests themselves? Definitely. I agree 100% that we shouldn't restrict test development to a single language.
So an autoqa test in this case would consist of metadata, wrapper code and the tests themselves. All of that would be packaged and uploaded to a repository. Once uploaded to the repository, test clients can grab the tests and run them.
Instead of having to deal with numerous git repositories that we have absolutely no control over, this would allow us to have all of our tests in a single location. Test clients could check for the most recent version of a test prior to running and other users could install any individual test on their system to run without having to install ALL of the tests.
I like the sound of it. Note, I don't mind having tests in multiple repositories, since I look forward to maintainer contributed tests that we don't have control over. We just can't scale to assert control over all tests. My impression (we'd need to verify) is that we'll want to allow maintainers to include tests alongside their packaging "code". This translates to dist-git [1]. Doing yet-another-repository for them seems annoying (imo as a "maintainer" of several packages). So the idea with this approach is, along with the following existing dist-git content ...
dist-git/anaconda/common
dist-git/anaconda/devel
dist-git/anaconda/F-15
dist-git/anaconda/F-14
dist-git/anaconda/F-13
Maintainers could also provide ... dist-git/anaconda/tests
I know you can play games with CVS and aliasing so that we can provide a common check-out for *all* maintainer contributed tests, while also allowing maintainers to have their own test space. I'm not sure whether similar trickery can be used with git </implementation_detail>. Either way, I'm not horrified by having multiple test ... whatever allows us to integrate with dist-git.
I'm not against the idea of integrating with dist-git but I'm wondering if that would just end up being more complicated and chaotic in the end. Assuming that I'm understanding you correctly, dist-git would effectively limit maintainers to using fedorahosted git and while I imagine most of the tests would be there anyways, I can also see other non-package-specific tests being stored elsewhere or in non-git VCS.
With the system that I'm thinking of, test maintainers could keep their tests wherever they wanted. We wouldn't have control over their tests but we would have a more flexible interface layer between the external code for autoqa tests and running those tests in autoqa.
This way, we don't have to worry about writing code to interface with a polyglot of VCS systems and we aren't restricting maintainers to host their code in fedorahosted repos (encourage, maybe. restrict, no).
We would still have almost all the advantages of allowing test maintainers to keep track of their own tests but we wouldn't be tightly coupled to any VCS system.
- Test updates are not tied to autoqa releases
- Maintainers don't have to do special work to update tests
- Test clients don't have to be manually updated
- Our work on creating/maintaining custom code is minimized
As an example: Something like 'pip install -i http://repositoryoftests.fp.org/ magical_pony_test' would install the magical_pony_test test bundle on a system.
As another part of this, I think it would be wise to set up some sort of CI system to pull in tests from various version control systems (fedorahosted git/hg/svn, github, bitbucket etc.), package them and update the repository. That way, we wouldn't have to worry about interfacing directly with any one version control system, much less with multiple version control systems (assuming that we didn't force just one), and we could have fewer requirements around the tests.
With the theorized dist-git setup, maintainers could definitely add a test wrapper that would run whatever upstream CI tests they want.
However, I think you are saying that we'd have a new "event" to monitor upstream development and kick off any upstream CI-type tests? That does sound cool, but I'm not sure if that specific test space is a priority for maintainers. It feels a little weird running CI for upstream ... I think as a distribution, we'd want to repackage the upstream code, apply our distro-specific patches, then run any upstream tests to validate whether our packaging introduces any failures.
Hmm, I don't think that I'm doing a very good job explaining my thoughts here. This is completely separate from the recent proposal to build and test upstream code with a CI system.
Aaah, okay. Thanks for clarifying.
In this case, the CI system would only be used to grab the code for whatever tests we're running in autoqa from remote repositories, "build" them and put them in the test repositories. We wouldn't be testing or building any of the upstream product code in the CI system, just the tests that we're using in autoqa.
I'm with ya now, thanks.
We could always implement a custom system for doing this but I think that using an existing CI package would be easier. Either way, it wouldn't be tightly coupled with autoqa and would only be a supporting utility to automate the process of getting code from maintainers into a form that is easily distributed to autoqa test clients.
Cool, the ideas sound good. At some point, I'd need your help visualizing this with pypi (or similar). I'm just not at all familiar with it. But if it fits our desired use, is stable and has an active upstream, it would be crazy not to consider using it.
Here are some example workflows as I see them. There are a couple of holes in this regarding who is responsible for what and how exactly this would work, but I think that the detail is enough for this discussion.
An example update workflow:
- maintainer updates test already in the system
- CI detects the change (either polling or git hooks)
- CI pulls in ONLY the code for the test
- CI builds the test
- CI pushes to the test repository
- test clients pull down the new tests and use them
An example new test workflow:
- maintainer writes the test and puts it in some public VCS
- maintainer goes to website (or does this via email)
  - Creates a new test name
  - Adds details about the test VCS
  - (optional) adds VCS commit/push hook
  - (optional) sets polling frequency
- test is reviewed by QA (not sure where this responsibility would lie)
- (optional) test is updated based on QA comments
- test is approved and added to CI
- CI is updated with the new test
- CI pulls in code, builds, pushes to repository
- test clients can now use the new test
I love this. The more our conversation goes on, the more it makes me think about what I like about the dist-git approaches I've seen. In our case, this goes back to what was discussed earlier (perhaps in another thread) about sanitizing tests. I'm less inclined to be worried about tests if I know they are coming from Fedora package maintainers who have signed the CLA. For me, housing the tests in dist-git was a way to achieve that. I think it might just be easier to document for maintainers. But either way, I think that helps me better understand requirements for where tests are stored.
I don't think that we have enough tests at the moment to justify this level of infrastructure, but I sincerely hope that we do have enough to justify something like this in the near future.
Likely
Thanks, James
I'm hoping that this makes a bit more sense than my last email. I realize that it's a bit ambitious, but I also think that it would provide us with a better, more flexible system in the end. This could and should be broken up into more manageable chunks if we're interested, but for now, I'm just thinking about the end goal.
Of course, this is definitely the start of a large effort. But no better way to kick it off ... than by figuring out what the heck the problem is :)
Thanks, James
autoqa-devel@lists.fedorahosted.org