Before releasing a new version of AutoQA, I want to solve one more issue: the current mess in the tests we manage. Honestly, for some of our tests we have no idea whether they work, what exactly they do, or how to fix them if needed. Some of them are probably not being used at all, yet we still have to account for them when refactoring our framework. Worst of all, for many of them we have no idea who should be responsible for keeping them in shape.
I have spent some time executing all of them, and I also tried to provide my best guess at their current maintainers. This is the result:
Test               Maintainer   Works   Currently useful
=================  ===========  ======  ================
anaconda_checkbot  clumens ?    no      no
anaconda_storage   clumens ?    no      no
compose_tree       ?            no      no
conflicts          ?            yes     unlikely
depcheck           jskladan ?   yes     yes
helloworld         kparal       yes     yes
rats_install       hongqing ?   yes     yes
rats_sanity        hongqing ?   yes     probably
repoclosure        ?            yes     unlikely
rpmguard           kparal       yes     somewhat
rpmlint            kparal       yes     somewhat
upgradepath        kparal       yes     yes
My proposal is:
1. Every test will have a maintainer defined. In its 'control' file we will change the AUTHOR line to a MAINTAINER line (a patch is ready; see the sketch after this list). This will ensure that we always know whom to talk to when a test seems broken. It doesn't mean you can't work on a test you don't maintain; it just means the contact point will always be defined. I tried to place my best guesses in the table above.
Please speak up if you'd like to maintain any of these tests.
2. Tests without a maintainer will be archived and deleted. They can stay in a separate git branch and wait for a future revival (if any), but they won't remain in master.
3. To save resources, we should also archive tests that don't currently seem very useful. We can re-enable them once the required architecture is in place. More specifically:
* rpmlint, rpmguard - the results are sent to opted-in maintainers, and some of them said they're useful. I'd keep these enabled.
* repoclosure, conflicts - these list potential dependency problems and file conflicts for the whole repository. Until now nobody has cared about the results. With the resultsdb frontend we can finally have a page that lists all the results day by day, which means someone could go through them occasionally and file bugs. The question is: who? It's nice to have the results, but if we just *hope* someone will do something about them, that seems too uncertain to me. They are also somewhat obsoleted by depcheck. I'm sitting on the fence here.
* compose_tree - this tries to build boot.iso and pxeboot images. It is basically a releng test that we execute. The previous maintainer was James, but I doubt we should ask him to keep maintaining it. It is currently broken. Even when it worked, was anyone reporting the issues? It's easier to look into the releng logs online: http://kojipkgs.fedoraproject.org/mash/branched-20120227/logs/ Does that link render the compose_tree test useless?
* anaconda_* - anaconda build, unit tests, and automated installation using various test cases. Great work, but currently broken. I talked to clumens some time ago and asked whether he was receiving the results; he said he was not. I fixed the opt-in emails and told him, but there has been no response since, so I guess he just doesn't care. And I'm not surprised: with the speed of anaconda development, it's much easier for the anaconda devs to run the tests on their own machines (at least I suppose they do); they can't keep sending us patches. Since there hasn't been any drive from the anaconda devs, I propose to obsolete these tests until we have something better to offer.
* the rest of the tests stay enabled
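To illustrate the change from item 1, here is a minimal sketch of what a test's 'control' file could look like afterwards. The field values and the run_test() call below are only illustrative (taken from no particular test); the point is simply that a MAINTAINER line replaces the old AUTHOR line:

  # control -- autotest-style control file (illustrative sketch only)
  MAINTAINER = 'kparal@redhat.com'    # replaces the former AUTHOR line
  NAME = 'helloworld'
  DOC = """
  Trivial example test used to verify that the AutoQA harness works.
  """
  # hypothetical invocation; the real arguments differ per test
  job.run_test('helloworld')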
For the next autoqa release I'd like to focus mainly on easy deployment of tests, so that we (for our tests) and other teams (like anaconda) can easily update tests without the tedious process of a "new autoqa release". After that I expect some of the tests to return.
What do you think?
And please, put your name next to the test you want to maintain.
On Mon, 27 Feb 2012 09:11:21 -0500 (EST) Kamil Paral kparal@redhat.com wrote:
Oy, that table did not survive the reformatting to 80 columns. I use compose_tree on a regular basis; it does still work, and I can take over maintenance of it. If we don't think it belongs in autoqa, I'll just set it up as a separate tool.
I can also take depcheck if jskladan doesn't want it.
My proposal is:
- Every test will have a maintainer defined. In its 'control' file we will change the AUTHOR line to a MAINTAINER line (a patch is ready). This will ensure that we always know whom to talk to when a test seems broken. It doesn't mean you can't work on a test you don't maintain; it just means the contact point will always be defined. I tried to place my best guesses in the table above.
Please speak up if you'd like to maintain any of these tests.
Makes sense to me.
- Tests without a maintainer will be archived and deleted. They can stay in a separate git branch and wait for a future revival (if any), but they won't remain in master.
Makes sense; I'm not sure there is a whole lot of point in running tests that don't have a maintainer or don't seem to be working.
- To save resources, we should also archive tests that don't currently seem very useful. We can re-enable them once the required architecture is in place. More specifically:
Architecture and/or one or more maintainers. With our setup the way it currently is, I'd be against having tests which don't have someone that can fix bugs in them.
- rpmlint, rpmguard - the results are sent to opted-in maintainers, and some of them said they're useful. I'd keep these enabled.
Works for me.
- repoclosure, conflicts - these list potential dependency problems and file conflicts for the whole repository. Until now nobody has cared about the results. With the resultsdb frontend we can finally have a page that lists all the results day by day, which means someone could go through them occasionally and file bugs. The question is: who? It's nice to have the results, but if we just *hope* someone will do something about them, that seems too uncertain to me. They are also somewhat obsoleted by depcheck. I'm sitting on the fence here.
Honestly, I don't think that I've ever looked at these tests so I can't really speak to their utility or functionality.
- compose_tree - this tries to build boot.iso and pxeboot images. It is basically a releng test that we execute. The previous maintainer was James, but I doubt we should ask him to keep maintaining it. It is currently broken. Even when it worked, was anyone reporting the issues? It's easier to look into the releng logs online: http://kojipkgs.fedoraproject.org/mash/branched-20120227/logs/ Does that link render the compose_tree test useless?
It works, AFAIK - I used it to build F17 images last week. I've taken the ticket that you filed and will look into it. FWIW, I've had problems with compose_tree when everything in mock doesn't match the ISO I'm trying to build (down to the kernel version).
For the next autoqa release I'd like to focus mainly on easy deployment of tests, so that we (for our tests) and some other teams (like anaconda) can easily update tests without tedious process of "new autoqa release". After that I expect some of the tests to return.
I'm not disagreeing with the suggestion, just wondering about the timing of it. Do we really want to be doing that in the middle of F17 testing? Unless we want to work under the assumption that some people are going to stay mostly on AutoQA instead of testing F17.
Then again, if we don't start it at some point, it may not get done. There is also the question of how to implement this but I think that might be outside the scope of this thread.
Tim
I'll take Depcheck, but why remove the original author? :)
Do you intend to say "Look, I did not write that horrible code, I just maintain it!"? :-D
The author credits are still preserved in all the source code files (depcheck.py etc.). We can preserve it if you think it's important, but my original aim was to keep this as simple as possible.
Yes, precisely that :) If it's just a change in control.autoqa, I'm fine with being that Depcheck guy.
J.
- compose_tree - this tries to build boot.iso and pxeboot images. It is basically a releng test that we execute. The previous maintainer was James, but I doubt we should ask him to keep maintaining it. It is currently broken. Even when it worked, was anyone reporting the issues? It's easier to look into the releng logs online: http://kojipkgs.fedoraproject.org/mash/branched-20120227/logs/ Does that link render the compose_tree test useless?
It works, AFAIK - I used it to build F17 images last week. I've taken the ticket that you filed and will look into it. FWIW, I've had problems with compose_tree when everything in mock doesn't match the ISO I'm trying to build (down to the kernel version).
I didn't know that. It's great that you use it and that it works/worked. I ran the test and it crashed, so I assumed it hadn't worked since F17 branching time (and nobody complained). But that is not the case, good.
We discussed this further over IRC, and Tim wants to maintain and further develop this "test" (or rather "tool"). But we are not sure whether it should stay part of AutoQA and whether we want to execute it regularly. Since he's the most knowledgeable here, I'll leave it up to him.
For the next autoqa release I'd like to focus mainly on easy deployment of tests, so that we (for our tests) and some other teams (like anaconda) can easily update tests without tedious process of "new autoqa release". After that I expect some of the tests to return.
I'm not disagreeing with the suggestion, just wondering about the timing of it. Do we really want to be doing that in the middle of F17 testing. Unless we want to work under the assumption that some people are going to stay mostly on AutoQA instead of testing F17.
Then again, if we don't start it at some point, it may not get done.
Yeah, well. I think most of us are spending the majority of our time testing F17, and I can't help but report a few blocker bugs as well. Still, I see this as currently the most painful issue. I'll be happy to discuss it once AutoQA 0.8 is out.
----- Original Message -----
From: "Kamil Paral" kparal@redhat.com To: "AutoQA development" autoqa-devel@lists.fedorahosted.org Sent: Monday, February 27, 2012 3:11:21 PM Subject: require maintainer defined for each test & clean up current tests
- repoclosure, conflicts - these list potential dependency problems and file conflicts for the whole repository. Until now nobody has cared about the results. With the resultsdb frontend we can finally have a page that lists all the results day by day, which means someone could go through them occasionally and file bugs. The question is: who? It's nice to have the results, but if we just *hope* someone will do something about them, that seems too uncertain to me. They are also somewhat obsoleted by depcheck. I'm sitting on the fence here.
I've never looked at the 'conflicts' test, but if there isn't anyone knowledgeable about it, I am happy to take it. It seems quite useful to me, and since it's already written, it would be a pity to just throw it away. Of course, the question of how to spread the results among the package maintainers remains...
Martin
I would still like to take the rats_install test, and I am also OK with taking the rats_sanity test. I am afraid, however, that the mediakit_sanity test will miss the target before the release. I have hit some problems, such as the output order being different when sanity.py is run separately versus wrapped in mediakit_sanity.
Hongqing
I talked to Martin, and he agreed to take 'repoclosure' together with 'conflicts', because the two are highly similar and could be useful. I also talked to Tim, and we agreed to temporarily remove compose_tree from AutoQA. Tim will develop it separately (fedorahosted, github), and we can then use it as a basis for new tests that need to operate on fresh composes.
Please review my patch:
$ git log --reverse origin/master..origin/test_cleanup
I'll create appropriate origin/archive_* branches once the patch is approved.
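For reference, creating one of these archive branches could look roughly like this (just a sketch under my assumptions; the branch name and the idea of snapshotting master before the cleanup lands are illustrative, not necessarily the exact procedure):

  $ git branch archive_compose_tree origin/master    # snapshot master before the cleanup is merged
  $ git push origin archive_compose_tree
  # the removal from master itself is done by the patches on origin/test_cleanup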
Pushed, branches created.
Tim, let us know when you create a new home for compose_tree, thanks.
On Thu, 01 Mar 2012 07:36:28 -0500 (EST) Kamil Paral kparal@redhat.com wrote:
I'll create appropriate origin/archive_* branches once the patch is approved.
Pushed, branches created.
Tim, let us know when you create a new home for compose_tree, thanks.
Will do. I've already started re-working it in Python; hopefully I'll have something to share before too long.
Tim