Before releasing a new version of AutoQA, I want to solve one more issue, and that is the current mess in the tests we manage. Honestly, we have some tests where we have no idea whether they work, what exactly they do, or how to fix them if needed. Some of them are probably not being used at all, yet we still need to care about them when refactoring our framework. Worst of all, for many of them we have no idea who should be responsible for keeping them in shape.
I have spent some time executing all of them, and I also tried to provide my best guess about their current maintainer. This is the result:
Test               Maintainer   Works   Currently useful
=================  ==========   =====   ================
anaconda_checkbot  clumens ?    no      no
anaconda_storage   clumens ?    no      no
compose_tree       ?            no      no
conflicts          ?            yes     unlikely
depcheck           jskladan ?   yes     yes
helloworld         kparal       yes     yes
rats_install       hongqing ?   yes     yes
rats_sanity        hongqing ?   yes     probably
repoclosure        ?            yes     unlikely
rpmguard           kparal       yes     somewhat
rpmlint            kparal       yes     somewhat
upgradepath        kparal       yes     yes
My proposal is:

1. Every test will have a maintainer defined. In its 'control' file we will change the AUTHOR line to a MAINTAINER line (the patch is ready). This will ensure that we always know whom to talk to when a test seems broken. It doesn't mean you can't work on a test you don't maintain, but at least the contact point will always be defined. I put my best guesses in the table above.
Please speak up if you want to maintain one of the tests.
2. Tests without a maintainer will be archived and removed. They can stay in a separate git branch and wait for a future revival (if any), but they won't be in master.
3. To save resources, we should also archive tests that don't currently seem very useful. We can re-enable them once the required architecture is in place. More specifically:
* rpmlint, rpmguard - the results are sent to opted-in maintainers, and some of them said they're useful. I'd keep these enabled.
* repoclosure, conflicts - these list potential dependency problems and file conflicts for the whole repository. Until now no one has cared. With the resultsdb frontend we can finally have a page that lists all the results day by day, which means someone could go through the results occasionally and file some bugs. The question is: who? It's nice to have some results, but if we just *hope* someone will do something about them, that seems too uncertain to me. They are also somewhat obsoleted by depcheck. I'm sitting on the fence here.
* compose_tree - this tries to build boot.iso and pxeboot images. It is basically a releng test that we execute. The previous maintainer was James, but I doubt we should ask him for future maintenance. It is currently broken, and even when it worked, was anyone reporting the issues? It's easier to look at the releng logs online: http://kojipkgs.fedoraproject.org/mash/branched-20120227/logs/ Does that link render the compose_tree test useless?
* anaconda_* - anaconda build, unit tests, and automated installation using various test cases. Great work, but currently broken. I talked to clumens some time ago and asked whether he received results; he said he did not. I fixed the opt-in emails and told him, but there has been no response since, so I guess he just doesn't care. And I'm not surprised: with the speed of anaconda development, it's much easier for anaconda devs to execute the tests on their own machines (at least I suppose they do), and they can't send us patches all the time. Since there hasn't been any drive from the anaconda devs, I propose to obsolete these tests until we have something better to offer.
* The rest of the tests stay enabled.
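To illustrate point 1, here is a small sketch of how the MAINTAINER contact point could be read out of a test's 'control' file. This is a hypothetical helper, not part of AutoQA; the field names follow the autotest-style control file convention (the fallback to AUTHOR covers not-yet-converted tests), and the `maintainer_of` name and the example values are made up for illustration:

```python
import re

def maintainer_of(control_text):
    """Return the contact point defined in a control file's text.

    Prefers the new MAINTAINER field, falls back to the old AUTHOR
    field, and returns None when neither is defined (i.e. the test
    would be a candidate for archiving under this proposal).
    """
    for field in ("MAINTAINER", "AUTHOR"):
        m = re.search(r'^%s\s*=\s*["\'](.+?)["\']' % field,
                      control_text, re.MULTILINE)
        if m:
            return m.group(1)
    return None

# Made-up example control file header:
example = '''
MAINTAINER = "kparal"
NAME = "upgradepath"
TIME = "SHORT"
'''
print(maintainer_of(example))  # -> kparal
```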
For the next autoqa release I'd like to focus mainly on easy deployment of tests, so that we (for our tests) and some other teams (like anaconda) can easily update tests without the tedious process of "a new autoqa release". After that I expect some of the tests to return.
What do you think?
And please, put your name next to the test you want to maintain.