On 04/15/2011 03:40 AM, Kamil Paral wrote:
I wonder how hard it would be to change the way depcheck and bodhi work a little bit.
Instead of just scraping comments, another option would be to have a text field in bodhi that held the list of passing tests. When a test passes, it adds its name to the field. When an update is changed, that field is reset and the tests are re-run.
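The proposed field could be modeled roughly like this. A minimal sketch, assuming a hypothetical Update class on the bodhi side; the names (record_pass, edit, is_pushable) are made up for illustration:

```python
# Hypothetical sketch of the proposed bodhi-side field: each update carries
# a set of passing test names; editing the update clears the set so the
# tests have to re-run and re-report.
class Update:
    def __init__(self, title):
        self.title = title
        self.passed_tests = set()  # names of tests that have passed

    def record_pass(self, test_name):
        """A test reports success by adding its name to the field."""
        self.passed_tests.add(test_name)

    def edit(self):
        """Any change to the update resets the field, forcing a re-test."""
        self.passed_tests.clear()

    def is_pushable(self, required=("depcheck", "upgradepath")):
        """Pushable only if every required test has reported a pass."""
        return all(t in self.passed_tests for t in required)
```

With this shape, "scraping comments" turns into a simple set-membership check at push time.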
That would be a nice solution if we didn't have ResultsDB in our plan.
Do we have any documentation on what ResultsDB is supposed to do other than store and display test results?
I've found several wiki pages about ResultsDB ([1], [2], [3]) and some old emails ([4], [5]) but I have yet to find any references to ResultsDB being used inside AutoQA tests for anything other than reporting.
Granted, it isn't a stretch to see ResultsDB used that way but I dislike making assumptions about what a given feature is going to do - it leads to "feature XXX is a magical cure-all that will solve all of our problems if we could only get it implemented" types of thinking.
I'm not saying that ResultsDB has become one of those mystical, cure-all features. I'm just noting that _I_ know very little about it or what it is supposed to do other than function as a front-end for test results. I'll spend some more quality time with google and the code that I found [6] to see if I can understand a bit more.
[1] http://fedoraproject.org/wiki/AutoQA_resultsdb_use_cases
[2] http://fedoraproject.org/wiki/AutoQA_resultsdb_API
[3] https://fedoraproject.org/wiki/AutoQA_resultsdb_approaches
[4] https://fedorahosted.org/pipermail/autoqa-devel/2010-February/000201.html
[5] http://web.archiveorange.com/archive/v/BwoyykzB13b8eDTlV1TC
[6] http://www.assembla.com/code/resultdb/git/nodes
It might make more sense to wait for resultsdb on this one, I'm not sure. Either way, it might be more accurate and faster than trying to interpret comments.
Sending bodhi comments was just a quick solution for letting maintainers know. Instead of spending time on further temporary hacks (like hacking something into Bodhi that won't be needed soon), I see it as a better use of time to work on the proper solution - ResultsDB. It's not hard, it just needs focus.
I agree that ResultsDB isn't an incredibly difficult concept and could be finished if we focused on it. If these things are already planned for ResultsDB, then requesting extra bodhi/koji modifications doesn't make sense.
Do we know how often this is happening?
Sometimes. Maintainers change their updates every now and then. Level 1 solution (re-test just the changed update) is extremely easy to implement and eliminates a lot of problems. So it's reasonable to spend a few hours and have it done.
Fair enough. If it doesn't happen all that often, we probably don't need to spend all that much time on it.
=== RelEng workflow ===
At the moment, rel-eng is pushing updates manually a few times a week [citation needed, this is informal information which has not been audited].
The thing is that we can provide the _at the moment 'pushable'_ subset of packages. When rel-eng wants to push from -pending to {stable, updates, ...}, they'll just 'list' the pushable subset (yes, resultsdb will help ;-)).
"The tool" will detect if there were any changes in the pushable subset, and "the tool" will inform rel-eng that they either need to wait for the results of the next depcheck run, or (if we're able to do it) request new depcheck run.
Why not just make a tool that re-runs all "*-pending" updates through a battery of tests (depcheck and upgradepath ATM), ignoring any "PASSED" results and not posting any new comments to bodhi unless there was a change for an update? That way we don't have to worry about detecting whether there were changes, and we can be more confident in the to-be-pushed set as a whole (which is what our end goal is, IIRC).
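The re-run idea above could be sketched like this. All names here (rerun_pending, run_tests, post_comment) are hypothetical, not actual AutoQA APIs:

```python
# Minimal sketch: test every *-pending update each cycle, but only post a
# bodhi comment when the outcome differs from the previously recorded one,
# so maintainers are not re-notified about unchanged results.
def rerun_pending(updates, run_tests, last_results, post_comment):
    """updates: iterable of update ids; run_tests: id -> 'PASSED'/'FAILED';
    last_results: dict of previous outcomes (mutated in place);
    post_comment: callback invoked only when the outcome changed."""
    for upd in updates:
        result = run_tests(upd)
        if last_results.get(upd) != result:
            post_comment(upd, result)  # notify only on a change
        last_results[upd] = result
    return last_results
```

The key point is that re-testing and notifying are decoupled: every update is re-tested every cycle, but bodhi only hears about transitions.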
That's exactly what upgradepath does now. Depcheck is a little different; it uses the concept of 'accepted updates'.
We can't remove the 'accepted updates' concept from depcheck, because that could cause a flood of emails going to maintainers' mailboxes through bodhi comments. If we don't use the concept of an ever-growing accepted set of updates, it might happen that your update will be accepted at 1 PM, rejected at 2 PM (someone pushed a conflicting update), accepted at 3 PM (someone removed the conflicting update), etc. Hence the Level 1 solution, which is not perfect, but doesn't spam maintainers.
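The flip-flopping problem can be illustrated with a toy simulation (purely illustrative, not depcheck code): without memory, each dependency-state change flips the verdict and would trigger another mail; with an ever-growing accepted set, once an update is accepted it stays accepted.

```python
# Toy illustration of why depcheck keeps an ever-growing 'accepted' set.
# dep_ok_over_time: one boolean per depcheck run, True if deps resolve.

def verdicts_without_memory(dep_ok_over_time):
    """Stateless: the verdict flaps with every conflicting push/unpush,
    and each flap would mean another bodhi comment (spam)."""
    return ["accepted" if ok else "rejected" for ok in dep_ok_over_time]

def verdicts_with_accepted_set(dep_ok_over_time):
    """Once an update makes it into the accepted set, it stays there,
    so the maintainer is notified at most once."""
    accepted = False
    out = []
    for ok in dep_ok_over_time:
        if ok:
            accepted = True  # joins the accepted set permanently
        out.append("accepted" if accepted else "rejected")
    return out
```

For the 1 PM / 2 PM / 3 PM sequence from above, the stateless version produces accepted, rejected, accepted; the accepted-set version produces accepted throughout.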
Hence the part about not changing bodhi comments when re-running the tests in this pre-push scenario.
After we have ResultsDB and we abolish the concept of sending emails to maintainers for every single change in depcheck results, we can implement the proper solution as described by Josef. That means emptying the 'accepted set' if some update from that set has been changed. It also means providing the latest, never-outdated results directly to RelEng (or anyone else interested) on request, not continuously to package maintainers. If, _at the moment of the RelEng push_, some update is excluded because of failed dependencies, only then do we send an email to the maintainer. (Or some similar approach.)
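Josef's "proper solution" could look roughly like this. A hedged sketch, assuming a hypothetical AcceptedSet class; the invalidate-on-change and query-on-demand behavior is the point, not the API names:

```python
# Hypothetical sketch of the 'proper' flow: drop the whole accepted set
# when any member update changes (so results are never stale), and serve
# the pushable list to rel-eng on request rather than mailing maintainers.
class AcceptedSet:
    def __init__(self):
        self.updates = {}  # update id -> version that was accepted

    def accept(self, upd_id, version):
        self.updates[upd_id] = version

    def on_update_changed(self, upd_id, new_version):
        """If a member of the set was edited, the whole set is invalidated
        and everything must be re-tested. Returns True if invalidated."""
        if upd_id in self.updates and self.updates[upd_id] != new_version:
            self.updates.clear()
            return True
        return False

    def pushable(self):
        """What rel-eng would query at push time (e.g. via ResultsDB)."""
        return sorted(self.updates)
```

Maintainer emails would then happen only at the actual push, for updates excluded at that moment, instead of on every intermediate state change.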
I don't really understand why using bodhi comments and re-running depcheck are mutually exclusive. I don't think that it would be that hard to add a parameter to depcheck that would test everything in a tag and create a report without updating bodhi.
Then again, if this isn't a big problem then it might not be worth a whole lot of effort right now. Is this something requested by rel-eng? Is it something that they would want or use?
To sum it up, the really proper solution is not possible as long as we rely on sending emails to maintainers for every change. That approach is only a temporary measure, not viable long-term, because we could easily turn into spammers.
Tim