One of the items on the Fedora 17 QA retrospective - https://fedoraproject.org/wiki/Fedora_17_QA_Retrospective - is a suggestion from Bruno that we could perhaps gain some useful insights by analyzing the (by now considerable) corpus of blocker bugs from previous releases, as a way perhaps to identify likely areas of focus for future development and testing. Bruno has promised that he'll post some more specific ideas soon, but I wanted to kickstart a thread on the topic in case anyone else has some. Here are a few of my thoughts to get the ball rolling...
The most obvious area, perhaps, would be to look at the components against which the most blockers are filed. That's so easy to do it may be worth doing anyway, but I suspect the result will be quite predictable and something we're all more or less aware of anyway: I would expect the majority of blocker bugs to be in anaconda, then in the other obvious early-boot critical components (kernel, plymouth, systemd, udev etc), firstboot, preupgrade, and image generation stuff like livecd-tools. So I'm not sure that would tell us much we don't already know, but we might be surprised.
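A per-component tally is only a few lines of Python once the bug list has been exported from Bugzilla. Here is a minimal sketch, assuming the bugs have already been dumped into dicts with a `component` field; the sample data is invented purely for illustration:

```python
from collections import Counter

def blockers_by_component(bugs):
    """Count blocker bugs per component, most common first.

    `bugs` is a list of dicts with at least a 'component' key -- a
    stand-in for whatever a python-bugzilla query or CSV dump returns.
    """
    return Counter(bug["component"] for bug in bugs).most_common()

# Hypothetical sample data, for illustration only.
sample = [
    {"component": "anaconda"},
    {"component": "anaconda"},
    {"component": "kernel"},
]
```

Calling `blockers_by_component(sample)` here would rank anaconda first with two bugs, which is the kind of (probably unsurprising) ranking described above.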
One area that may be more interesting, I guess, would be to look at various timing issues. One key one would be 'how long it takes for bugs to be a) nominated and b) accepted as blockers, after they are reported'. I've come across a few cases before where the answer seemed to be 'too long' - it would be good to know if they were outliers, or if we have a consistent issue with not identifying quickly enough that bugs are blockers. Of course, we could look at the amount of time it takes to progress through all the other steps of the blocker process too.
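The nomination and acceptance lags are just date differences once the three timestamps have been pulled out of each bug's history. A rough sketch, with hypothetical dates (the actual timestamps would have to be scraped from Bugzilla's activity log and the blocker tracking pages):

```python
from datetime import datetime

DATE_FMT = "%Y-%m-%d"

def blocker_lag_days(reported, nominated, accepted, fmt=DATE_FMT):
    """Return (days from report to nomination,
               days from nomination to acceptance)."""
    r, n, a = (datetime.strptime(d, fmt) for d in (reported, nominated, accepted))
    return (n - r).days, (a - n).days
```

For example, a bug reported 2012-03-01, nominated 2012-03-08 and accepted 2012-03-10 would show a 7-day nomination lag and a 2-day acceptance lag; aggregating these per release would show whether the 'too long' cases are outliers or the norm.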
So, that's my idea, anyhow :) Do others have thoughts on what kind of analysis might be interesting/useful? Bruno, can you contribute your thoughts when ready? Thanks!
Look at the time difference between blocker status changes (first filed in bugzilla, nominated as a blocker, and/or accepted) and the package revisions (the culprit created, then fixed). There might be some relationship between problem packages and frequency of releases.
--
On Fri, Jun 22, 2012 at 11:14:56 -0700, Adam Williamson awilliam@redhat.com wrote:
> The most obvious area, perhaps, would be to look at the components against which the most blockers are filed. That's so easy to do it may be worth doing anyway, but I suspect the result will be quite predictable and something we're all more or less aware of anyway: I would expect the majority of blocker bugs to be in anaconda, then in the other obvious early-boot critical components (kernel, plymouth, systemd, udev etc), firstboot, preupgrade, and image generation stuff like livecd-tools. So I'm not sure that would tell us much we don't already know, but we might be surprised.
I think the thesis I would be trying to prove or disprove is that install related bugs are being discovered too late and maybe we should be putting some extra effort into getting installs tested earlier. (AutoQA might be one place this could happen. Otherwise asking for more volunteer install testing.) I am not sure this is really true, but I'd like to look at counts broken down by component for accepted blocker bugs that were open between when the first RC was supposed to be built and when release happened.
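One way to get at that would be to keep just the bugs whose open interval overlaps the window from the planned RC compose date to the release date. A sketch under the assumption that open/close dates are available as ISO date strings (which compare correctly as plain strings); the sample bugs and dates are made up:

```python
def open_during_rc_phase(bugs, rc_date, release_date):
    """Return ids of bugs open at any point in [rc_date, release_date].

    `bugs` is a list of dicts with 'id', 'opened' and 'closed' keys;
    'closed' is None for still-open bugs. Dates are ISO strings, so
    plain string comparison gives correct chronological ordering.
    """
    return [
        b["id"] for b in bugs
        if b["opened"] <= release_date
        and (b["closed"] is None or b["closed"] >= rc_date)
    ]

# Hypothetical sample: only bug 1 overlaps the RC-to-release window.
bugs = [
    {"id": 1, "component": "anaconda", "opened": "2012-05-01", "closed": None},
    {"id": 2, "component": "kernel", "opened": "2012-05-10", "closed": "2012-05-12"},
    {"id": 3, "component": "firstboot", "opened": "2012-06-01", "closed": None},
]
```

Feeding the surviving ids into a per-component count would then show which components contribute the late-discovered blockers.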
> One area that may be more interesting, I guess, would be to look at various timing issues. One key one would be 'how long it takes for bugs to be a) nominated and b) accepted as blockers, after they are reported'. I've come across a few cases before where the answer seemed to be 'too long' - it would be good to know if they were outliers, or if we have a consistent issue with not identifying quickly enough that bugs are blockers. Of course, we could look at the amount of time it takes to progress through all the other steps of the blocker process too.
I'll look at this too. I didn't get as much done as I was hoping this past weekend. My idea is to extract a small amount of info related to accepted blocker bugs and get it into a database where I can do queries easily, and then start looking at stuff we think might be interesting. I'll also want to grab dates for release timing from the wiki so I can do breakdowns by each of the recent releases, going back a ways.
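For data this small, the stdlib sqlite3 module would probably be enough for the database step. A minimal sketch with an invented schema; the columns and sample rows are illustrative, not what will actually be extracted:

```python
import sqlite3

def load_blockers(rows):
    """Load (bug_id, component, reported, accepted, fedora_release)
    tuples into an in-memory SQLite database and return the connection.

    The schema here is a guess at the 'small amount of info' mentioned
    above; real extraction would likely add more date columns.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE blockers (bug_id INTEGER, component TEXT, "
        "reported TEXT, accepted TEXT, fedora_release TEXT)"
    )
    conn.executemany("INSERT INTO blockers VALUES (?, ?, ?, ?, ?)", rows)
    return conn

# Hypothetical sample rows.
sample_rows = [
    (1, "anaconda", "2012-03-01", "2012-03-05", "F17"),
    (2, "kernel", "2012-03-02", "2012-03-03", "F17"),
]
```

Once loaded, per-release and per-component breakdowns are single GROUP BY queries, e.g. `SELECT component, COUNT(*) FROM blockers GROUP BY component`.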
What I will try to extract from bugzilla will depend on what people have ideas for looking at. It shouldn't be too hard to regrab more data later, but if you want your stuff looked at in the first go-around, be sure to speak up if you'll need info other than what would be needed to answer the above questions.
On Fri, 22 Jun 2012 11:14:56 -0700 Adam Williamson awilliam@redhat.com wrote:
> One area that may be more interesting, I guess, would be to look at various timing issues. One key one would be 'how long it takes for bugs to be a) nominated and b) accepted as blockers, after they are reported'. I've come across a few cases before where the answer seemed to be 'too long' - it would be good to know if they were outliers, or if we have a consistent issue with not identifying quickly enough that bugs are blockers. Of course, we could look at the amount of time it takes to progress through all the other steps of the blocker process too.
Another thing that might be interesting is to look at the timing between when the update was submitted and when the bug was filed. That way we can get a view of bugs that should have been detected as blockers in addition to the components that could/should have been tested earlier.
I'd also be interested in doing the same analysis for at least F16 and maybe F15 in case there are any consistent patterns to be found.
Tim