On Jan 24, 2014, at 4:47 AM, David Sommerseth <davids(a)redhat.com> wrote:
> On 23/01/14 23:16, Chris Murphy wrote:
>>
>> As far as I know there isn't an explicit test case or release
>> criterion that covers this functionality, or it would have been
>> discovered. Why it's not a test case is a valid question, but simply
>> put there are only so many QA people, much of it is voluntary, so not
>> everything important gets tested.

> Fair enough. However, in this case it seems this was even noticed. Why
> that was never looked into more thoroughly is a mystery to me.
QA volunteers don't get assignments. Most work and reporting is on
what's personally important or interesting to them. If XYZ breakage
isn't in the matrix or test cases, then it must be personally important
to someone, and they have to rock the boat somehow. And by the time
testing starts, weeks prior to alpha or beta, let alone final, the ship
is already at high seas…
> By all means, software does and needs to evolve, and it can break. I
> fully understand this. But not alerting anyone when basic
> functionality you would expect to work breaks, that's the key point
> here. That puts users into a difficult situation, especially when the
> dependencies are so tricky.
First, I reject the premise that it's QA's job to audit every feature or
change to determine whether its contingency plan needs to be activated.
It would be nice if QA had the resources to do that, but it's already up
to its eyeballs with the test matrices and test cases it has.

But the feature page explicitly said no major regressions. So either the
feature owner disagrees with this thread's assessment that the breakage
is a major regression, or major breakage occurred and slipped by even
the feature owner. So? I'm not sure how you expect this to work better.
One of QA's ideas is actually expanding the test matrix and prioritizing
it. I'd guess that a set of Bluetooth tests could be written up
(hopefully you'd volunteer to do this, since it seems important enough
to you) and put into the "bonus" matrix. That means if it's not tested,
it's not release blocking, but at least people can see more visibly what
QA has and hasn't tested. If QA can even get more one-off involvement
from volunteers who otherwise don't participate that much, it's still
very helpful.
> During the F20 beta, I was too soaked up in other work to be able to
> test this. But knowing we have a Fedora QA group and a plan for
> rolling things back, I trusted that the Fedora community wouldn't
> allow this to happen.
In my estimation, you significantly overestimate QA's scope and
resources. And that is an understatement. I think this misunderstanding
is widespread in the Fedora community; QA maybe needs to do a
recruitment drive or bake sale or something. There are likely quite a
number of "basic" things that aren't being touched by QA at all. QA is a
community task. If you think something is important, you need to test
it, report on it, and wave your hands if you even remotely think a
feature contingency plan should be activated. Otherwise, the result is
exactly what has happened here.
> But trust me, I will check things far more closely in the coming
> releases ... unless I simply switch to RHEL instead to have some
> better predictability.
Very, very different contexts. Fedora is made to be worked on. RHEL is made to work.
Chris Murphy