gathering useful statistics for QA team
jlaska at redhat.com
Tue Jul 20 17:48:03 UTC 2010
On Tue, 2010-07-20 at 16:52 +0000, "Jóhann B. Guðmundsson" wrote:
> On 07/20/2010 01:13 PM, Kamil Paral wrote:
> > Hello,
> > we would like to help QA team to increase public participation in its
> > activities. We believe that an important step in achieving that is in
> > rewarding the most active participants with "fame" in different top
> > tens, ladders and charts. Therefore we would like to extend the current
> > Fedora Community website with different statistics regarding user
> > participation in different areas relevant to your team. We have called
> > this project "Fedora Hall of Fame". This information can then be used
> > in different newsletters (FWN, etc) to praise the best people and motivate
> > the others.
> Ah, the carrot game. Here are a couple of things you need to know when
> playing that game....
I think you've only captured part of the motivation for this.
Recognition for hard work is a motivator for some people, but not all.
Long story short, you can't improve what you don't measure.
> Won't this lead to lower-quality testing/triaging, as in
> reporters/triagers will end up competing amongst themselves to reach the
> "carrot" of the "hall of fame", resulting in every reporter either
> filing a bug for every error msg they come across or providing +karma
> for every package that goes through bodhi, with similar behaviour in
> bugzilla from triagers?
*If* we chose to reward contributors solely for filing bugs, *and* we
had so many of them that all they can do is file bugs for every little
software issue, then yes. Filing bugs is just one aspect of
contributing to Fedora. Besides, as Kamil notes above, "This information
_can_ then be used [...] to praise [...] and motivate". I didn't get
the impression that the sole purpose of these metrics was to recognize
one aspect of contributing to Fedora QA. Metrics inform; they don't
decide.
> How do you think new reporters/triagers that see those stats are going
> to react?
> How far back into time are you thinking about gathering those stats?
I guess this would depend on the mechanisms used to generate the data.
> How are you going to compare Red Hat's and other companies' employees,
> who are literally paid to work on this, vs. those who freely give
> whatever time they have to the project?
This touches on a good topic. Most people get a little sensitive when
the terms 'metrics' and 'reward' are used together. Sometimes it's
because they work hard, but their efforts may not be visible on proposed
metrics data sources.
As someone who would be monitoring the QA metrics, if I were to see only
@redhat.com people in all the data sources gathered, that wouldn't
prompt me to reward and recognize paid contributors. I'd be motivated
to figure out why @redhat.com people are required to do all this work.
Is it so time consuming that it requires someone full time? Are the
details not well documented enough to invite outside involvement? Is it
boring work? Can we spice it up more? Are we unpleasant smelling? Etc...
If you take any metrics without context, and skip the analytical
thinking phase, you lose. The workflow is ...
[gather metrics] --> [human beings analyze] --> [implement changes]
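As an illustration only (the stage functions and numbers below are hypothetical, not anything the QA team actually runs), that three-step workflow could be sketched as:

```python
def gather_metrics():
    # Hypothetical raw numbers; a real gatherer would query Bugzilla, Bodhi, etc.
    return {"new_bugs": 120, "triaged_bugs": 30}

def analyze(metrics):
    # The human/analytical step: turn raw numbers into a finding.
    backlog = metrics["new_bugs"] - metrics["triaged_bugs"]
    return f"{backlog} bugs awaiting triage"

def implement_changes(finding):
    # Act on the finding, e.g. recruit more triagers.
    print(f"Action item: {finding}")

# gather --> analyze --> implement
implement_changes(analyze(gather_metrics()))  # Action item: 90 bugs awaiting triage
```

The point of the middle step is exactly the one above: the numbers never go straight from gathering to action without a human interpreting them.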
Embrace the numbers. Besides, I expect you'll be involved in the
meetings where we interpret them and determine how to move forward. :)
I realize this reads a bit preachy. Ooops.
> Do you think they will ever manage to be equally and or more productive
> than those that get full time pay to do this?
Good topic, but seems outside the scope of this discussion.
> Do you believe that newcomers, and those who freely give whatever
> limited time they have, will look at the hall of fame and think to
> themselves, hey, I think I can get my name up there?
> Seventhly, and perhaps one of the more devastating downsides when
> playing the carrot game...
> How do you suggest that the community handle those who have spent
> countless hours trying to get their name in the "Hall of Fame" when the
> cold hard reality strikes them and they realize they can't, and we risk
> losing them and their valuable contributions to the project forever?
The cop-out answer is that we want a community of participants who are
interested in giving back to the community, not in getting their name in
lights. While recognition is often appreciated, I don't think that's
why people show up here.
Also, if someone has the goal of getting their name in the bug-metrics
view, and they spend a long time trying to file bugs, and they don't
show up in that list ... I'm fine with that. The barrier to filing a
bug is pretty low. If the person failed to file any bugs ... that's
okay. Maybe filing bugs isn't their strong suit.
> > The information we hope to receive from you is:
> > 1. What are the most important tools you would like to have tracked?
> > For example it can be your wiki, mailing lists, Bugzilla, Koji, Bodhi,
> > Transifex, packages' source code, and so on.
> > 2. What are the most important characteristics you would like to see
> > gathered? For example: # of wiki edits, # of new bug reports, # of
> > package updates released, etc. Some of our ideas are at .
> > In short, we're looking for hints which statistical data would help
> > your team most in evaluating your best contributors.
> Here's an idea...
> How about starting from the right end: gather that information about
> components, then about the maintainers, then about QA and the rest.
> We need to know component stats first and foremost so we can
> effectively focus the project's resources where they are most needed.
From a project standpoint yes, but I'm keenly interested in QA specific
metrics. There is a similar thread open on devel@ if you want to add
thoughts on component metrics.
I'll be interested in other data sources (like component-specific
metrics) as dictated by trends in the QA metrics. For example, say we
have a ton of bugs one week against systemd. That just tells me we have
a lot of bugs, nothing more. I'd need context to assess whether this is
good or bad news.
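To make the context point concrete, here's a minimal sketch (with made-up bug records; a real data source would be the Bugzilla query API) showing how a raw per-component count can be paired with one piece of context, such as how many distinct reporters are behind it:

```python
from collections import Counter

# Hypothetical bug records; a real gatherer would pull these from Bugzilla.
bugs = [
    {"component": "systemd", "reporter": "alice"},
    {"component": "systemd", "reporter": "bob"},
    {"component": "kernel",  "reporter": "alice"},
    {"component": "systemd", "reporter": "carol"},
]

# Raw count of new bugs per component -- the number alone, without context.
per_component = Counter(bug["component"] for bug in bugs)
print(per_component.most_common())  # [('systemd', 3), ('kernel', 1)]

# One piece of context: how many distinct reporters are behind those counts?
reporters = {component: set() for component in per_component}
for bug in bugs:
    reporters[bug["component"]].add(bug["reporter"])
print({c: len(r) for c, r in reporters.items()})  # {'systemd': 3, 'kernel': 1}
```

Three reporters hitting systemd independently suggests a different situation than one person filing three bugs, even though the raw count is identical, which is exactly why the counts can't be read without the analysis step.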