bodhi statistics

Luke Macken lmacken at redhat.com
Wed Jun 9 16:48:17 UTC 2010


On Wed, 2010-06-09 at 09:10 +0200, Ralf Corsepius wrote:
> On 06/09/2010 08:54 AM, Luke Macken wrote:
> > On Wed, 2010-06-09 at 08:38 +0200, Kevin Kofler wrote:
> >> Luke Macken wrote:
> >>> By "success" I mean that I felt we were successful in drafting,
> >>> implementing, deploying, and utilizing the mentioned policies as
> >>> expected, and the results show increased community engagement.
> >>
> >> This definition of "success" does not match mine nor the one you'll find in
> >> a dictionary. So your terminology is misleading.
> >
> > Really, Kevin?  We're digressing to a dictionary battle?
> >
> > Fine, I'll play.  First definition in the dictionary[0]: "an event that
> > accomplishes its intended purpose".
> Exactly. Your definition differs from Kevin's (and mine).

Neither of you has mentioned "your" definition of the word "success".
Care to enlighten us?

> > ...which is exactly what I meant to begin with.
> 
> To me, your definition of success is "compliance with *your* process".

If by "your process" you mean "the processes created by the Fedora
Community".  I have had almost no say in the new updates criteria, nor
am I on any rubber-stamping committee to approve it.  I am, however, one
of the *few* developers who actually works on bodhi, and I have a vested
interested in improving it for the greater good of the community.  Now,
if the policies that are being approved do not actually benefit the
greater good of the community, we have bigger problems.

> Whether this process is suitable to improve package quality, whether the 
> technical system behind it is a good approach, and whether your approach 
> actually improves package quality or is mere bureaucracy is highly 
> questionable.

Yes, all of those are highly questionable with regard to this "quality"
metric.

To improve a process we must first observe how it is currently being
utilized.

What comes out of bodhi is what the maintainers put into it.
Beyond that, we've essentially been using it to "crowd-source" QA.
As expected, this is far from perfect, but it's a start until we
have code in place that can perform rigorous and comprehensive testing.

> That said, all you demonstrated is your system not being entirely 
> broken, but I don't see any "success" related to QA in your statistic.

The numbers show an increase in community interaction.  More eyes are
looking at the updates and providing feedback.  Considering we're
crowd-sourcing QA, I call an increase in participation a success.

luke
