measuring success

Till Maas opensource at till.name
Fri Jul 2 18:12:26 UTC 2010


On Fri, Jul 02, 2010 at 12:20:21PM -0400, Will Woods wrote:

> Therefore: I propose that we choose a few metrics ("turnaround time on
> security updates", "average number of live updates with broken
> dependencies per day", etc.). Then we begin measuring them (and, if
> possible, collect historical, pre-critpath data to compare that to).
> 
> I'm willing to bet that these metrics have improved since we started the
> critpath policies before F13 release, and will continue to improve over
> the course of F13's lifecycle and the F14 development cycle.

I am interested in these metrics, too. As far as I know, this would be the
first time in the update-testing discussion that we have metrics that can
actually be used to evaluate the policy. But imho the turnaround time is
interesting not only for security updates, but for all updates that fix
bugs, so probably for most non-newpackage updates.
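For illustration, a minimal sketch of how such a turnaround-time metric
could be computed, assuming we had per-update submission and stable-push
timestamps (the update names, timestamps, and data layout below are made
up, not taken from Bodhi):

    from datetime import datetime

    # Hypothetical data: (update, submitted, pushed to stable).
    # Values are invented purely for illustration.
    updates = [
        ("foo-1.0-2.fc13", "2010-06-01 10:00", "2010-06-08 09:30"),
        ("bar-2.3-1.fc13", "2010-06-03 14:00", "2010-06-05 16:45"),
    ]

    fmt = "%Y-%m-%d %H:%M"
    days = [
        (datetime.strptime(pushed, fmt)
         - datetime.strptime(submitted, fmt)).total_seconds() / 86400.0
        for _, submitted, pushed in updates
    ]
    print("average turnaround: %.1f days" % (sum(days) / len(days)))

The same calculation could be run per update type (security, bugfix,
enhancement) to compare them, if the data source records that.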

Btw., on a related issue: how do provenpackagers properly test for broken
deps manually? The case where two updates in updates-testing depend on
each other, so that one can only be installed together with the other,
seems hard to catch manually. When only one of the two updates is pushed
to stable, there will be a broken dependency. I know that the fix is to
bundle the builds of both updates into a single update, but how is this
tested?
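To make the failure mode concrete, here is a rough sketch of the kind of
closure check that would catch it. The package model below is a toy
(made-up names, no real repo metadata, no real depsolving); in practice a
tool like repoclosure does the equivalent against the actual repositories:

    # Toy model: package name -> (provides, requires).
    stable = {
        "foo": ({"libfoo.so.1"}, set()),
        "app": (set(), {"libfoo.so.1"}),
    }

    def broken_deps(repo):
        """Return requirements that nothing in the repo provides."""
        provided = set().union(*(p for p, _ in repo.values()))
        return {req for _, reqs in repo.values()
                for req in reqs if req not in provided}

    # Pushing only the new foo (which drops the old soname) while the
    # matching app rebuild stays in updates-testing breaks the stable set:
    candidate = dict(stable)
    candidate["foo"] = ({"libfoo.so.2"}, set())  # new foo, old app
    print(broken_deps(candidate))                # -> {'libfoo.so.1'}

So a manual test would have to run this kind of closure check against
"stable plus exactly the builds being pushed", not against a system that
already has both testing updates installed.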

Regards
Till