The slip down memory lane

John Poelstra poelstra at redhat.com
Mon Aug 16 22:03:33 UTC 2010


Peter Jones said the following on 08/16/2010 11:50 AM Pacific Time:
> On 08/16/2010 02:06 PM, Mike McGrath wrote:
>> On Mon, 16 Aug 2010, Peter Jones wrote:
>>
>>> On 08/12/2010 02:39 PM, Mike McGrath wrote:
>>>> On Thu, 12 Aug 2010, Jason L Tibbitts III wrote:
>>>>
>>>>>>>>>> "BN" == Bill Nottingham<notting at redhat.com>  writes:
>>>>>
>>>>> BN>  I can't help but note that the slips have become more frequent as we
>>>>> BN>  started to actually *have* release criteria to test against. We
>>>>> BN>  didn't slip nearly as much when we weren't testing it.
>>>>>
>>>>> To me this implies that we should begin testing earlier (or, perhaps,
>>>>> never stop testing) and treat any new failure as an event of
>>>>> significance.  It's tough to meet a six month cycle if we spend half of
>>>>> it telling people to expect everything to be broken.
>>>>>
>>>>
>>>> Possibly also stop changing earlier?
>>>
>>> The window for changes is already far too short.
>>>
>>
>> How long is that window anyway?
>
> Depends on how you count.  If we count development start to feature freeze:
>
> F12: 48 days (including July 4th)
> F13: 53 days (including Christmas and the US Thanksgiving holiday)
> F14: 63 days (including July 4th)
>
> Or maybe development start to alpha freeze:
>
> F12: 76 days (including July 4th)
> F13: 84 days (including Christmas and the US Thanksgiving holiday)
> F14: 70 days (including July 4th)
>
> Of course, some people would like to count from the previous "Final Development
> Freeze" (or even earlier) to, say, feature freeze, even though this is wildly
> unrealistic for many of us:
>
> F12: 105 days (including July 4th)
> F13: 133 days (including Christmas and the US Thanksgiving holiday)
> F14: 115 days (including July 4th)
>
> This basically assumes nobody has to do any work on the previous release after
> the final development freeze, which isn't really true.
>
> (I realize there are other important holidays in other countries, but I figure
> this is a reasonable enough sample for exemplary purposes)
>
> Actually, from computing these numbers I think the best lesson is that our
> schedules have been so completely volatile that it's very difficult to claim
> they support any reasonable conclusions.
>

I agree.

Here's the historical data I've tracked:
http://poelstra.fedorapeople.org/schedules/f-14/f9-f14-schedule-analysis-v4.ods
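For anyone who wants to reproduce day counts like the ones Peter quoted,
here is a minimal sketch in Python; the milestone dates and holiday list
below are placeholders for illustration, not the actual Fedora schedule
dates:

  #!/usr/bin/env python
  # Minimal sketch: count calendar days between schedule milestones,
  # optionally subtracting holidays that fall inside the window.
  # The dates below are placeholders, not real Fedora milestones.
  from datetime import date

  dev_start      = date(2010, 5, 11)   # placeholder
  feature_freeze = date(2010, 7, 13)   # placeholder
  alpha_freeze   = date(2010, 7, 27)   # placeholder
  holidays       = [date(2010, 7, 4)]  # placeholder (July 4th)

  def day_counts(start, end, holidays):
      # Total calendar days, minus any listed holiday inside the window.
      total = (end - start).days
      off = sum(1 for h in holidays if start <= h < end)
      return total, total - off

  for name, end in [("feature freeze", feature_freeze),
                    ("alpha freeze", alpha_freeze)]:
      total, net = day_counts(dev_start, end, holidays)
      print("dev start to %s: %d days (%d excluding holidays)"
            % (name, total, net))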

Because we are date-based, and because it has historically been assumed 
that people love Fedora so much they don't stop working on it over the 
holidays, we've never considered the impact of holidays.

Our schedules changed a lot up until Fedora 13 and 14.  These were the 
first releases where we did not "try something new" with the schedule 
(after feature freeze).  I agree it will take a few release cycles to 
figure out what is working and what isn't; that was my primary reason 
for arguing it was time to stop "trying something new" with the 
schedule each release.

Because of our current scheduling methodology the length of the 
development period varies between releases, but for Fedora 13 and 14 
the freezes and testing durations are the same (except for the 
week-long slip of the Fedora 13 Alpha that we absorbed, which I 
believe contributed to slipping the Beta and Final).

http://fedoraproject.org/wiki/Releases/Schedule (scheduling methodology)

John
