Fwd: F13 Schedule Proposal--please RESPOND

Bill Nottingham notting at redhat.com
Thu Nov 12 19:14:24 UTC 2009


John Poelstra (poelstra at redhat.com) said: 
> The duration between public alpha and beta has been all over the place
> and I'm unsure of its significance

The significance is that it's what sets the length of the alpha or
beta release.

> (if I understand you correctly below,
> you're suggesting that measuring the time between milestones is
> meaningless):

No, that's not what I was saying. I was merely saying that measuring from
a variable 'start' date is meaningless if we're operating towards a fixed
end date each time.

> (in days)
> F8 -  21
> F9 -  23
> F10 - 35
> F11 - 28
> F12 - 56
> F13 - 42 (original proposal)

Right - this is my point. In every prior release (except for F12), the
duration was 4 weeks, give or take 1 week. I think an initial proposal
of 6 weeks is excessive.

> I think a better way to break this down is to be more specific about
> the tasks in between because saying "one month in between" is too
> vague.  To make it four weeks we'd have to set:
> 
> Alpha Public Release Available
>   2 weeks
>    --followed by--
> Beta Freeze
>   2 weeks
>    --followed by--
> Beta Public Release Available
> 
> I'd propose making the Alpha Public Release three weeks--the
> original proposal was four.

That's a reasonable place to make the adjustment - I'd still prefer
two weeks. What do other people think?

> For the past couple of releases there have been two weeks between the
> freeze date and the public release date. We need two weeks to
> include a "Test Compose"--granted I don't believe we were able to
> create any of the Test Composes on time (something we should examine
> when creating the F13 schedule).

I don't understand this reasoning. The desired work flow, as I understand
it, is something like:

1) Freeze

2) Are there blockers?
  -> No!
    -> Party!
      -> make Alpha/Beta/Release candidate compose
       -> test, etc.
  -> Yes!
    -> Bummer.
      -> make a test compose
       -> test, fix, etc.
         -> goto 2)

A test compose only exists when we know we still have blockers to solve.
Otherwise it's an alpha/beta/release candidate. Therefore, we shouldn't be 
adjusting the freeze schedule to add time for a 'test compose'.

We can certainly have additional test composes outside of the freeze for QE
to look at, but I don't know that they're events that affect the schedule
duration as much as they are simply point events in the schedule.

> >If that's adjusted down to 4 weeks, the other related dates move a
> >similar amount. Alpha Release -> 2/23, Alpha Freeze -> 2/9, Feature
> >Freeze -> 2/1, Feature submission -> 1/19.
> >
> >(As a random practical note, feature submission right after New
> >Years may not be best.)
> 
> I agree.  I was following the methodology set forth by FESCo that
> the submission deadline should be two weeks before Feature freeze.
> I'll keep it in mind as other dates move around.

Yep; any shortening of either the alpha or beta cycle 'fixes' this
as a side effect.

> >One note as to the methodology as denoted on the spreadsheet - we
> >*do* need to define whether we are time-based with specific target
> >dates, or time-based with a specific length of cycle. If we are
> 
> Good distinction.  You are right, we can't be both :)
> 
> >time-based with specific target dates (May 1/Nov 1), tracking
> >'start -> <foo> milestone' durations, with 'start' being the actual
> >release date of the prior release, is meaningless to creating the
> >next schedule. If we're time based with 'a 6 month cycle, however
> >it lands', then those 'start -> milestone' durations become a
> >meaningful thing to track again.
> >
> 
> Meaning can be found in how long it actually takes us to complete
> certain tasks and in the overall durations of our schedules.  When
> we estimate these things wrong or don't meet the originally
> planned schedule, we slip (ship late).  If we find that certain tasks
> are usually completed on time or within an acceptable variance, we
> know that they are good durations to use in the next schedule.
> 
> Whether we target a specific GA date or say we are a '6 month cycle'
> we still have to make decisions about how long each of the tasks can
> take within the overall allotted time period.  This results in
> trade-offs and decisions about what parts of the schedule should
> remain the same for each release and which parts can change without
> adversely affecting our ability to meet the scheduled GA date.
> 
> We've generally found that we need a certain amount of time between
> freeze and public release (2 weeks).  We've also found that for a
> public release to be worth creating it needs to be available for a
> period of time before creating a new one.  With these task durations
> less flexible the schedule has to give somewhere.  In the past
> several releases we've been taking that off the development time
> (start to first freeze).

Right, maybe I'm not being clear.

We have defined amounts of time that we need for each freeze, each
milestone's availability, etc. To do the schedule, we set an end
date (either releasing on a certain date, or releasing X time after
the prior release), and work each milestone backwards from there.
The 'start' -> milestone duration ends up not being an input to the
schedule, but a historical output; if we're adopting the philosophy
of releasing on May 1/Nov 1 each year, then we know that those
durations will be highly variable as a consequence of our release
philosophy.

(You could also make the argument that the 'start' time for release
N+1 isn't when N is released, but when N is branched. After all,
people have been able to build for F-13 for a while now.)
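
Working the milestones backwards from a fixed end date, as described above,
is simple to sketch in Python. The durations and the GA date below are
placeholders for illustration, not the actual F13 proposal:

```python
from datetime import date, timedelta

def schedule(ga, durations):
    """Work milestones backwards from a fixed GA date.

    durations: list of (milestone_name, weeks_before_previous_milestone),
    ordered from GA backwards. Returns a dict of milestone -> date.
    Note the 'start' date of the cycle never appears as an input; it
    falls out as a historical consequence of the durations chosen.
    """
    dates = {"GA": ga}
    current = ga
    for name, weeks in durations:
        current -= timedelta(weeks=weeks)
        dates[name] = current
    return dates

# Hypothetical GA date and durations, purely to show the mechanics.
milestones = schedule(
    date(2010, 5, 11),
    [("Final Freeze", 2), ("Beta Release", 4), ("Beta Freeze", 2),
     ("Alpha Release", 2), ("Alpha Freeze", 2), ("Feature Freeze", 1)],
)
```

Changing any single duration shifts every earlier milestone, which is
exactly why the start -> milestone spans vary release to release under a
fixed-target-date philosophy.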

> >>It would seem to me that this isn't really working for us.  We
> >>haven't been able to compose an RC on time for the Alpha, Beta or
> >>Final.
> >
> >Part of this is a more strict handling of RCs. Before, we'd just
> >compose anyway, and not track the blocker list with such severity.
> >Now that we're actually treating the blocker list with a little more
> >process, we're slipping more.
> 
> Interesting observation.  I hadn't considered it this way.  Are you
> suggesting that we should handle blockers differently?  Is this a
> good or bad thing to you?

It's probably a good thing, considering an increased focus on quality.

> >That would just be moving the freeze date, wouldn't it? As a
> >counter-example, here in F-12 we've been frozen and just taking
> >fixes, and we still had blockers when it came time to compose the
> >RC.  I'm not sure that more freeze time would help here. Part of
> >me thinks that what would help the most is more attention to blockers
> >*throughout* the process.
> 
> We reviewed the blocker list every Friday starting at least two
> weeks before every freeze until release.  What would you suggest as
> an additional or alternate way to focus on blockers?

Ideally, some way to drive through development getting blockers fixed
sooner rather than later; this means even at the alpha or beta timeframe
trying to weed out any final blockers. This does lead to interminable
meeting length, though. Maybe have major groups be required to send status
on all of their assigned blockers before each meeting, rather than having
to go through in a global meeting and determine status there?

Bill

