Stable release updates vision

Josh Boyer jwboyer at gmail.com
Fri Mar 12 17:35:15 UTC 2010


On Fri, Mar 12, 2010 at 10:01:48AM -0700, Kevin Fenzi wrote:
>> Stable releases should provide a consistent user experience throughout the lifecycle, 
>> and only fix bugs and security issues.
>
>I agree with the spirit of this, but I think it could be bad if it's taken to the letter of the statement. 
>For example: 
>- A design flaw causes a security problem, requiring a new version to fix. 
>- A security fix is not backportable to an old upstream release, requiring a new version. 
>- Software we ship that's under very rapid upstream development (i.e., the 
>interface isn't locked down when we first ship it; once it is, shouldn't we be 
>able to ship the new version with the finalized interface?) (yes, you might 
>argue we shouldn't ship it at all, but... )
>
>So, while I think this is good to aspire to, there will be corner cases and exceptions. 
>Does the Board think this should be subject to exception? Or is this an absolute?

There is no explicit description of what a _fix_ is, just that whatever said
fix is, it should fix a bug or a security issue.  If that necessitates a
version bump, then I'm sure FESCo can work that out in the updates policy.

>> Project members should be able to transparently measure or monitor 
>> a new updates process to objectively measure its effectiveness, and 
>> determine whether the updates process is achieving the aforementioned vision statements.
>
>Ideas on how to do this? Currently we have some people who see no
>problem at all, and others who do feel there is a problem but find it
>hard to quantify. 
>
>I guess the only thing I could think of is graphing bugs filed over
>time and seeing if that changes when a new updates policy is put in
>place, but if we are doing more testing there could likely be MORE
>bugs, not fewer. An increase in usage of the updates system might be
>some indicator, but I'm not sure whether that's success or failure. 
>
>Any other ideas how the Board would like to see success or failure
>measured? 

We didn't have specifics in mind here.  I, personally, agree that bug and
regression tracking would be a good start.  Things can evolve as we gather more
data.

josh

