Investigation of the F23 mass rebuild

Nick Coghlan ncoghlan at gmail.com
Sun Jul 5 12:52:17 UTC 2015


(First time I've posted here, so a short self-introduction: I'm a
CPython core developer who recently switched from working on Red Hat
internal testing infrastructure to Fedora software package management.
As such, I'm quite familiar with a number of efforts aimed at making
various automated testing tools that were previously only available
inside Red Hat readily available to the upstream Fedora community as
well)

On 3 July 2015 at 04:59, Matthew Miller <mattdm at fedoraproject.org> wrote:
> On Thu, Jul 02, 2015 at 01:47:11PM -0400, Adam Jackson wrote:
>> Common to all of this is a certain reactive posture.  There's not a
>> dashboard view of "sick packages".  Which could be useful along a number
>> of axes, really.  How far behind is a package relative to its upstream's
>> releases?  For a given sick package, how many packages depend on it?
>> How idle has pkg git been relative to the incoming bug rate for a
>> package?  The data exists, but we're not looking at it.  Obviously not
>> all metrics are going to be comparable across packages, maybe for the
>> kernel we want more of a moving average than a raw counter.
>
> That seems useful. "Package Janitors' Dashboard" or something.

At least some aspects of that idea sound a bit like what I'm hoping
https://beaker-project.org/docs-develop/user-guide/beaker-provided-tasks.html#distribution-rebuild
might become, at least in terms of providing data on FTBFS (fails to
build from source) issues.

That's a Beaker task that can essentially be told "rebuild this entire
yum repo, and report individual test results for each SRPM rebuilt".
It's also designed so that a modified build toolchain can be injected,
making it possible to try those out *before* they land in koji.
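To illustrate the general shape of that workflow (this is just a minimal
Python sketch of my own, not the actual Beaker task, and the
"mock --rebuild" default is only a placeholder build command): iterate
over every SRPM in a repo, run an injectable build command against it,
and record a per-package pass/fail result.

```python
# Minimal sketch of a repo-wide rebuild-and-report loop (NOT the real
# Beaker task). The build command is injectable, mirroring the ability
# to try out a modified toolchain before it lands in koji.
import subprocess
from pathlib import Path

def mass_rebuild(srpm_dir, build_cmd=("mock", "--rebuild")):
    """Run build_cmd against each .src.rpm and return {name: passed}."""
    results = {}
    for srpm in sorted(Path(srpm_dir).glob("*.src.rpm")):
        # Each SRPM gets its own result, so one failure doesn't hide others
        proc = subprocess.run([*build_cmd, str(srpm)],
                              capture_output=True)
        results[srpm.name] = (proc.returncode == 0)
    return results
```

Swapping in a different build_cmd is the equivalent of injecting a
modified build toolchain; the per-SRPM results dict is the equivalent of
the individual test results the task reports.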

I'm currently planning to do that myself for some Python-specific mass
rebuilds (e.g. checking 3.5 compatibility).

There are still some irritations in getting set up to write custom
Beaker tests, and the Fedora instance isn't open for general access yet
(since it isn't integrated into FAS), but I'd love to have more people
than just me providing feedback on the mass rebuild task and suggesting
possible improvements.

> Hand-in-hand with reactive posture is our "high wall, chaos-filled
> inside" model for enforcing the packaging guidelines. Package review
> often consists of nitpicking with a fine-toothed comb, because except
> for very egregious situations, that's the only time package quality is
> ever looked at.
>
> This causes two undesirable situations:
>
> First, there's no way to let a "good enough" package in and have it
> progress to excellent — it must be excellent at the start, because we
> generally don't trust the package quality to do much but go down.
> That's not to say that many packagers *don't* improve the quality of
> the packages over time — I know I try to for the few I maintain
> whenever I notice something, and I know many others do too. But there's
> no regular mechanism for it, let alone accountability.
>
> Second, once something is in, it can actively get worse. I can get a
> package through package review 100% clean, and then the next day go and
> edit it to do all sorts of horrible things, and odds are really good
> that no one will call me on it. This generally isn't due to malice, but
> the rules are complicated and the guidelines long — it's easy for all
> but the most committed packagers to mess up simply by not being aware
> of the right way to handle a situation.

+1

rpmgrill aims to help with this, but getting it enabled in Taskotron
is unfortunately taking longer than hoped :(

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
