> On Tue, 15 Mar 2005, Elliot Lee wrote:
> > Minimal buildroot isn't necessary for reproducible builds, a
> > *consistently* populated buildroot is. You'll get a consistent environment
> > by dropping in Base + Devel groups with yum groupinstall even with the
> > stock comps.xml.
>
> Providing consistent buildroots actually works against reproducible builds
> in the long term, because of the effect those buildroots have on the way
> people choose to package things.
I agree... in theory...
> For maximum quality control, packages should not be affected by having
> unrelated (non-BuildRequires and non-base) packages installed in the
> buildroot. If package X is unrelated to the ongoing build of package Y,
> then package Y's build should not be affected by the absence OR presence
> of package X in the buildroot.
In the ideal world, yes. In the practical world, a very large
amount of software does ./configure-time autodetection of various
libraries and other software which may or may not be present in a
buildroot, with many features conditionally enabled or disabled
based on which libs/etc. are available.
This autodetection is good for Joe Blow downloading something and
compiling/installing it by hand into /usr/local, but it rather
works against rpm-based builds when it comes to reproducibility.
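To make the problem concrete, here is a minimal sketch (in Python, purely illustrative; the feature names and header paths are made up) of the kind of probing ./configure does, and why the same source can produce different binaries in differently populated buildroots:

```python
def detect_features(buildroot_headers):
    """Mimic configure-time probing: a feature is silently enabled
    iff its header happens to be installed in the buildroot."""
    probes = {"png_support": "png.h", "ssl_support": "openssl/ssl.h"}
    return {feat: hdr in buildroot_headers for feat, hdr in probes.items()}

# Two buildroots that differ only in one "unrelated" installed package:
minimal = detect_features({"png.h"})
polluted = detect_features({"png.h", "openssl/ssl.h"})
print(minimal)    # ssl_support is False here...
print(polluted)   # ...but True here: same source, different binary
```

Nobody passed a flag asking for SSL support; it was toggled purely by what else happened to be installed, which is exactly the reproducibility hazard described above.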
This puts a large part of the reproducibility factor squarely in
the hands of the package maintainer. In order to have a
reasonably good chance of every rpm rebuilding exactly the
same regardless of which deps are present or absent in the
buildroot, all package maintainers need to become much more
intimately involved with the rpms they maintain. This would
require deeply inspecting all ./configure options with each
release of the software, being more involved with the
underlying projects in question, and very closely analyzing the
output of ./configure to determine whether there are any
changes from upstream version to version and build to build.
While it could be argued "this is already the packager's
responsibility", in reality it does not work well, and it isn't
likely to ever work well as long as it is not automated in some
fashion. Relying on humans to do all of this:
1) Puts a lot of extra burden on humans, who are already
   stretched thin.
2) Makes the human the single point of failure. Very
   bad idea. Not scalable. Humans make mistakes. Computers do not.
The most scalable systems are those which are as completely
automated as possible, requiring little to no human
intervention.
So my suggestion to those seeking a solution to this problem is
to look at how it can be eliminated or reduced through software
automation. rpmdiff is an example of creative use of automation.
Perhaps someone can brainstorm an automation tool that could be
plugged into rpm or beehive or mach, etc.
> The root cause of the problem here is not really having consistent
> buildroots, but having improper packaging that doesn't account for all
> cases. One thing we have internally at Red Hat is a mass
> rebuild system that creates a buildroot with all packages installed,
> attempts rebuilds of all packages, and for the builds that succeed, it
> compares the resulting binary packages against the original ones to see if
> things like filelist or dependencies have changed. It'd be nice to get
> the equivalent of that for Fedora.
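The comparison step of such a system could be sketched roughly like this (a toy Python version; the function name and data shapes are my own, not the actual internal tool's API, and the inputs would in practice come from something like `rpm -qpl` / `rpm -qpR` on the old and new packages):

```python
def compare_packages(old, new):
    """old/new: dicts with 'files' and 'requires' sets.
    Returns only the keys where something was added or removed."""
    report = {}
    for key in ("files", "requires"):
        added = new[key] - old[key]
        removed = old[key] - new[key]
        if added or removed:
            report[key] = {"added": sorted(added), "removed": sorted(removed)}
    return report

# Example: a rebuild that silently grew a file and a dependency.
old = {"files": {"/usr/bin/foo"}, "requires": {"libc.so.6"}}
new = {"files": {"/usr/bin/foo", "/usr/bin/foo-extra"},
       "requires": {"libc.so.6", "libssl.so.0"}}
print(compare_packages(old, new))
```

A new dependency showing up after a rebuild in a fuller buildroot is precisely the symptom of the autodetection problem described earlier.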
If someone were to develop a tool that compared consecutive
./configure runs and reported major differences, that'd be cool.
I don't know how difficult that'd be though. I suspect if it
were easy someone might have done it by now, but who knows. ;o)
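A first stab at such a tool might just scrape the "checking for ...: yes/no" lines out of two saved ./configure transcripts and report probes whose result changed. Real configure output varies a lot between projects, so the regex below is an assumption, but the idea is this simple:

```python
import re

# Matches lines like "checking for zlib... yes" in a configure transcript.
CHECK = re.compile(r"^checking (?:for )?(.+?)\.\.\. (\S+)", re.MULTILINE)

def probe_results(log_text):
    """Map each configure probe to its reported result."""
    return dict(CHECK.findall(log_text))

def diff_runs(log_a, log_b):
    """Return {probe: (old_result, new_result)} for probes that changed."""
    a, b = probe_results(log_a), probe_results(log_b)
    return {probe: (a.get(probe), b.get(probe))
            for probe in sorted(set(a) | set(b))
            if a.get(probe) != b.get(probe)}

run1 = "checking for zlib... yes\nchecking for openssl... no\n"
run2 = "checking for zlib... yes\nchecking for openssl... yes\n"
print(diff_runs(run1, run2))  # {'openssl': ('no', 'yes')}
```

Wired into a build system, a non-empty diff between the previous build's transcript and the current one could flag the package for human review instead of relying on the maintainer to eyeball every run.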
Mike A. Harris, Systems Engineer - X11 Development team, Red Hat Canada, Ltd.
IT executives rate Red Hat #1 for value: http://www.redhat.com/promo/vendor