Yep, it has seemed pretty reasonable to me in the poking at it I've done
so far. mschwendt's idea of running a lot of already-built packages
through to see how they fare isn't a bad one. I'll try to set up one of
my test boxes to do this over the course of this week (hopefully
tomorrow). You don't have to worry about doing that right now :-)
You did this and it did reasonably okay, right?
We _do_ need to worry about how the binary packages end up. I'm not
entirely sure what's best here. Internally, we write to directory
trees that look like
(N is name, V is version, R is release, A is arch)
Then, the trees which end up on the FTP site are composed out of these
directory trees. This has the nice feature of making the inheritance of
builds from older releases a bit easier.
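As an illustration, a name/version/release/arch scheme might map to paths like this. This is only a sketch; the message doesn't spell out the exact layout, so `rpm_path` and the ordering of the path components are assumptions:

```python
import os

def rpm_path(root, name, version, release, arch):
    # Hypothetical N/V/R/A layout -- the real internal scheme may
    # order or name these components differently.
    return os.path.join(root, name, version, release, arch,
                        "%s-%s-%s.%s.rpm" % (name, version, release, arch))

# e.g. rpm_path("/repo", "foo", "1.0", "2", "i386")
#      ends in foo/1.0/2/i386/foo-1.0-2.i386.rpm
```

A layout like this makes composing release trees a matter of picking the right N/V/R subtrees per arch.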
What is slightly more complicated with a scheme like this is how you
quickly update the repodata with the new package info after a build
completes. Some of the "createrepo should be able to run incrementally"
people will probably come back out of the woodwork.
Those people have too much RAM. If push comes to shove I can think of a
few ways of doing this. createrepo will read symlinks as well, so we
could do something grotty like a symlink-farm for packages per repo.
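The grotty symlink-farm idea could look roughly like this. A sketch only: `refresh_symlink_farm` and the farm-per-repo layout are assumptions, not an existing tool:

```python
import os

def refresh_symlink_farm(farm_dir, rpm_paths):
    # One directory of symlinks per repo that createrepo can be
    # pointed at, while the real packages stay put in their own
    # directory trees. All names here are made up for illustration.
    os.makedirs(farm_dir, exist_ok=True)
    for rpm in rpm_paths:
        link = os.path.join(farm_dir, os.path.basename(rpm))
        if not os.path.lexists(link):
            os.symlink(rpm, link)
    return sorted(os.listdir(farm_dir))
```

After a build lands, only the new symlink needs adding before re-running createrepo over the farm.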
One other thing that springs to mind is the question of which arches we
build packages for and whether a specific arch failing should block the
build. My opinion, which matches how Core gets built, is that
* Packages get built for all Extras arches. ExcludeArch/ExclusiveArch
can be used for the (rare) things which need otherwise
* Build failures on one arch block all arches. Otherwise, some arches
will fall far behind and things like prereqs get really painful
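A release gate following those two rules might look like the sketch below; the arch set and status strings are made up for illustration:

```python
EXTRAS_ARCHES = frozenset({"i386", "x86_64", "ppc"})  # assumed arch set

def release_allowed(results, excludearch=frozenset()):
    # results maps arch -> "succeeded"/"failed". ExcludeArch trims the
    # set of arches the package must build on; one failure on any
    # remaining arch blocks the release for all arches.
    wanted = EXTRAS_ARCHES - frozenset(excludearch)
    return all(results.get(a) == "succeeded" for a in wanted)
```

So a package with ExcludeArch: ppc only needs i386 and x86_64 to succeed, but a single failure elsewhere holds everything back.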
This might be tricky. I'm worried about notifications between the
buildsystems. Or are you thinking about having levels like:
buildsystem -> pops out rpm in some known location
releasesystem -> determines if all arches have built and therefore if
the pkg ever gets moved/copied/linked to the release tree it's targeted
at
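Those two levels could be sketched like so. Paths, the arch list, and the naming are all assumptions for illustration:

```python
import os
import shutil

ARCHES = ("i386", "x86_64", "ppc")  # assumed arch list

def promote_if_complete(drop_dir, release_tree, nvr):
    # Releasesystem level: only when every arch has dropped its rpm
    # into the known location does the package get copied into the
    # release tree it's targeted at.
    rpms = [os.path.join(drop_dir, "%s.%s.rpm" % (nvr, a)) for a in ARCHES]
    if not all(os.path.exists(r) for r in rpms):
        return False  # still waiting on at least one arch
    os.makedirs(release_tree, exist_ok=True)
    for r in rpms:
        shutil.copy(r, release_tree)
    return True
```

The buildsystems never talk to each other directly; they just drop rpms, and the releasesystem notices when a set is complete.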
And there's the bit we talked about during LinuxWorld: we want to
make it possible and easy for developers to download the buildsystem and
get it going on their own workstation for test builds. That then also
enables other third party repositories (there won't be only one :-) to
use it and get some consistency.
Here's what I'm thinking right now:
If we can get a good working bugzilla interface for package tracking,
and we can define some new fields/tags for items in this interface, then
we should be able to make it work for us.
1. packager tags a release in cvs using make tag or whatever.
2. packager updates the package status in the bugzilla package tracker
a. they mark it to be built
b. they mark it for what release (fc3, testing, rawhide, etc)
c. they input the cvs tag to build from
3. build system, at regular intervals, queries this information via
xml-rpc to bugzilla. It builds the packages and attaches the log
reports (or links to them) as comments in the package tracker entry
4. build system puts the finalized packages + other stuff in some path
that's web accessible as you described above. (CAVEAT: some special
casing for embargo'd builds will need to be put in place)
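A single polling pass for step 3 might look like this. The `tracker` and `builder` objects are hypothetical stand-ins for the bugzilla xml-rpc calls, which may not support all of this yet:

```python
def run_build_pass(tracker, builder):
    # One pass of the regular-interval poll: build everything marked
    # for build and record the results back in the tracker. Interface
    # names are assumptions, not real bugzilla xml-rpc methods.
    built = 0
    for req in tracker.pending_requests():
        result = builder.build(req["cvs_tag"], req["release"])
        # attach the build log (or a link to it) as a comment
        tracker.add_comment(req["package"], result["log_url"])
        tracker.mark_done(req["package"], ok=result["ok"])
        built += 1
    return built
```

The build master would just run this in a loop with a sleep between passes.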
Upsides:
1. users cannot directly request builds so the system isn't overwhelmed
2. regular intervals means the user doesn't have to wait for some
person to kick off a build.
3. Having one build master system scan and kick off builds on other
machines/arches is not crazy OR having multiple build systems scan,
mark the package as 'being built for arch foo on system bar' is
also not outside the realm of possibility (though there might be
race conditions there)
4. Reasonably scalable as more build systems/packagers are added
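Marking a package as 'being built for arch foo on system bar' needs an atomic claim step to dodge the race condition mentioned in point 3. A minimal in-process sketch; a real tracker would need an atomic database update rather than a lock:

```python
import threading

class ClaimTable:
    # Two build systems racing to claim the same (package, arch) job:
    # whoever wins the lock gets it, the other backs off.
    def __init__(self):
        self._lock = threading.Lock()
        self._claims = {}

    def try_claim(self, package, arch, system):
        with self._lock:
            key = (package, arch)
            if key in self._claims:
                return False  # someone else got there first
            self._claims[key] = system
            return True
```

A loser just moves on to the next unclaimed job, so adding build systems scales without coordination beyond the claim itself.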
Downsides:
1. I might be overstating the functionality available in the xml-rpc
interface to bugzilla
2. Users cannot directly kick off builds, they have to wait (waaaaaaah)
3. Dealing with Embargo'd builds - gonna be a pain no matter what
4. Ordering of builds based on dependency
5. package tracking system does not yet exist.
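For downside 4, a topological sort over build dependencies is the usual answer; here's a sketch with made-up packages, using Python's graphlib:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical deps: pkg-b BuildRequires pkg-a; pkg-c needs both.
deps = {
    "pkg-c": {"pkg-a", "pkg-b"},
    "pkg-b": {"pkg-a"},
    "pkg-a": set(),
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['pkg-a', 'pkg-b', 'pkg-c']
```

Extracting the dependency edges from the specs is the hard part; the ordering itself is cheap once you have them.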