Is buildsys still working on today's bake, or is there some other reason why no new rawhide today?
If there's a lot of rebuilds it sometimes doesn't appear until around midday GMT, so it's probably still toiling away on the builds. I saw on a blog somewhere that one of the Red Hat guys is playing with a unified across-all-platforms SRPM repo, so it could be due to that.
Pete
On 2/02/2006 11:39 p.m., Peter Robinson wrote:
If there's a lot of rebuilds it sometimes doesn't appear until around midday GMT, so it's probably still toiling away on the builds. I saw on a blog somewhere that one of the Red Hat guys is playing with a unified across-all-platforms SRPM repo, so it could be due to that.
Every time a nasty bug shows up in rawhide (like the one with yum and python-sqlite from yesterday) I wonder whether it would be possible, or in fact practical, to have two or more builds a day to pick up rawhide changes. Is there a reason why only one a day is done - is it just historical?
I'd also be curious to know what sort of machine is doing the actual building. Given that there are sometimes builds of gcc and glibc on the go in a single nightly build, one would expect it to be a seriously fast piece of machinery to produce so many binaries in a reasonable amount of time. Anyone who has tried compiling glibc from scratch can testify to how long that alone takes ;-)
reuben
Reuben Farrelly wrote:
Every time a nasty bug shows up in rawhide (like the one with yum and python-sqlite from yesterday) I wonder whether it would be possible, or in fact practical, to have two or more builds a day to pick up rawhide changes. Is there a reason why only one a day is done - is it just historical?
Rawhide compose begins at 2:00am EST, or so I'm told. My understanding is that it is done at 2:00am because that is the time of day when the buildsystems and infrastructure are usually the least busy and have the fewest engineers using them (if any), so it is not disruptive to the regular flow of daily engineering.
If rawhide were composed every 12 hours instead, it would hit at 2:00am and 2:00pm, and would most likely lower everyone's productivity horrendously due to the increased load on the buildsystem infrastructure.
I'd also be curious to know what sort of machine is doing the actual building. Given that there are sometimes builds of gcc and glibc on the go in a single nightly build, one would expect it to be a seriously fast piece of machinery to produce so many binaries in a reasonable amount of time. Anyone who has tried compiling glibc from scratch can testify to how long that alone takes ;-)
The buildsystem infrastructure consists of numerous computers. The primary hub is our infamous "porkchop", an SMP x86 box that is the entranceway to the buildsystem and various other infrastructure. Jobs submitted to the buildsystem (beehive) on porkchop get enqueued and farmed out to all 7 architectures simultaneously, and the actual builds occur on each architecture as a machine in the build pool becomes free to accept a job. All of the buildmachines are SMP hardware with 2-8 CPUs (possibly 16 on occasion, I'm not sure), with whackloads of RAM.
While the whole setup is sometimes idle, more often than not it is very busy building whackloads of packages for Fedora and RHEL updates, and running various automated tests and other stuff. Tree composes and various other releng tasks also pound the machines into the ground, as do automated mass-rebuilds and other fun stuff.
Nonetheless, even with this much total computing power available, the high load on the systems means it isn't always what you might consider "fast" overall. For example, I can build Xorg completely on my personal workstation here at home in about 15 minutes, but the same rpm takes anywhere from 1.5 to 3 or more hours going through the buildsystem even when it's idle (usually because s390/s390x/ia64 take forever to compile).
Anyhow, I figured I'd share a bit of mumbo-jumbo about the buildsystem, as people often ask and don't always get much info. ;) If you've got a 1.5GHz CPU or faster and install "ccache" from Fedora Extras, your machine will build pretty much any package faster than our buildsystem. ;o)
On Fri, Feb 03, 2006 at 05:15:54AM -0500, Mike A. Harris wrote:
If you've got a 1.5GHz CPU or faster and install "ccache" from Fedora Extras, your machine will build pretty much any package faster than our buildsystem. ;o)
That reminds me... Has anyone ever tried to tie mock and ccache together somehow? It seems like that would speed up repeated build attempts noticeably.
Steve
On Fri, 2006-02-03 at 10:45 -0600, Steven Pritchard wrote:
That reminds me... Has anyone ever tried to tie mock and ccache together somehow? It seems like that would speed up repeated build attempts noticeably.
It is potentially possible, but the problem is that ccache has in the past caused build issues from time to time for various people outside of mock. I dare not think what chaos it could wreak in a semi-automated buildsystem. distcc *might* be a better choice, once it's ready.
On Friday 03 February 2006 11:44am, Ignacio Vazquez-Abrams wrote:
It is potentially possible, but the problem is that ccache has in the past caused build issues from time to time for various people outside of mock. I dare not think what chaos it could wreak in a semi-automated buildsystem. distcc *might* be a better choice, once it's ready.
Instead of distcc, how about considering icecream?
On Fri, 2006-02-03 at 12:14 -0700, Lamont R. Peterson wrote:
Instead of distcc, how about considering icecream?
Interesting. http://wiki.kde.org/tiki-index.php?page=icecream
We use Teambuilder (mentioned in the IceCream wiki) for distributed builds. It's a commercial product from Trolltech.
/Brian/
2006/2/7, Brian Long brilong@cisco.com:
Interesting. http://wiki.kde.org/tiki-index.php?page=icecream
We use Teambuilder (mentioned in the IceCream wiki) for distributed builds. It's a commercial product from Trolltech.
Hello
Well, my opinion is that there should be config hooks to easily hook in components like ccache, distcc, and/or icecream.
regards, Rudolf Kastl
On Fri, 2006-02-03 at 13:44 -0500, Ignacio Vazquez-Abrams wrote:
It is potentially possible, but the problem is that ccache has in the past caused build issues from time to time for various people outside of mock. I dare not think what chaos it could wreak in a semi-automated buildsystem. distcc *might* be a better choice, once it's ready.
If you think make -j causes problems for people... :-)
I think that ccache is potentially helpful for people when they're testing locally and trying to get stuff working, but I definitely wouldn't want it enabled for production builds.
Jeremy
On Fri, Feb 03, 2006 at 02:22:04PM -0500, Jeremy Katz wrote:
I think that ccache is potentially helpful for people when they're testing locally and trying to get stuff working, but I definitely wouldn't want it enabled for production builds.
Oh, no, definitely not. I was thinking for local testing, such as for the last half-dozen times I've fired off a mock build of cone on rawhide. :-)
Steve
2006/2/3, Steven Pritchard steve@silug.org:
Oh, no, definitely not. I was thinking for local testing, such as for the last half-dozen times I've fired off a mock build of cone on rawhide. :-)
You can combine distcc and ccache. I have rather good long-term experience with ccache in a non-automated environment. If there are problems we should fix them, I guess, and the best way to find problems is to use it extensively.
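As a rough sketch of how the two stack: ccache's CCACHE_PREFIX variable hands cache misses off to distcc. The snippet below is a minimal illustration only - the host list and job count are made-up examples, not anyone's real setup:

    # Minimal sketch: ccache fronting distcc for a build.
    # CCACHE_PREFIX and DISTCC_HOSTS are real ccache/distcc variables;
    # the hosts and the -j count are made-up examples.
    import os
    import subprocess

    env = dict(os.environ)
    env["CCACHE_PREFIX"] = "distcc"                        # ccache hands cache misses to distcc
    env["DISTCC_HOSTS"] = "localhost buildbox1 buildbox2"  # example compile hosts
    env["PATH"] = "/usr/lib/ccache:" + env["PATH"]         # ccache's compiler symlinks first

    subprocess.call(["make", "-j8"], env=env)              # compiles go ccache -> distcc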
regards, Rudolf Kastl
On Fri, 2006-02-03 at 10:45 -0600, Steven Pritchard wrote:
That reminds me... Has anyone ever tried to tie mock and ccache together somehow? It seems like that would speed up repeated build attempts noticeably.
At least with mach it works nicely: just install ccache into the chroot and it'll get used. I don't remember if mock recreates the chroot from scratch each time you build something - if not, it should be just a matter of installing ccache in the root (via the buildgroup or manually); otherwise it'll get more tricky.
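For illustration, something along these lines in a mock-style config could do it - mock's configs are plain Python, but note the option names below are assumptions made up for the example, not actual mock settings:

    # Hypothetical mock-style config fragment. mock supplies the
    # config_opts dict when evaluating its configs; the option names
    # here are illustrative assumptions, not real mock settings.
    config_opts["buildgroup"] = "build ccache"       # pull ccache into the chroot
    config_opts["environment"] = {
        "CCACHE_DIR": "/var/tmp/ccache",             # cache that survives chroot rebuilds
        "PATH": "/usr/lib/ccache:/usr/bin:/bin",     # wrapper symlinks take precedence
    }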
- Panu -
2006/2/7, Panu Matilainen pmatilai@laiskiainen.org:
At least with mach it works nicely: just install ccache into the chroot and it'll get used. I don't remember if mock recreates the chroot from scratch each time you build something - if not, it should be just a matter of installing ccache in the root (via the buildgroup or manually); otherwise it'll get more tricky.
That's the default behaviour of mock, actually. There's a no-clean option you can pass to it, though.
regards, Rudolf Kastl
Peter Robinson wrote:
If there's a lot of rebuilds it sometimes doesn't appear until around midday GMT, so it's probably still toiling away on the builds. I saw on a blog somewhere that one of the Red Hat guys is playing with a unified across-all-platforms SRPM repo, so it could be due to that.
SRPMS have been unified across all architectures since about Red Hat Linux 7.2 or 7.3. In other words, when a single src.rpm package is built, it is simultaneously submitted to all 7 of the architectures in the buildsystem which are considered the "primary architectures". Essentially, these are the architectures for which RHEL is available.
If a build fails on any one architecture, a signal is sent to the builds on all other architectures to kill them and discard all results.
The consequence is that, ever since this was implemented in the Red Hat build system, every single package is built either on all 7 architectures or on none. Build failures require the engineer to fix the package until it builds on all 7 arches in order for it to be accepted into the buildsystem. The only exceptions to this are:
1) Packages that are, by their very nature, architecture-specific or which don't make sense on some architectures.
2) Packages that fail on an architecture due to buildsystem breakage or some other obscure problem, where the engineer excludes the arch using ExclusiveArch/ExcludeArch in the spec file to temporarily work around the problem until someone can fix the broken buildmachines, etc.
So.... what is this unified across-all-platforms SRPM repo you speak of, and how does it differ from what we've been doing internally for about 4 years? ;o)
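For anyone curious, here is a toy model of that all-or-nothing behaviour - purely illustrative Python, not the actual beehive code, and the arch list is only a guess at the seven:

    # Toy model of the "all 7 arches or none" policy -- NOT beehive.
    # The arch list and the build stub are stand-ins.
    from concurrent.futures import ThreadPoolExecutor

    ARCHES = ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64"]

    def build(srpm, arch):
        """Stand-in for a real chroot build; raises on failure."""
        return "%s built on %s" % (srpm, arch)

    def build_everywhere(srpm):
        with ThreadPoolExecutor(max_workers=len(ARCHES)) as pool:
            futures = {arch: pool.submit(build, srpm, arch) for arch in ARCHES}
            try:
                # the package is accepted only if every arch succeeds
                return {arch: f.result() for arch, f in futures.items()}
            except Exception:
                for f in futures.values():
                    f.cancel()          # one failure kills the remaining builds
                return None             # ...and all results are discarded

    print(build_everywhere("foo-1.0-1.src.rpm"))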
On 2/3/06, Mike A. Harris mharris@mharris.ca wrote:
So.... what is this unified across-all-platforms SRPM repo you speak of, and how does it differ from what we've been doing internally for about 4 years? ;o)
Digging back through my RSS feeds, it was this post to p.f.o from Jesse: http://jkeating.livejournal.com/14754.html
Pete
Peter Robinson wrote:
Digging back through my RSS feeds, it was this post to p.f.o from Jesse: http://jkeating.livejournal.com/14754.html
Ahhh, I see. I'm referring to the src.rpms that are /input/ into the buildsystem to build the OS on all arches. I didn't realize the SRPMS disks we made had different contents per architecture due to this on the /output/ side of things. ;o)
Yes, it would definitely be smart to have one ginormic SRPMS disk. ;)
As a total side-question though, just for personal curiosity...
1) How many people actually download the SRPMS disk images?
2) How many people actually really use them for anything?
I always wondered that because, a long time ago, I used to download them all and burn them, until I realized that I never actually used them. ;o) I always downloaded the latest src.rpm from the ftp server if I needed it, as that way I ensured I was using the latest one.
I suppose the answer likely varies by location and the cost of internet access in any particular place, though.
Anyhoo... back to your regularly scheduled program... ;)
Once upon a time Friday 03 February 2006 4:36 am, Mike A. Harris wrote:
How many people actually download the SRPMS disk images?
How many people actually really use them for anything?
The only time I've had SRPM disks was with boxed sets of RHL; if I need to build something I download the SRPM as needed. I do have a lot of SRPMS downloaded for use on non-supported arches.
On Fri, Feb 03, 2006 at 05:36:52AM -0500, Mike A. Harris wrote:
As a total side-question though, just for personal curiosity...
- How many people actually download the SRPMS disk images?
I routinely download the SRPMS, though I grab them from the SRPMS/ directories on the servers. Not much interest in the disk images, since after a few weeks there are dozens if not hundreds of updates.
- How many people actually really use them for anything?
I keep SRPMS for several reasons:
o Local modifications
I routinely patch:
kernel, grub, SysVinit, initscripts, util-linux, openssh, iptables, iproute, ulogd, nfs-utils, patch, xterm, vnc, quagga, xpdf, mdadm, postgresql, pgadmin3, valgrind, gnumeric, gaim, ...
o Creating compat-* packages. I've been saving the various Red Hat compat-*.src.rpm for many years, and when necessary, rolling my own.
When we roll forward to a new distribution, we (re-)build compat-* versions of various tools and libraries. I've got vendor object-code interface libraries from RH6.2 and RH7.3. :-( So, e.g., I coaxed compat-gcc-7.3-2.96.126.src.rpm into building on FC4/FC5t2.
o Locally-maintained packages.
As some packages have been removed from RH/FC, we have had to maintain them locally. I recently convinced the RHAS 2.1 metamail to build on FC4/FC5t2 x86_64, using patches taken from the original RH package, SuSE, PLD, and local modifications.
Also, it's not always the case that newer is better ...
Regards,
Bill Rugolsky
Bill Rugolsky Jr. wrote:
I routinely download the SRPMS, though I grab them from the SRPMS/ directories on the servers. Not much interest in the disk images, since after a few weeks there are dozens if not hundreds of updates.
Yeah, that's why I am of the general mindset that the SRPMS disks are kind of useless for the most part. They are perhaps useful for archiving the exact source of the binary disks, in case someone really needs it down the line and wants to ensure they have a local copy. But for most practical purposes, IMHO at least, the SRPMS dir on the ftp servers is much more useful. ;)
- How many people actually really use them for anything?
I keep SRPMS for several reasons:
I keep SRPMS for many reasons too, but I get them from our internal server (porkchop), or our CVS server, or from our ftp servers. I just wondered what reasons people would have to download the SRPMS DVD ISO image and burn it to a disc. SRPMS are useful, no question, but how useful is a set of outdated SRPMS in a week or two? ;o)
I just wondered what reasons people would have to download the SRPMS DVD ISO image and burn it to a disc. SRPMS are useful, no question, but how useful is a set of outdated SRPMS in a week or two? ;o)
There is a non-ignorable set of installations that never yum or up2date. They always run the original official release and nothing else. They have very good external firewalls; some are even disconnected from all external networks. The users are either all experts, or all newbies. The budget for maintenance is zero. The disaster recovery plan for system software (as opposed to application data) is: re-install the last official release from the gold[en] CDs. This is guaranteed to be a perfect recovery, with no possibility of any changes. If you are a consultant to such a system, then the SRPMs DVD never becomes outdated. [Some of these systems do run RHEL named update rollups, such as Taroon Update 5. But it is installed fresh from the full CD .iso images, not as an upgrade to any earlier release.]
On Fri, 2006-02-03 at 05:04 -0500, Mike A. Harris wrote:
So.... What is this unified across all platforms srpm repo you speak of, and how does it differ from what we've been doing internally for about 4 years? ;o)
That was my blog. What we have now: once the packages are all built and we want to make an installable tree set, we build one binary package set and one source package set for each arch. As you say, the srpms are common across all the platforms, but there are some packages that only get included on one arch or another. This causes the SRPM ISOs to not be exactly the same, so they can't be hardlinked together. To save space and time, I am modifying our tree-building software to create ONE master SRPM package/CD set from all the srpms used across all the arches we ship (in Fedora that's i386, x86_64, and ppc). So instead of each arch having an SRPMS/ dir and an SRPM CD set, there will be one top-level sources/ dir that has a global SRPMS/ dir and an iso dir with ISOs made from the unified SRPMS.
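Conceptually the unification step is something like the sketch below - the paths and arch list are illustrative, not the real compose code:

    # Sketch: gather every src.rpm from the per-arch trees into one
    # shared sources/SRPMS/ dir, hardlinking so each file is stored
    # only once. Paths here are made-up examples.
    import os

    ARCHES = ["i386", "x86_64", "ppc"]
    TOP = "/srv/tree"

    unified = os.path.join(TOP, "sources", "SRPMS")
    os.makedirs(unified, exist_ok=True)
    for arch in ARCHES:
        srcdir = os.path.join(TOP, arch, "SRPMS")
        for name in os.listdir(srcdir):
            dest = os.path.join(unified, name)
            if not os.path.exists(dest):
                os.link(os.path.join(srcdir, name), dest)  # same inode, no extra space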
Jesse Keating wrote:
To save space and time, I am modifying our tree-building software to create ONE master SRPM package/CD set from all the srpms used across all the arches we ship.
This is great news, and will make the mirror admins happy, especially since last I checked there were only two or three packages differing between the i386 and x86_64 source CDs... and I doubt there are more than another handful for ppc :-)
Matthias