Extras build system back up
by Dan Williams
Hi,
We're back up and using the latest plague code. yum update your
plague-client and you should be good to go.
Dan
16 years, 7 months
manpage patch.
by Michael J. Knox
Patch adds missing options to the manpage:
--autocache Turn on build-root caching
--rebuildcache Force rebuild of build-root cache
Michael
--- mock-0.6/docs/mock.1 2005-08-24 09:11:52.000000000 +1200
+++ mock-0.6/docs/mock.1.mjk 2006-07-27 15:08:01.000000000 +1200
@@ -33,6 +33,12 @@
\fB\-\-version\fR
Show version number and exit.
.TP
+\fB\-\-autocache\fR
+Turn on build-root caching.
+.TP
+\fB\-\-rebuildcache\fR
+Force rebuild of build-root cache.
+.TP
\fBcommand\fR is one of:
.TP
\fBinit\fR \- initialize a chroot (install packages, setup devices, etc.)
16 years, 10 months
PATCH: niagara support for plague
by Dennis Gilmore
Hey all
The attached patch adds support for Sun's Niagara arch to plague.
--
Dennis Gilmore, RHCE
Proud Australian
16 years, 10 months
HELP: trouble building packages for optional_arches=i686 *after* upgrading to Plague-0.5.0 (from plague-0.4.3)
by Joe Todaro
Hi,
Since upgrading our plague server/builder (Opteron x86_64) a couple of
weeks ago from plague-0.4.3 to *plague-0.5.0*, I'm *no longer* able to
build 'i686' packages (i.e. optional_arches=i686). The problem happens
only with i686 (optional_arches=i686) -- not with i386 (base_arches=i386),
which continues to work flawlessly.
PLAGUE-0.4.3 / PLAGUE 0.5.0 ARCHES CONFIGURATION:
Here's what the Arches section looks like in all the
/etc/plague/server/targets/*.cfg files:
[Arches]
base_arches=i386
optional_arches=i686 noarch
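For reference, here's a minimal sketch of how a ConfigParser-style reader would split this section into arch lists -- plague's actual parsing code may differ, so treat this as an illustration of the expected config shape only:

```python
# Illustrative only -- plague's real config handling may differ.
import configparser

cfg_text = """
[Arches]
base_arches=i386
optional_arches=i686 noarch
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

# Whitespace-separated lists, as in the target .cfg files above.
base_arches = parser.get("Arches", "base_arches").split()
optional_arches = parser.get("Arches", "optional_arches").split()

print(base_arches)      # ['i386']
print(optional_arches)  # ['i686', 'noarch']
```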
PLAGUE-0.4.3 SERVER LOG:
Here's an example of how things used to work (snipped from
/var/log/plague-server.log) whenever an i686 package-build request was
submitted to our PLAGUE-0.4.3 server:
Request to enqueue 'e1000' tag
'/afs/pok/projects/devel/SRPMS/e1000/e1000-7.0.38-1_rhel4.src.rpm' for
target 'ao100' (user 'jstodaro(a)abc.com')
503 (e1000): Starting tag
'/afs/pok/projects/devel/SRPMS/e1000/e1000-7.0.38-1_rhel4.src.rpm' on
target 'lnxaddons-100-install'
503 (e1000): Requesting depsolve...
503 (e1000): Starting depsolve for arches: ['i686'].
503 (e1000): Finished depsolve (successful), requesting archjobs.
503 (e1000/i686): https://lnxbuild1.abc.com:8888 - UID is
d90078ec928db631ae8f590e6d5491d514cfe4a8
503 (e1000/i686): Build result files - [ 'mockconfig.log', 'build.log',
'root.log', 'kernel-module-e1000-7.0.38-1.6.9_34.EL_2_rhel4.i686.rpm',
'job.log', 'e1000-7.0.38-1_rhel4.src.rpm',
'kernel-smp-module-e1000-7.0.38-1.6.9_34.EL_2_rhel4.i686.rpm' ]
Repo 'lnxaddons-100-install': updating repository metadata...
503 (e1000): Job finished.
PLAGUE-0.5.0 SERVER LOG:
But here's what happens now (snipped from /var/log/plague-server.log)
whenever the same i686 package-build request gets submitted to our
"upgraded" plague server/builder running PLAGUE-0.5.0: absolutely
nothing!
Request to enqueue 'e1000' tag
'/afs/pok/projects/devel/SRPMS/e1000/e1000-7.0.38-1_rhel4.src.rpm' for
target 'ao100' (user 'jstodaro(a)abc.com')
508 (e1000): Starting tag
'/afs/pok/projects/devel/SRPMS/e1000/e1000-7.0.38-1_rhel4.src.rpm' on
target 'lnxaddons-100-install'
508 (e1000): Job finished.
OBSERVATIONS:
o The "last" function executed in the 'PackageJob.py' module (before it
returned to 'BuildMaster.py') was 'arch_handling(self, ba, exclusive,
exclude)'.
o Adding the following section to /etc/plague/server/targets/*.cfg
(including server/builder restart, request resubmit) did *not* help
'PackageJob.py' to progress any further than the 'arch_handling(self, ba,
exclusive, exclude)' function.
[Additional Package Arches]
kernel=i686
o Moving 'i686' from the 'optional_arches' line up to the 'base_arches'
line (including server/builder restart, request resubmit) *did* in fact
cause 'i686' to be recognized by 'PackageJob.py' (but only as a "base
arch" -- not as an "optional arch" like we need it to be).
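For illustration, here's a hypothetical sketch of the decision 'arch_handling()' appears to be making -- the function name comes from the observations above, but the body below is my guess at the intended behavior, not plague's actual code:

```python
# Hypothetical reconstruction -- NOT plague's actual code -- of the kind of
# choice arch_handling() has to make: base arches are always built, while an
# optional arch is built only when the package explicitly requests it (e.g.
# via BuildArch or an [Additional Package Arches] entry).
def pick_arches(base_arches, optional_arches, pkg_arches):
    """Return the sorted list of arches to queue for this package."""
    build = set(base_arches)
    for arch in pkg_arches:           # arches the package itself asks for
        if arch in optional_arches:   # optional arches need an explicit request
            build.add(arch)
    return sorted(build)

print(pick_arches(['i386'], ['i686', 'noarch'], ['i686']))
# -> ['i386', 'i686']
```

Under this model, the 0.5.0 symptom looks like the optional-arch branch never firing, so only the base arches survive.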
MY QUESTIONS:
1. Why is the *optional_arches* tag -- in the [Arches] section of our
/etc/plague/server/*.cfg files -- *no longer* recognized *after* upgrading
to plague-0.5.0?
2. What are some things I can try that might help resolve the above (i.e.
get *plague-0.5.0* to recognize 'i686' as an *optional arch*)?
Any help would be much appreciated -- I have run out of ideas and things
to try... ;-(
Thanks,
--Joe
16 years, 10 months
proposed mock changes (diff)
by Clark Williams
Hello all,
I was poking around in the mock source last week and did some minor
refactoring, a couple of name-changes and tried out the rpmlint
request. Attached below is a CVS diff of my mock.py against CVS HEAD.
Please review and comment. A quick summary of the changes:
1. Changed version to 0.7.
2. Added code to avoid exec'ing mount for proc, sys, and dev/pts if
we've already done it.
3. Oh yeah, added /sys to the chroot mounts.
4. Refactoring: renamed _mount to _mountall, and created a _mount routine
that is called by _mountall.
5. Renamed _umount_by_file to _umountall.
6. Added code to run rpmlint.
7. Added elevate/drop around the raw chroot command.
I'd especially like some thoughts on #7, since any time you elevate and
drop privileges you can introduce a security hole, and I freely admit that
I'm not always thinking security first.
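To make items 2 and 4 concrete, here's a rough sketch of the mount-once guard -- the class name, the injected command runner, and the structure are illustrative, not the actual mock code:

```python
# Sketch of the "don't mount twice" idea from items 2 and 4; names and
# structure are illustrative, not mock's real implementation.
class ChrootMounts:
    def __init__(self, run_cmd):
        self.run_cmd = run_cmd    # callable that execs the mount command
        self.mounted = set()      # paths we've already mounted

    def _mount(self, fstype, device, path):
        """Mount one filesystem unless we've already done it."""
        if path in self.mounted:
            return
        self.run_cmd(['mount', '-t', fstype, device, path])
        self.mounted.add(path)

    def _mountall(self, root):
        """Mount proc, sys, and dev/pts under the chroot root."""
        for fstype, device, sub in [('proc', 'proc', 'proc'),
                                    ('sysfs', 'sysfs', 'sys'),
                                    ('devpts', 'devpts', 'dev/pts')]:
            self._mount(fstype, device, '%s/%s' % (root, sub))

calls = []
mounts = ChrootMounts(calls.append)
mounts._mountall('/var/lib/mock/root')
mounts._mountall('/var/lib/mock/root')   # second call is a no-op
print(len(calls))                        # 3, not 6
```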
If I don't get any push-back (or if I do and then get things
resolved), I'll commit these later this week.
Clark
16 years, 11 months
Running rpmlint within mock
by Jason L Tibbitts III
Christian Iseli and I were discussing the possibility of automatically
running rpmlint somehow. It seems that the end of the mock build
process is the ideal place for this. It has a chroot already set up
with the package's build requirements already installed. (Obviously
this doesn't include the runtime requirements, but generally there's
considerable overlap.) It also has easy access to the freshly built
binary and source RPMs.
How difficult would it be to, at the end of the build process, install
the freshly built package, install rpmlint, and run rpmlint on the
source and any binary RPMs that were built?
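As a starting point, here's a rough sketch of the command sequence that step might run at the end of a build -- the paths, the helper name, and the use of yum to pull in rpmlint are all assumptions for illustration, not mock's real API:

```python
# Rough sketch of the sequence described above; paths and the helper name
# are hypothetical, not part of mock.
def rpmlint_commands(chroot, built_rpms, src_rpm):
    """Return the chroot commands to install and lint the fresh RPMs."""
    cmds = []
    # install the freshly built binary packages into the existing chroot
    cmds.append(['chroot', chroot, 'rpm', '-Uvh'] + built_rpms)
    # pull in rpmlint itself (the build root won't normally have it)
    cmds.append(['chroot', chroot, 'yum', '-y', 'install', 'rpmlint'])
    # lint the source RPM and every binary RPM that was built
    cmds.append(['chroot', chroot, 'rpmlint', src_rpm] + built_rpms)
    return cmds

for cmd in rpmlint_commands('/var/lib/mock/root',
                            ['/builddir/e1000-7.0.38-1.i686.rpm'],
                            '/builddir/e1000-7.0.38-1.src.rpm'):
    print(' '.join(cmd))
```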
My Python is poor, but I'm willing to take a stab at it if someone
could give me a few pointers.
- J<
16 years, 11 months