make iso failed
by Vijay N. Majagaonkar
Hi all,
I have updated my system to f14 and now I am not able to build an ISO out
of koji.
mock : mock-1.1.6-1.fc14.noarch
python : python-2.7-8.fc14.1.x86_64
koji : koji-1.4.0-4.fc14.noarch
[Error]
Building Installation Images: ########################################
100.0%
DEBUG util.py:267: Going to replace isolinux/isolinux.cfg with
/etc/revisor/conf.d/revisor-proxyone-isolinux.cfg
DEBUG util.py:267: Deleted the old isolinux.cfg
DEBUG util.py:267: Inserted the new isolinux.cfg
DEBUG util.py:267: Removing files and directories that do not need to be on
the media:
DEBUG util.py:267: Traceback (most recent call last):
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/__init__.py", line 441, in run
DEBUG util.py:267: self.base.run()
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/base.py", line 106, in run
DEBUG util.py:267: self.cli.run()
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/cli.py", line 44, in run
DEBUG util.py:267: self.base.lift_off()
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/base.py", line 1172, in lift_off
DEBUG util.py:267: self.buildInstallationMedia()
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/base.py", line 1487, in
buildInstallationMedia
DEBUG util.py:267: self.plugins.exec_hook('post_exec_buildinstall')
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/plugins.py", line 191, in
exec_hook
DEBUG util.py:267: exec("self.%s.%s()" % (plugin,hook))
DEBUG util.py:267: File "<string>", line 1, in ?
DEBUG util.py:267: File
"/usr/lib/python2.4/site-packages/revisor/modremove/__init__.py", line 61,
in post_exec_buildinstall
DEBUG util.py:267: for file in self.cfg.rm:
DEBUG util.py:267: TypeError: iteration over non-sequence
[/Error]
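The bottom of the traceback is the real clue: revisor's modremove plugin does `for file in self.cfg.rm:` and gets "iteration over non-sequence", which suggests `cfg.rm` is not a list under the new environment, most likely `None` when the option is absent. A minimal reproduction (names illustrative, not revisor's actual config code):

```python
# Reproduce the failure mode: iterating a config value that is None
# (what cfg.rm apparently holds here) raises TypeError.
rm = None  # stand-in for self.cfg.rm when the option is unset

try:
    for path in rm:
        print("would remove", path)
except TypeError as e:
    # py2.4 phrases this "iteration over non-sequence",
    # newer Pythons say "'NoneType' object is not iterable"
    print("TypeError:", e)

# A defensive guard in the plugin would avoid the crash entirely:
for path in (rm or []):
    print("would remove", path)
```

Whether the right fix is this guard or making the config loader always return a list is a revisor question, but the guard shows where the crash comes from.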
Is this something to do with the Python version? If I downgrade to the
previous release (f12) it works correctly. Do I need to patch these
packages to make them work under f14?
Thanks for your help
V!jay
12 years, 1 month
koji using older packages in buildroots when build exists and external repos are used
by Jon Stanley
So I got koji all installed, building stuff, happy days! (thanks
Dennis for all the help via IRC). Now I have a problem. The goal of
the installation is to be able to track and build various binary
content, not take over our entire repo management functionality. I
need to support multiple minor RHEL releases, so I set up two
independent tags with RHEL 5.3 and 5.5, each with their own external
repo with the right content. I then had a database problem when I
tried to build 5.3 content, as I was trying to put another copy of the
'filesystem' package into the rpminfo table with the same NEVR that
was there in 5.5.
So I added tag inheritance, with 5.5 as the parent and 5.3 as the
child. This got around the DB issue fine. Then I built glibc for 5.3
in order to backport a data corruption bugfix (thankfully very rare,
but we've seen it) which was fixed in 5.5. Imagine my surprise when I
then went to do a 5.5 build, and my custom glibc (which is older than
the one in 5.5 - it has the same release as the base 5.3 one and .xyz1
added after it) got used in the buildroot rather than the one from the
external repo with a newer NEVRA!
Dennis tells me that this is expected behavior, tagged builds take
precedence over external repos. For the "distro buildsystem" use case,
that makes sense. But for an enterprise use case, where basically what
we want is to track builds and what was in buildroots, and to ensure
reproducibility of builds, using an "older" package in a buildroot
when a newer one is available somewhere doesn't seem to make much
sense.
Thoughts?
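The two selection policies at issue can be sketched side by side. This is a toy model, not koji code: versions are simplified to integer tuples instead of real RPM EVR comparison, and the data mirrors the glibc situation described above.

```python
# Sketch of buildroot package selection under two policies (toy model).

def newest_wins(candidates):
    # "external repo" intuition: the highest version wins, full stop
    return max(candidates, key=lambda p: p["evr"])

def tagged_wins(candidates):
    # koji's behavior as described: any tagged build shadows external
    # repos, regardless of which side has the newer version
    tagged = [p for p in candidates if p["tagged"]]
    if tagged:
        return max(tagged, key=lambda p: p["evr"])
    return newest_wins(candidates)

glibc = [
    {"name": "glibc", "evr": (2, 5, 49),    "tagged": False},  # 5.5 external repo
    {"name": "glibc", "evr": (2, 5, 42, 1), "tagged": True},   # custom 5.3 backport
]

print(newest_wins(glibc)["evr"])  # (2, 5, 49)    -- the newer external package
print(tagged_wins(glibc)["evr"])  # (2, 5, 42, 1) -- the older tagged build
```

The second result is exactly the surprise reported above: the tagged 5.3 backport lands in the 5.5 buildroot even though the external repo carries a newer NEVRA.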
12 years, 1 month
koji build help
by Jinze Xue
hello experts!
I added 2000 build tasks to my koji build system with --nowait.
I have 4 builder hosts and I set sleeptime=300, maxjobs=50,
but tasks are assigned to the builders too fast; why is it not every 300
seconds? New tasks keep getting assigned to builders while the child tasks
of my old builds can't be assigned (one build task generally has 3 child
tasks).
How do I solve this problem?
thanks
Can I set maxjobs to 500?
How many jobs can a typical machine bear?
I think running parent tasks and their child tasks in parallel is not
smart; will there be any changes in the future?
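For reference, both knobs mentioned above live in the builder's config. As I understand the kojid options, sleeptime is how long the daemon sleeps between checks against the hub when idle, not a per-task delay, so a builder that wakes up will take as many queued tasks as maxjobs allows in one go; values below are illustrative:

```ini
# /etc/kojid/kojid.conf (illustrative values)
[kojid]
; seconds kojid sleeps between polls of the hub for new tasks
sleeptime=300
; cap on concurrent tasks this builder will run; parent tasks that are
; merely waiting on children still count against this limit
maxjobs=50
```

That parent tasks occupy a job slot while waiting on their children is likely why the child tasks starve when maxjobs fills up with parents.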
12 years, 1 month
[PATCH 1/3] Install build deps with yum-builddep.
by Ville Skyttä
No longer need to screen-scrape resolvedep and feed that to yum
install, and we have a chance to get BuildConflicts handling "for free"
(when RHBZ #614191 is done in yum(-builddep)).
---
mock.spec.in | 2 +-
py/mock/backend.py | 33 ++++++++++++++++++++++-----------
2 files changed, 23 insertions(+), 12 deletions(-)
diff --git a/mock.spec.in b/mock.spec.in
index e2580ca..0949975 100644
--- a/mock.spec.in
+++ b/mock.spec.in
@@ -18,7 +18,7 @@ Source: https://fedorahosted.org/mock/attachment/wiki/MockTarballs/%{name}-%{ver
URL: http://fedoraproject.org/wiki/Projects/Mock
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildArch: noarch
-Requires: python >= 2.4, yum >= 2.4, tar, pigz, python-ctypes, python-decoratortools, usermode
+Requires: python >= 2.4, yum >= 2.4, yum-utils >= 1.1.9, tar, pigz, python-ctypes, python-decoratortools, usermode
Requires: createrepo
Requires(pre): shadow-utils
BuildRequires: python-devel
diff --git a/py/mock/backend.py b/py/mock/backend.py
index ba0ad14..81a98d0 100644
--- a/py/mock/backend.py
+++ b/py/mock/backend.py
@@ -69,6 +69,7 @@ class Root(object):
self.chroot_file_contents = config['files']
self.chroot_setup_cmd = config['chroot_setup_cmd']
self.yum_path = '/usr/bin/yum'
+ self.yum_builddep_path = '/usr/bin/yum-builddep'
self.macros = config['macros']
self.more_buildreqs = config['more_buildreqs']
self.cache_topdir = config['cache_topdir']
@@ -444,23 +445,28 @@ class Root(object):
"""figure out deps from srpm. call yum to install them"""
try:
self.uidManager.becomeUser(0, 0)
+
+ def _yum_and_check(cmd):
+ output = self._yum(cmd, returnOutput=1)
+ for line in output.split('\n'):
+ if line.lower().find('No Package found for'.lower()) != -1:
+ raise mock.exception.BuildError, "Bad build req: %s. Exiting." % line
+
+ # first, install pre-existing deps and configured additional ones
arg_string = self.preExistingDeps
for hdr in mock.util.yieldSrpmHeaders(srpms, plainRpmOk=1):
# get text buildreqs
- a = mock.util.requiresTextFromHdr(hdr)
- b = mock.util.getAddtlReqs(hdr, self.more_buildreqs)
- for item in mock.util.uniqReqs(a, b):
+ for item in mock.util.getAddtlReqs(hdr, self.more_buildreqs):
arg_string = arg_string + " '%s'" % item
-
- # everything exists, okay, install them all.
- # pass build reqs (as strings) to installer
if arg_string != "":
- output = self._yum('resolvedep %s' % arg_string, returnOutput=1)
- for line in output.split('\n'):
- if line.lower().find('No Package found for'.lower()) != -1:
- raise mock.exception.BuildError, "Bad build req: %s. Exiting." % line
+ # everything exists, okay, install them all.
+ # pass build reqs (as strings) to installer
+ _yum_and_check('resolvedep %s' % arg_string)
# nothing made us exit, so we continue
self._yum('install %s' % arg_string, returnOutput=1)
+
+ # install actual build dependencies
+ _yum_and_check("builddep '%s'" % "' '".join(srpms))
finally:
self.uidManager.restorePrivs()
@@ -676,7 +682,12 @@ class Root(object):
if not self.online:
cmdOpts = "-C"
- cmd = '%s --installroot %s %s %s' % (self.yum_path, self.makeChrootPath(), cmdOpts, cmd)
+ # invoke yum-builddep instead of yum if cmd is builddep
+ exepath = self.yum_path
+ if cmd.startswith("builddep "):
+ exepath = self.yum_builddep_path
+ cmd = cmd[len("builddep "):]
+ cmd = '%s --installroot %s %s %s' % (exepath, self.makeChrootPath(), cmdOpts, cmd)
self.root_log.debug(cmd)
output = ""
try:
--
1.7.2.3
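Outside the diff, the `_yum_and_check` helper the patch factors out is easy to sketch standalone. `BuildError` stands in for `mock.exception.BuildError` here so the snippet runs on its own:

```python
# Standalone sketch of the error scan mock applies to resolvedep output:
# yum's resolvedep exits 0 even for unknown requirements, so mock has to
# screen-scrape the output for the failure marker.
class BuildError(Exception):
    pass

def check_resolve_output(output):
    for line in output.split('\n'):
        if 'no package found for' in line.lower():
            raise BuildError("Bad build req: %s. Exiting." % line)

check_resolve_output("Dependencies resolved")          # fine, no exception
# check_resolve_output("No Package found for foo")     # raises BuildError
```

Delegating dependency installation to yum-builddep, as the patch does, is what makes most of this scraping unnecessary.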
12 years, 2 months
Users email (just a nit)
by Jon Stanley
I've successfully setup Koji with Kerberos authentication. The issue
with this is that for notifications, it seems to expect usernames to
be valid email addresses - our krb5 principals have nothing to do with
any email address. There should be a way to specify what email to use
(or a mapping of usernames to email). I guess I could setup postfix
locally with an aliases file, but that seems like an ugly hack :(
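For what it's worth, the aliases stopgap mentioned above would be only a few lines; entries are hypothetical and a real deployment would want the mapping in koji itself:

```
# /etc/aliases fragment: map krb5-principal-derived usernames to real
# mailboxes (entries hypothetical); run "newaliases" after editing
jstanley:    jon.stanley@example.com
builduser:   koji-admins@example.com
```

Ugly, as noted, but it keeps notifications flowing until a proper username-to-email mapping exists.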
12 years, 2 months
How do folks use an SCM?
by Jon Stanley
So I have an interesting situation. I have an SCM (CVS, don't laugh)
that requires kerberized gserver authentication. How do I use Koji
with this? I don't mind embedding a password for a user that has
read-only access to the repo somewhere, but I really don't want to if
I can avoid it.
Also, with the interesting requirement of a Makefile with target srpm,
how do folks generate that for externally developed packages? Frankly,
most of the packages that we're going to build are rebuilds of RHEL
content with minor changes (sometimes a patch, sometimes just pathname
changes, etc), so generating an SRPM and feeding it directly to koji
is easier than maintaining some SCM layout that's foreign to us and a
lookaside cache. Note that the reason we want to use koji is build
reproducibility, but we'll be saving the SRPM's used in some location.
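On the Makefile-with-an-srpm-target requirement: a minimal sketch of what such a Makefile can look like follows. The package name and spec path are hypothetical, and this is one common shape rather than a koji-blessed template:

```make
# Minimal sketch: koji's SCM build checks out this directory and runs
# "make srpm"; the target just needs to leave a .src.rpm behind.
NAME := mypkg
SPEC := $(NAME).spec

srpm:
	rpmbuild -bs --define "_sourcedir $(CURDIR)" \
	             --define "_srcrpmdir $(CURDIR)" $(SPEC)
```

For rebuilds of vendor SRPMs with small local patches, the checkout would hold the spec, the patches, and the pristine sources (or a fetch step for them).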
12 years, 2 months
for help
by Jinze Xue
I want to move all the data from an old koji build system to a new OS;
what's the best way to do it?
Also, does anyone have documents on the configuration and steps to set up
a CVS system for a koji build system? Please send them to me, thanks very
much!
12 years, 2 months
[PATCH] Don't add --setopt=tsflags=nocontexts to all commands
by Paul Howarth
The SELinux plugin adds a hook that adds a "--setopt=tsflags=nocontexts"
option to every command routed through mock.util.do. This doesn't just
include "yum" commands, as can be seen for instance if a build fails in
the "setup" phase, where mock tries to unmount all mounted filesystems
with a umount command with the bogus option added to each invocation.
You can see this for yourself if you try building a package that pulls
in a build requirement that uses file capabilities and have the tmpfs
plugin enabled; rpm/cpio cannot apply the capability on tmpfs and so the
build bails out. I use "spamass-milter" in Rawhide as a nice, small
package that demonstrates this effect.
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/dev/shm
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/dev/pts
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n
/var/lib/mock/city-fan-rawhide-x86_64/root/proc/filesystems
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/tmp/ccache
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/var/cache/yum
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/sys
--setopt=tsflags=nocontexts
WARNING: Command failed. See logs for output.
# umount -n /var/lib/mock/city-fan-rawhide-x86_64/root/proc
--setopt=tsflags=nocontexts
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/dev/shm' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/dev/pts' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/proc/filesystems' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/tmp/ccache' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/var/cache/yum' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/sys' from chroot.
WARNING: Forcibly unmounting
'/var/lib/mock/city-fan-rawhide-x86_64/root/proc' from chroot.
The attached patch makes the plugin only apply the extra option when the
command being run is yum. Works for me, though it uses "startswith" and
so won't work on python 2.4. I'm sure a native python speaker could
write it in a more portable way.
Paul.
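The fix described above can be sketched outside the plugin. The helper name is illustrative, not mock's actual hook, and the yum path mirrors the one mock configures:

```python
YUM_PATH = "/usr/bin/yum"

def selinux_filter(cmd):
    # Append the tsflags override only to yum invocations, leaving
    # umount and every other command routed through mock.util.do
    # untouched (sketch of the patch's intent).
    if cmd.startswith(YUM_PATH):
        return cmd + " --setopt=tsflags=nocontexts"
    return cmd

print(selinux_filter("/usr/bin/yum install foo"))
print(selinux_filter("umount -n /var/lib/mock/root/proc"))  # unchanged
```

This is exactly the behavior the log excerpt shows is missing: the bogus option appended to every umount invocation.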
12 years, 2 months
for help
by Jinze Xue
hello,
when I build rpms from a CVS SCM, the builds fail with errors like these:
DEBUG backend.py:682: /usr/bin/yum --installroot
/var/lib/mock/dist-ky3.2-build-63-18/root/ groupinstall srpm-build
DEBUG util.py:301: Executing command: /usr/bin/yum --installroot
/var/lib/mock/dist-ky3.2-build-63-18/root/ groupinstall srpm-build
DEBUG util.py:267:
file:///mnt/koji/repos/dist-ky3.2-build/18/i386/repodata/repomd.xml: [Errno
14] Could not open/read
file:///mnt/koji/repos/dist-ky3.2-build/18/i386/repodata/repomd.xml
DEBUG util.py:267: Trying other mirror.
DEBUG util.py:267: Error: Cannot retrieve repository metadata (repomd.xml)
for repository: build. Please verify its path and try again
with one server and one client, the error sometimes looks like this:
$ cvs -d : pserver:anonynous@172.19.0.30:/cvs checkout -r
avalon-logkit-1_2.......
Fatal error, aborting.
anonymous: no such user
My CVS client config is right; is there something wrong with my CVS
server?
Any help is good, thanks everyone. Still waiting...
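One thing worth noting about the quoted cvs command: as transcribed it has a space after "-d :" and spells the user "anonynous". If those are not just transcription artifacts, cvs would receive a bare ":" as its CVSROOT and the pserver would indeed reject the misspelled user, which matches the "no such user" error. A quick illustration of how the shell splits the two forms:

```python
import shlex

# With the stray space, ":" becomes its own argument (the CVSROOT cvs sees);
# without it, the full pserver spec stays as one argument.
bad  = shlex.split("cvs -d : pserver:anonynous@172.19.0.30:/cvs checkout mod")
good = shlex.split("cvs -d :pserver:anonymous@172.19.0.30:/cvs checkout mod")

print(bad[2])   # ":"
print(good[2])  # ":pserver:anonymous@172.19.0.30:/cvs"
```

If koji's CVS checkout is generating the command, the SCM URL configured for the tag is the place to check for the same two slips.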
12 years, 2 months