On Thu, 2011-01-13 at 16:12 -0500, Will Woods wrote:
This adds the new 'depcheck' test, which checks new updates to ensure
their dependencies are all met, and sends feedback to Bodhi if so.
Drumroll and applause ... the much anticipated depcheck test! It's
certainly a complicated test </understatement>. I'll try to look more
at the patchset itself later. For now, I just played around with
running some jobs through autoqa using the 'depcheck' branch. I ran
into a few tracebacks; I've made some adjustments and noted them below.
---
tests/depcheck/control | 33 ++
tests/depcheck/control.autoqa | 32 ++
tests/depcheck/depcheck | 811 +++++++++++++++++++++++++++++++++++++++++
tests/depcheck/depcheck.py | 175 +++++++++
4 files changed, 1051 insertions(+), 0 deletions(-)
create mode 100644 tests/depcheck/control
create mode 100644 tests/depcheck/control.autoqa
create mode 100755 tests/depcheck/depcheck
create mode 100644 tests/depcheck/depcheck.py
diff --git a/tests/depcheck/control b/tests/depcheck/control
new file mode 100644
index 0000000..63b1ae5
--- /dev/null
+++ b/tests/depcheck/control
@@ -0,0 +1,33 @@
+# vim: set syntax=python
+# Notice: Most recent documentation is available at doc/control.template.
+# (It's recommended to discard the documentation below when using this
+# file as a template so it does not get outdated in your file over time.)
+
+# The control file defines the metadata for this test - who wrote it, what
+# kind of a test it is. It also executes the test object itself.
+
+## Autotest metadata ##
+# The following variables are used by autotest. The first three are important
+# for us; the rest are less important but still required.
+NAME = 'depcheck'
+AUTHOR = "Will Woods <wwoods(a)redhat.com>"
+DOC = """
+This test checks to see if the given package(s) would cause broken dependencies
+if pushed to the live repos.
+"""
+TIME="SHORT"
+TEST_TYPE = 'CLIENT' # SERVER can be used for tests that need multiple machines
+TEST_CLASS = 'General'
+TEST_CATEGORY = 'Functional'
+
+## Job scheduling ##
+# Execute the test object here. In this example this will execute the
+# testclassname.py file with the specified arguments. This file will receive
+# the following variables:
+# * autoqa_conf: string containing autoqa config file
+# * autoqa_args: dictionary containing all variables provided by hook (read
+# more at hooks/<hook>/README) and some additional ones:
+# - hook: name of the executing hook
+# You should pass all necessary variables for your test as method arguments.
+# You can also conveniently explode the dictionary into a list of arguments.
+job.run_test('depcheck', config=autoqa_conf, **autoqa_args)
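(A side note while we're in the control file: the **autoqa_args on the last
line is ordinary Python keyword-argument unpacking, nothing autoqa-specific.
A minimal sketch - run_test and the dictionary contents here are hypothetical
stand-ins, not the real autotest API:)

```python
# Toy stand-in for job.run_test, showing how the exploded dictionary
# arrives as keyword arguments. Not the real autotest interface.
def run_test(testname, config=None, **kwargs):
    # every key of the exploded dict shows up as a keyword argument
    return (testname, config, sorted(kwargs))

# hypothetical hook-provided variables
autoqa_args = {'hook': 'post-bodhi-update', 'envrs': ['foo-1.0-1.fc14']}
result = run_test('depcheck', config='[general]\n', **autoqa_args)
```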
diff --git a/tests/depcheck/control.autoqa b/tests/depcheck/control.autoqa
new file mode 100644
index 0000000..510978d
--- /dev/null
+++ b/tests/depcheck/control.autoqa
@@ -0,0 +1,32 @@
+# vim: set syntax=python
+# Notice: Most recent documentation is available at doc/control.autoqa.template.
+# (It's recommended to discard the documentation below when using this
+# file as a template so it does not get outdated in your file over time.)
+
+# The control.autoqa file allows a test to define its scheduling and also
+# enables it to define some requirements or alter input arguments for this
+# test. This file will decide whether to run this test at all, on what
+# architectures/distributions it would run, and so on.
+
+# This file will have the following variables pre-defined. They can be
+# re-defined according to the test's needs.
+# hook: name of the hook to run this test
+# archs: list of host architectures to run this test on; change this if
+# the whole list of archs is not necessary; you can use ['noarch']
+# if any single architecture will do
+# labels: a list of autotest labels needed for this test to run; empty list
+# by default
+# execute: whether to execute this test at all; True by default; change this
+# if you don't want to run this test under current conditions at all;
+# please note that the execution of this test may be forced, so even
+# though you don't want to run it, set up all other variables
+# correctly - don't stop at this one
+# autoqa_args: dictionary of all variables that the test itself will receive
+# (look at doc/control.template for the documentation); please
+# be aware that the keys you expect might not be present in the
+# dictionary when some other hook evaluates this file, so always
+# first check for their presence
+
+# we want to run this test just for the post-bodhi-update hook
+if hook not in ['post-bodhi-update']:
+ execute = False
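(While we're here: the "always first check for their presence" advice above
boils down to using dict.get rather than bare indexing. A small sketch, where
'envrs' is a made-up example key - real hooks document their keys in
hooks/<hook>/README:)

```python
# Sketch of the defensive-lookup pattern recommended above for
# control.autoqa files. The 'envrs' key is a made-up example; keys
# may be absent when a different hook evaluates the file.
def wants_execution(hook, autoqa_args):
    if hook not in ['post-bodhi-update']:
        return False
    # use .get() because the key may not exist for other hooks
    return bool(autoqa_args.get('envrs'))

ok = wants_execution('post-bodhi-update', {'envrs': ['foo-1.0-1']})
skipped = wants_execution('post-koji-build', {})
```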
diff --git a/tests/depcheck/depcheck b/tests/depcheck/depcheck
new file mode 100755
index 0000000..e69f008
--- /dev/null
+++ b/tests/depcheck/depcheck
@@ -0,0 +1,811 @@
+#!/usr/bin/python
+# depcheck - test whether the given packages would break the given repos
+#
+# Copyright 2010, Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Author: Will Woods <wwoods(a)redhat.com>
+
+# The overall strategy - make Yum assume that all the packages from the given
+# repo(s) are installed, and try to resolve deps for the new packages as if
+# they were being installed.
+#
+# As geppetto suggests, we use two YumBase objects. First, we set one up with
+# all the requested repos, then use its pkgSack (which contains package
+# objects for everything in the repos) to populate the rpmdb of the second
+# YumBase object. This makes it look like everything available in the repos
+# is installed on the system. We then attempt to apply the new packages as an
+# update to that system.
+#
+# But! NOTE! Using this as a simple pass/fail test will not suffice. We need to
+# return a (possibly empty) set of packages which *do* pass depchecking -
+# something like what yum --skip-broken does. Here's an example that shows why:
+# Assume we have some packages {A,B,C,D,...} where A requires B, C requires D,
+# and all other items are independent. If we use depcheck as a pass/fail test
+# and we test the following sets of packages - like we might test things as
+# they came out of the build system:
+# 1. {A} -> FAIL
+# 2. {A,C} -> FAIL
+# 3. {A,C,B} -> FAIL
+# 4. {A,C,B,E,F,G,...} -> FAIL
+# 5. {A,C,B,E,F,G,...,D} -> PASS
+# In step 4, even though the subset {A,B,E,F,G,...} would pass depcheck,
+# we reject the entire set because of the presence of C. No good!
+# Therefore we need to return lists of passing/failing packages instead:
+# 1. {A} -> {}
+# 2. {A,C} -> {}
+# 3. {A,C,B} -> {A,B}
+# 4. {A,C,B,E,F,G,...} -> {A,B,E,F,G,...}
+# 5. {A,C,B,E,F,G,...,D} -> {A,B,C,D,...}
+# So we're using _skipPackagesWithProblems to throw out the failing packages
+# and make a list of passing packages.
+
+import os
+import sys
+import yum
+import glob
+import tempfile
+import rpmUtils.arch as rpmarch
+import optparse
+import unittest
+from autoqa.repoinfo import repoinfo
+from yum.parser import varReplace
+import rpmfluff
+
+sys.path.append('/usr/share/yum-cli')
+from cli import YumBaseCli
+
+# TODO Should use stuff in (e.g.) rpmUtils.arch to generate this list.
+# TODO Support non-Intel arches, too.
+both_arches = ['i386', 'x86_64']
+
+class YumDepcheck(YumBaseCli):
+ def __init__(self):
+ YumBaseCli.__init__(self)
+ # disable all plugins
+ self.preconf.init_plugins = False
+ # hack to prevent YumDepcheck from setting up system repos
+ self.conf.config_file_path = self.conf.reposdir = '/var/empty'
+
+ # Copy and pasted (and modified) from YumBase - see
+ # /usr/lib/python*/site-packages/yum/__init__.py
+ # Modified to not traceback if there's no 'yumdb_info' attribute, since
+ # we're lying about the packages being installed (and thus having some
+ # data in the yumdb)
+ def _add_up_txmbr(self, requiringPo, upkg, ipkg):
+ txmbr = self.tsInfo.addUpdate(upkg, ipkg)
+ if requiringPo:
+ txmbr.setAsDep(requiringPo)
+ if hasattr(ipkg, 'yumdb_info') and \
+ ('reason' in ipkg.yumdb_info and ipkg.yumdb_info.reason == 'dep'):
+ txmbr.reason = 'dep'
+ return txmbr
+
+# === Unit tests begin here! Whee!
+# --- unittest helper functions
+
+def fake_package_update(po):
+ '''Given a package object, use rpmfluff to generate a plausible
+ update to that package. Returns a SimpleRpmBuild.'''
+ # TODO: args to filter req/prov/etc.
+ u = rpmfluff.SimpleRpmBuild(po.name, po.version, po.release+'.fakeupdate')
+ u.buildArchs=[po.arch]
+ for p in po.provides_print:
+ u.add_provides(p)
+ for r in po.requires_print:
+ u.add_requires(r)
+ for c in po.conflicts_print:
+ u.add_conflicts(c)
+ for o in po.obsoletes_print:
+ u.add_obsoletes(o)
+ for f in po.filelist:
+ u.add_installed_file(f, rpmfluff.SourceFile(os.path.basename(f),''))
+ for d in po.dirlist:
+ u.add_installed_directory(d)
+ u.make()
+ try:
+ rpm = u.get_built_rpm(po.arch)
+ sn = os.path.basename(rpm)
+ os.rename(rpm, sn)
+ finally:
+ u.clean()
+ return sn
+
+def add_local_repo(repoid, path, parents=[]):
+ '''Add a fake entry to repoinfo for a local repo'''
+ repoinfo.config.add_section(repoid)
+ repoinfo.config.set(repoid,'name',repoid)
+ repoinfo.config.set(repoid,'tag','fake-tag-%s' % repoid)
+ repoinfo.config.set(repoid,'url','file://%s' % path)
+ repoinfo.config.set(repoid,'path','')
+ repoinfo.config.set(repoid,'collection_name',repoid.upper())
+ repoinfo.config.set(repoid,'parents',", ".join(parents))
+
+def simple_rpmlist():
+ '''Create a set of simple RPM objects for testing'''
+ m = rpmfluff.SimpleRpmBuild('mint', '1.0', '1')
+ b = rpmfluff.SimpleRpmBuild('bourbon', '1.0', '1')
+ j = rpmfluff.SimpleRpmBuild('julep', '1.0', '1')
+ j.add_requires('bourbon = 1.0')
+ j.add_requires('mint')
+ return [m,b,j]
+
+def multilib_rpmlist(arches):
+ '''Create a multilib set of RPM objects'''
+ g = rpmfluff.SimpleRpmBuild('gin', '1.0', '1')
+ v = rpmfluff.SimpleRpmBuild('vermouth', '1.0', '1')
+ v.add_subpackage('dry')
+ v.add_subpackage('sweet')
+ mt = rpmfluff.SimpleRpmBuild('martini', '1.0', '1')
+ mt.add_requires('vermouth-dry = 1.0')
+ mt.add_requires('gin')
+ mt.section_install += 'mkdir -p "$RPM_BUILD_ROOT%{_libdir}"\n'
+ mt.section_install += 'echo "Fake $RPM_ARCH library" > $RPM_BUILD_ROOT/%{_libdir}/libmartini.so.1\n'
+ mt.basePackage.section_files += "%{_libdir}/libmartini.so.1\n"
+ # TODO multilib binary (valid duplicate file)
+ # multilib-ify
+ mt.buildArchs = arches
+ g.buildArchs = arches
+ v.buildArchs = arches
+ # add -devel subpackages, which mash handles specially
+ g.add_devel_subpackage() # proper -devel subpackage which requires the main one
+ v.add_subpackage('devel') # omits normal Requires: parent (for testing)
+ return [g,v,mt]
+
+def do_mash(pkgdir, arches, distname=None):
+ '''Set up a mash object with an ad-hoc config'''
+ import mash, mash.config, StringIO
+ from ConfigParser import RawConfigParser
+ if distname is None:
+ distname = os.path.basename(pkgdir)
+ conf = mash.config.readMainConfig('/etc/mash/mash.conf')
+ dist = mash.config.MashDistroConfig()
+ parser = RawConfigParser()
+ mash_conf = '''[%s]
+rpm_path = %s
+strict_keys = False
+multilib = True
+multilib_method = devel
+tag = this-is-a-dummy-tag
+inherit = False
+debuginfo = False
+delta = False
+source = False
+arches = %s
+keys =
+''' % (distname, pkgdir, arches)
+ parser.readfp(StringIO.StringIO(mash_conf))
+ dist.populate(parser, distname, conf)
+ conf.distros.append(dist)
+ dist.repodata_path = dist.rpm_path
+ dist.name = distname
+ themash = mash.Mash(dist)
+ # HACK - keep mash from logging excessively
+ themash.logger.handlers = [themash.logger.handlers[-1]]
+ # gotta do createrepo to get mash to Do The Right Thing
+ rpmfluff.run_command('createrepo %s' % pkgdir) # XXX error checking
+ rc = themash.doMultilib()
+ conf.distros.pop()
+ return rc
+
+class YumRepoBuildMultilib(rpmfluff.YumRepoBuild):
+ def make(self, archlist):
+ assert type(archlist) in (list, tuple, set), \
+ "YumRepoBuildMultilib.make() takes a list argument"
+ for pkg in self.rpmBuilds:
+ pkg.make()
+ for pkg in self.rpmBuilds:
+ for arch in archlist:
+ if (pkg.buildArchs is None and arch == rpmfluff.expectedArch) or \
+ (pkg.buildArchs and arch in pkg.buildArchs):
+ for n in pkg.get_subpackage_names():
+ rpmfluff.run_command('cp %s %s' % \
+ (pkg.get_built_rpm(arch, name=n), self.repoDir))
+ return do_mash(self.repoDir, " ".join(archlist))
+
+# --- Actual unittest testcases
+
+class DepcheckPrereqTestCase(unittest.TestCase):
+ '''A TestCase for the prerequisites for depcheck'''
+ def test_rpmfluff(self):
+ (n, v, r) = ('test-rpmfluff', '1.0', '1')
+ p = rpmfluff.SimpleRpmBuild(n, v, r)
+ p.clean()
+ p.make()
+ rpmsDir = p.get_rpms_dir()
+ self.assertTrue(os.path.isdir(rpmsDir))
+ arch = rpmfluff.expectedArch
+ rpmFile = os.path.join(rpmsDir, arch, "%s-%s-%s.%s.rpm"%(n,v,r,arch))
+ print rpmFile
+ self.assertTrue(os.path.isfile(rpmFile))
+ h = rpmfluff.get_rpm_header(rpmFile)
+ self.assertEqual(h['name'], n)
+ self.assertEqual(h['version'], v)
+ self.assertEqual(h['release'], r)
+ # clean up
+ p.clean()
+
+ def test_simple_yum_setup(self):
+ arch = rpmfluff.expectedArch
+ rpmlist = simple_rpmlist()
+ repo = YumRepoBuildMultilib(rpmlist)
+ repo.make([arch])
+ # falsify repoinfo
+ add_local_repo('simple_yum_setup', repo.repoDir)
+ y = set_up_yum_object('simple_yum_setup')
+ # Now see if we can get the RPM data back out through yum
+ for r in rpmlist:
+ plist = y.pkgSack.returnNewestByNameArch((r.name, arch))
+ self.assertTrue(plist)
+ for p in plist:
+ self.assertEqual(p.name, r.name)
+ self.assertEqual(p.version, r.version)
+ self.assertEqual(p.release, r.release)
+ if p.name == 'julep':
+ requires = p.returnPrco('requires')
+ self.assertTrue(('mint', None, (None, None, None)) in requires)
+ self.assertTrue(('bourbon', 'EQ', ('0', '1.0', None)) in requires)
+ # cleanup
+ repoinfo.config.remove_section('simple_yum_setup')
+ os.system('rm -rf %s' % repo.repoDir)
+
+ def test_mash(self):
+ '''Check to see if mash/multilib works as we expect it to'''
+ arch = rpmfluff.expectedArch
+ is_multilib = rpmarch.isMultiLibArch(arch)
+ rpmlist = multilib_rpmlist(both_arches)
+ repo = YumRepoBuildMultilib(rpmlist)
+ if is_multilib:
+ rc = repo.make(both_arches)
+ else:
+ rc = repo.make([arch])
+ self.assertEqual(rc, 0)
+ add_local_repo('test_mash', repo.repoDir)
+ y = set_up_yum_object('test_mash')
+ # We should now have a proper multilib repo.
+ # Testing this is tricky, since we're checking mash's notoriously
+ # hard-to-understand behavior - but we'll check a few simple
+ # things and assume it's doing what we want.
+ # (first, prepare the data we're going to test)
+ pkgarch = {}
+ for p in y.pkgSack.returnNewestByNameArch():
+ if p.name not in pkgarch:
+ pkgarch[p.name] = set()
+ pkgarch[p.name].add(p.arch)
+
+ mainarch = set([arch])
+ multilib = set(both_arches)
+ # 1) should have both arches for -devel packages and packages with libs
+ if is_multilib:
+ self.assertEqual(pkgarch['martini'], multilib)
+ self.assertEqual(pkgarch['gin-devel'], multilib)
+ self.assertEqual(pkgarch['vermouth-devel'], multilib)
+ else:
+ self.assertEqual(pkgarch['martini'], mainarch)
+ self.assertEqual(pkgarch['gin-devel'], mainarch)
+ self.assertEqual(pkgarch['vermouth-devel'], mainarch)
+ # 2) all other packages should be main-arch only
+ self.assertEqual(pkgarch['gin'], mainarch)
+ self.assertEqual(pkgarch['vermouth'], mainarch)
+ self.assertEqual(pkgarch['vermouth-dry'], mainarch)
+ self.assertEqual(pkgarch['vermouth-sweet'], mainarch)
+
+ # clean up
+ for r in rpmlist:
+ r.clean()
+ # no repo.clean, unfortunately
+ repoinfo.config.remove_section('test_mash')
+ os.system('rm -rf %s' % repo.repoDir)
+
+class DepcheckTestCase(unittest.TestCase):
+ def setUp(self):
+ self.arch = rpmfluff.expectedArch
+ self.rpmlist = simple_rpmlist()
+ self.mainrepo = YumRepoBuildMultilib(self.rpmlist)
+ self.mainrepo.make([self.arch])
+ self.mainrepo.id = 'mainrepo'
+ add_local_repo(self.mainrepo.id, self.mainrepo.repoDir)
+
+ def tearDown(self):
+ for rpm in self.rpmlist:
+ rpm.clean()
+ for repo in (self.mainrepo, ):
+ os.system('rm -rf %s' % repo.repoDir) # no repo.clean()
+ repoinfo.config.remove_section(repo.id)
+
+ def test_depcheck_empty_transaction(self):
+ '''Make sure we accept a consistent repo, and no updates at all'''
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, [])
+ self.assertEqual(len(problems), 1)
+ self.assertTrue(problems[0].startswith(u'Success')) # XXX subject to LANG?
+ self.assertEqual(rv, 0)
+
+ def test_depcheck_good_update(self):
+ '''Make sure depcheck accepts a good update as expected'''
+ (n, v, r) = ('mint', '2.0', '1') # Now twice as minty as mint 1.0!
+ p = rpmfluff.SimpleRpmBuild(n,v,r)
+ p.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, [p.get_built_rpm(self.arch)])
+ self.assertEqual(len(problems), 1)
+ self.assertTrue(problems[0].startswith(u'Success')) # XXX subject to LANG?
+ okp = ok_packages[0]
+ self.assertEqual((n,v,r),(okp.name, okp.version, okp.release))
+ self.assertEqual(rv, 2)
+ p.clean()
+
+ def test_missing_req(self):
+ '''make sure depcheck catches updates with missing requires'''
+ (n, v, r) = ('mint', '3.0', '1') # Muddled for extra flavor!
+ p = rpmfluff.SimpleRpmBuild(n,v,r)
+ p.add_requires('muddler') # uh oh, this doesn't exist..
+ p.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, [p.get_built_rpm(self.arch)])
+ self.assertEqual(len(ok_packages), 0)
+ self.assertEqual(rv, 0)
+ p.clean()
+
+ def test_changed_prov(self):
+ '''Test a changed Provides:'''
+ (n, v, r) = ('bourbon', '2.0', '1') # Browner than ever!
+ p = rpmfluff.SimpleRpmBuild(n,v,r)
+ p.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, [p.get_built_rpm(self.arch)])
+ self.assertEqual(len(ok_packages), 0)
+ self.assertEqual(rv, 0)
+ p.clean()
+
+ def test_multiple_packages(self):
+ '''Test with one good and one bad update'''
+ (n, v, r) = ('bourbon', '2.0', '1') # Browner than ever!
+ b = rpmfluff.SimpleRpmBuild(n,v,r)
+ b.make()
+ (n, v, r) = ('mint', '2.0', '1') # Now twice as minty as mint 1.0!
+ m = rpmfluff.SimpleRpmBuild(n,v,r)
+ m.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ [p.get_built_rpm(self.arch) for p in (m,b)])
+ self.assertEqual(len(ok_packages), 1)
+ okp = ok_packages[0]
+ self.assertEqual((n,v,r),(okp.name, okp.version, okp.release))
+ m.clean()
+ b.clean()
+
+ def test_valid_package_conflict(self):
+ '''Check to make sure we accept a valid package conflict'''
+ b = rpmfluff.SimpleRpmBuild('bourbon', '3.0', '1')
+ b.make()
+ j = rpmfluff.SimpleRpmBuild('julep', '3.0', '1')
+ j.add_requires('mint') # same as above
+ j.add_conflicts('bourbon < 3.0') # this is new!
+ j.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ [p.get_built_rpm(self.arch) for p in (j,b)])
+ self.assertEqual(len(ok_packages), 2)
+ b.clean()
+ j.clean()
+
+ def test_bad_package_conflict(self):
+ '''Check to make sure we reject an invalid package conflict'''
+ j = rpmfluff.SimpleRpmBuild('julep', '4.0', '1')
+ j.add_conflicts('mint < 3.0') # The only mint we have is 1.0!
+ j.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ [j.get_built_rpm(self.arch)])
+ self.assertEqual(len(ok_packages), 0)
+ j.clean()
+
+ def test_bad_obsoletes(self):
+ '''Make sure we reject Obsoletes that cause unresolved deps'''
+ km = rpmfluff.SimpleRpmBuild('kentucky-colonel-mint', '1.0', '1')
+ km.add_obsoletes('mint <= 5.0')
+ km.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ [km.get_built_rpm(self.arch)])
+ km.clean()
+ self.assertEqual(len(ok_packages), 0)
+
+ def test_good_obsoletes(self):
+ '''Make sure we accept good Obsoletes'''
+ km = rpmfluff.SimpleRpmBuild('kentucky-colonel-mint', '2.0', '1')
+ km.add_obsoletes('mint <= 5.0')
+ km.add_provides('mint')
+ km.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ [km.get_built_rpm(self.arch)])
+ km.clean()
+ self.assertEqual(len(ok_packages), 1)
+
+ # TODO: test_ignored - but we currently have no way of knowing if a
+ # package was ignored or tested and rejected
+
+ def test_accepted(self):
+ '''Check that the --accepted flag properly protects packages'''
+ # updating the whole delicious julep stack
+ m = rpmfluff.SimpleRpmBuild('mint', '5.0', '1')
+ b = rpmfluff.SimpleRpmBuild('bourbon', '5.0', '1')
+ j = rpmfluff.SimpleRpmBuild('julep', '5.0', '1')
+ j.add_requires('bourbon = 5.0')
+ j.add_requires('mint')
+ # ..but now there's a new mint - except the new julep build hasn't
+ # landed yet
+ km = rpmfluff.SimpleRpmBuild('kentucky-colonel-mint', '1.0', '1')
+ km.add_obsoletes('mint <= 5.0')
+ plist = (m,b,j,km)
+ for p in plist: p.make()
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id,
+ package_files=[km.get_built_rpm(self.arch)],
+ accepted=[p.get_built_rpm(self.arch) for p in (m,b,j)])
+ should_be_ok = set(('mint','bourbon','julep'))
+ for p in plist: p.clean()
+ self.assertEqual(should_be_ok, set([p.name for p in ok_packages]))
+
+class DepcheckMultilibTestCase(unittest.TestCase):
+ '''More complex test cases that involve multilib packages'''
+ def setUp(self):
+ '''Set up a multilib repo for use with test cases'''
+ self.arch = rpmfluff.expectedArch
+ self.arches = both_arches
+ self.is_multilib = rpmarch.isMultiLibArch(self.arch)
+ self.rpmlist = simple_rpmlist() + multilib_rpmlist(self.arches)
+ self.mainrepo = YumRepoBuildMultilib(self.rpmlist)
+ if self.is_multilib:
+ self.mainrepo.make(self.arches)
+ else:
+ self.mainrepo.make([self.arch])
+ self.mainrepo.id = 'mainrepo'
+ add_local_repo(self.mainrepo.id, self.mainrepo.repoDir)
+
+ def tearDown(self):
+ for rpm in self.rpmlist:
+ rpm.clean()
+ for repo in (self.mainrepo, ):
+ os.system('rm -rf %s' % repo.repoDir) # no repo.clean()
+ repoinfo.config.remove_section(repo.id)
+
+ def test_empty_transaction(self):
+ '''Make sure we accept an empty update transaction'''
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, [])
+ self.assertEqual(len(problems), 1)
+ self.assertTrue(problems[0].startswith(u'Success')) # XXX subject to LANG?
+ self.assertEqual(rv, 0)
+
+ def test_multilib_ok(self):
+ '''Test an acceptable update to a multilib package'''
+ (n, v, r) = ('gin', '10.0', '1') # Switching to Tanqueray 10
+ p = rpmfluff.SimpleRpmBuild(n,v,r)
+ p.buildArchs = self.arches
+ p.add_devel_subpackage()
+ p.make()
+ pkglist = [p.get_built_rpm(a, name=n) for n in p.get_subpackage_names() for a in self.arches]
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, pkglist)
+ self.assertEqual(len(problems), 1)
+ self.assertTrue(problems[0].startswith(u'Success')) # XXX subject to LANG?
+ if self.is_multilib:
+ # mainarch gin, multilib gin-devel
+ self.assertEqual(len(ok_packages), 3)
+ else:
+ # single arch gin + gin-devel
+ self.assertEqual(len(ok_packages), 2)
+ for okp in ok_packages:
+ self.assertEqual((v,r),(okp.version, okp.release))
+ self.assertTrue(okp.name in ('gin', 'gin-devel'))
+ self.assertEqual(rv, 2)
+ p.clean()
+
+ def test_multilib_dropped_arch(self):
+ '''Test an update which drops one of its arches'''
+ # NOTE: this test makes no sense on non-multilib arches, so skip it.
+ # XXX TODO skip this test properly.
+ # py2.7 unittest has a skip() decorator (also in unittest2 for py2.6)
+ if not self.is_multilib:
+ return
+ # This tries to simulate the nss-softokn error - see bug #596840
+ v = rpmfluff.SimpleRpmBuild('vermouth', '1.0', '2')
+ v.buildArchs = self.arches[1:] # oops, dropped one
+ # Note that the subpackages intentionally do *not* require the main
+ # package - that was a key component of the nss-softokn error.
+ v.add_subpackage('dry')
+ v.add_subpackage('sweet')
+ v.add_subpackage('devel')
+ v.make()
+ pkglist = [v.get_built_rpm(a, name=n) for n in v.get_subpackage_names() for a in v.buildArchs]
+ (rv, problems, ok_packages) = depcheck_main(self.mainrepo.id, pkglist)
+ # NOTE: we actually expect vermouth to *pass*, since an update of this
+ # type is *technically valid* - it only becomes problematic when a
+ # subsequent update requires the missing library.
+ # This shows that depcheck alone will not be sufficient to prevent
+ # this situation from occurring in the future.
+ self.assertEqual(len(ok_packages), 4)
+ for okp in ok_packages:
+ self.assertTrue(okp.name.startswith('vermouth'))
+ # Simulate the presence of the vermouth package in an update repo
+ updaterepo = YumRepoBuildMultilib([v])
+ updaterepo.make(self.arches)
+ updaterepo.id = 'updaterepo'
+ add_local_repo(updaterepo.id, updaterepo.repoDir, parents=(self.mainrepo.id,))
+ # So now the 'martini' packages ("a subsequent update") should fail,
+ # since martini.i386 requires the missing vermouth-dry.i386 library..
+ mt = rpmfluff.SimpleRpmBuild('martini', '3.0', '1')
+ mt.add_requires('vermouth-dry = 1.0-2')
+ mt.add_requires('gin')
+ mt.section_install += 'mkdir -p "$RPM_BUILD_ROOT%{_libdir}"\n'
+ mt.section_install += 'echo "Fake $RPM_ARCH library" > $RPM_BUILD_ROOT/%{_libdir}/libmartini.so.3\n'
+ mt.basePackage.section_files += "%{_libdir}/libmartini.so.3\n"
+ mt.buildArchs = self.arches
+ mt.make()
+ pkglist = [mt.get_built_rpm(a) for a in mt.buildArchs]
+ (rv, problems, ok_packages) = depcheck_main(updaterepo.id, pkglist)
+ # NOTE WELL: we expect yum to *accept* martini-3.0, because it
+ # doesn't handle dependency resolution the same way RPM does.
+ # (The nss-softokn failure happened inside the RPM transaction test.)
+ # Therefore this test case just proves that depcheck/yum alone is
+ # not sufficient to catch all dependency problems.
+ # We should use this same testcase with whatever future tool actually
+ # does the RPM transaction test, and expect it to reject martini-3.0.
+ self.assertEqual(len(ok_packages), len(mt.buildArchs))
+ # clean up updaterepo
+ os.system('rm -rf %s' % updaterepo.repoDir)
+ repoinfo.config.remove_section(updaterepo.id)
+ # and clean up our packages
+ v.clean()
+ mt.clean()
+
+# === End unit tests
+
+def parse_args():
+ usage = "%prog REPO PACKAGE [PACKAGE ...]"
+ description = "Test whether the given packages would break the given repo."
+ parser = optparse.OptionParser(usage=usage, description=description)
+ # TODO flag to control verbosity?
+ parser.add_option('--profile',
+ action='store_true', dest='profile', default=False,
+ help="Enable yum profiling code/output")
+ parser.add_option('--selftest', '--self-test',
+ action='store_true', dest='selftest', default=False,
+ help="Run depcheck's self-test suite")
+ parser.add_option('--runtimetest','--run-time-test',
+ action='store_true', dest='runtimetest', default=False,
+ help="Run a test to estimate runtime using the live repos")
+ parser.add_option('--accepted', '-a',
+ action='append', dest='accepted', default=[],
+ help="Consider this package already accepted (may be used multiple times)")
+ (opts, args) = parser.parse_args()
+ if opts.selftest or opts.runtimetest:
+ return (None, args, opts)
+ if len(args) < 2:
+ parser.error("Incorrect number of arguments")
+ (repo, packages) = (args[0], args[1:])
+ known_repos = repoinfo.repos()
+ if repo not in known_repos:
+ parser.error("Invalid repo. Known repos:\n " + " ".join(known_repos))
+ for p in packages + opts.accepted:
+ if not os.path.exists(p):
+ parser.error("Can't find package '%s'" % p)
+ return (repo, packages, opts)
+
+def set_up_yum_object(repoid):
+ y = YumDepcheck()
+ # yum.misc.getCacheDir() gives us a temporary cache dir
+ # TODO: use a non-temp dir so we can use cached data!
+ y.conf.cachedir = yum.misc.getCacheDir()
+ y.repos.setCacheDir(y.conf.cachedir)
+
+ # Set up repo objects for requested repo and its parents
+ for r in repoinfo.getparents(repoid) + [repoid]:
+ repo = repoinfo.getrepo(r)
+ newrepo = yum.yumRepo.YumRepository(r)
+ newrepo.name = r
+ baseurl = varReplace(repo['url'], y.conf.yumvar)
+ newrepo.baseurl = baseurl
+ newrepo.basecachedir = y.conf.cachedir
+ newrepo.metadata_expire = 0
+ newrepo.timestamp_check = False
+ y.repos.add(newrepo)
+ y.repos.enableRepo(newrepo.id)
+ y.logger.info("Added repo: %s" % r)
+ # We're good - return the Yum object
+ return y
+
+class MetaSackRPMDB(yum.packageSack.MetaSack):
+ '''Extend MetaSack to simulate an RPMDB that has all the listed packages
+ installed.'''
+ def __init__(self, metasack):
+ yum.packageSack.MetaSack.__init__(self)
+ for (k,v) in metasack.__dict__.iteritems():
+ self.__dict__[k] = v
+ self._cached_conflicts_data = None
+ self.__cache_rpmdb__ = False
+
+ def fileRequiresData(self):
+ '''A real rpmdb has this method, but a MetaSack doesn't. Let's fake it.
+ (See /usr/lib/python*/site-packages/yum/rpmsack.py:fileRequiresData)
+ '''
+ installedFileRequires = {}
+ installedUnresolvedFileRequires = set()
+ resolved = set()
+ for pkg in self.returnPackages():
+ for name, flag, evr in pkg.requires:
+ if not name.startswith('/'):
+ continue
+ installedFileRequires.setdefault(pkg.pkgtup, []).append(name)
+ if name not in resolved:
+ dep = self.getProvides(name, flag, evr)
+ resolved.add(name)
+ if not dep:
+ installedUnresolvedFileRequires.add(name)
+
+ fileRequires = set()
+ for fnames in installedFileRequires.itervalues():
+ fileRequires.update(fnames)
+ installedFileProviders = {}
+ for fname in fileRequires:
+ pkgtups = [pkg.pkgtup for pkg in self.getProvides(fname)]
+ installedFileProviders[fname] = pkgtups
+
+ ret = (installedFileRequires, installedUnresolvedFileRequires,
+ installedFileProviders)
+
+ return ret
+
+ def returnConflictPackages(self):
+ '''Another method that RPMDB has but MetaSack doesn't.'''
+ if self._cached_conflicts_data is None:
+ ret = []
+ for pkg in self.returnPackages():
+ if len(pkg.conflicts):
+ ret.append(pkg)
+ self._cached_conflicts_data = ret
+ return self._cached_conflicts_data
+
+ def transactionCacheFileRequires(self, installedFileRequires,
+ installedUnresolvedFileRequires,
+ installedFileProvides,
+ problems):
+ '''No-op - depcheck doesn't cache RPMDB data'''
+ return
+ def transactionCacheConflictPackages(self, pkgs):
+ '''No-op - depcheck doesn't cache RPMDB data'''
+ return
+ def transactionReset(self):
+ '''No-op - all this does is clear the cache, and depcheck doesn't
+ cache data, so it doesn't need to do anything.'''
+ return
+
+def depcheck_main(repo, package_files, accepted=[], profile=False):
+ # Set up YumBase object that knows about all the repo packages
+ print "Checking packages against repo %s (parents: %s)" % (repo,
+ " ".join(repoinfo.getparents(repo)))
+ yum_repos = set_up_yum_object(repo)
+
+ # choose the correct arch(es) for the upcoming mashes
+ if rpmarch.isMultiLibArch(yum_repos.arch.basearch):
+ mash_arches = ' '.join(both_arches)
+ else:
+ mash_arches = yum_repos.arch.basearch
+
+ # Add the accepted packages (if any) to the YumBase object
+ prev_accepted = []
+ if accepted:
+ # mash the accepted packages into a proto-updates repo
+ accdir = tempfile.mkdtemp(prefix='depcheck-accepted.')
+ for p in accepted:
+ os.symlink(os.path.realpath(p), os.path.join(accdir, os.path.basename(p)))
+ do_mash(accdir, mash_arches)
+ # TODO: better way to get these pkg objects into the pkgSack?
+ for p in glob.glob(accdir+'/*.rpm'):
+ pkg_obj = yum.packages.YumLocalPackage(yum_repos.ts, os.readlink(p))
+ prev_accepted.append(pkg_obj)
+ yum_repos.pkgSack.addPackage(pkg_obj)
+ os.system('/bin/rm -rf %s' % accdir)
+
+ # This YumBase object will act like all repo packages are already installed
+ y = YumDepcheck()
+ y.tsInfo = yum.transactioninfo.TransactionData()
+ y.tsInfo.debug = 1
+ y.rpmdb = MetaSackRPMDB(yum_repos.pkgSack)
+ y.xsack = yum.packageSack.PackageSack()
+ # Hacky way to set up the databases (this is copied from yum's testbase.py)
+ y.tsInfo.setDatabases(y.rpmdb, y.xsack)
+
+ # Set up some nice verbose output
+ y.doLoggingSetup(9,9)
+ y.setupProgressCallbacks()
+ # TODO: filter log messages down to the ones we actually care about
+
+ # Mash the given packages into the set that would get pushed
+ # TODO: try to handle debuginfo / src rpms? what would we need to check?
+
+ # Set up a temp dir we can mash in (so we don't delete our input files)
+ tmpdir = tempfile.mkdtemp(prefix='depcheck.')
+ for p in package_files:
+ os.symlink(os.path.realpath(p), os.path.join(tmpdir, os.path.basename(p)))
+ # mash away, you can mash away.. stay all day.. if you want to
+ do_mash(tmpdir, mash_arches)
+ # Get package objects for the now-mashed package set
+ packages = []
+ for p in glob.glob(tmpdir+'/*.rpm'):
+ if '-debuginfo' in p or p.endswith('.src.rpm'):
+ print "debuginfo/source RPMs ignored - skipping %s" % p
+ continue
+ packages.append(yum.packages.YumLocalPackage(y.ts, os.readlink(p)))
+ # (oh yeah - clean up tmpdir, now that we don't need it)
+ os.system('/bin/rm -rf %s' % tmpdir)
+ # Mark the package objects as updates
+ ignored = []
+ for p in packages:
+ if not y.update(p):
+ ignored.append(p)
+
+ # ENGAGE THRUSTERS WE ARE GO FOR LIFTOFF!! MOVE ZIG!! AND SO ON!!
+ if profile:
+ (r, problems) = y.cprof_resolveDeps()
+ else:
+ (r, problems) = y.resolveDeps()
+
+ # C&P from YumBase.buildTransaction; possibly unnecessary (but doesn't hurt)
+ y.rpmdb.ts = None
+ # This is the skip-broken step
+ if r == 1:
+ y.skipped_packages = []
+ (r, problems) = y._skipPackagesWithProblems(r, problems)
+ # C&P from buildTransaction again: if we skipped broken packages, re-resolve
+ if y.tsInfo.changed:
+ (r, problems) = y.resolveDeps(r == 1)
+
+ if prev_accepted:
+ print "PREVIOUSLY-ACCEPTED: %s" % " ".join([str(p) for p in prev_accepted])
+ if ignored:
+ print "IGNORE: %s" % " ".join([str(p) for p in ignored])
+ print "REJECT: %s" % " ".join([str(p) for p in y.skipped_packages])
+ ok_packages = list(y.tsInfo.pkgSack) + list(y.tsInfo.localSack)
+ print "ACCEPT: %s" % " ".join([str(p) for p in ok_packages])
+ return (r, problems, ok_packages + prev_accepted)
+
+if __name__ == '__main__':
+ r = 0
+ try:
+ (repo, package_files, opts) = parse_args()
+ if opts.selftest:
+ unittest.main(argv=[sys.argv[0]] + package_files)
+ elif opts.runtimetest:
+ repo = repoinfo.getreleases()[0] + '-updates-testing'
+ y = yum.YumBase()
+ y.add_enable_repo(repo, baseurls=[repoinfo.get(repo,'url')])
+ # TODO get package names from parse_args and use those instead?
+ for name in ('bash','dash','mash','zsh'):
+ for po in y.pkgSack.returnNewestByName(name):
+ package_files.append(fake_package_update(po))
+ del y
+ import time
+ start = time.time()
+ print "\n ".join(["testing packages:"]+package_files)
+ (r, problems, ok_packages) = depcheck_main(repo, package_files,
+ accepted=opts.accepted,
+ profile=opts.profile)
+ print "depcheck_main took %f seconds" % (time.time() - start)
+ for p in package_files:
+ os.unlink(p)
+ else:
+ (r, problems, ok_packages) = depcheck_main(repo, package_files,
+ accepted=opts.accepted,
+ profile=opts.profile)
+ # TODO do something useful(ish) with 'problems' and 'ok_packages'
+ except KeyboardInterrupt:
+ r = 1
+ except ImportError, e:
+ print "error: startup failed: %s" % e
+ r = 2
+ sys.exit(0)
diff --git a/tests/depcheck/depcheck.py b/tests/depcheck/depcheck.py
new file mode 100644
index 0000000..1828fb3
--- /dev/null
+++ b/tests/depcheck/depcheck.py
@@ -0,0 +1,175 @@
+#
+# Copyright 2010, Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Author: Will Woods <wwoods(a)redhat.com>
+
+# Notice: Most recent documentation is available at doc/test_class.py.template.
+
+import autoqa.util
+from autoqa.test import AutoQATest
+from autoqa.decorators import ExceptionCatcher
+from autotest_lib.client.bin import utils
+from autoqa.repoinfo import repoinfo
+from autoqa.bodhi_utils import bodhi_list, bodhi_post_testresult, _check_already_commented, user
+from autoqa.koji_utils import SimpleKojiClientSession
+
+localarch = autoqa.util.get_basearch()
+testarch = localarch
+
+# TODO: move these into bodhi_utils
+def list_pending_updates(repotag):
+ '''List pending updates for the repo associated with the given koji tag'''
+ repo = repoinfo.getrepo_by_tag(repotag)
+ repotag = repo['tag'] # canonicalize!
+ # XXX UGH. repoinfo needs repo['bodhi_release'] or something
+ release = repo['name'].split('-',1)[0].upper()
+ pending_tag = repotag + '-pending'
+ params = {'release': release}
+ if repotag.endswith('-updates-testing'):
+ params['status'] = 'pending'
+ elif repotag.endswith('-updates'):
+ params['status'] = 'testing'
+ params['request'] = 'stable'
+ return bodhi_list(params)
+def get_update_nvrs(updates):
+ '''Return a set of NVRs contained in the given updates'''
+ return set([b['nvr'] for u in updates for b in u['builds']])
+
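(To make sure I was reading that comprehension in get_update_nvrs right, I poked at it with some made-up bodhi-ish data - it flattens every build NVR across all the updates into one set:)

```python
def get_update_nvrs(updates):
    '''Return a set of NVRs contained in the given updates'''
    return set([b['nvr'] for u in updates for b in u['builds']])

# made-up update dicts, roughly the shape bodhi_list() returns
updates = [
    {'title': 'bash-4.1.7-1.fc14', 'builds': [{'nvr': 'bash-4.1.7-1.fc14'}]},
    {'title': 'shell roundup', 'builds': [{'nvr': 'zsh-4.3.10-6.fc14'},
                                          {'nvr': 'dash-0.5.6-2.fc14'}]},
]
nvrs = get_update_nvrs(updates)
# nvrs == set(['bash-4.1.7-1.fc14', 'zsh-4.3.10-6.fc14', 'dash-0.5.6-2.fc14'])
```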
+# NOTE: This is unused right now. It could be used to sanity-check
+# list_pending_updates but at the moment they usually return different
+# sets of NVRs - for example, right now there's a Bodhi bug where it doesn't
+# untag obsolete/revoked updates. We're going to assume that Bodhi is correct
+# for now, but leave this here for future use.
+# TODO: move these into koji_utils
+def list_pending_builds(repotag):
+ '''List pending builds for the repo associated with the given koji tag'''
+ pending_tag = repotag + '-pending'
+ koji = SimpleKojiClientSession()
+ return koji.listTagged(tag=pending_tag)
+def get_build_nvrs(builds):
+ '''Return a set of NVRs contained in the given builds'''
+ return set([b['nvr'] for b in builds])
+
+def strip_epoch(rpmstr):
+ return rpmstr.split(':',1)[-1] # works for E:N-V-R.A or N-V-R.A
+def strip_arch(rpmstr):
+ return rpmstr.rsplit('.',1)[0]
+
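(I also sanity-checked those two strip helpers against a typical epoch-qualified build string; chaining them takes E:N-V-R.A down to N-V-R as intended:)

```python
def strip_epoch(rpmstr):
    return rpmstr.split(':', 1)[-1]  # works for E:N-V-R.A or N-V-R.A

def strip_arch(rpmstr):
    return rpmstr.rsplit('.', 1)[0]

envra = '1:bash-4.1.7-1.fc14.x86_64'
nvra = strip_epoch(envra)            # 'bash-4.1.7-1.fc14.x86_64'
nvr = strip_arch(nvra)               # 'bash-4.1.7-1.fc14'

# the no-epoch case passes through unchanged, too
assert strip_epoch('bash-4.1.7-1.fc14.x86_64') == nvra
```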
+# --- Everything below here is depcheck-specific
+
+def is_accepted(update, arch=testarch):
+ '''Return True if the given update has been marked as Accepted by a
+ previous depcheck run, False otherwise.'''
+ (result, time) = _check_already_commented(update, user, 'depcheck', arch)
+ return result == 'PASSED'
+
+def fetch_nvrs(nvrs, localdir):
+ '''Fetch all the RPMs for the given package NVRs.'''
+ # NOTE: Currently we're just fetching everything from koji. This might not
+ # be the fastest thing, but it works for everyone, everywhere. If there's
+ # a shortcut we can use on the autoqa test systems, we should use that,
+ # but it needs to gracefully fall back to something like this.
+ rpms = []
I get a traceback here unless I instantiate 'koji' inside fetch_nvrs:
koji = SimpleKojiClientSession()
+ for nvr in nvrs:
+ # XXX deal with debuginfo/src RPMs?
+ for rpm in koji.nvr_to_rpms(nvr, debuginfo=False, src=False):
+ outfile = autoqa.util.grabber.urlgrab(rpm['url'], localdir)
urlgrab results in a traceback here since it expects a filename, not a
directory, as its second argument. I got it working by changing that line
to ...
outfile = autoqa.util.grabber.urlgrab(rpm['url'], \
localdir + '/' + \
os.path.basename(urlparse(rpm['url']).path))
And adding two imports to the top...
import os
from urlparse import urlparse
I'm sure there are probably better/easier ways, but that worked.
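For the archives, the destination-filename computation I ended up with boils down to the helper below (download_target is just a name I made up for this snippet, the koji URL is fake, and os.path.join is probably nicer than the string concatenation I used above):

```python
import os
try:
    from urlparse import urlparse       # Python 2, as used in the patch
except ImportError:
    from urllib.parse import urlparse   # only so this snippet runs anywhere

def download_target(url, localdir):
    '''Filename to hand urlgrab() as its second argument.'''
    return os.path.join(localdir, os.path.basename(urlparse(url).path))

# hypothetical koji payload URL:
url = ('http://koji.example.org/packages/bash/4.1.7/1.fc14/'
       'x86_64/bash-4.1.7-1.fc14.x86_64.rpm')
target = download_target(url, '/tmp/depcheck-fetch')
# target == '/tmp/depcheck-fetch/bash-4.1.7-1.fc14.x86_64.rpm'
```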
+ rpms.append(outfile)
+ return rpms
+
+class depcheck(AutoQATest):
+ version = 1 # increment this if setup() changes
+
+ @ExceptionCatcher()
+ def setup(self, *args, **kwargs):
+ utils.system('yum -y install python-rpmfluff')
Don't forget to install mash here too.
+ @ExceptionCatcher()
+ def run_once(self, envrs, kojitag, id, name, **kwargs):
+ super(self.__class__, self).run_once()
+
+ # Get our inputs
+ pending = list_pending_updates(kojitag)
+ # XXX set testarch to noarch for noarch updates?
+ accepted = filter(is_accepted, pending)
+
+ # Fetch packages and build commandline
+ repo = repoinfo.getrepo_by_tag(kojitag)
+ repoid = repo['name']
+ cmd = './depcheck %s' % repoid
+ # add the accepted packages first
+ accepted_nvrs = get_update_nvrs(accepted)
+ for rpm in fetch_nvrs(accepted_nvrs, self.tmpdir):
+ cmd += ' -a %s' % rpm
+ # then add the rest
+ pending_nvrs = filter(lambda n: n not in accepted_nvrs, get_update_nvrs(pending))
+ for rpm in fetch_nvrs(pending_nvrs, self.tmpdir):
+ cmd += ' %s' % rpm
+
+ # Run the test
+ self.outputs = utils.system_output(cmd, retain_output=True)
+
+ # Process output
+ results = {}
I was getting a traceback on line #146 (KeyError: 'ACCEPT') since the
expected keys in results aren't initialized. I fixed it by replacing the
line above with:
results = dict(ACCEPT=list(), REJECT=list(), IGNORE=list())
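(Alternatively, a collections.defaultdict sidesteps the KeyError without pre-seeding every key - sections that never appear in the output just come back as empty lists. A sketch, with made-up output lines:)

```python
import re
from collections import defaultdict

results = defaultdict(list)
data_re = re.compile(r'^(IGNORE|REJECT|ACCEPT): (.*)$')

# made-up sample of the lines depcheck prints
sample = ['ACCEPT: bash-4.1.7-1.fc14.x86_64 zsh-4.3.10-6.fc14.x86_64',
          'REJECT: dash-0.5.6-2.fc14.x86_64',
          'some unrelated log line']
for line in sample:
    match = data_re.match(line)
    if match:
        what, builds = match.groups()
        results[what] = builds.split()

rejected = results['REJECT']   # ['dash-0.5.6-2.fc14.x86_64']
ignored = results['IGNORE']    # [] - no KeyError
```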
+ data_re = re.compile('^(IGNORE|REJECT|ACCEPT): (.*)$')
NameError: global name 're' is not defined
I just added an 'import re' to the top.
+ for line in self.outputs.split('\n'):
+ match = data_re.match(line)
+ if not match:
+ continue
+ (what, builds) = match.groups()
+ results[what] = builds.split()
+
+ # The test passes if all the given envrs show up in the 'ACCEPTED' list
+ self.result = "PASSED"
+ def envra_to_nvr(envra):
+ return strip_arch(strip_epoch(envra))
+ passed_nvrs = [envra_to_nvr(envra) for envra in results['ACCEPT']]
+ for envr in envrs:
+ if strip_epoch(envr) not in passed_nvrs:
+ self.result = "FAILED"
+
+ # Fill out the test result details
+ self.summary = "depcheck for %s: %s %s" % (repoid, updateid, self.result)
NameError: global name 'updateid' is not defined
I fixed this by adjusting the run_once() header slightly. Feel free to
adjust as needed.
- def run_once(self, envrs, kojitag, id, name, **kwargs):
+ def run_once(self, envrs, kojitag, **kwargs):
+
super(self.__class__, self).run_once()
# Get our inputs
pending = list_pending_updates(kojitag)
# XXX set testarch to noarch for noarch updates?
accepted = filter(is_accepted, pending)
+ updateid = kwargs.get('id', 'UNKNOWN')
+ name = kwargs.get('name', '')
+ assert name is not None and name != ''
+ self.highlights = 'ACCEPTED:\n '
+ self.highlights += '\n '.join(results['ACCEPT'])
+ self.highlights += '\n\nREJECTED:\n '
+ self.highlights += '\n '.join(results['REJECT'])
+ url = self.autotest_url
+
+ # Post bodhi results for the current update
+ if self.result == "PASSED":
+ # NOTE: this had better match what's in is_accepted
+ bodhi_post_testresult(name, 'depcheck', self.result, url, testarch, karma=0)
+
+ # Also post results for any other newly-passing updates
+ oldupdates = {}
+ # Find the updates that correspond to the passed NVRs
+ for nvr in passed_nvrs:
+ update = bodhi_list({'package':nvr})
+ if update['title'] not in oldupdates:
TypeError: list indices must be integers, not str
bodhi_list() returns a list. But in the case of the tests I was
running, that list was empty. I'll need to dig deeper since I'm not
really familiar with the bodhi_utils module. For now, I just added a
FIXME.
+ oldupdates[update['title']] = update
+ # If every NVR in the update is OK, post good results
+ for (title, update) in oldupdates.iteritems():
+ updateok = True
+ for build in update['builds']:
+ if build['nvr'] not in passed_nvrs:
+ updateok = False
+ break
+ if updateok:
+ bodhi_post_testresult(update['title'], 'depcheck', 'PASSED', url, testarch, karma=0)
+
+ # TODO: notify maintainers when an update is rejected - but ONLY once
A complete diff of my small set of changes on top of your branch is
available at
http://fpaste.org/p6au/
Thanks,
James