[PATCH] allow tests to distinguish CRASHED and ABORTED
by Kamil Paral
This patch will allow test creators to decide whether they want to have
some error reported as CRASHED or as ABORTED. The difference is that
CRASHED tests should indicate programming errors in our code, and ABORTED
tests should indicate errors of third parties (that we don't control,
like network communication with Koji, Bodhi, etc.). Then we can simply
decide to re-run ABORTED tests without closer inspection, while examining
CRASHED tests more carefully.
All uncaught exceptions are reported as CRASHED. If you want to have
your test reported as ABORTED, you must set self.result to 'ABORTED' and
then re-raise the original exception, or raise a new one (error.TestFail
is preferred).
This is a code snippet where we guard some code and end the test as
ABORTED if something fails:
try:
    # ... download from Koji ...
except IOError, e:  # or some other error
    self.result = 'ABORTED'
    raise
(self.summary will be extracted from the exception value.)
This is a code snippet where you don't have any original exception, but
still want to report the test as ABORTED:
foo = ...  # do some stuff
if foo is None:
    self.result = 'ABORTED'
    self.summary = 'No result returned from service bar'
    raise error.TestFail
You should either fill in self.summary or provide that information as an
exception argument. So this is equivalent:
foo = ...  # do some stuff
if foo is None:
    self.result = 'ABORTED'
    raise error.TestFail('No result returned from service bar')
Once again, this patch does not force us to write this stuff everywhere.
It just enables us to do so. That means we will probably use that for
code snippets that communicate with external services and fail often.
For other code parts we may be completely satisfied with default
behavior.
---
lib/python/test.py | 21 ++++++++++++++-------
1 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/lib/python/test.py b/lib/python/test.py
index eccbb09..d509283 100644
--- a/lib/python/test.py
+++ b/lib/python/test.py
@@ -19,6 +19,7 @@
 from autotest_lib.client.bin import test, utils
 from autotest_lib.client.bin.test_config import config_loader
+from autotest_lib.client.common_lib import error
 
 from decorators import ExceptionCatcher
 from util import make_autotest_url
 
@@ -48,16 +49,22 @@ class AutoQATest(test.test, object):
     def run_once(self, **kwargs):
         pass
 
-    def process_exception(self, exc = None):
+    def process_exception(self, exc):
         self._convert_list_variables()
-        self.result = "CRASHED"
-        if exc is not None:
-            self.summary = "%s: %s" % (exc[1].__class__.__name__, exc[1])
-            self.outputs += '\n%s\n%s' % ('-'*70,
-                ''.join(traceback.format_exception(exc[0], exc[1], exc[2])))
+        if self.result == 'ABORTED':
+            # the exception was raised intentionally; don't override anything,
+            # only fill in the summary if it's empty
+            if not self.summary:
+                self.summary = "%s: %s" % (exc[1].__class__.__name__, exc[1])
         else:
-            self.summary = "Exception: Unknown exception"
+            self.result = "CRASHED"
+            self.summary = "%s: %s" % (exc[1].__class__.__name__, exc[1])
+
+        # append traceback
+        self.outputs += '\n%s\n%s' % ('-'*70,
+            ''.join(traceback.format_exception(exc[0], exc[1], exc[2])))
+
         try:
             self.postprocess_iteration()
         except Exception:
--
1.7.2.3
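For illustration, the classification logic this patch implements can be sketched
standalone (FakeTest and the example exceptions below are made up just for the
demo; the real class lives in lib/python/test.py):

```python
import sys
import traceback


class FakeTest(object):
    """Minimal stand-in for AutoQATest, for illustration only."""

    def __init__(self):
        self.result = None
        self.summary = ''
        self.outputs = ''

    def process_exception(self, exc):
        if self.result == 'ABORTED':
            # intentional abort: keep the caller's summary if one was set
            if not self.summary:
                self.summary = "%s: %s" % (exc[1].__class__.__name__, exc[1])
        else:
            # any other uncaught exception is treated as a programming error
            self.result = 'CRASHED'
            self.summary = "%s: %s" % (exc[1].__class__.__name__, exc[1])
        # the traceback is appended in both cases
        self.outputs += '\n%s\n%s' % ('-' * 70,
            ''.join(traceback.format_exception(exc[0], exc[1], exc[2])))


# ABORTED: the test flagged the error as third-party before re-raising
aborted = FakeTest()
try:
    raise RuntimeError('no response from Koji')
except RuntimeError:
    aborted.result = 'ABORTED'
    aborted.process_exception(sys.exc_info())

# CRASHED: an unanticipated exception, nothing was flagged
crashed = FakeTest()
try:
    raise KeyError('oops')
except KeyError:
    crashed.process_exception(sys.exc_info())

print(aborted.result, '-', aborted.summary)
print(crashed.result, '-', crashed.summary)
```

The point is that the test author only has to touch self.result (and optionally
self.summary) in the rare ABORTED case; the default path stays untouched.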
[AutoQA] #138: ResultsDB: media wiki as a storage for metadata about tests
by fedora-badges
#138: ResultsDB: media wiki as a storage for metadata about tests
----------------------------+-----------------------------------------------
Reporter: jskladan | Owner:
Type: task | Status: new
Priority: major | Milestone: Resultdb
Component: infrastructure | Version: 1.0
Keywords: |
----------------------------+-----------------------------------------------
Once #135, #136 and #137 are finished, we should create a provider/middleman
(probably a library) which will allow us to get information about the
respective test and use it for test execution/storing results/displaying
results via frontends...
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/138>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
[AutoQA] #137: ResultsDB: propose structure for storing metadata about tests on wiki
by fedora-badges
#137: ResultsDB: propose structure for storing metadata about tests on wiki
-----------------------+----------------------------------------------------
Reporter: jskladan | Owner:
Type: task | Status: new
Priority: major | Milestone: Resultdb
Component: docs/wiki | Version: 1.0
Keywords: |
-----------------------+----------------------------------------------------
We want to use mediawiki as storage for test metadata. We should agree
on the fields we want to store (e.g. test owner, destructive/non-destructive,
average time to complete the test, etc.) and the format we'll use to store
it (probably JSON, though).
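To make the idea concrete, a hypothetical metadata record serialized as JSON
might look like this (every field name below is only a suggestion, nothing is
agreed on yet):

```python
import json

# hypothetical test metadata record; all field names are just suggestions
metadata = {
    "name": "upgradepath",
    "owner": "jskladan",
    "destructive": False,
    "average_runtime_minutes": 5,
}
serialized = json.dumps(metadata, indent=2)
print(serialized)
```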
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/137>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Re: upgradepath and updates-testing - summary of problems
by Kamil Paral
----- "Kamil Paral" <kparal(a)redhat.com> wrote:
> This is a summary of all problems related to upgradepath test and the
> updates-testing repo.
>
> First of all, upgrade path requirement is not properly defined
> anywhere.
> The best I could find is this:
> https://fedoraproject.org/wiki/Package_maintainer_responsibilities#Miscel...
>
> That means we work mainly on assumptions.
>
> For base and -updates repo, this is not a big problem, the task is
> pretty
> straightforward:
>
> Algorithm example
> -----------------
> Repo set RS = base + updates
> For every Fedora release F:
> For every package P in RS of F:
> NVR of P <= NVR of P in RS of F+1
>
> We must check this constraint for every pending package update, right
> before
> it is pushed.
>
> We have this already implemented, and when it is enforced, we will
> guard
> all the common users (common user == using just base + updates repo
> set)
> against upgrade path problems.
>
> For -updates-testing repo however, things are not as easy.
>
>
> Problem #1: What is the desired repo set?
> -----------------------------------------
>
> In the algorithm example, repo set of F and repo set of F+1 were equal
> (base +
> updates repos). How does repo set look like when -updates-testing is
> in place?
>
> Do we want to check it this way?
> a) F base + updates + updates-testing => F+1 base + updates +
> updates-testing
> or this way?
> b) F base + updates + updates-testing => F+1 base + updates
>
> Option a) is simple, we use the same approach as in the algorithm
> example. But
> it may not really guard the user; see the following problems.
>
> Option b) guards the user well, but it has some serious drawbacks. It
> basically
> says that you must first push into F+1-updates before you can push to
> F-updates-testing. That can have a large impact on package maintainers
> workflow.
> If you consider that it may take one or two weeks before your update
> is pushed
> into -updates, then it may take a whole month or more before your update
> propagates
> from Rawhide to Rawhide-3.
>
> Example:
> 1. In all repos there is foo-2.0.
> 2. I want to push foo-3.0 into Rawhide, F14, F13 and F12.
> 3. First I must push to Rawhide.
> 4. Once it is in Rawhide, I can push to F14-updates-testing.
> 5. Now I must wait until testers give me enough karma, and then push
> it into
> F14-updates.
> 6. Only now (not before) I can push F13-updates-testing.
> 7. Now I must wait until testers give me enough karma, and then push
> it into
> F13-updates.
> 8. Only now (not before) I can push F12-updates-testing.
> ...
>
> Only FESCo may say whether this is the desired workflow for package
> maintainers. It is certainly different from current practice.
>
>
> Problem #2: Removing repositories
> ---------------------------------
>
> Option a) in Problem #1 sounded nice, didn't it? Well, not so much.
>
> A problem arises when you disable -updates-testing repo. Either by
> hand
> (a power-user downloaded a few packages from this repo because he
> wanted
> to test new versions of his favorite software or test some bugfix, and
> then
> he disabled this repo again) or automatically (anaconda disabled
> -updates-testing on distribution upgrade - I don't know whether this
> is
> actually done, but anaconda developers don't know either:)).
>
> After you disable this repo, you still might have some packages
> installed
> that are newer than those in F+1 base+updates. That is a problem, we
> haven't
> saved this user from upgrade path problems in this case.
>
>
> Problem #3: Unpushing packages
> ------------------------------
>
> Even more juicy problem exists - unpushing packages from
> -updates-testing.
> If we use Option a) in Problem #1, we still can't have any confidence
> in
> our results. Because our results will apply only in that precise
> moment when
> we check it. An hour later the corresponding package may be unpushed
> from
> F+1 updates-testing, but left intact in F updates-testing (or you
> already
> downloaded it, that's the same). And voilà, we have an upgrade path
> problem.
>
> Example:
> 1. foo-3.0 is pushed into F12-updates-testing and F13-updates-testing.
> 2. User installs foo-3.0 from F12-updates-testing.
> 3. foo-3.0 is unpushed from F13-updates-testing.
> 4. Upgrade path problem.
>
>
> Conclusion
> ----------
> 1. We don't know exactly how upgradepath constraint is defined.
> 2. We have no idea how -updates-testing should be tackled.
> 3. We should ask FESCo what's the preferred approach.
> 4. Current proposed solution is: "Updates-testing is for power-users,
> they
> will handle possible problems. After we receive some info how to
> proceed
> in this matter, we can extend our code."
> 5. Basic base+updates repo checking seems simple and is already
> implemented.
> All common users should be safe.
>
> If you see some logical mistakes or gaps in this email, please correct
> them.
> There might be some further issues that didn't occur to me.
>
> Thanks,
> Kamil
This has been discussed by FESCo. Logs are available here:
http://meetbot.fedoraproject.org/fedora-meeting/2010-10-12/fesco.2010-10-...
(from 20:43:55 to 20:57:51)
The result is: "Don't consider updates-testing repo for now."
Therefore no more work on this problem, yay! :)
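For the record, the base+updates check from the summary above boils down to
something like this toy sketch (the version comparison is deliberately
simplified; a real implementation would compare full EVRs, e.g. with rpm's
labelCompare):

```python
# toy upgrade path check: for every package, the version in release F must
# not exceed the version in release F+1 (repos modeled as name -> version
# dicts; real code would use rpm's EVR comparison instead of int tuples)
def version_tuple(version):
    return tuple(int(part) for part in version.split('.'))


def check_upgrade_path(repo_f, repo_f_next):
    """Return the packages that break the upgrade path between two releases."""
    broken = []
    for pkg, ver in repo_f.items():
        next_ver = repo_f_next.get(pkg)
        if next_ver is not None and version_tuple(ver) > version_tuple(next_ver):
            broken.append(pkg)
    return broken


f13 = {'foo': '3.0', 'bar': '1.2'}
f14 = {'foo': '2.0', 'bar': '1.2'}
print(check_upgrade_path(f13, f14))  # foo-3.0 in F13 > foo-2.0 in F14
```

Run against base+updates only, this is essentially the constraint the
already-implemented test enforces for every pending update.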
Re: rpmlint improvements
by Kamil Paral
----- "Alexander Todorov" <atodorov(a)redhat.com> wrote:
> I don't think thousand files in /etc/rpmlint or
> /usr/share/rpmlint/whitelist is
> a big difference. If you have many of them they will fill up the
> directory no
> matter the name.
If I read "man rpmlint" right, the '-f' option is used for specifying
a user config, but everything in /etc/rpmlint/*config is considered
system-wide config. Therefore it seems that all *config files in
/etc/rpmlint will be loaded even when -f is used (though inspecting
the source code would make us more certain).
But this question may not be relevant any more, see below.
----- "seth vidal" <skvidal(a)fedoraproject.org> wrote:
>
> you want to add a new file to EVERY SINGLE pkg?
Yes, that's what I proposed.
>
> A new file which is not, at all, useful on the users system?
>
> And in terms of fedora you want to add 17000 new files? (1 for each
> pkg)
>
> That doesn't seem like a good use of our mirror or our users'
> bandwidth,
> to me.
If I suppose the config file has 1 kB, then it's a 1 kB increase per
package, or 17000 * 1 kB = 17 MB for the whole repo in the worst case
(all packages having such configs), and that's not even compressed yet.
So it didn't seem that bad to me.
As you have correctly noted, Seth, my idea was driven by the desire to
achieve the same behavior whether rpmlint is run by autoqa, by fedpkg, or
just as plain "rpmlint foo.rpm". I just wanted to do it "properly". I am
quite afraid of being flooded with questions like "my rpmlint output is
different from your autoqa's rpmlint output, how is that possible?".
But I must say your fedpkg idea is very good and solves a big part
of this concern. Still, there are many ways to run rpmlint, and
getting different output in different scenarios doesn't seem like
a systematic solution to me.
But I see that you guys have different opinions and I don't intend
to push mine. Let's start with the simplest option then - just
keeping the config file in git and downloading it from there.
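For reference, the sort of whitelist config file discussed here would be a
short Python snippet that rpmlint executes; a minimal example (the filtered
messages below are made up, and addFilter() is provided by rpmlint itself
when it loads the file):

```
# example rpmlint config fragment: whitelist (filter out) specific messages;
# addFilter() takes a regular expression matched against rpmlint output lines
addFilter(r'foo\.spec: W: no-cleaning-of-buildroot')
addFilter(r'E: zero-length /etc/foo/empty\.conf')
```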
----- "James Laska" <jlaska(a)redhat.com> wrote:
> Starting with a mechanism that will allow maintainers to
> instrument rpmlint overrides, and AutoQA will honor their
> overrides during rpmlint execution seems like a good first start.
Let's never forget the KISS principle, right? :) Thank you for
your reminder.
Re: [PATCH] allow tests to distinguish CRASHED and ABORTED
by Kamil Paral
----- "James Laska" <jlaska(a)redhat.com> wrote:
> So this means the only time we are recommending raising
> error.TestFail
> is when a test has identified a test failure/exception?
Yes, you need TestFail only when you want to end the test as ABORTED
(and you don't have any internal exception to re-raise). You don't need
TestFail (or any other exception) in any other situation.
>
> I like how this clarifies how error.TestFail should be used by autoqa
> tests. It's not intended for testers to mark when a test has
> discovered
> failures, but when the test itself has failed (couldn't download
> package
> or something). Hopefully I've re-phrased this properly.
Exactly.
Re: rpmlint improvements
by Kamil Paral
----- "seth vidal" <skvidal(a)fedoraproject.org> wrote:
> On Mon, 2010-10-11 at 04:09 -0400, Kamil Paral wrote:
>
> > Oh, this starts to be really complicated :) My idea was one config
> > file per binary package.
>
> per binary pkg? Why per binary pkg?
I got my inspiration from
http://lintian.debian.org/manual/ch2.html#s2.4
Rpmlint allows us to check individual binary packages. It would be nice
to have the whitelist config file placed right inside that particular
package, wouldn't it?
If it is included in a source package, then it won't be applied when I
run rpmlint on just a binary package.
If the config file should be located somewhere on the Internet, it won't
be applied when I just run rpmlint locally.
But I might just be getting something wrong. This is certainly an area
I'm new to.
>
>
>
> > I have no experience with package maintenance, but different Fedora
> > releases of the same package are represented just by different git
> > branches, right? So if that need occurs, there should be no problem
> > in keeping different config file versions for f13 and el5 branch,
> > am I right?
>
> that's correct - hence my suggestion.
Great, that means that quibbles over rules shouldn't occur; everyone has
their own playground.
>
>
>
> > The main difference is just that I propose to include that file
> directly
> > in that binary package. That means AutoQA doesn't have to download
> > it from anywhere, and (more importantly) arbitrary rpmlint run
> > produces the same result as our AutoQA run -- of course just if this
>
> > feature is supported in upstream rpmlint.
>
> That would mean every pkg gets an addition of the rpmlint config
> file?
> EVERY pkg?
>
> Why?
Not every package, just those packages that want to have some lines
whitelisted from rpmlint output.
We could also create a Fedora-wide config file (matching our packaging
guidelines) that would be applied to all our packages globally (it could
ship inside our rpmlint package as /etc/rpmlint/fedora.conf), which would
further decrease the number of packages that would need to contain such
a config file. Similarly to what Mandriva does:
http://svn.mandriva.com/cgi-bin/viewvc.cgi/packages/cooker/rpmlint-mandri...
What do you think?