[AutoQA] #271: Create autotestd systemd.service file for F15 (and beyond)
by fedora-badges
#271: Create autotestd systemd.service file for F15 (and beyond)
-----------------------+----------------------------------------------------
Reporter: jlaska | Owner:
Type: task | Status: new
Priority: major | Milestone: Package Update Acceptance Test Plan
Component: packaging | Keywords:
-----------------------+----------------------------------------------------
The Fedora autotest-server package includes a SysV init script. A
systemd .service file will be needed for F15 and beyond. Guidance from
Viking-Ice is included below ...
{{{
07:41:40 Viking-Ice: jlaska: nr.1 upstream man pages nr.2 comparing
converting
https://fedoraproject.org/wiki/User:Johannbg/QA/Systemd/compatability (
you can side by side view them ) nr.3 skim through
Lennart's blog
07:42:36 jlaska: Viking-Ice: perfect, just the bread-crumb trail I
needed. Thank you
07:43:04 * jlaska goes to create a ticket with these instructions (need
to eventually create an autotestd.service)
07:44:02 Viking-Ice: I deliberately designed the wiki page so those
wanting to convert could side by side compare what already had been
converted for you
know ah.. they solve this problem that way moment..
07:46:51 Viking-Ice: jlaska: throw this link into your pool
http://0pointer.de/blog/projects/systemd-for-admins-3.html
07:48:21 Viking-Ice: jlaska: what he mentioned there is the skeleton to
be built upon, as in the first example of the service file. Always start
like that, then gradually build on top of that if needed
}}}
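Following the skeleton approach from Lennart's post, a first cut at the
unit file might look something like the sketch below. This is only
illustrative: the daemon path and service type are assumptions that need
to be checked against the packaged SysV init script.

```ini
# /lib/systemd/system/autotestd.service -- illustrative sketch only.
# The ExecStart path is a placeholder, not the real packaged binary.
[Unit]
Description=AutoQA autotest scheduler daemon
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/autotestd

[Install]
WantedBy=multi-user.target
```

Per the IRC advice above, start minimal like this and only add further
directives (User=, Type=forking, PIDFile=, ...) if the init script shows
they are actually needed.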
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/271>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
12 years, 11 months
RHEL5 in AutoQA and python 2.4 compatibility
by Tim Flink
This came up in the review of #293 and I thought that I would bring it
up on the list.
Since our AutoQA server runs RHEL5, it is also running python 2.4,
which doesn't support the unified try/except/finally statement (it was
added in python 2.5).
What is the minimum version of python we want to support? Should all of
our code be 2.4 compatible or just the stuff that will likely run on the
autotest server ( events, watchers etc.)?
We already have several finally blocks in our tests in stable and a
couple in the libs, so breaking 2.4 compatibility in the tests wouldn't
be a huge deal, since we haven't seen many problems so far.
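To illustrate the incompatibility: python 2.4 rejects the unified
statement with a SyntaxError, but the same behavior can be had by
nesting. A minimal sketch (run_with_cleanup is a made-up helper for
illustration, not AutoQA code):

```python
# python 2.5+ only (SyntaxError on python 2.4):
#   try:
#       ...
#   except SomeError:
#       ...
#   finally:
#       cleanup()
#
# 2.4-compatible equivalent: nest try/except inside try/finally.
def run_with_cleanup(action, cleanup):
    try:
        try:
            return action()
        except Exception:
            return 'FAILED'
    finally:
        cleanup()  # always runs, pass or fail

calls = []
outcome = run_with_cleanup(lambda: 'PASSED', lambda: calls.append('cleaned'))
assert outcome == 'PASSED'
assert calls == ['cleaned']
```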
Thoughts?
Tim
12 years, 11 months
[AutoQA] #208: update minimon to transport logs between guest and host via virtio
by fedora-badges
#208: update minimon to transport logs between guest and host via virtio
-------------------+--------------------------------------------------------
Reporter: liam | Owner:
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: tests | Version: 1.0
Keywords: |
-------------------+--------------------------------------------------------
Minimon currently has to use the network to transport logs during a
test, but some test cases do not activate the network. In those cases
minimon cannot transport its logs to the host, so the test is reported
as failed even when it actually ran successfully.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/208>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
12 years, 11 months
Decreasing Bodhi Comment Emails From AutoQA
by Tim Flink
Since this has almost nothing to do with the log output of depcheck and
upgradepath, I'm splitting this conversation off into another thread.
As Kamil pointed out, my initial proposal was naive and needs more
thought before implementing anything. We will soon have the ability to
disable emails for comments in Bodhi and we need to figure out how to
best make use of that ability.
Lots of questions here, I'll save my thoughts for replying to myself.
Tim
How would we deal with tests that change state between PASS and FAIL?
- Either 'PASS -> FAIL' or 'FAIL -> PASS'
What emails do we want to get rid of?
- Just certain 'PASS' messages?
- All results messages?
How do we expect maintainers to learn of AutoQA results if we turn off
email notifications?
Can we do this in a way such that it is configurable by maintainer?
- If so, should we attempt this?
- What is the benefit? How big is it?
Can we just send a single email after all tests have passed?
- Again, if so, should we?
Is there another approach that would be better?
12 years, 11 months
How do you test AutoQA?
by Tim Flink
I really can't tell if this is a particularly bad pain point just for
me, since I work remotely and my internet connection isn't great
(frequent SSL timeouts on koji, package downloads take forever), but I
was wondering how you all go about testing AutoQA.
For me, it really depends on what I'm trying to poke at. Most of the
time, I'll run 'watch-koji-builds.py --verbose' if I'm trying to do some
general testing. I keep track of the events that are called if I want to
run something again ('autoqa post-bodhi-update-batch --targettag
dist-f13-updates --arch x86_64 --arch i386 oxygen-gtk-1.0.4-1.fc13' as
an example).
If I'm trying to poke at something very specific, I find myself manually
cobbling together some one-off script that runs something specific, like
depcheck or upgradepath.
Testing the interaction with Koji or Bodhi? I still haven't figured out
a good way to do that. Thus far, I have been hacking in print statements
into bodhi_utils or koji_utils but that doesn't quite cover everything.
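One cheaper alternative to hacking in print statements, at least for
unit-level poking, is to stub the Koji session with a mock object so a
code path can be exercised offline. A hedged sketch using unittest.mock;
latest_build is a hypothetical wrapper for illustration, not an actual
koji_utils function, though getLatestBuilds is a real Koji hub call:

```python
from unittest import mock

def latest_build(session, pkg):
    # Hypothetical wrapper around a Koji XML-RPC call.
    return session.getLatestBuilds('dist-f13-updates', package=pkg)[0]['nvr']

# Fake session: no network, fully scripted return value.
fake_session = mock.Mock()
fake_session.getLatestBuilds.return_value = [{'nvr': 'oxygen-gtk-1.0.4-1.fc13'}]

nvr = latest_build(fake_session, 'oxygen-gtk')
assert nvr == 'oxygen-gtk-1.0.4-1.fc13'
fake_session.getLatestBuilds.assert_called_once_with('dist-f13-updates',
                                                     package='oxygen-gtk')
```

The same idea would scale to Bodhi: record one real server response,
then replay it from the mock instead of hitting the network every run.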
I ask because, speaking for myself, I'm human. The more of a PITA it is
to test something, the more likely I am to skip it or to limit the
number of times I test it. I think that I've been pretty good about
testing stuff before pushing to master or stable, but I'm bothered by
the amount of time I'm wasting on setting up tests and figuring out ways
that I can trick AutoQA into going down code paths I want to test.
I'm not saying that testing is a waste of time, just thinking that some
of this test setup time could be much better spent on coding or
additional testing.
This isn't meant to be aimless complaining. I have some ideas on how to
make the testing of AutoQA easier but I wanted to know if I was missing
something obvious before I went too far down that road.
Thanks,
Tim
12 years, 12 months
Proposed Change in Focus for 0.5.0
by Tim Flink
This ended up being rather long, so as an executive summary:
I'm proposing that we change the 0.5.0 release of AutoQA to focus on
getting the best information and the least noise to package maintainers.
The focus would be on decreasing the number of emails that maintainers
are receiving and improving the understandability of our logs (focusing
on depcheck and upgradepath).
This is a proposal, and I'm hoping to spark a discussion with this
thread; not dictate a change to our roadmap.
------------------------------------------------------------
After the thread in devel@ about the volume of email coming out of bodhi
for AutoQA 'PASSED' comments [1], I started thinking more about some of
my past experiences with user complaints about the information they're
presented with.
My major concern is this: low signal to noise ratio (SNR) in output
leads to users ignoring or refusing to use the tool as a whole. In my
experience, once users start ignoring the tool it is a very difficult,
uphill battle to get them to stop ignoring/hating/distrusting it.
At the moment, I think that we have two major SNR issues in AutoQA:
comment emails coming out of bodhi, and log files (especially depcheck
and upgradepath).
I can't seem to find the book I have that discusses it right now, but
one of the things that I believe strongly is this: testing output that
is difficult for a human (with sufficient background knowledge) to
understand isn't much better than not testing at all, and in some cases
it is actually worse (we're not at that point, though).
As I heard James Bach [2] put it, building software is like building a
house. The developers are the construction workers and the testers are
responsible for shining light on the places of the house that need work.
The people with the light can't directly build the house but the
construction workers can build a better house when the light is shining
on the most important things to fix.
When we have a low SNR, our output is muddled, which isn't very far
from putting tissue paper over the lights in the building analogy. The
light might be in the right places, but it's hard for the builders to
tell exactly where it is pointing, so the most important issues take
longer to fix.
So, what is the point of all this? Basically, I'm proposing that we
hijack our current plans for 0.5.0 and re-focus on improving the SNR and
usability of our current tests. Specifically, I would like to see us
focus on two things:
1) Stop spamming maintainers with unneeded comment updates from bodhi
- The current proposal is covered in #314 [3]
2) Improve our logging - focusing on depcheck and upgradepath
- Goal 1: maintainers should be able to find the information they
need about why their package failed within 30 seconds of
opening the log file.
- Goal 2: users should be able to easily find documentation on what
a test is supposed to do and examples on how to triage a
failure using our logging output.
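As a strawman for Goal 1, the log could lead with the verdict and the
concrete problems before any raw tool output, so the answer is in the
first screenful. A hypothetical sketch; none of these names exist in
AutoQA today, and the sample package/problem text is invented:

```python
def render_log(outcome, problems, raw_output):
    """Verdict first, per-package problems second, raw detail last."""
    lines = ['RESULT: %s' % outcome]
    for pkg, reason in problems:
        lines.append('PROBLEM: %s: %s' % (pkg, reason))
    lines.append('-' * 60)   # everything below is for deep triage only
    lines.append(raw_output)
    return '\n'.join(lines)

log = render_log('FAILED',
                 [('foo-1.0-1.fc15', 'requires bar >= 2.0, which nothing provides')],
                 '... full depsolver transcript ...')
assert log.splitlines()[0] == 'RESULT: FAILED'
```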
How could we get there?
1) This seems pretty straightforward to me. I didn't think that the
bodhi side of things would get implemented quite that quickly, or I
would have said something earlier. From my conversations with Kamil
and James, I don't think there is going to be much resistance to
this solution, though.
2) This would take a bit more work. I'm not sure if the exact goal of
30 seconds is possible but I'm of the opinion that it is a good
place to start. We could start by looking at what we want the output
to look like, determine if that is possible and do some user testing
to get input from actual maintainers (not directly involved with
AutoQA).
I know why things are the way that they are and I don't disagree with
those design decisions. Blamestorming is pointless and
counter-productive, anyway. I'm interested in finding a solution :)
And now, discussion time! Thoughts, suggestions, complaints?
Tim
[1] http://lists.fedoraproject.org/pipermail/devel/2011-April/150901.html
[2] http://www.satisfice.com/aboutjames.shtml
[3] https://fedorahosted.org/autoqa/ticket/314
12 years, 12 months
[AutoQA] #238: Move ResultsDB to Turbogears2 application
by fedora-badges
#238: Move ResultsDB to Turbogears2 application
----------------------------+-----------------------------------------------
Reporter: jskladan | Owner: jskladan
Type: task | Status: new
Priority: major | Milestone: Resultdb
Component: infrastructure | Keywords:
----------------------------+-----------------------------------------------
Because the current implementation has noticeable disadvantages (speed,
database management, ...), I'd like to rewrite the resultsdb backend using
TG2.
This will also add a JSON-RPC interface alongside the existing XML-RPC
interface.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/238>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
12 years, 12 months
[PUSHED] depcheck and koji_utils changes
by Kamil Paral
For those interested, I just pushed to master a few changes in depcheck test and koji_utils library:
commit 53aa452e605b61944d6b657bc09813972be2b262
Author: Kamil Páral <kparal(a)redhat.com>
Date: Wed Apr 27 17:12:56 2011 +0200
depcheck: download only necessary archs
If depcheck is run on i686, don't download x86_64 RPMs because we don't
need them.
commit 549517418759080f18b6e37ae3801b72adbfc879
Author: Kamil Páral <kparal(a)redhat.com>
Date: Wed Apr 27 14:41:48 2011 +0200
depcheck: unittests require x86_64 machine, abort for other archs
commit c7ef84ea635b5c862fc7c087f504eea230786116
Author: Kamil Páral <kparal(a)redhat.com>
Date: Wed Apr 27 13:11:41 2011 +0200
koji_utils: rename ensure_connection() to check_connection()
The new name fits better. Also, check_connection() is now used mainly in
the tests' setup() phase to check whether Koji is running; there is no
need to execute it again in the run_once() stage.
13 years