[AutoQA] #298: test.py - split postprocess_iteration reporting into standalone methods
by fedora-badges
#298: test.py - split postprocess_iteration reporting into standalone methods
----------------------+-----------------------------------------------------
Reporter: jskladan | Owner:
Type: task | Status: new
Priority: minor | Milestone: 0.5.0
Component: core | Keywords:
----------------------+-----------------------------------------------------
At the moment, postprocess_iteration() handles all of the reporting and
sending of results. When run_once() ends, this method takes the content of
self.{result, summary, highlights, outputs} and automagically sends an
email to the mailing list, creates the output.log file, etc.
This approach was very reasonable for tests which actually test just one
thing (update/build/etc.), because only a single result needs to be sent.
With the new tests like Depcheck and Upgradepath, we'd love to be able to
force-send several results as the test proceeds - e.g. to be able to send
a Bodhi comment for every update, but send just one overall email...
So what I'd like to have is:
1) Take postprocess_iteration() and split it into standalone methods
according to the 'destination' of the report (e.g. send_email(),
create_output_log(), ...). These will be able to take {result, summary,
highlights, outputs} parameters, which will override the 'automagical'
self.{result, summary, highlights, outputs}. I.e. if I set only the
result parameter, then summary, highlights and outputs will be filled in
from the 'self' variables, and so on.
2) Take the bodhi-reporting method (as used in depcheck and upgradepath),
and move it to the AutoQATest class.
3) Add a method report_results(), again with the same parameters ({result,
summary, highlights, outputs}) plus one more (e.g. a dictionary) to control
which reporting methods to call (i.e. to be able to say "send email, store
in resultsdb, but do not send a Bodhi comment"). By default, this will call
all the reporting routines.
4) In postprocess_iteration(), call report_results(), so the
'automagical reporting' behaviour is unchanged.
5) Add a variable (True by default) which turns the call to
report_results() in postprocess_iteration() on or off. It needs to be an
attribute of the AutoQATest class, since postprocess_iteration() does not
take arguments.
This is for the wrapper-writer, to be able to control whether results are
sent in postprocess_iteration() or not (imagine Upgradepath - we will
report all the results in run_once(), and do not need
postprocess_iteration() to actually send anything).
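A minimal sketch of what the proposed split could look like. All method
and attribute names here (autoreport, send_email(), create_output_log(),
the destinations dictionary, the sent list) are illustrative assumptions,
not the final API:

```python
class AutoQATest(object):
    """Illustrative sketch of the proposed reporting split (ticket #298)."""

    # 5) attribute the wrapper-writer can flip to suppress automatic reporting
    autoreport = True

    def __init__(self):
        self.result = None
        self.summary = ''
        self.highlights = ''
        self.outputs = ''
        self.sent = []  # record of dispatched reports, for demonstration only

    def _fill(self, result, summary, highlights, outputs):
        # explicit parameters override the 'automagical' self.* values
        return (result if result is not None else self.result,
                summary if summary is not None else self.summary,
                highlights if highlights is not None else self.highlights,
                outputs if outputs is not None else self.outputs)

    # 1) standalone per-destination methods
    def send_email(self, result=None, summary=None, highlights=None, outputs=None):
        self.sent.append(('email', self._fill(result, summary, highlights, outputs)))

    def create_output_log(self, result=None, summary=None, highlights=None, outputs=None):
        self.sent.append(('log', self._fill(result, summary, highlights, outputs)))

    # 3) one entry point, with a dictionary controlling which destinations fire
    def report_results(self, result=None, summary=None, highlights=None,
                       outputs=None, destinations=None):
        destinations = destinations or {'email': True, 'log': True}
        if destinations.get('email'):
            self.send_email(result, summary, highlights, outputs)
        if destinations.get('log'):
            self.create_output_log(result, summary, highlights, outputs)

    # 4) postprocess_iteration keeps the old automagical behaviour
    def postprocess_iteration(self):
        if self.autoreport:
            self.report_results()
```

With this shape, an Upgradepath-style wrapper would set autoreport = False
and call report_results() itself from run_once() for every update.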
-------------------------------
This is also preparation for resultsdb, because it solves the problem of
reporting multiple results from one test. Turning resultsdb reporting on
then becomes a matter of adding one method and calling it from
report_results() (and adding more types of reporting [I'm looking at you,
fedora message bus!] will be just as simple in the future).
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/298>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
[AutoQA] #339: update mediakit_sanity to skip boot.iso existence test or warn boot.iso's absence
by fedora-badges
#339: update mediakit_sanity to skip boot.iso existence test or warn boot.iso's
absence
----------------------+-----------------------------------------------------
Reporter: hongqing | Owner:
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: core | Keywords:
----------------------+-----------------------------------------------------
Currently, the existence of all images listed in the .treeinfo
[images-i386] section is tested. In DVD.iso, boot.iso is listed in
.treeinfo, but it does not actually exist, so the mediakit_sanity test
always fails. I think the solution is to skip the boot.iso existence test,
since it is not required in DVD.iso, or we could just print a warning
rather than fail the test. Or any other ideas?
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/339>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Re: [AutoQA] #316: depcheck: extract possible cause of failure
by fedora-badges
#316: depcheck: extract possible cause of failure
-------------------------+--------------------------------------------------
Reporter: kparal | Owner: jskladan
Type: enhancement | Status: closed
Priority: major | Milestone: 0.5.0
Component: tests | Resolution: fixed
Keywords: |
-------------------------+--------------------------------------------------
Changes (by kparal):
* status: assigned => closed
* resolution: => fixed
Comment:
Pushed into master as 8a2b9ce2410009da8a4715ea0e33342ef46d4b2f .
I believe this doesn't need any documentation changes, because it will be
covered in ticket #326.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/316#comment:3>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Re: [AutoQA] #318: Provide access to test documentation
by fedora-badges
#318: Provide access to test documentation
---------------------------+------------------------------------------------
Reporter: kparal | Owner: kparal
Type: enhancement | Status: assigned
Priority: major | Milestone: 0.5.0
Component: documentation | Resolution:
Keywords: |
---------------------------+------------------------------------------------
Changes (by kparal):
* component: core => documentation
Comment:
Pushed into master as 8a2b9ce2410009da8a4715ea0e33342ef46d4b2f .
We could probably mention this in our wiki documentation ("if you create
your own test, you can provide some documentation for it at <url>...").
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/318#comment:3>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Re: [AutoQA] #319: Create HTML log output if possible
by fedora-badges
#319: Create HTML log output if possible
---------------------------+------------------------------------------------
Reporter: kparal | Owner: vhumpa
Type: task | Status: new
Priority: major | Milestone: 0.5.0
Component: documentation | Resolution:
Keywords: |
---------------------------+------------------------------------------------
Changes (by kparal):
* component: core => documentation
Comment:
Pushed into master as 8a2b9ce2410009da8a4715ea0e33342ef46d4b2f .
This will definitely need some documentation adjustments.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/319#comment:8>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
[PATCH] pretty logs
by Kamil Paral
This patch involves tickets:
https://fedorahosted.org/autoqa/ticket/315 - Create per-item logs for multi-item tests
https://fedorahosted.org/autoqa/ticket/316 - depcheck: extract possible cause of failure
https://fedorahosted.org/autoqa/ticket/318 - Provide access to test documentation
https://fedorahosted.org/autoqa/ticket/298 - test.py - split postprocess_iteration reporting into standalone methods
https://fedorahosted.org/autoqa/ticket/319 - Create HTML log output if possible
and it is the result of my, jskladan's and vhumpa's work.
I'll describe the patch in detail here, but I decided not to attach it or
post it to reviewboard, because it is simply too large. I also expect that
it will receive a little more polish in the following days, so it's not
absolutely finalized yet. But if you see anything you would like to change,
please send your comments.
You can display the patch by command:
$ git fetch
$ git diff origin/master..origin/pretty -M
Most important changes:
1. TestDetail class
All of a test's outcomes are now stored in a single class, TestDetail. It
works as a simple container for the test result, summary, output,
highlights and a few additional pieces of information. It is now easier to
call methods that operate on this information - you just pass them your
TestDetail instance.
By default, every test has a "main" TestDetail instance created in
self.detail. You can access self.detail to change the values of result,
summary, etc. For simple single-item tests (like rpmlint), this main
TestDetail instance is all they need. For complex multi-item tests (like
upgradepath) you need to create a TestDetail instance for every "report"
you want to create. In the case of upgradepath and depcheck this means
creating a separate TestDetail for every Bodhi update tested.
2. TestDetail.update_result() method
The TestDetail class keeps a list of result keywords sorted by importance.
If you use the TestDetail.update_result(result) method, it will set the
test result only if the result you provide is more important (i.e.
"worse") than the current one. This way you can easily handle all error
states inside your script and include just this as the last line:
self.detail.update_result('PASSED')
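The mechanism described above can be sketched like this. The exact keyword
list and its ordering are an assumption for illustration, not the actual
AutoQA list:

```python
class TestDetail(object):
    # Result keywords ordered from least to most important ("worst" last).
    # The exact list is an assumption; the real one lives in the patch.
    RESULT_ORDER = ['PASSED', 'INFO', 'FAILED', 'ABORTED', 'CRASHED']

    def __init__(self):
        self.result = None

    def update_result(self, result):
        """Overwrite self.result only if `result` is worse than the
        current value, so error states set earlier are never downgraded."""
        if self.result is None:
            self.result = result
        elif self.RESULT_ORDER.index(result) > self.RESULT_ORDER.index(self.result):
            self.result = result
```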
3. self.log() method
We used to do pretty complicated work when printing and saving some
output. E.g. you would print it out on stderr, then append it to
self.outputs and maybe even self.highlights. Now you just use:
self.log(message, stderr=True, highlight=True)
For multi-item tests (like upgradepath) you can also use self.log() to print to several TestDetail instances at once.
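A rough sketch of how such a log() helper could work, including the
multi-detail case. The parameter names (details, printout) and the
container attributes are assumptions based on the description above:

```python
import sys

class TestDetail(object):
    """Minimal stand-in for the real TestDetail container."""
    def __init__(self):
        self.output = []
        self.highlights = []

class AutoQATest(object):
    def __init__(self):
        self.detail = TestDetail()

    def log(self, message, details=None, stderr=False, highlight=False,
            printout=True):
        """Append `message` to one or more TestDetail instances.

        `details` may be a list of TestDetail instances (for multi-item
        tests); by default only the main self.detail is used.
        """
        for detail in details or [self.detail]:
            detail.output.append(message)
            if highlight:
                detail.highlights.append(message)
        if printout and stderr:
            sys.stderr.write(message + '\n')
```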
4. automatic log creation, different terminology
We now have several types of logs (debug log, full.log and the pretty
log). We want to keep them transparent and simple for test authors. The
debug log is just for us, the developers, so we don't mention it. The
former output.log was renamed to full.log and we mention it as little as
possible, preferably not at all. The pretty log is referred to simply as
the "log" or "test log", and that's the only one we hope test authors will
work with.
The full log (full.log) is created automatically at the end of every test
run. It contains everything that has been logged via the self.log() method
(provided that the printout parameter is True). It does not contain the
output of "print foo" statements (that goes only to the debug log).
The test log (aka pretty log) is also created automatically at the end of
every test run and is populated from the main TestDetail instance
(self.detail) - except when the test author has already created a test log
manually, in which case we don't create another one automatically. The
test log is created as HTML, and this file is used for reporting results
everywhere (it is either sent or linked).
5. self.post_results() method
There is now a single method for reporting results. It takes care of
everything - creating the log file, sending email, sending the opt-in
email and sending the Bodhi comment. The first two are mandatory and the
last two are optional - you have to provide the correct parameters.
By default the results are posted automatically at the end of the test
run. This means that for simple tests where you don't need anything fancy
(like Bodhi comments or opt-in emails), you don't have to care about
reporting results at all - everything is handled for you. For more
advanced uses you have to call the self.post_results() method yourself.
For multi-item tests like upgradepath or depcheck you'll need to call
self.post_results(test_detail) for every "item" (Bodhi update in our case)
you want to report results for.
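The multi-item usage pattern described above, reduced to a sketch. The
class names, the posted list and the send_bodhi_comment parameter are
illustrative assumptions, not the patch's real signatures:

```python
class TestDetail(object):
    """Minimal per-item result container."""
    def __init__(self, item=None):
        self.item = item
        self.result = None

class AutoQATest(object):
    def __init__(self):
        self.detail = TestDetail()
        self.posted = []  # records what was reported, for demonstration

    def post_results(self, detail=None, send_bodhi_comment=False):
        """Create the log, send email, and optionally comment in Bodhi
        for the given TestDetail (the main self.detail by default)."""
        detail = detail or self.detail
        self.posted.append((detail.item, detail.result, send_bodhi_comment))

class UpgradepathLike(AutoQATest):
    def run_once(self, updates):
        # one TestDetail, and one report, per tested Bodhi update
        for update in updates:
            detail = TestDetail(item=update)
            detail.result = 'PASSED'
            self.post_results(detail, send_bodhi_comment=True)
```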
6. Test logs are in HTML
Test logs are now in HTML. This will allow us to do pretty formatting,
nice highlighting, etc. When sent as email, only a plaintext overview
header and a link to the proper HTML log are included in the email body.
Example: http://kparal.fedorapeople.org/autoqa/upgradepath2.html
7. All tests modified
All tests were rewritten to the new architecture. Most of them don't take
advantage of the new log features yet.
rpmlint example: http://kparal.fedorapeople.org/autoqa/rpmlint.html
Upgradepath and depcheck received more love, and they now create nice
per-update logs with just the relevant info included. Depcheck now tries
to filter out irrelevant messages and keep only the interesting ones in
the log.
upgradepath example: http://kparal.fedorapeople.org/autoqa/upgradepath.html
depcheck example: http://kparal.fedorapeople.org/autoqa/depcheck.html
We were working hard on it even today. Some code may still be rough. All
comments are welcome.
Thanks,
Kamil
Unit Testing Dependencies for AutoQA
by Tim Flink
Kamil asked a good question in his review of the comment spam reduction
code that made me realize I haven't explained enough about how the unit
test code currently works.
The question was whether python-virtualenv and python-pip (required for
running 'make test') should be listed as build dependencies in autoqa.spec.
My feeling on this is 'probably not' because of the way that those
dependencies are being used. I'm open to other ideas, though.
The current way that I have implemented 'make test' does the following:
- create virtualenv if it doesn't already exist
* install required packages (py.test etc.) if the env is being
created
- activate virtualenv
- run tests
- deactivate virtualenv
The main reason I wrote it this way is that an appropriate version of
py.test still hasn't been reviewed for inclusion in Fedora, and the
version that does exist is not new enough to run these tests. I dropped
the ball on that one; once we get 0.5.0 released, I will resume work on
getting the needed packages reviewed so that they get into rawhide before
the F16 branch.
I'm not sure that we want to be using virtualenv or pip during rpm
packaging. I would rather see direct execution of the tests during the
rpm build, and I'm not even sure that the use of virtualenv for
'make test' is acceptable under Fedora packaging standards. Then again,
AutoQA isn't a Fedora package yet, so maybe we don't need to worry about
this now.
Thoughts?
Tim
[PATCH] AutoQA Comment Spam Reduction
by Tim Flink
This patch involves fixes for tickets in preparation for 0.5.0 release:
* https://fedorahosted.org/autoqa/ticket/314
- Decrease the volume of 'PASSED' email sent to maintainers from
bodhi
* https://fedorahosted.org/autoqa/ticket/258
- implement 'make test'
This is a smaller change than the pretty log stuff, so I created a
review request in reviewboard based off of a diff from master:
- https://fedorahosted.org/reviewboard/r/149/
Otherwise, you can check the diff from master using the commands:
$ git fetch
$ git diff origin/master..origin/tflink -M
Important changes:
1. Addition of bodhi_update_state:
2. bodhi_utils._is_bodhi_comment_email_needed()
3. Adding a parameter to bodhi_post_testresult()
4. [bodhi_email] section added to autoqa.conf
5. Tests for the code associated with #314
------------------------------------------------------------
| Addition of bodhi_update_state:
------------------------------------------------------------
This object encapsulates information on the state of tests for a given
bodhi update. The idea was to encapsulate the logic and make it
relatively easy to understand.
------------------------------------------------------------
| bodhi_utils._is_bodhi_comment_email_needed()
------------------------------------------------------------
This was added to bodhi_utils to encapsulate the logic behind whether or
not to send an email. _parse_result_from_comment() and
_get_bodhi_email_config() were added to support this method.
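The decision that _is_bodhi_comment_email_needed() encapsulates could be
distilled to something like the following. This is a hypothetical
reduction of the ticket #314 behaviour (cut down on repeated PASSED
emails), not the actual function from bodhi_utils:

```python
def is_bodhi_comment_email_needed(previous_result, new_result,
                                  send_on_passed=False):
    """Decide whether a Bodhi comment should also trigger an email.

    Hypothetical distillation of the #314 rule: always mail on a
    non-PASSED result or when the result changed; a repeated PASSED is
    mailed only when the configuration explicitly asks for it.
    """
    if new_result != 'PASSED':
        return True
    if previous_result != new_result:
        return True
    return send_on_passed
```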
------------------------------------------------------------
| Adding a parameter to bodhi_post_testresult()
------------------------------------------------------------
Since the state of an update's tests depends on the koji tag currently
being tested (upgradepath doesn't run on *-updates-testing), the current
koji tag became an important part of determining whether or not to send
an email.
------------------------------------------------------------
| [bodhi_email] section added to autoqa.conf
------------------------------------------------------------
To support the requirement of changing email behavior without code
modifications, more configuration settings were needed. Adding a new
section to autoqa.conf was deemed to be the easiest and most
self-contained method of getting this done.
------------------------------------------------------------
| Tests for the code associated with #314
------------------------------------------------------------
This kind of code is a poster child for unit tests, in my mind:
relatively simple code with subtle changes in logic based on many
changing inputs. The tests are written for py.test > 2.0.1 and live in
lib/autoqa/tests/