#436: SSH access to systems in Beaker lab
--------------------------------------+---------------------
Reporter: atodorov | Owner: tflink
Type: defect | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
Currently, systems in the Beaker lab can be accessed only through
bastion.fp.o, which is not as convenient as direct SSH into the system.
There's also the question of whether or not to open the systems directly
to the Internet.
This needs to be discussed with infra. Filing here so it doesn't get lost.
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/436>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
#443: Better format for test compose (TC) and release candidate (RC) requests
----------------------------------------+------------------------
Reporter: jreznik | Owner:
Type: enhancement | Status: new
Priority: major | Milestone: Fedora 21
Component: Blocker/NTH review process | Version:
Keywords: | Blocked By:
Blocking: |
----------------------------------------+------------------------
= problem =
Dennis (in CC) and I discussed how to make the release process, with
Fedora.next in mind, more transparent and bulletproof. One issue is that
the releng request can become pretty messy, with full text included, and
this sometimes leads to errors (omission of packages in a compose, etc.).
= enhancement recommendation =
One possibility is to visibly separate the full-text description (with
bug numbers and reasons - it's good to have the history) from the list
of exact NVRs (maybe in a code block?), and to avoid things like "qt
bundle", so it's easier to pick up the right list (for blockers, FEs and
exceptional tool requests).
Another thing is better coordination between the requester and releng -
marking when and which list was picked up, etc., similar to how the
Go/No-Go decision is stated in the ticket.
Now I'll leave more space to Dennis; maybe an example of how the request
should look, to make it easy to parse, would help.
The long-term (and preferred) solution would be to have automation in
place: the Blocker app talking to a releng interface, a compose
database, a web dashboard, etc.
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/443>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
#437: Need to import daily Fedora snapshots into Beaker
--------------------------------------+---------------------
Reporter: atodorov | Owner: tflink
Type: task | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
In order to perform any meaningful testing, Beaker needs to import more
recent Fedora trees. These could be daily (nightly) snapshots, or
something less frequent depending on available resources.
The tree directory structure needs to be a copy/snapshot of the current
state at the time of import. The reason is that devel trees use a single
URL whose contents are updated in a rolling fashion. We need tree URLs
whose content does not change in order to produce consistent test
results.
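A minimal sketch of what a snapshot-and-import cron job could look like,
assuming rsync access to the devel tree and beaker-import on the lab
controller (all paths, hostnames and the source URL below are made up):

    #!/usr/bin/env python
    # Mirror today's devel tree into a dated directory so that the URL
    # Beaker imports never changes afterwards.
    import datetime
    import subprocess

    SOURCE = 'rsync://dl.example.com/fedora-devel/21/x86_64/os/'
    TREE_ROOT = '/srv/trees/f21'

    date = datetime.date.today().strftime('%Y%m%d')
    dest = '%s/%s/' % (TREE_ROOT, date)

    # --link-dest hardlinks files unchanged since the previous snapshot,
    # so daily snapshots stay cheap on disk.
    subprocess.check_call(['rsync', '-a',
                           '--link-dest=%s/latest' % TREE_ROOT,
                           SOURCE, dest])
    subprocess.check_call(['ln', '-sfn', dest, '%s/latest' % TREE_ROOT])

    # Import the frozen URL into Beaker (run on the lab controller).
    subprocess.check_call(['beaker-import',
                           'http://lab.example.com/trees/f21/%s/' % date])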
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/437>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
#433: blocker proposal form forgets everything after login timeout
--------------------------------------+---------------------
Reporter: kvolny | Owner: tflink
Type: defect | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
I was trying to propose an F20 blocker. I needed to gather information
from multiple bugs, so it took me longer to write the justification.
After finishing and submitting it, I was presented with a login screen.
After logging in again, I was redirected to the proposal form, but it
was completely empty; all the text that took me so long to write was
gone.
(Okay, I'm a smart guy and I had it in the clipboard for such a case,
but if I had forgotten ... booh.)
= bug analysis =
It seems the login code doesn't preserve the other variables in the HTTP
request ...
= fix recommendation =
1) If there is such a short login timeout, the user should be warned
about it (e.g. a countdown timer on the page), and the page should allow
a refresh without submitting the data.
2) Once the login expires, the submitted data should be carried over all
the redirects back to the submission form.
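A minimal sketch of 2), assuming a Flask-style app (the view names and
session handling here are hypothetical, not the actual blocker tracker
code):

    from flask import Flask, redirect, request, session, url_for

    app = Flask(__name__)
    app.secret_key = 'change-me'   # needed for session storage

    @app.route('/propose', methods=['GET', 'POST'])
    def propose():
        if request.method == 'POST' and not session.get('logged_in'):
            # Login expired mid-edit: stash the submitted fields before
            # bouncing to the login page, so nothing is lost.
            session['pending'] = request.form.to_dict()
            return redirect(url_for('login', next=url_for('propose')))
        # On GET (e.g. the post-login redirect), restore any stashed
        # draft and use it to pre-fill the form.
        draft = session.pop('pending', {})
        return 'form pre-filled with: %r' % draft

    @app.route('/login')
    def login():
        session['logged_in'] = True   # stand-in for the real login flow
        return redirect(request.args.get('next', '/'))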
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/433>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
Hi,
Can someone please help me with a depcheck question? See this log file
for the package 'miniz':
https://taskotron.fedoraproject.org/taskmaster//builders/x86_64/builds/6954…
As best as I can tell, the depcheck is failing because there is no
32-bit glibc-headers package in the x86_64 repository. But do we care,
given that the miniz 32-bit package won't be in the x86_64 repository
either?
Thanks,
Scott
On 10/22/2014 09:43 PM, Honza Horak wrote:
> Fedora lacks integration testing (unit testing done during build is not
> enough). Taskotron will be able to fill some gaps in the future, so
> maintainers will be able to set up various tasks after their component
> is built. But even before this works, we can benefit from having the
> tests already available (and running them manually if needed).
>
> With this thread, I'd like to gather ideas and figure out how and where
> to keep the tests. A similar discussion already took place earlier,
> which I'd like to continue here:
> https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
>
> And some short discussion already took place here as well:
> https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/00057…
It's worth clarifying your scope here, as "integration tests" means
different things to different people, and the complexity varies wildly
depending on *what* you're trying to test.
If you're just looking at tests of individual packages beyond what folks
have specified in the %check section of their RPM spec, then this is
exactly the case that Taskotron is designed to cover.
If you're looking at more complex cases like multihost testing, bare
metal testing across multiple architectures, or installer integration
testing, then that's what Beaker was built to handle (and has already
been handling for RHEL for several years).
That level is where you start to cross the line into true system-level
acceptance tests, and you often *want* those maintained independently of
the individual components, in order to catch regressions in behaviour
that other services are relying on.
> Some high level requirements:
> * tests will be written by maintainers or broader community, not a
> dedicated team
> * tests will be easy to run on anybody's computer (but might be
> potentially destructive; providing a secure environment will not be
> part of the tests)
> * tests will be run automatically after related components get built
> (probably by Taskotron)
>
> Where to keep tests?
> a/ in the current dist-git for the related components (problem with
> sharing parts of code, problem of where to keep tests related to
> multiple components)
> b/ in a separate git with functionality similar to dist-git (needs new
> infrastructure, components are not directly connected with tests, won't
> make a mess in the current dist-git)
> c/ in the current dist-git but as ordinary components (no new
> infrastructure needed, but components are not directly connected with
> tests)
Note that any or all of the above may be appropriate, depending on the
exact nature of the specific tests.
For example, there are already some public Beaker installer tests at
https://bitbucket.org/fedoraqa/fedora-beaker-tests for execution on
http://beaker.fedoraproject.org/
> How to deliver tests?
> a/ just use them directly from git (we need to keep some metadata for
> dependencies anyway)
> b/ package them as RPMs (we can keep metadata there; e.g. Taskotron will
> run only tests that have "Provides: ci-tests(mariadb)" after mariadb is
> built; we might also automate packaging the tests as RPMs)
Our experience with Beaker suggests that you want to support both -
running directly from Git tends to be better for test development, while
using RPMs tends to be better for dependency management and sharing test
infrastructure code.
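For the RPM route, the virtual Provides could be as simple as a spec
fragment along these lines (the ci-tests() namespace is the proposal
quoted above, not an existing convention, and the package layout is
hypothetical):

    Name:      mariadb-integration-tests
    Version:   1.0
    Release:   1%{?dist}
    Summary:   Integration tests for mariadb
    License:   MIT
    BuildArch: noarch
    # lets a test runner ask "which test packages cover mariadb?"
    Provides:  ci-tests(mariadb)

A runner could then resolve the tests for a freshly built component with
something like "repoquery --whatprovides 'ci-tests(mariadb)'".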
> Which framework to use?
> People have no time to learn new things, so we should let them write
> the tests in any language and just define some conventions for how to
> run them.
Taskotron already covers this pretty well (even for invoking Beaker
tests, it would make more sense to do that via Taskotron rather than
directly).
Regards,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
After talking with Adam about the depcheck error reported last week, it
seemed like a good idea to start on some basic metrics to get an idea
of how many false positives and false negatives we have in our results.
I've written up some basic code that gets a list of updates from the
stable request messages in datagrepper over a period of time and uses
that list to gather information about bodhi updates.
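The core of the data gathering is roughly this shape (a simplified
sketch, not the actual taskmetrics code; the datagrepper endpoint and
topic are real, the exact message fields are from memory):

    import requests

    URL = 'https://apps.fedoraproject.org/datagrepper/raw'
    params = {
        'topic': 'org.fedoraproject.prod.bodhi.update.request.stable',
        'delta': 7 * 24 * 3600,   # how far back to look, in seconds
        'rows_per_page': 100,
    }
    resp = requests.get(URL, params=params)
    resp.raise_for_status()
    for message in resp.json()['raw_messages']:
        # each message names the update that was submitted to stable
        print(message['msg']['update']['title'])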
At the moment, the code works well enough to produce lists of updates to
examine, but there are still corner cases which aren't handled well
(specifically, updates without 32-bit builds), and there's no
differentiation between depcheck runs on updates-testing and updates.
If anyone is interested, I put the code up on bitbucket:
https://bitbucket.org/fedoraqa/taskmetrics
Tim
Some thoughts:
>> Where to keep tests? a/ in current dist-git for related components
>> (problem with sharing parts of code, problem where to keep tests
>> related for more components) b/ in separate git with similar
>> functionality as dist-git (needs new infrastructure, components are
>> not directly connected with tests, won't make mess in current
>> dist-git) c/ in current dist-git but as ordinary components (no new
>> infrastructure needed but components are not directly connected
>> with tests)
>>
>> How to deliver tests? a/ just use them directly from git (we need
>> to keep some metadata for dependencies anyway) b/ package them as
>> RPMs (we can keep metadata there; e.g. Taskotron will run only
>> tests that have "Provides: ci-tests(mariadb)" after mariadb is
>> built; we also might automate packaging tests to RPMs)
To answer both of these: the plan is to keep Taskotron tasks in their
own git repos; currently these live at (0).
To run a task, Taskotron sets up a disposable task client and then git
clones the task to be run. This solves the delivery issue and allows a
continuous-integration-like workflow.
>> Structure for tests? a/ similar to what components use (branches
>> for Fedora versions) b/ only one branch. Test maintainers should be
>> allowed to behave the same way package maintainers do -- one person
>> likes keeping branches identical and uses "%if %fedora" macros,
>> someone else likes clean specs and would rather maintain more
>> branches -- we won't find one structure that fits all, so allowing
>> both ways seems better.
This is something we'll need to figure out, but I suspect git branches
will be involved.
>> Which framework to use? People have no time to learn new things, so
>> we should let them write the tests in any language and just define
>> some conventions for how to run them.
You'll need to at least define the task in a YAML file, and the output
will need to be TAP. The example task is at (1).
> TAP (Test Anything Protocol) FTW. It really makes sense when you're
> trying to combine tests from multiple different languages, testing
> frameworks, etc.
>
> Stef
>
Indeed, which is why we settled on it.
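For anyone unfamiliar with it, the whole protocol is basically a plan
line plus one "ok"/"not ok" line per test, which any language can emit
with plain print statements. A trivial Python illustration (not
taskotron code):

    results = [('rpmlint clean', True), ('no new errors', False)]

    # TAP: first the plan ("1..N"), then one line per test.
    print('1..%d' % len(results))
    for num, (name, passed) in enumerate(results, 1):
        print('%s %d - %s' % ('ok' if passed else 'not ok', num, name))

which prints:

    1..2
    ok 1 - rpmlint clean
    not ok 2 - no new errors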
John.
(0) https://bitbucket.org/fedoraqa
(1) https://bitbucket.org/fedoraqa/task-rpmlint
taskotron, schmaskotron, the really big news of the day is relval 1.1!
It's tagged in git, uploaded to
https://www.happyassassin.net/relval/releases/relval-1.1.tar.xz and
https://www.happyassassin.net/wikitcms/releases/wikitcms-1.1.tar.xz ,
and available from the repo at
https://www.happyassassin.net/wikitcms/repo/wikitcms.repo .
relval has two new sub-commands: user-stats and testcase-stats. They
re-implement stats-wiki.py and testcase_stats/testcase-stats.py from
fedora-qa.git's stats/, respectively. Both should be improvements on the
previous versions, as well as using wikitcms and dropping the direct MW
API calls.
testcase-stats in particular got rather better. It now considers 'unique
tests', not just test cases. It (along with wikitcms) considers three
things:
* test case page name
* test 'name' - the link text in the "Test Case" cell, if it is not just
the wiki page name
* the section of the wiki page in which the test appears
so results for the same test but with different 'names', or appearing
multiple times on the same page in different sections (as with the Cloud
and Desktop pages), are not mashed together but properly tracked
separately.
The 'details' pages for unique tests now provide an accurate coverage
percentage for the test for each page on which it appeared, break down
the results by environment, and list bugs associated with 'pass' and
'warn' results as well as 'fail' results.
The summary page is split into tables according to the sections in the
underlying page (note: if a new section is added between two composes,
or a new test case added, or a section renamed or a test case moved,
relval will currently treat this as a 'new' test / section and render it
at the bottom).
The results 'bitmap' is smarter: it knows all the composes that exist in
the whole result set, and includes a completely empty column if the
'unique test' did not exist in that compose. This helps with tests that
got moved around or renamed or whatever, because all the bitmaps line up
- you can actually see which composes any given 'unique test' existed
in.
You can pass --milestone to limit the information to a specific
milestone, if you like (but you can still run on an entire release, and
the wikitcms version uses some wikitcms foo to make sure the results
don't include Rawhide and nightly and PowerPC and whatever pages).
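So, for example, something along these lines (from memory; check --help
for the exact options):

    relval testcase-stats --milestone Beta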
Please do grab it, play around, and see if you can break it! I've run it
over Fedora 20, Fedora 21, Fedora 18 and Fedora 14 with good results.
You can see output for various releases here:
https://www.happyassassin.net/testcase_stats/
21's Install/Installation results are a bit messy because of the
extensive changes to that page during Alpha and Beta (it was renamed
from Install to Installation, for a start...); tomorrow I'll spend a bit
of time modifying the pre-Beta TC2 pages to match the current format, so
relval's results look nicer. I *did* do some mental design on a sort of
pre-processor function which would try to handle changes to the page
layout, but it got very complex in a hurry and I figured it makes more
sense to just clean up the pages when this sort of thing happens. The 20
results are fairly nice and clean, just a couple of table renames in
Desktop.
It just occurred to me that adding a column for 'last 100% coverage'
would be trivial...and might be useful.
If this looks good to folks, maybe we could put it into fedora-qa.git
and add it to Phabricator for issue tracking.
It would, at this point, not be very difficult to add a result reporting
CLI, either...I suppose I could do that if I get time, just for fun.
Thanks!
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
This has been a long time coming, but AutoQA is no longer scheduling
jobs and Taskotron is now running all of the automated checks on
packages/updates.
The changeover should be transparent to most people - the same checks
are being run in pretty much the same situations. Until Bodhi 2.0 is
deployed, we're planning to continue providing feedback on updates in
the form of bodhi comments.
While Taskotron is a huge step forward in terms of having a capable and
maintainable automation system for Fedora, it isn't perfect on the UX
front. That said, there has never been much traffic to the AutoQA
instance, which is why we haven't focused as much on the frontend.
This changeover is just the beginning - we're still hard at work adding
features to Taskotron. The first major feature to be added is disposable
test clients [1], which will pave the way for folks to submit tasks to
be run in Taskotron, among other oft-requested features.
Huge props to the folks who have helped make this happen - the usual
suspects from Fedora QA ( Kamil Páral, Josef Skládanka, Martin
Krizek, Lukáš Brabec, Jan Sedlák, Petr Schindler, John Dulaney, Mike
Ruckman), Ralph Bean (patches to ResultsDB), Kevin Fenzi and Stephen
Smoogen (lots of help with and patience during deployment) and many,
many other folks who have contributed ideas, bug reports and/or moral
support.
If you're interested in learning more about Taskotron or helping us with
dev tasks, I've included some reference links at the end of this email.
As always, please come find us (on the qadevel@ list or in #fedora-qa)
if you have any questions or concerns.
Tim
[1] https://phab.qadevel.cloud.fedoraproject.org/T298
General Wiki Page for Taskotron:
- http://fedoraproject.org/wiki/Taskotron
Libtaskotron Documentation:
- https://docs.qadevel.cloud.fedoraproject.org/libtaskotron/latest/
QA Devel Issue Tracking and Project Management:
- https://phab.qadevel.cloud.fedoraproject.org/
"Taskotron and Me" Talk from Flock 2014:
- http://youtu.be/jMTUFCFJS6o
Taskotron Tagged Articles on tflink's Blog:
- http://tirfa.com/tag/taskotron.html