[Fedora QA] #436: SSH access to systems in Beaker lab
by fedora-badges
#436: SSH access to systems in Beaker lab
--------------------------------------+---------------------
Reporter: atodorov | Owner: tflink
Type: defect | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
Currently, systems in the Beaker lab can only be accessed through
bastion.fp.o, which is not as convenient as direct SSH into the system.
There's also the question of whether or not to open the systems
directly to the Internet.
This needs to be discussed with infra. Filing here so it doesn't get
lost.
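For reference, a minimal sketch of reaching a lab machine through the
bastion as things stand today, assuming a recent OpenSSH with ProxyJump
(-J); the target hostname below is a placeholder, not real inventory:

    # Sketch only: hop through bastion.fedoraproject.org with ssh -J.
    # The target hostname is hypothetical.
    import subprocess

    BASTION = "bastion.fedoraproject.org"
    TARGET = "beaker-system01.qa.fedoraproject.org"  # placeholder

    def run_on_lab_system(command):
        """Run a command on the lab system, jumping through the bastion."""
        return subprocess.check_call(["ssh", "-J", BASTION, TARGET, command])

    run_on_lab_system("uname -a")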
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/436>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
[Fedora QA] #443: Better format for test compose (TC) and release candidate (RC) requests
by fedora-badges
#443: Better format for test compose (TC) and release candidate (RC) requests
----------------------------------------+------------------------
Reporter: jreznik | Owner:
Type: enhancement | Status: new
Priority: major | Milestone: Fedora 21
Component: Blocker/NTH review process | Version:
Keywords: | Blocked By:
Blocking: |
----------------------------------------+------------------------
= problem =
With Dennis (in CC), we discussed how to make the release process,
with Fedora.next in mind, more transparent and bulletproof. One issue
is that the releng request can become pretty messy, with full text
included, which sometimes leads to errors (omission of packages in the
compose, etc.).
= enhancement recommendation =
One possibility is to visibly separate the full-text description (with
bug numbers and reasons - it's good to have the history) from the list
of exact NVRs (maybe in a code block?), and to avoid things like "qt
bundle", so it's easier to pick up the right list (for blockers, FEs +
exceptional tools requests).
Another thing is better coordination between the requester and releng -
marking when and which list was picked up, etc., similar to the way the
Go/No-Go decision is stated in the ticket.
Now I'll leave more space to Dennis; maybe an example of how the
request should look, to make it easy to parse, would help.
The long-term (and preferred) solution would be to have automation in
place: the Blocker app talking to a releng interface, a compose
database, a web dashboard, etc.
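To illustrate the separation idea, here is a rough sketch (my own, not
an agreed format) of how trivially the exact list could be picked up if
the NVRs sat in a clearly delimited block:

    # Sketch only: pick the exact NVR list out of a ticket comment,
    # assuming a hypothetical delimited block like:
    #
    #   === NVRS ===
    #   firefox-27.0-1.fc20
    #   qt-4.8.5-12.fc20
    #   === END ===
    #
    # Everything outside the block stays free-form (bugs, reasons, history).

    def extract_nvrs(comment):
        """Return the NVRs listed between the NVRS/END markers."""
        nvrs = []
        inside = False
        for line in comment.splitlines():
            line = line.strip()
            if line == "=== NVRS ===":
                inside = True
            elif line == "=== END ===":
                inside = False
            elif inside and line:
                nvrs.append(line)
        return nvrs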
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/443>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
[Fedora QA] #437: Need to import daily Fedora snapshots into Beaker
by fedora-badges
#437: Need to import daily Fedora snapshots into Beaker
--------------------------------------+---------------------
Reporter: atodorov | Owner: tflink
Type: task | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
In order to perform any meaningful testing, Beaker needs to import
more recent Fedora trees. These could be daily (nightly) snapshots or
less frequent imports, depending on available resources.
The tree directory structure needs to be a copy/snapshot of the current
state at the time of import. The reason is that devel trees use a
single URL whose contents are updated in a rolling fashion; we need
tree URLs whose content does not change in order to produce consistent
test results.
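A rough sketch of the snapshot step, assuming the nightly tree is
rsynced into a dated directory so that Beaker gets a URL whose contents
never change (the source module and destination root are placeholders;
the actual Beaker import is a separate step):

    # Sketch only: copy today's rawhide tree into a dated, immutable
    # directory. Source and destination paths are placeholders.
    import datetime
    import os
    import subprocess

    SOURCE = "rsync://dl.fedoraproject.org/fedora-linux-development/rawhide/x86_64/os/"
    DEST_ROOT = "/srv/beaker/trees"  # assumed local storage served over HTTP

    def snapshot_rawhide_tree():
        stamp = datetime.date.today().strftime("%Y%m%d")
        dest = os.path.join(DEST_ROOT, "rawhide-%s" % stamp, "x86_64", "os")
        if not os.path.isdir(dest):
            os.makedirs(dest)
        # -a keeps the tree layout intact; the dated path gives Beaker a
        # stable URL to import from.
        subprocess.check_call(["rsync", "-a", SOURCE, dest + "/"])
        return dest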
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/437>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
[Fedora QA] #433: blocker proposal form forgets everything after login timeout
by fedora-badges
#433: blocker proposal form forgets everything after login timeout
--------------------------------------+---------------------
Reporter: kvolny | Owner: tflink
Type: defect | Status: new
Priority: major | Milestone:
Component: Blocker bug tracker page | Version:
Keywords: | Blocked By:
Blocking: |
--------------------------------------+---------------------
= bug description =
I was trying to propose an F20 blocker. I needed to gather information
from multiple bugs, so it took me a while to write the justification.
After finishing and submitting it, I was presented with a login screen.
After logging in again, I was redirected back to the proposal form, but
it was completely empty - all the text that took me so long to write
was gone. (Okay, I'm a smart guy and I had it in the clipboard for such
a case, but if I had forgotten ... booh.)
= bug analysis =
It seems the login code doesn't carry over the other variables in the
HTTP request ...
= fix recommendation =
1) If there is such a short login timeout, the user should be warned
about it (e.g. a countdown timer on the page), and the page should
allow a refresh without submitting the data.
2) Once the login expires, the submitted data should be carried over
across all the redirects back to the submission form (see the sketch
below).
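A minimal sketch of option 2, assuming a Flask-style app like the
blocker tracker; view, field, and endpoint names are made up:

    # Sketch only: stash the submitted proposal in the session before
    # bouncing to login, and restore it when the user comes back.
    # Assumes a Flask-style app; names are hypothetical.
    from flask import Flask, redirect, request, session, url_for

    app = Flask(__name__)
    app.secret_key = "change-me"

    @app.route("/propose", methods=["GET", "POST"])
    def propose_blocker():
        if request.method == "POST":
            if not session.get("logged_in"):
                # Login expired: keep the form contents instead of dropping them.
                session["pending_proposal"] = request.form.to_dict()
                return redirect(url_for("login", next=url_for("propose_blocker")))
            return "proposed"  # normal submission handling would go here
        # Coming back from login: refill the form with whatever was stashed.
        draft = session.pop("pending_proposal", {})
        return "form prefilled with: %r" % draft

    @app.route("/login", methods=["GET", "POST"])
    def login():
        # Stub: the real app authenticates here, then redirects back.
        session["logged_in"] = True
        return redirect(request.args.get("next") or url_for("propose_blocker"))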
--
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/433>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance
Task Scheduling for depcheck
by Tim Flink
This has kinda been an elephant in the room that I've talked about
with a few people, but we haven't had much of a discussion about it
yet. For the
sake of simplicity, I'm going to be talking specifically about depcheck
but most of this applies to upgradepath and possibly other tasks.
The base problem is that there's a bit of an impedance mismatch between
how we pretend to schedule depcheck and how depcheck actually works.
From the outside, it looks like we run depcheck on a single update when
that update is created or changed. In reality, depcheck runs on an
entire koji tag when that tag or any of its builds changes.
Another way to summarize what depcheck does is:
Verify that the dependency trees in a given set of repositories are
sane and identify any problem builds which disrupt that sanity.
Just because a build didn't break the dep tree when it was first checked
doesn't mean that it won't be involved in breaking the tree when
another build is added. Along the same lines, just because a build
fails when first checked doesn't mean that it needs to be changed in
order to pass - it could require another build that hasn't been
finished or checked yet. Running depcheck on a single update/build
can't work because the effect a build has on dep trees is, by
definition, not something that can be determined by looking at that
build in isolation.
We got around this in AutoQA because we did scheduling with a cron job
and ran depcheck-old more often than we actually needed to (once for
every update that changed since the last cron job). When depcheck-old
ran, it could update the status of any update associated with the
builds in a koji tag. Now that we're moving to scheduling based on
fedmsg, it's not as easy to ignore the fact that depcheck doesn't
really work on a per-update basis.
I have some ideas about how to address this that are variations on a
slightly different scheduling mantra:
1. Collect update/build change notifications
2. Run depcheck on affected koji tags at most every X minutes
3. Report changes in build/update status on a per-build/per-update
basis at every depcheck run
This way, we'd be scheduling actual depcheck runs less often but in a
way that is closer to how it actually works. From a maintainer's
perspective, nothing should change significantly - notifications will
arrive shortly after changes to a build/update are submitted.
To accomplish this, I propose the following:
1. Add a separate buildbot builder to handle depcheck and similar tasks
by adding a "fuse" to the actual kickoff of the task. The first
received signal would start the fuse and after X minutes, the task
would actually start and depcheck would run on the entire tag.
2. Enhance taskotron-trigger to add the concept of a "delayed trigger"
which would work with the existing bodhi and koji listeners, but
instead of immediately scheduling tasks based on incoming fedmsgs,
would use the fused builder described in 1 (see the sketch below).
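To make the "fuse" more concrete, here's a rough sketch (not tied to
the actual buildbot or trigger code) of debouncing notifications per
koji tag and firing one depcheck run per tag after X minutes:

    # Sketch only: the first notification for a tag arms a timer, later
    # notifications within the window are absorbed, and when the timer
    # fires depcheck runs once over the whole tag. Names are illustrative.
    import threading

    FUSE_MINUTES = 15

    class DepcheckFuse(object):
        def __init__(self, run_depcheck):
            self.run_depcheck = run_depcheck  # callable taking a koji tag
            self.timers = {}                  # koji tag -> armed Timer
            self.lock = threading.Lock()

        def notify(self, koji_tag):
            """Called for every bodhi/koji fedmsg that touches koji_tag."""
            with self.lock:
                if koji_tag in self.timers:
                    return  # fuse already burning for this tag
                timer = threading.Timer(FUSE_MINUTES * 60,
                                        self._fire, args=(koji_tag,))
                self.timers[koji_tag] = timer
                timer.start()

        def _fire(self, koji_tag):
            with self.lock:
                self.timers.pop(koji_tag, None)
            self.run_depcheck(koji_tag)  # one run over the entire tag

Reporting the per-build/per-update deltas at the end of each run (step
3 above) would then amount to diffing the new results against what's
already stored.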
Some changes to resultsdb would likely be needed as well but I don't
want to limit ourselves to what's currently available. When Josef and I
sat down and talked about results storage at Flock last year, we
decided to move forward with a simple resultsdb so that we'd have a
method to store results knowing full well that it would likely need
significant changes in the near future.
Thoughts? Counter-proposals? Other suggestions?
Tim
Smaller Taskotron Tasks
by Tim Flink
I put together a list of tasks which will need to be done in the
somewhat near future and currently have no owner.
If you're looking for something to do, feel free to take one of them.
Some are more involved than others, but with the exception of the
directive documentation task, they should be relatively
straightforward.
The phab wiki page does require login but it's much easier to have a
list of tickets there than in email or another wiki.
Tim
https://phab.qadevel.cloud.fedoraproject.org/w/taskotron-papercuts/
Running Taskotron Tasks
by Tim Flink
After several conversations with folks, I don't think that this message
has come through very clearly so I want to re-emphasize something that
is a core design philosophy for Taskotron:
You do not need a full Taskotron system deployment to run tasks.
Yes, the production system consists of a set of services to handle job
triggering, reporting and other support services. However, the core
running of tasks can be done with a git checkout of libtaskotron,
following the installation instructions in the readme:
https://bitbucket.org/fedoraqa/libtaskotron
Unless you're doing integration testing of the various production
components or working on the integration code between components like
libtaskotron and buildbot or trigger, you should be able to work on
tasks without _anything_ beyond libtaskotron and its direct
dependencies. If you ever find this not to be the case, let someone
know - it's almost definitely a bug.
The ability to run tasks without a full production-ish deployment is an
important feature for Taskotron. It's why we took such a large step
backwards instead of trying to directly "fix" AutoQA and why the
complexity inherent to an automation system has been moved around. It
_needs_ to be relatively trivial for non-core contributors to run and
develop tasks for Taskotron.
Taskotron as a whole is not going to be successful unless we can get
contributions from other groups/people and that's not likely to happen
unless task development and maintenance is as easy as we can possibly
make it.
Tim
taskotron stg deployment
by Tim Flink
Now that all the qa hosts are updated, I've been working to get
taskotron built as RPMs and deployed in a staging environment so that we
can start working on the integration issues that are sure to pop up.
libtaskotron, resultsdb and related builds are available in copr:
http://copr-fe.cloud.fedoraproject.org/coprs/tflink/taskotron/
I have some parts of Taskotron deployed to staging:
https://taskotron-stg.fedoraproject.org/taskmaster/
At the moment, this is just triggering and execution. Taskotron-trigger
is listening for fedmsgs and triggering based on both bodhi update
creation and koji build tags (so, builds for f19 and f20). rpmlint is
scheduled for koji builds and task-examplebodhi is scheduled for
updates.
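For anyone curious what the trigger side boils down to, here's a
stripped-down sketch of listening on the fedmsg bus for those two event
types; the topic suffixes and message fields are from memory, so check
the deployed trigger config for the real ones:

    # Sketch only: topic names and message fields are my best
    # recollection, not copied from the deployed config.
    import fedmsg

    def schedule_rpmlint(nvr):
        print("would schedule rpmlint for %s" % nvr)

    def schedule_examplebodhi(update_title):
        print("would schedule task-examplebodhi for %s" % update_title)

    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if topic.endswith("buildsys.tag"):
            # a koji build was tagged (f19/f20) -> run rpmlint on the build
            schedule_rpmlint(msg["msg"].get("name"))
        elif topic.endswith("bodhi.update.request.testing"):
            # a new bodhi update -> run task-examplebodhi on the update
            schedule_examplebodhi(msg["msg"]["update"]["title"])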
There is no external reporting yet because of some issues found in
resultsdb, and none of our tasks have been updated for the recently
completed resultsdb reporting functionality added to libtaskotron.
I had been hoping to finish the stg deployment this week but with the
resultsdb issues, that's starting to look less likely. I'll keep this
thread updated as we see more progress.
Tim
Documentation and Docstring Format
by Tim Flink
I'm getting started on documentation for libtaskotron and while I would
like to hold off on code style and pylint discussions for the moment, I
would like to start talking about docstring formatting before we get
too much more code written.
Sphinx has some useful features for making html docs out of docstrings
but it also has some pretty strict formatting requirements in order for
the documentation to be generated properly.
https://pythonhosted.org/an_example_pypi_project/sphinx.html#auto-directives
I'm not suggesting that we drop everything and fix all the docstrings
right now, but I am suggesting that we start following the Sphinx
docstring format for new code and fix differently formatted docstrings
as we come across them.
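For reference, the kind of docstring I mean (Sphinx info field list
style; the function itself is made up):

    def download_koji_build(nvr, arch, workdir):
        """Download all RPMs for a koji build into a working directory.

        :param str nvr: name-version-release of the build, e.g. 'foo-1.0-1.fc20'
        :param str arch: architecture to download, e.g. 'x86_64'
        :param str workdir: directory to put the downloaded RPMs in
        :returns: list of paths to the downloaded RPM files
        :rtype: list of str
        :raises IOError: if the download fails
        """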
Any objections?
Tim