Review Request 60: Insufficient logging for bug proposal form issues
by Martin Krizek
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviewboard-tflink.rhcloud.com/r/60/
-----------------------------------------------------------
Review request for blockerbugs.
Repository: blockerbugs
Description
-------
Errors were logged at level 'debug', which caused them not to be logged at all in production.
commit cb8169121c4c27ef8893eef11b8b0b0ef7f1dea7
Author: Martin Krizek <mkrizek(a)redhat.com>
Date: Tue Dec 17 15:11:26 2013 +0100
Set log level to 'error' for errors in the blocker proposal form
Fixes: T24
Diffs
-----
blockerbugs/templates/propose_bug.html 848a3e2bd7b94780348a107230f057801c7ea868
blockerbugs/controllers/main.py 5455e1aa2bbf6edc551223ee6fccb9c6236f0eb8
Diff: http://reviewboard-tflink.rhcloud.com/r/60/diff/
Testing
-------
Thanks,
Martin Krizek
Phabricator - notifications for patch reviewers
by Kamil Paral
At present, when somebody posts a patch for review in Phab, they fill in the Reviewers field with some possible review candidates so that those people are notified. Usually me, Martin and Tim appear in that field.
I wonder if there is a better approach. Anyone interested in doing code reviews could set up a Herald rule to watch review requests in the particular repository. This is my "Differential Revisions" herald rule:
> When all of these conditions are met:
> Repository is any of rLTRN (libtaskotron)
> Take these actions every time this rule matches:
> Send an email to kparal
With this rule, you get notified once when the review is created, but not on its updates (unless you choose to get CC'd or you're specified in the Reviewers field).
If all of us (me, Martin, Tim, Josef, Petr and anyone else interested) set it up this way, we will be notified of incoming review requests, and people won't need to blindly put a large number of names into the Reviewers field. Of course, they can still provide some names manually, to create some pressure on the most likely reviewers :-) With this approach it's also easy to see which reviews are "taken" and which are still free to take (empty reviewers list).
So, if you're even slightly interested in doing some code reviews, I think that creating such a Herald rule is a good way to follow what's going on. At the moment it can be created just for libtaskotron, because that's the only repository we mirror (and the rule seems to be based on repositories, not projects). But I believe we can easily start mirroring repositories of other projects as well.
AutoQA Downtime Today
by Tim Flink
Apologies on the late notice but I'm going to be bringing down the
autoqa hosts for a little while for firmware upgrades on the physical
hosts. While everything is down, I'll be updating and rebooting
everything.
I don't expect that this will take more than a couple of hours but I'll
send out another email when everything is back up.
Tim
gitflow and branch naming conventions
by Kamil Paral
So, we're going the gitflow way [1][2]. However, when I looked at our bitbucket repositories today, only the libtaskotron repository has a 'develop' branch; all other projects use only a 'master' branch - even taskotron-trigger and task-rpmlint. Does that mean we use gitflow only for libtaskotron? Or is it the repo author's choice? I'm a bit afraid this is going to cause chaos - you'll need to inspect the available branches every time to decide which branch to base a patch on or which branch to commit into.
I wonder, could we use gitflow but drop the idea of repurposing the 'master' branch name for something other than its usual meaning?
Because that's the main grievance I have against gitflow, otherwise it's a neat workflow. I believe gitflow should have never used master for something else, it should have used 'stable' branch instead for stable releases (i.e. 'gitflow/master' should have been 'traditional/stable' and 'gitflow/develop' should have been 'traditional/master'). All the tools (and most of the users) expect 'master' to be the main development branch. Git init creates master by default. Git clone checks out master by default. Github/Bitbucket displays master by default. Arcanist merges to master by default. Users clone and send patches against master by default. Usually you can adjust the tools, but what's the benefit? Why all the mess? I simply don't get it. (Also notice people criticizing it under one of the most famous blogposts [3] and offering the same suggestions as I do).
So, if we use gitflow with the traditional 'master' meaning, and a 'stable' branch for stable releases, I see it as a win-win. Regardless of whether a particular repo uses gitflow or not, you automatically know which branch to work with. You don't need to change configuration in your tools. Everything works, and you get the benefits.
If you have the gitflow RPM package installed (it adds the "git flow" subcommand), it initially asks you which naming conventions you'd like to use. So if you like that tool, there's no problem using it with the traditional 'master' meaning.
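For illustration, the nvie git-flow tool stores its branch names in git config, so the naming suggested above can be set non-interactively. This is just a sketch of the proposal from this email, not a project decision:

```shell
# Sketch, assuming the nvie git-flow tool, which reads its branch
# names from git config. This keeps 'master' as the day-to-day
# development branch and uses 'stable' for production releases.
git config gitflow.branch.master stable    # production releases
git config gitflow.branch.develop master   # main development branch
git config --get gitflow.branch.master     # prints: stable
```

The same answers can be given interactively at the "git flow init" prompts.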
[1] https://fedoraproject.org/wiki/User:Tflink/taskotron_contribution_guide
[2] http://nvie.com/posts/a-successful-git-branching-model/
[3] http://jeffkreeftmeijer.com/2010/why-arent-you-using-git-flow/
args, key=value pairs and task yaml
by Tim Flink
As I'm working on the taskotron runner, I've been trying to work
through a good way to declare actions in the task yaml. The current
code allows for one keyless arg and multiple key=value pairs, but I
don't like the limitation of just one keyless arg and would like to
eliminate keyless args entirely.
I figure that this will be easier to show with an example so I'll use
the following action which would be valid with the current code:
------------------------------------------------------------
Execution:
python: runtask.py method_name=do_something action=fix
------------------------------------------------------------
The intention of this action is to import a python file called
runtask.py, find and execute a method in that file named "do_something"
and pass in action="fix" to that method, effectively leaving us with a
call similar to:
do_something(action='fix')
Looking strictly at the yaml, there is a mix of args and key=value
pairs. 'runtask.py' is an arg, the other two are key=value. These are
currently passed into the python directive as a string and a
dictionary, roughly equivalent to:
command = 'runtask.py'
input_data = {'method_name':'do_something', 'action':'fix'}
env_data = get_envdata() # runtime information, like workdir
process(command, input_data, env_data)
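To make the mixed form concrete, a parser for the current format might look something like the following. This is a hedged sketch only; the function name and error handling are hypothetical, not the actual runner code:

```python
def parse_action(value):
    """Split a directive value like
    'runtask.py method_name=do_something action=fix' into one keyless
    arg and a dict of key=value pairs, mirroring the current behavior.
    Hypothetical sketch, not the real taskotron runner code.
    """
    command = None
    input_data = {}
    for token in value.split():
        if '=' in token:
            key, _, val = token.partition('=')
            input_data[key] = val
        elif command is None:
            command = token
        else:
            # the current format allows only one keyless arg
            raise ValueError('only one keyless arg allowed: %r' % token)
    return command, input_data
```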
What I'm looking to decide is whether we really need args in addition
to key=value pairs and whether the added complexity is worth it.
I propose that we alter the format to use only key=value pairs which
would be parsed into a dict in the runner. Using the same example as
above, the new action would look something like:
------------------------------------------------------------
Execution:
python: pyfile=runtask.py method_name=do_something action=fix
------------------------------------------------------------
This would simplify the parsing code and the data passing logic for
execution since all input data would be in the form of a dictionary
instead of separated out into a list/string and a dictionary.
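Under the proposed all-key=value format, the parsing collapses to building a single dict. Again, a hypothetical sketch rather than the actual runner code:

```python
def parse_action(value):
    """Parse a directive value made up solely of key=value pairs into
    a dict, as in the proposed format. Hypothetical sketch only.
    """
    input_data = {}
    for token in value.split():
        key, sep, val = token.partition('=')
        if not sep:
            # every token must be key=value in the proposed format
            raise ValueError('expected key=value, got %r' % token)
        input_data[key] = val
    return input_data
```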
While I slightly prefer the aesthetics of allowing one or more
keyless args in an action instead of all key=value pairs, I can't think
of a use case where restricting input to key=value pairs would cause
problems.
If we do decide that keyless args are needed, I'd rather support more
than one instead of the current arbitrary restriction of just one arg.
Any other thoughts?
Tim