On Thu, Aug 16, 2018 at 8:30 AM Michal Novotny <clime@redhat.com> wrote:
On Thu, Aug 16, 2018 at 10:49 AM Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl> wrote:
f-r currently fails to build (#1603956), has a bunch of bugs open [1],
and has many issues and unhandled pull requests in the upstream repo [2, 3].
The last upstream commit was 2 years ago.

f-r is annoyingly outdated and often gives outright bad advice
(for example about BR:gcc or BR:g++). The situation would be significantly
improved if the outstanding PRs were merged.

f-r is also python2-only now, which will be a problem soon since
support for python2 is waning [4].

Is there any hope of upstream and downstream activity on f-r?

I was thinking about getting the fedora-review checks rewritten into the standard Test Interface
so they can be run in Taskotron. We could also probably just run one big fedora-review check from
a Taskotron test. This only came to my mind recently, so getting an actual solution ready
might take a little bit of time.
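
Just to illustrate the idea, here is a rough sketch of a wrapper that such a Taskotron-style task could call. This is not an existing check, and the fedora-review flags below are from memory and would need to be verified against `fedora-review --help`:

#!/usr/bin/env python
"""Rough sketch only: wrap fedora-review so a Taskotron-style task can call it.

Not an existing check; the fedora-review flags used here are from memory and
should be verified before relying on them.
"""
import subprocess
import sys


def run_fedora_review(srpm_path):
    # --rpm-spec reviews a local srpm directly instead of a Bugzilla ticket;
    # -n names the srpm to review (verify both flags against the man page).
    return subprocess.call(["fedora-review", "--rpm-spec", "-n", srpm_path])


def main():
    if len(sys.argv) != 2:
        sys.exit("usage: review-check SRPM_PATH")
    rc = run_fedora_review(sys.argv[1])
    # A zero exit status only means fedora-review completed; the generated
    # review.txt would still need to be scanned for failed ("[!]") items
    # before reporting a final result back to Taskotron.
    print("PASSED" if rc == 0 else "FAILED")
    sys.exit(rc)


if __name__ == "__main__":
    main()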


I'd *really* like to see us get to a point where package review is fully automated. Basically, we could have a web service that you pass an SRPM URL to and authenticate with your FAS account; it would perform all of the validity checks and, if they all pass, go ahead and request the branches for you and import the SRPM.
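
To make that flow concrete, here is a purely illustrative sketch. None of these helper functions exist today; FAS authentication, the branch request, and the SRPM import are just stubs, and only the overall flow reflects the proposal above:

"""Purely illustrative sketch of the proposed review service flow."""
from collections import namedtuple

CheckResult = namedtuple("CheckResult", ["name", "passed"])


def authenticate_fas(token):
    # Stub: would verify the FAS token and return the username.
    return "someuser"


def run_validity_checks(srpm_url):
    # Stub: would run fedora-review / rpmlint style checks on the SRPM.
    return [CheckResult("acceptable-license", True),
            CheckResult("no-bundled-libraries", True)]


def request_branches(user, srpm_url):
    # Stub: would file the dist-git branch request on behalf of the user.
    pass


def import_srpm(user, srpm_url):
    # Stub: would import the SRPM into the newly created repository.
    pass


def handle_review_request(srpm_url, fas_token):
    user = authenticate_fas(fas_token)
    results = run_validity_checks(srpm_url)
    if all(r.passed for r in results):
        request_branches(user, srpm_url)
        import_srpm(user, srpm_url)
        return {"status": "approved"}
    return {"status": "needs-work",
            "failed": [r.name for r in results if not r.passed]}


if __name__ == "__main__":
    print(handle_review_request("https://example.org/foo-1.0-1.src.rpm",
                                "fake-fas-token"))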

Once this is fully automated, we can then *also* add the same checks to CI (Taskotron, OSCI or whatever) so that they get rerun on each build, which will help us reduce the rate of packages falling out of compliance (and have the checks re-applied whenever they are made more comprehensive).

Historically, we've had human review mainly to protect against two things: bundling and unacceptable licenses. In both of these cases, I'd like us to move towards a culture of assuming goodwill on the part of our packagers. Most of the packagers in Fedora have been doing it for a long time and know what is and is not acceptable. Optimizing for the minority case is wasteful, especially when it adds hurdles and delays to getting software delivered.

I think what we should do instead is allow things through immediately after automated review and assume that the few cases that slip through but shouldn't will get handled after the fact, as soon as they are noticed (either by a person spotting the problem or by an improvement in the automated tool discovering it).

I feel strongly that automated, continuous review would be of far greater value to Fedora than front-loading the review process the way we have been doing (which serves mostly to discourage people from even starting).