Before I start arguing, I appreciate all your hard work on
improving the Go guidelines and the time you spent explaining the
reasoning behind your decisions. I also appreciate the effort to
make life easier for packagers, not just in the Go land but
throughout the distribution as well. There are a lot of great
ideas in your Go proposal I would like to see implemented and
widely used. The more transparent and easy-to-use the spec files,
the better for all maintainers. Let's keep moving forward,
improving the infrastructure for other folks and turning new
ideas and improvements into practical solutions.
I am proposing for inclusion a set of rpm technical files aimed at automating the packaging of forge-hosted projects.
- Packaging draft: https://fedoraproject.org/wiki/More_Go_packaging
- go-srpm-macros RFE with the technical files: https://bugzilla.redhat.com/show_bug.cgi?id=1526721
This proposal is integrated with and depends on the https://fedoraproject.org/wiki/Forge-hosted_projects_packaging_automation draft
It builds on the hard work of the Go SIG and reuses the rpm automation of https://fedoraproject.org/wiki/PackagingDrafts/Go when it exists, and produces compatible packages.
Can you describe what you mean by "compatible packages"? I
mentioned a list of concerns that you did not answer fully. Most
important to me:
- your macros do not generate build-time dependencies, which I see
as one of the drawbacks. Do you plan to ignore them?
- support for more subpackages. You describe how to specify which
files go to which packages (https://fedoraproject.org/wiki/More_Go_packaging#Separate_code_packages),
but you don't say how the list of provided packages and the list
of dependencies are generated for the subpackages.
- reproducible evaluation of macros, either for running automatic
analysis over spec files or for debugging purposes. I would like
the macros to be built on top of binaries I can run separately.
Jakub (email@example.com) and I discussed many times that we
could provide a minimal rpm built on top of gofedlib that would
extract all the necessary pieces, including the naming convention,
so everyone can import the provided library and be guided by the
same rules the guidelines enforce. I would like to see us
synchronize on this topic at some point.
What it does:
- drastically shorter spec files, up to 90% in some cases, often removing hundreds of lines per spec.
+1 here, the shorter the spec file the better, as long as it is
still clear and transparent.
By better I mean:
- easier to read, easier to understand, easier to adopt
- fewer places to change and look at when a spec file change is needed
By clear I mean:
- it is obvious what the spec file is actually declaring to do
- it is easy to customize various parts (e.g. the list of tests, the list of provided packages)
- simple, packager-friendly spec syntax
+1 as long as each macro provides a single piece of functionality, e.g.:
- generate a list of provided packages
- generate a list of tests
- generate a list of dependencies
+1 to the %goname, %gourl, %gosource, %goinstall, %__goprovides,
%_gorequires and other useful macros, as long as they are simple,
and as long as the macros are optional and a packager can use any
of them where suitable (per the packager's judgment).
- automated package naming derived from the native identifier (import path). No more package names without any relation to the current upstream naming.
Can you provide a list of package names that diverge from this
naming schema? For historical reasons there are some packages
that do not fall into the naming schema, either because a Go
project got migrated to a different repo or because it made sense
to combine some projects together and ship them in a single rpm.
I am not mentioning Kubernetes, Docker and other popular projects,
as you already mention an exception for them.
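For readers unfamiliar with the convention being automated: the rpm name is derived from the import path. A minimal sketch of the idea; this is a simplification, and real tooling such as gofedlib handles more hosting providers and the historical exceptions mentioned above:

```python
import re

def go_package_name(import_path):
    """Sketch of the Fedora Go naming convention:
    github.com/gorilla/mux -> golang-github-gorilla-mux.
    Simplified: real tooling (e.g. gofedlib) covers more
    providers and the historical exceptions discussed above."""
    provider, rest = import_path.split("/", 1)
    provider = provider.rsplit(".", 2)[-2]   # "github.com" -> "github"
    rest = re.sub(r"[./_]", "-", rest)       # normalize separators
    return "golang-{}-{}".format(provider, rest).lower()

print(go_package_name("github.com/gorilla/mux"))
# -> golang-github-gorilla-mux
```

A shared library implementing exactly this mapping is what the gofedlib-based rpm mentioned above would provide.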
- working Go autoprovides. No forgotten provides anymore.
Sometimes it makes sense to skip some provided packages. E.g.
some packages provide Windows-specific code, which has no use on
Linux. So the claim is not completely valid.
- working Go autorequires. No forgotten requires anymore.
Depending on how you generate the list, you can unintentionally
require more than is needed, which causes installation of
unneeded trees of rpms. E.g. for building purposes there is no
need to install the dependencies of tests. So the claim is not
completely valid.
(Valid for both requires/provides): Sometimes you only need to
install/provide a subset of a Go project. There are not many
projects that import every package another project provides. So,
to save the time spent packaging dependencies of unimported
packages (and their subsequent maintenance), it is beneficial to
generate only a partial list of provides/requires. Plus, in the
case of CGO, there is no 1:1 mapping of C libraries to the rpms
that provide them.
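To make the test-dependency point concrete: imports that appear only in `_test.go` files are test-only, so a build-time requires generator can omit them. A minimal sketch of the distinction; the simplified regex parse here stands in for real tooling, which would query `go list` instead, and the sample files and import paths are purely illustrative:

```python
import re

# Separate build imports from test-only imports by scanning Go
# source files, so a requires generator can skip test dependencies.
IMPORT_RE = re.compile(r'^\s*(?:\w+\s+)?"([^"]+)"', re.M)

def imports_of(source):
    """Extract quoted import paths from a Go import block."""
    block = re.search(r'import\s*\((.*?)\)', source, re.S)
    return set(IMPORT_RE.findall(block.group(1))) if block else set()

# Hypothetical sample project: one regular file, one test file.
files = {
    "client.go":      'package c\nimport (\n\t"net/http"\n\t"github.com/pkg/errors"\n)',
    "client_test.go": 'package c\nimport (\n\t"testing"\n\t"github.com/stretchr/testify/assert"\n)',
}

build_deps, test_deps = set(), set()
for name, src in files.items():
    (test_deps if name.endswith("_test.go") else build_deps).update(imports_of(src))

# Test-only dependencies that a build-time requires list can omit:
print(sorted(test_deps - build_deps))
# -> ['github.com/stretchr/testify/assert', 'testing']
```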
- strict automated directory ownership (used by autorequires and autoprovides).
- centralized computation of source URLs (via Forge-hosted projects packaging automation). No more packages lacking guidelines. No more broken guidelines no one notices.
In other words, to use the %gourl macro?
- easy switch between commits, tags and releases (via Forge-hosted projects packaging automation). No more packages stuck on commits when upstream starts releasing.
The issue is not about upstream releasing; it's about projects
that actually import snapshots of other projects instead of
particular releases, given the nature of the Go land.
- guidelines-compliant automated snapshot naming, including snapshot timestamps (via Forge-hosted projects packaging automation). No more packages stuck in 2014.
The issue is not that a package is stuck and not updated for more
than 6 months. The issue is API backwards incompatibility that
makes it hard to update to a newer version, and a lack of
manpower to perform an update.
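For completeness, the snapshot naming being automated follows the usual Fedora pre-release convention, `Release: 0.<N>.<YYYYMMDD>git<shortcommit>%{?dist}`. A tiny sketch of the computation; the function and inputs are illustrative, and the real macros derive the date and short hash for you:

```python
from datetime import date

def snapshot_release(n, commit_date, commit):
    """Sketch of the Fedora pre-release snapshot Release field:
    0.<n>.<YYYYMMDD>git<shortcommit>%{?dist}. Simplified; the
    real macros compute the date and short hash automatically."""
    return "0.{}.{}git{}%{{?dist}}".format(
        n, commit_date.strftime("%Y%m%d"), commit[:7])

print(snapshot_release(1, date(2018, 1, 15), "3c8422c8b2f6b3f2e4a4f3d7"))
# -> 0.1.20180115git3c8422c%{?dist}
```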
- guidelines-compliant bootstrapping.
- systematic use of the Go switches defined by the Go maintainer. Easy to do changes followed by a mass rebuild.
- flexibility, do the right thing transparently by default, leave room for special cases and overrides.
- no bundling (a.k.a. vendoring) due to the pain of packaging one more Go dependency.
Can you elaborate more on that? Because this is unavoidable no
matter how great any Go guidelines get. The choice of bundling is
not about how many more Go dependencies you need to package; it's
due to API backwards incompatibility reasons.
- centralized Go macros that can be audited and enhanced over time.
+1 as long as they are kept at a reasonable granularity level
- aggressive leverage of upstream unit tests to detect quickly broken code.
This will not work in general due to API backwards
incompatibility. We need to move the tests out of the spec files
if we want to seriously test Go projects. That said, we can run a
subset of tests to at least have some kind of smoke test.
- no reliance on external utilities to compute code requirements. No more dependencies that do not match the shipped Go code.
Please rephrase or remove the point, as for CGO it is useful to
have an external utility that can list all imported C header
files and try to find the set of packages that provide the header
files (and corresponding libraries).
Please consult packaging draft for full information.
The proposal has been tested in Rawhide and EL7 over a set of ~ 140 Go packages. This set is a mix of current Fedora packages, bumped to a more recent version, rewrites of Fedora packages, and completely new packages.
I hope posting the second part of the automation will answer some questions people had on the https://fedoraproject.org/wiki/Forge-hosted_projects_packaging_automation draft
I am surprised you are able to do that without any API issues.
Have you tried to de-bundle and build Kubernetes, Docker, Etcd,
Prometheus and other big projects from the same set of Go
packages? At some point you have/had to switch to compat packages.
From the guidelines:
Totally agree with the release discipline in many Go projects.
gives up on many conventions of current
Fedora Go packaging, as they were an obstacle to the target
(Still part of the Limitations): Can you make a list of the
conventions you consider obstacles? I would like to comment on
each of them.
%gochecks . transport/http transport/grpc transport option
internal integration-tests/storage examples
The excluding mechanism is not intuitive; my first understanding
would be that all the listed directories are tested. Maybe use
%gochecks -v the same way grep does?
API changes may require the creation of Go code compatibility packages.
I would not recommend doing that unless you have an army of
packagers. It is not just about creating one compat package: it
is not surprising when an API-incompatible change goes hand in
hand with updated dependencies, so instead of creating one compat
package you will end up creating a tree of compat packages. I
have already mentioned that at https://pagure.io/packaging-committee/issue/382#comment-147649
Plus, multiply the number of compat packages to take care of by
the active Fedora branches + epel7. On paper it looks like a very
good solution, but in reality it is not. As long as we don't know
how much the Go projects suffer from API incompatibilities, I
don't want to spawn a thousand compat rpms and maintain them.
a packager, that identifies an API change
in his Go package, MUST notify firstname.lastname@example.org
at least a week before pushing his changes to any release (and
those releases SHOULD include fedora-devel). This grace period
can be waived by FPC for security reasons.
From the point of view of updating popular projects like Etcd,
Kubernetes, Prometheus, etc. this is highly impractical. Usually
there are many dependencies to update. If I have to wait one week
to update etcd, then another week to push its update to the
stable branch, I will not really be effective in updating.
a packager, that identifies an
unannounced API change in another Go package, MUST notify email@example.com
those notices SHOULD be copied to the maintainers of packages
affected by the change, when they are known.
This is going to be a regular thing unless tooling that detects
these changes is available.
a packager, that identifies that the code
he packages, is broken by API changes in another project, SHOULD
notify politely the upstream of the broken project of the API
change, if he can not ascertain it is already aware of the change.
Some upstream Go projects are responsive, some are not. A lot of
Go projects are forked, abandoned or inactive, or the upstream is
a single person who does not have time to deal with
distribution-specific guidelines. But I appreciate the effort to
spread good practices. However, sometimes the API change is
intentional because there is no other way.
Usual workflow of updating Go packages:
1. choose a commit to update to
2. update the list of provided packages
3. update the list of build/run-time dependencies (if needed,
create spec files for new Go projects and open review requests)
4. update the list of tests (skip some tests that fail on specific
architectures + open upstream issues, resp. PRs with a fix)
5. optionally, update the %install/%build sections
6. perform a scratch build (may require overriding some rpms)
7. push and perform the real (non-scratch) build
8. create a package update in Bodhi
9. push the update into stable if there is not enough karma
## Some issues one can encounter when performing a Go package update
- the update is backwards incompatible (it may break the build of
some packaged Go projects)
- 5 (an arbitrary number) or more new Go projects need to be
packaged into the distribution
## What we need
- automatic updates of Go packages (each time there is a new
release or on demand to a specific commit)
- spread maintenance among a group of packagers
- run unit tests and integration tests (if available) in CI/CD
(e.g. after each update, or on a periodic basis)
- automate every routine job we do so we can concentrate on
"real" and "interesting" problems in the Go land
The updated (and fairly improved/extended) guidelines cover points
2), 3), 4), 5) to some extent.
Comparing the new way with the current way (listing only some of
the +/- points that popped into my head while reading):
+ provided packages and runtime dependencies are automatically generated
+ tests are automatically generated
+ spec files are much smaller (more transparent, a lot of
complexity hidden in macros)
+ the guidelines look awesome, more descriptive about what needs
to be done (apart from some parts mentioning many times that the
current spec files suck a lot)
- no build-time dependencies (we can not discover missing
dependencies before installation)
- always generating all available provided packages (I don't want
to provide all packages always; e.g. I may need just a subset to
build my project, and the remaining provided packages need new Go
projects, which adds unnecessary complexity)
- as jcajka pointed out, it is not easy to debug the lua-based macros
- combination of bundled and un-bundled dependencies (sometimes I
just need to use one or two bundled dependencies to avoid
generating a ton of compat packages)
In all cases, the new approach is strongly focused on the spec files.
The API backwards incompatibility problems are conveniently hidden
in the creation of Go code compatibility packages.
I have already mentioned this approach in https://pagure.io/packaging-committee/issue/382
and the reasons why it is not the way to go.
Even if we create a lot of compat packages, we still need a
mechanism that can tell us when a given compat package is no
longer needed. I am working on tooling (in my free time) that
will provide that; feel free to contribute ideas about what we
need to detect or extract from the code to make more spec file
pieces generic and automated.
In the end, even if we do "create as many compat Go packages as
needed" or "let's use vendored packages because we can", we will
still need to maintain the entire ecosystem in some reasonable way.
From my point of view, the new Go packaging guidelines make it
easier for a fresh Go packager to get familiar with Go packaging
practices and to quickly package his/her Go project. However,
from the point of view of the entire Go ecosystem (or any
micro-ecosystem, i.e. multiple projects building from the same
dependencies), it is harder to tune individual packages for the
ecosystem's needs.
Overall, I don't like you degrading the current Go packaging
guidelines (although they are still a draft). All the decisions
made have historical reasons. At the time the guidelines were
applied, there were only a few Go packages and the severity and
impact of the practices were neither evident nor obvious. Now we
know they are not sufficient and there are things we need to
improve; indeed, I 100% agree. So please, pay at least some
respect to all who spent their time and sleepless nights coming
up with quick solutions, because a lot of us do it in our free
time and so are limited by time and capacity. What you are trying
to solve now was known two years ago; we already knew about it.
For that reason my colleagues and I started building tooling to
detect at least some of the known problems, like API backwards
incompatibilities, and automation around updating spec files.
I also suggest removing every comparison between the Go
guidelines draft and the current state of all Go spec files.
Optionally, you can put all the mentions under a "Best practices"
or "Things to avoid" section instead.
There are other parts of the new guidelines I could contradict.
However, doing that, we could end up in an endless discussion,
annoying each other and burning time that we could use to do
important things.
So let's divide the workload. If you want to take care of the
packaging guidelines, feel free to do it. Want to improve the spec
file? I will be more than happy to witness that. Do you want to
make the Go packaging in Fedora (and maybe other distributions)
more user friendly? Man, that is awesome and noble. I have no
intention to stop you from doing that. But please, keep in mind,
you are not the only one involved in this effort. Some of us have
families, some of us have other things/folks to take care of. We
only do what we can in the time given :). I have/had my use case
I am/was driven by, you have yours. That's completely fine and
appreciated. The more use cases we can find, the more ideas and
solutions we can combine. Let's not degrade the results of our
efforts; it leads nowhere. Let's combine the results, our
experience. Let's open a google document (or different
collaboration solution if you have one in mind) where we collect
all our requirements. Let's ask other Go (or wannabe) packagers
what their requirements are, what they see as drawbacks of the
current and new guidelines, and who is willing to help with the
packaging and maintenance; let's ask more questions.
I am happy to take the tooling side. As I mentioned in the
ticket, I have shifted my focus to building tooling that does a
lot of analysis and detection that is out of scope of any
guidelines. Even if we agree on the final form of the guidelines,
we still need tooling that will help us with the packaging, e.g.
detection of API backwards incompatibilities, spec file
generators (enhanced and improved over time), detection of Go
packages that are no longer needed, running CI/CD, etc.