Björn Persson <bjorn(a)xn--rombobjrn-67a.se> writes:
> The Packaging Guidelines require that all binary programs and libraries be
> built from source code. How should this requirement be interpreted when some
> of the "source code" is itself automatically generated from other sources?

[ details snipped ]

> Thus, none of the stated reasons seem to be relevant to this case, and I can
> see only one thing that could mean that I have to run the code generation as
> part of the build, namely the term "source code".
You are overlooking one good reason for running the code generator
during package build: it ensures that what you compile actually matches
the sources it's claimed to be generated from. I've seen more than a
few cases where allegedly-automatically-built derived files shipped in
an upstream tarball were not up to date.
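A minimal sketch of that staleness check, runnable as-is; the printf lines
stand in for a real code generator, and the file names and contents are
hypothetical, not from this thread:

```shell
set -e
# Stand-in for running the real generator from the true source:
printf 'int api_version = 3;\n' > derived.c.new
# The copy upstream shipped in the tarball (here, deliberately stale):
printf 'int api_version = 2;\n' > derived.c
# diff exits nonzero when the shipped file no longer matches:
if ! diff -u derived.c derived.c.new > /dev/null; then
    echo "derived.c is stale: regenerate before release" >&2
fi
```

In a package build you would typically let the mismatch fail the build
(exit 1) rather than just warn.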
Now, whether it's worth doing that during package build is a tradeoff.
You have to weigh the odds that this particular upstream could screw up
in that fashion; depending on how much you know about their tarball
creation and testing process, you might legitimately conclude that the
odds of this scenario are too small to worry about.
(Or you might be able to convince yourself that if the files *were*
out of sync, you'd get a compile failure; this seems possibly relevant
here, depending on how tightly tied these files are to the GTK+ API.)
And you have to consider how much time it adds to the package build
and whether the code generator's own needs will materially bloat the
package's BuildRequires footprint. These costs are probably
substantial, else upstream would not have chosen to ship derived files
in the first place. It might be worth it, or it might not.
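For illustration, the two costs show up in a spec file roughly like this
(the generator and file names are hypothetical, not from this thread):

```
# Hypothetical spec fragment. The extra BuildRequires is the
# footprint cost discussed above; the %build step is the time cost.
BuildRequires:  some-code-generator

%build
# Regenerate the derived sources rather than compiling the shipped copies:
some-code-generator --output src/ src/interfaces.def
%configure
%make_build
```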
Anyway, this is just to point out that regenerating derived files does
sometimes have practical value, quite independent of how narrowly
somebody wants to read the "build from source" policy.
regards, tom lane