On Wed, Nov 18, 2015 at 2:48 AM, Nick Coghlan <ncoghlan(a)gmail.com> wrote:
On 18 November 2015 at 02:29, Jason L Tibbitts III <tibbs(a)math.uh.edu> wrote:
>>>>>> "NC" == Nick Coghlan <ncoghlan(a)gmail.com> writes:
>
> NC> If so, then there's some relevant work currently under way upstream
> NC> to improve the interaction between Python installation tools and
> NC> build systems to improve the metadata extraction process, rather
> NC> than relying on implementation details of setuptools.
>
> If that's the case, could someone bang out a few paragraphs that we
> could use as a blueprint for some packaging guidelines? An example
> spec, or even just the file layout and some idea of how autogenerated
> dependencies would work would be enough. I know this stuff is a bit
> new, but we've been doing a really big overhaul of the python stuff and
> we'd like to at least design to accommodate this rather than having to
> do it all over again once this new format comes out.
The main policy changes would be to update these two sections to
mention keeping dist-info directories:
* https://fedoraproject.org/wiki/Packaging:Python#Files_to_include
* https://fedoraproject.org/wiki/Packaging:Python#Reviewer_checklist
It would also be desirable to state a preference for dist-info over egg-info.
Thinking about it a bit further, I don't think the latest round of
upstream changes should impact the metadata analysis step, as the
metadata querying changes are designed to support getting at the
metadata without building and installing the package first, and that's
not a consideration for the RPM use case.
However, they could potentially affect the py2/3_build macros, as
we're looking to finally migrate away from *requiring* the presence of
a setup.py file in every source tree, and instead allow out-of-tree
build tools, with machine readable instructions for bootstrapping them
into the build environment.
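As a sketch of what such machine-readable bootstrapping instructions might look like (the file name and key names here are illustrative only; the draft PEP is still under discussion and may settle on something different):

```toml
# Hypothetical build bootstrap metadata, per the draft PEP's general
# direction; exact file name and schema are not yet finalised
[build-system]
# Tools that must be installed into the build environment
# before the project can be built
requires = ["flit"]
```

An RPM tool could read a file like this to decide which build tool to bootstrap, instead of assuming setup.py is present.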
The draft PEP for that is at
https://github.com/pypa/interoperability-peps/pull/54/files and the
upstream discussions are on distutils-sig.
I'd been thinking that using "pip install" instead of "setup.py
install" in the build macros would be sufficient, but I now realise
that isn't the case - if a project uses flit (for example) as its
build utility, then we're going to need to generate a suitable
BuildRequires in pyp2rpm and similar tools (perhaps using the
"BuildRequires: pythonX.Ydist(flit)" format). The build macros
themselves could still delegate the task of working out the right
build command to invoke to pip, though.
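Concretely, a generated spec for a flit-based project might carry lines like the following (the versioned dist provide name is hypothetical, matching the format discussed above, and the build invocation is only a sketch of the delegation idea):

```spec
# Hypothetical generated BuildRequires for a project built with flit,
# using the proposed versioned dist provide format
BuildRequires:  python3.5dist(flit)

%build
# The build macro delegates choosing the actual build steps to pip,
# rather than invoking setup.py directly
%{__python3} -m pip wheel --no-deps .
```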
The main issue I see with that is how to make it so that python upgrades
aren't obnoxiously painful. If BuildRequires use pythonXdist(module) format,
but all *generated* runtime requirements use pythonX.Ydist(module) format,
this problem goes away. But as Toshio mentioned, how do we solve that in a
multi-version environment (like Enterprise Linux, for instance)?
Using pythonX.Ydist(module) for BuildRequires effectively locks the module
to a specific Python version until each and every maintainer upgrades them.
That is an awful thing to have to do, and no other programming environment
in any RPM-based distribution requires that. Most of the time, this is an
unnecessary burden on the package maintainers.
My view on pythonXdist(module) vs pythonX.Ydist(module) for
BuildRequires is that DNF/Zypper may actually solve this issue for us.
Perhaps presenting it with pythonXdist(module) and a package that provides
the appropriate "python(ABI) = X.Y" as part of the builddep grab will actually
pick the right one (after all, each module would Require a specific
"python(ABI)" anyway). I'm not sure if Yum would do the same, though
(I hope it does!). I suppose the key is whether or not the depsolver analyzes
the whole request before creating its proposed transaction, rather than
iteratively solving and presenting the results.
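To illustrate the resolution scenario being described (package names and provides below are hypothetical):

```spec
# Two builds of the same module in a multi-version environment:

# python3-foo built against Python 3.4
Provides: python3dist(foo)
Provides: python3.4dist(foo)
Requires: python(abi) = 3.4

# python3-foo rebuilt against Python 3.5
Provides: python3dist(foo)
Provides: python3.5dist(foo)
Requires: python(abi) = 3.5

# A "BuildRequires: python3dist(foo)" matches both builds; a depsolver
# that analyzes the whole transaction should pick the build whose
# python(abi) requirement is satisfiable alongside the rest of the
# builddep set.
```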
--
真実はいつも一つ!/ Always, there's only one truth!