Rethinking user level package management

Nick Coghlan ncoghlan at gmail.com
Fri Aug 7 00:13:27 UTC 2015


With PyCon Australia behind me (I'll send a status update about that
later today, since some of the topics were relevant to this group), I
started looking more seriously at the three candidates for user level
package management tooling:
https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement#User_Level_Packaging_Tools

I already know conda and nix fairly well (albeit only theoretically
in the latter case), so my main goal was to get started with conary
and see what would be involved in taking my currently pip-based
OpenShift deployment for Kallithea
(http://www.curiousefficiency.org/posts/2014/12/kallithea-on-openshift.html)
and switching it over to using conary instead.

The short version: I failed, but in failing I learned that I was
asking the wrong question. As a result, I now have a very strong
opinion on the user level package management tech I think we should
expose to end users (spoiler: conda > nix > conary), but I'm still
undecided on the direction I think we should be going in the
repository management space (and the latter isn't an Env and Stacks
decision anyway).

---------------

The key thing I realised is that the reason I find conary
interesting relates to the way it handles the patch management
problems faced by an open source system integrator: how do you keep
track of your different upstream projects, your different supported
downstream product versions, and which patches need to be applied
where?

When it comes to the *end user* experience I'd like for user level
package management, I think conda already has all of those bases
covered: it was built by Continuum Analytics specifically to solve
their own problems as a cross-platform ISV supporting Windows, Mac OS
X, and arbitrary Linux distros, while targeting individual users who
may not have admin access to their employer-provided machines.

So putting conary (and even conary concepts) into the picture isn't
likely to provide any significant gain in the usability of the final
"binary artifacts to end users' systems" installation step relative to
the vastly simpler approach of "just use conda, as the problem we want
to solve is exactly what it was built to handle, and its existing
popularity with research scientists and data analysts provides solid
evidence of its usability"*.

Instead, I'm starting to think that conary's *ideas* (if not the
software itself) will be most at home in the
dist-git/fedpkg/rhpkg/copr/koji ecosystem - the machinery whereby
upstream source code turns into downstream binary repos - rather than
being particularly useful in making the final hop from downstream
repo to end user system.

As far as Nix goes, I think it's a *really* good technology for
folks to explore in the "immutable infrastructure" world, where any
low-level security update is going to involve a container rebuild
anyway. In a mutable infrastructure world (like end user
workstations), though, the way Nix requires pushing updates to
higher-level components whenever a lower-level one is updated (even
in an ABI-compatible way) is a problem.
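
To illustrate what I mean, here's a deliberately simplified Python
sketch of the idea (this is *not* Nix's actual derivation hashing,
and the package versions are made up), showing how a store path that
incorporates the paths of its build inputs forces a cascade of new
paths whenever anything underneath it changes:

    # Simplified model: a package's "store path" is derived from its own
    # identity plus the store paths of everything it was built against.
    import hashlib

    def store_path(name, version, inputs):
        h = hashlib.sha256("{0}-{1}".format(name, version).encode("utf-8"))
        for dep in sorted(inputs):
            h.update(dep.encode("utf-8"))
        return "/nix/store/{0}-{1}-{2}".format(h.hexdigest()[:12], name, version)

    def closure(glibc_version):
        # Hypothetical three level stack: glibc -> openssl -> python
        glibc = store_path("glibc", glibc_version, [])
        openssl = store_path("openssl", "1.0.2d", [glibc])
        python = store_path("python", "2.7.10", [glibc, openssl])
        return glibc, openssl, python

    # Even an ABI-compatible glibc patch release gives glibc a new store
    # path, which changes the openssl and python paths as well, so the fix
    # only reaches end users once rebuilt openssl and python artifacts have
    # been published too.
    for old, new in zip(closure("2.21"), closure("2.21.1")):
        print("{0}\n  -> {1}".format(old, new))

Contrast that with an RPM-style in-place update, where installing an
ABI-compatible rebuild of a shared library leaves everything linked
against it untouched.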

Regards,
Nick.

P.S. *Full disclosure: the fact that conda's channel-based
publication mechanism for binary artifacts aligns well not only with
Fedora's existing repo management model, but also with RHEL's
subscription management, *does* influence my opinion here - "it's
like RPM, but at the individual user level" is *really* easy to
explain both to current end users and to anyone picking up
Fedora/RHEL for the first time and needing to learn both. It also
means that folks making the jump from "end user" to "system
administrator" in the future can have RPM explained to them as "it's
like conda, but for the underlying system definition rather than for
software aimed at individual users".

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

