<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 06/24/2015 07:31 AM, Jonathan
Underwood wrote:<br>
</div>
<blockquote
cite="mid:CAANOHNk5G5CToiGA3T=txiLBiD6OuPUK2CNnNZoEe7mA-f1NgA@mail.gmail.com"
type="cite">
<pre wrap="">On 24 June 2015 at 08:01, Jan Synacek <a class="moz-txt-link-rfc2396E" href="mailto:jsynacek@redhat.com"><jsynacek@redhat.com></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Managing Emacs packages by the distribution makes, IMHO, no sense at
all. Users can easily manage the packages themselves via Emacs'
package.el user interface.
</pre>
</blockquote>
<pre wrap="">Well, that's the way I'm leaning too. But then, I could make similar
arguments for python, perl etc.
</pre>
</blockquote>
<br>
Exactly, and not in a good way! It seems to me that we worked hard
to create a packaging system that keeps the system updated and
secure... and now we are retreating from that idea. The revisionists
seem to make two arguments: <br>
<br>
- it's too hard to keep large subsystems consistent and free from
ABI clashes, so we need project-oriented contained setups<br>
<br>
- it's OK to give up on the security advantages of unified
system-level updates because subsystems have limited scope.<br>
<br>
As you say, this argument is made about Emacs, Python, Perl,
Node.js, and containers in general. I am very troubled by this
trend: at worst it leads to stagnant, vulnerable backwater
subsystems ( "over 30% of official images in Docker Hub contain high
priority security vulnerabilities",
<a class="moz-txt-link-freetext" href="http://www.infoq.com/news/2015/05/Docker-Image-Vulnerabilities">http://www.infoq.com/news/2015/05/Docker-Image-Vulnerabilities</a> ); at
best, it means that the end users have to orchestrate several
independent update processes. Simply not keeping stuff updated is
not an option---usually, all the important data is in the project,
and even a contained compromise that didn't subvert the core OS
means game over ("All my data was stolen/erased through a browser
exploit, but at least the kernel was not compromised".. NOT).<br>
<br>
This fragmentation leads to weird effects, such as the one I just
discovered with Node.js: the standard 'node' installation fails to
use the system-wide packages installed by 'npm' and Fedora RPMs,
because they use different paths; the node subpackage RPMs are
basically unusable as delivered. This is because node upstream
believes in 'local packages'.<br>
<br>
We should not give up on the principle of installing all software
globally, within the RPM framework. I hope there's a way to
accommodate subsystem-specific packages, but it probably requires
specific things in each environment, rather than a universal
solution that works for every one of them.<br>
<br>
Now, there are a couple of tradeoffs and issues to consider:<br>
<br>
- big collections, or lots of little packages (I think TeX Live is
overdoing it with 5323 packages)<br>
<br>
- can system/global packages be overridden by a local copy
(especially if upstream prefers local)<br>
- in particular, are global packages immediately usable, or do
they have to be 'linked' locally first<br>
<br>
- does Fedora just fix the issues for itself, or do we try to push
it upstream<br>
<br>
I think we should discuss these issues and have a consistent policy
to draw on in similar discussions in the future. <br>
</body>
</html>