> In the discussed approach where the Flatpak is composed from RPMs, a
> library is updated by upstream, packaged, the Flatpak is rebuilt with
> the new library, and that is delivered to the user. So there is an
> extra step between the packaging of the library and the delivery to
> the user.
Maybe. I'm not sure about this. I believe the Flatpaks we're initially
planning to distribute will be composed from *Fedora* RPMs... not
upstream Flatpaks, at least not at first. So it should be possible to
build Fedora infrastructure for using Fedora RPMs as the basis for the
bundled libraries, and tracking them, and rebuilding affected Flatpaks
when there is a security problem. And stuff that's bundled frequently
enough could probably just go into the Fedora runtime.
It is true that automatically rebuilding the Flatpak when one of the RPMs it
is composed of changes can speed things up a bit. I think there would
still be delays incurred in practice though.
I kinda agree here (though I am a bit surprised, as I did not think you
were a very big SELinux fan). We absolutely could be investing more in
SELinux. But we have not been. Very few applications actually have
SELinux profiles, and they are all maintained downstream rather than
upstream. The volume of erroneous SELinux denials in Bugzilla is too
high, and the response time for fixing them too slow. SELinux profiles
work best when they are maintained upstream by application developers
who are familiar with SELinux, not by SELinux developers who are
unfamiliar with the application. But application developers who are
familiar with SELinux basically do not exist, and never will. So it
would be useful to have a general sandbox that works for the vast
majority of desktop apps.
Of course I agree about those issues with SELinux; that is exactly why I
don't like it (and in fact don't even have it enabled, to be honest). I
think the main issue is really that the policy is mandated centrally rather
than opted into by the individual application. The seccomp approach is
much better there, but…
Yes and yes, both are true. But also: noooooo. seccomp is useful to
supplement a general purpose sandbox, but it's not suitable to be the
primary sandboxing mechanism. A secure, restrictive seccomp policy is
extremely, *extremely* brittle: if a library you depend on starts using
a new syscall, your application is going to crash. So it requires either
bundling libraries or never updating system libraries. It is basically
impossible to use seccomp to construct an effective
restrictive sandbox unless it is highly targeted to a specific
application that bundles its dependencies and is maintained by a large
team of experienced developers. That's why it works well for Chromium.
(The team maintaining the Chromium sandbox is very good; they have gone
so far as to block calling syscalls with specific flags that they know
Chromium never uses, just to reduce the kernel attack surface.) I tried
a similar approach for WebKit a few years ago and quickly found it to
be completely unworkable; my favorite anecdote is how a Fedora update
to libxshmfence caused pages to not render anymore, because
libxshmfence started using a new syscall (it was memfd_create) that was
not whitelisted by the sandbox. We can't have that. Another anecdote
is the new seccomp sandbox for tracker-extract: whenever a GStreamer
plugin starts using a new unexpected syscall, or if you just happen to
have unexpected plugins installed, tracker-extract will crash. The
sandbox has to know everything about every possible GStreamer plugin
(so hope you never write your own custom one!). tracker is not the
greatest example here, because it is not an application, but my point
is that seccomp certainly cannot be used to construct a restrictive,
general-purpose sandbox, because applications are different and use
different syscalls. So if you're going to use seccomp as your primary
sandboxing mechanism, you'd better bundle all your libraries and not
allow any plugins. (I don't think you'd like that very much. ;)
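
To make that brittleness concrete, here's a minimal sketch using
libseccomp (the rules here are illustrative only, not WebKit's or
Chromium's actual policy; build with -lseccomp): an allowlist with a
kill default, where the first call into a syscall the author did not
anticipate, such as memfd_create(), terminates the process.

/* Illustrative allowlist; a real policy would be much longer. */
#define _GNU_SOURCE
#include <seccomp.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* Default action: kill on any syscall not explicitly listed. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL)
        return 1;

    /* The syscalls we know about today. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(munmap), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    /* Flag-level filtering in the Chromium style: allow madvise(),
     * but only with the one advice value we expect. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(madvise), 1,
                     SCMP_CMP(2, SCMP_CMP_EQ, MADV_DONTNEED));

    if (seccomp_load(ctx) < 0)
        return 1;
    seccomp_release(ctx);

    write(STDOUT_FILENO, "filter loaded\n", 14);

    /* The failure mode described above: memfd_create() is not in the
     * allowlist, so the kernel kills the process with SIGSYS here --
     * exactly what happened when libxshmfence started using it. */
    syscall(SYS_memfd_create, "demo", 0);

    return 0;  /* never reached */
}
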
… you are right about the libraries. SELinux also has this issue, by the
way, as Petr Pisar correctly pointed out.
This is actually an issue even for QtWebEngine:
I think it would really help if we had a way for libraries to declare their
own seccomp rules rather than forcing the application to guess.
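
Something like this hypothetical convention, say (the hook name is made
up, and nothing like it exists today; build with -lseccomp): each
library exports a well-known entry point that adds the syscalls *it*
needs to the application's filter, so the application stops guessing on
the library's behalf.

#include <seccomp.h>

/* What a library like libxshmfence could export.  When a release
 * starts using memfd_create(), the same release extends this list,
 * and every sandboxed application picks it up automatically. */
int xshmfence_declare_seccomp(scmp_filter_ctx ctx)
{
    return seccomp_rule_add(ctx, SCMP_ACT_ALLOW,
                            SCMP_SYS(memfd_create), 0);
}

int main(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL)
        return 1;

    /* The application adds its own rules... */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    /* ...then asks each dependency for the rules it needs. */
    xshmfence_declare_seccomp(ctx);

    int rc = seccomp_load(ctx);
    seccomp_release(ctx);
    return rc < 0 ? 1 : 0;
}
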
It is clear that confining applications to a container helps sandboxing a
lot. But there ought to be a way to do it without physically duplicating
everything. How about building a virtual file system view (file system
namespacing exists in the kernel these days, doesn't it?) that contains a
read-only view of the system /usr (and possibly other needed directories),
together with other directories mounted off a container image or a tmpfs? If
the kernel part is done right for this purpose, even sharing read-only code
segments of the shared libraries in RAM should work across the file system
namespaces, as long as the actually referenced inode is the same. Think of a
"virtual Flatpak" that is just a runtime view into system directories, using
I am sure that there is a way to throw out the bathwater without the baby.
Rushing to deploy the current, very suboptimal solution is a very bad idea.
I am deeply convinced that the package system, which has served GNU/Linux
well for years and is its main advantage over proprietary operating systems,
is the way to go, and sandboxing can be adapted to it, not the other way
round. If it means it will take more time until it can be deployed, then we
should wait for that amount of time. Once everything has moved from RPMs to
Flatpaks, it will be a lot harder to move back.
And in the end, to be honest, if I really have to choose between dependency
resolution and sandboxing, I will pick the former. If this means I can no
longer use Fedora, then I will have to look for another distribution.
Yes, but remember that Flatpak is only for desktop applications. The
majority of your OS is still going to be packages.
When I see the plans that are being floated around, the other stuff might
also end up being containerized in a similar way, just using other
technologies.