Hi Colin,
I wondered what your thoughts are about the possibility of serving the workstation image as a tree in Fedora atomic.
It would be interesting for the atomic update and rollback capabilities that ostree offers. Maybe we could also use it for doing similar testing to the upstream gnome-continuous, using taskotron.
What are the pros/cons? What do we have to do to get there?
Thanks, Matthias
On Wed, Dec 3, 2014, at 02:22 PM, Matthias Clasen wrote:
Hi Colin,
I wondered what your thoughts are about the possibility of serving the workstation image as a tree in Fedora atomic.
The TL;DR on this is that rpm-ostree would be extremely disruptive. I believe the benefits of the technology are worth that pain, but realistically, until its feature set matures, I suspect most users would want to stick with the traditional package manager tradeoffs.
Longer answer:
Currently rpm-ostree is focused on use in the Project Atomic context, when paired with Docker/Kubernetes.
However, before that happened, this was an initial goal of the rpm-ostree project, and you see that reflected in the branch names:
fedora-atomic/rawhide/x86_64/docker-host
Where the last bit of "docker-host" is a *name* for a set of packages, here: https://git.fedorahosted.org/cgit/fedora-atomic.git/tree/fedora-atomic-docke...
It's quite easy, in fact, to make a workstation tree that would appear as
fedora-atomic/rawhide/x86_64/workstation
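To make that concrete, the compose side would look roughly like the following; the file name and package list here are invented for illustration, and I'm glossing over most of the treefile options:

    $ cat fedora-workstation.json
    {
      "ref": "fedora-atomic/rawhide/x86_64/workstation",
      "repos": ["fedora-rawhide"],
      "packages": ["kernel", "systemd", "gnome-shell", "gnome-terminal", "firefox"]
    }
    # compose the tree and commit it into an OSTree repo for clients to pull from
    $ rpm-ostree compose tree --repo=/srv/repo fedora-workstation.json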
It would be interesting for the atomic update and rollback capabilities that ostree offers.
Right, but for the Workstation use case this runs quickly up against the fact that the tree is immutable locally. See https://github.com/cgwalters/atomic-pkglayer for some prototype work there.
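For reference, the client-side flow today is roughly the following (a sketch; exact options and output vary by version):

    # download and deploy a new tree; it becomes the default for the *next* boot
    $ rpm-ostree upgrade
    # show the booted deployment plus the pending/previous ones
    $ rpm-ostree status
    # flip the default back to the previous deployment if the new one misbehaves
    $ rpm-ostree rollback

Roughly speaking, everything outside /etc and /var is read-only at runtime, which is what makes a plain "yum install foo" on the running system a non-starter without something like the pkglayer prototype above.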
Furthermore, specifically for Workstation, PackageKit has no support for it. I imagine it could learn, but it would be quite a challenge. Once rpm-ostree supports partial live updates, PackageKit would have to learn, in order to accurately represent the situation, that there can be multiple "states" active at one time (each with its own RPM database).
Of course this "multiple states" scenario happens *today* with yum - not all processes get restarted on update - but it's ignored by default by yum/dnf, and while PackageKit has made some historical attempts, AFAIK it currently doesn't try by default.
There are later tools which try to partially reconstruct this, like: http://dnf-plugins-core.readthedocs.org/en/latest/needs_restarting.html http://rwmj.wordpress.com/2014/07/10/which-services-need-restarting-after-an...
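For example, with the dnf plugin installed, something like this will list processes that are still running pre-update code (from memory; the exact spelling may differ):

    # dnf-plugins-core variant
    $ dnf needs-restarting
    # or the older yum-utils script
    $ needs-restarting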
In an rpm-ostree world this would become significantly more important, because unless an update could be *proven* to apply safely live, an "rpm-ostree upgrade $package" would queue the change for the *next* boot. Notable values of $package here would include every running desktop app. This model shows where we really need to go: coordination with Software/the desktop UI to help close down apps, update them while they're not running (greying out their launcher icons), and then allow starting them again.
Maybe we could also use it for doing similar testing to the upstream gnome-continuous, using taskotron.
This is a whole other thread and a very complex, multifaceted topic. Some brief points here:
Fedora releng investment ----
Having worked on the Project Atomic image for Fedora 21, I can say that there need to be more people in Fedora rel-eng, and some focused investment on improving iteration cycles there. I think if that team figured out how to allow contributions and decoupled component releases, it would allow sub-groups to take more ownership of their deliverables.
There's no reason the workstation images should be "nightly" - just make them continuously.
Server side rollback ----
To me, this is the most compelling feature of (rpm-)ostree, far more than atomic upgrades. The fact that it does not care about the version numbers and allows the release-engineering side to revert means that it's much easier to have a continuously *functional* system, to whatever degree desired. I touch on that here: https://mail.gnome.org/archives/desktop-devel-list/2014-September/msg00152.h...
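Concretely, server-side rollback is just moving a ref back to a known-good commit; clients then receive that as their next "update". A sketch, assuming a compose repo at /srv/repo (the commit id here is made up):

    # find the last known-good commit on the branch
    $ ostree --repo=/srv/repo log fedora-atomic/rawhide/x86_64/workstation
    # point the ref back at it
    $ ostree --repo=/srv/repo reset fedora-atomic/rawhide/x86_64/workstation 3a07d5c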
That server-side flexibility unlocks the ability to do continuous delivery.
Continuous-style delivery would by far be the most radical change, affecting everything from RPM through fedpkg to Koji. And changing RPM is *hard* - it's like trying to change the bottom block in a Jenga tower.
A baby step here would be investigating a "distro-sync by default" change for dnf, maybe on a per-repository basis. (This would be another thing PackageKit would have to represent in the UI.)
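The manual equivalent today is a single command; a per-repository default would need a new (currently nonexistent) repo option, so take this as just the flavor of the change:

    # make the installed set match exactly what the enabled repos currently ship,
    # downgrading packages where the repo moved backwards
    $ dnf distro-sync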
Installed tests ----
https://wiki.gnome.org/GnomeGoals/InstalledTests would be worth pursuing, particularly if beefed up with support for non-desktop tests. Think random Python libraries or the like.
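For anyone unfamiliar with the format: an installed test is just a small keyfile under /usr/share/installed-tests pointing at an executable, and gnome-desktop-testing-runner executes it on the installed system. A sketch for a hypothetical Python library ("python-foo" is made up):

    $ cat /usr/share/installed-tests/python-foo/smoke.test
    [Test]
    Type=session
    Exec=/usr/bin/python -m unittest discover /usr/share/installed-tests/python-foo
    # run everything installed under the python-foo prefix
    $ gnome-desktop-testing-runner python-foo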
Well there's more here but this email is already long, and enough to talk about =)
On Wed, Dec 3, 2014 at 3:29 PM, Colin Walters walters@verbum.org wrote:
On Wed, Dec 3, 2014, at 02:22 PM, Matthias Clasen wrote:
Hi Colin,
I wondered what your thoughts are about the possibility of serving the workstation image as a tree in Fedora atomic.
It would be interesting for the atomic update and rollback capabilities that ostree offers.
Right, but for the Workstation use case this runs quickly up against the fact that the tree is immutable locally. See https://github.com/cgwalters/atomic-pkglayer for some prototype work there.
Out of curiosity, couldn't you have an atomic/ostree "base" layer that is immutable (perhaps shared between Base, Server, Cloud, Workstation), and then use Docker containers on top of that as the "live" system? That would still fit with the "atomic is for Docker" approach you have today, while also giving some flexibility at the application layer. One could imagine Software installations becoming "create a new Docker container with this app inside of it", which then leads to it being automatically sandboxed, etc.
Admittedly, switching out the underlying atomic layer while having the upper-level Docker containers still work fine might be challenging. The point is to create more of a stackable approach to the Products, rather than trying to make the entire Product an atomic image.
josh
On Wed, Dec 03, 2014 at 03:47:12PM -0500, Josh Boyer wrote:
Out of curiosity, couldn't you have an atomic/ostree "base" layer that is immutable (perhaps shared between Base, Server, Cloud, Workstation), and then use Docker containers on top of that as the "live" system? That would still fit with the "atomic is for Docker" approach you have today, while also giving some flexibility at the application layer. One could imagine Software installations becoming "create a new Docker container with this app inside of it", which then leads to it being automatically sandboxed, etc.
This is _definitely_ where I'd like to see us going. (Where "Docker" could be "some future synthesis of Docker and the LinuxApps proposal".)
On Wed, Dec 3, 2014, at 03:47 PM, Josh Boyer wrote:
Out of curiosity, couldn't you have an atomic/ostree "base" layer that is immutable (perhaps shared between Base, Server, Cloud, Workstation), and then use Docker containers on top of that as the "live" system?
Good point, I didn't bring up the Docker part of Project Atomic. I think it makes a significant amount of sense for Workstation to be investigating Docker for developer tooling, and that's already happening. Actually, containers for server-side code help bring the Server/Workstation story together far more than anything we had before.
In the package model you can of course "yum install httpd $language" on your workstation, start making a web app, and test it locally, and many people do this today. But taking that same app and shipping it to a server was a different model. With containers, it becomes a lot easier.
It's also a huge benefit for web apps on the desktop to have isolated ports - Docker makes it easy to have two web apps that both think they're listening on port 80, and on the desktop you just look at "docker ps" to find where they are.
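For instance (image names invented; the host ports are whatever Docker happens to assign):

    # both containers think they own port 80; Docker maps each to a random host port
    $ docker run -d -p 80 --name blog myorg/blog
    $ docker run -d -p 80 --name wiki myorg/wiki
    # find out where they actually landed
    $ docker port blog 80
    0.0.0.0:49153
    $ docker port wiki 80
    0.0.0.0:49154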
That said, this story breaks down a bit when one introduces clustering, and that starts to lead back to local Vagrant usage or the like.
That would still fit with the "atomic is for Docker" approach you have today, while also giving some flexibility at the application layer. One could imagine Software installations becoming "create a new Docker container with this app inside of it", which then leads to it being automatically sandboxed, etc.
No. Docker (alone) is not a desktop sandbox tool. As soon as any process connects to your X server it has total control and could be a keylogger, write data into your terminals, etc.
And even doing so has lots of potential to break things. A simple example is opening downloaded files from Firefox. If it tries to run /usr/bin/libreoffice on a downloaded ODP, that's going to fail. (Or more likely the Firefox container would simply not have a mime type association and tell you there's no app for it)
Functional desktop app containers require *deep* integration with the UI and toolkit, and modifying apps.
A related effort, Qubes OS (https://wiki.qubes-os.org/wiki/UserFaq), doesn't even try this, presumably for the reasons above.
On Wed, Dec 3, 2014 at 4:40 PM, Colin Walters walters@verbum.org wrote:
On Wed, Dec 3, 2014, at 03:47 PM, Josh Boyer wrote:
Out of curiosity, couldn't you have an atomic/ostree "base" layer that is immutable (perhaps shared between Base, Server, Cloud, Workstation), and then use Docker containers on top of that as the "live" system?
Good point, I didn't bring up the Docker part of Project Atomic. I think it makes a significant amount of sense for Workstation to be investigating Docker for developer tooling, and that's already happening. Actually, containers for server-side code help bring the Server/Workstation story together far more than anything we had before.
In the package model you can of course "yum install httpd $language" on your workstation, start making a web app, and test it locally, and many people do this today. But taking that same app and shipping it to a server was a different model. With containers, it becomes a lot easier.
It's also a huge benefit for web apps on the desktop to have isolated ports - Docker makes it easy to have two web apps that both think they're listening on port 80, and on the desktop you just look at "docker ps" to find where they are.
That said, this story breaks down a bit when one introduces clustering, and that starts to lead back to local Vagrant usage or the like.
That would still fit with the "atomic is for Docker" approach you have today, while also giving some flexibility at the application layer. One could imagine Software installations becoming "create a new Docker container with this app inside of it", which then leads to it being automatically sandboxed, etc.
No. Docker (alone) is not a desktop sandbox tool. As soon as any process connects to your X server it has total control and could be a keylogger, write data into your terminals, etc.
With X, yes. It's not _worse_ than just running it all from the same OS install though. I thought this was less of a concern with Wayland, but I will admit I could be wrong.
And even doing so has lots of potential to break things. A simple example is opening downloaded files from Firefox. If it tries to run /usr/bin/libreoffice on a downloaded ODP, that's going to fail. (Or more likely the Firefox container would simply not have a mime type association and tell you there's no app for it)
Functional desktop app containers require *deep* integration with the UI and toolkit, and modifying apps.
Um, I guess I was assuming with X being network transparent, all of the containers would remote their display (or similar with however Wayland works). For things like mime type association, I was thinking there would be a local proxy that connected via some protocol (like d-bus) that started the libreoffice container.
I don't really know, I thought about all of this for like 30 seconds. Aren't containers supposed to be the magic solution these days? I wasn't expecting it to work without effort, but I also wasn't expecting "no that can't be done" to be the answer either. Good things often take effort.
josh
On Wed, Dec 3, 2014 at 11:00 PM, Josh Boyer jwboyer@fedoraproject.org wrote:
On Wed, Dec 3, 2014 at 4:40 PM, Colin Walters walters@verbum.org wrote:
No. Docker (alone) is not a desktop sandbox tool. As soon as any process connects to your X server it has total control and could be a keylogger, write data into your terminals, etc.
With X, yes. It's not _worse_ than just running it all from the same OS install though. I thought this was less of a concern with Wayland, but I will admit I could be wrong.
No, you are not wrong; clients are isolated from each other on Wayland. See https://wiki.gnome.org/Initiatives/Wayland, the "why switch to wayland" part.
On 12/03/2014 05:05 PM, drago01 wrote:
On Wed, Dec 3, 2014 at 11:00 PM, Josh Boyer jwboyer@fedoraproject.org wrote:
On Wed, Dec 3, 2014 at 4:40 PM, Colin Walters walters@verbum.org wrote:
No. Docker (alone) is not a desktop sandbox tool. As soon as any process connects to your X server it has total control and could be a keylogger, write data into your terminals, etc.
With X, yes. It's not _worse_ than just running it all from the same OS install though. I thought this was less of a concern with Wayland, but I will admit I could be wrong.
No, you are not wrong; clients are isolated from each other on Wayland. See https://wiki.gnome.org/Initiatives/Wayland, the "why switch to wayland" part.
My understanding is that for true container isolation you need lots of changes to the desktop.
You need Wayland to eliminate the X problem. You need a new "File Manager" app that the other apps can somehow launch, since you still want a Firefox download to actually end up in the desktop/homedir in a location the user selects. That write to the desktop/homedir cannot be controlled by the Firefox container; it has to be handled by a new mechanism (see kdbus) where the "File Manager" hands Firefox a file-system object that it can write to but not create.
Then you have to worry about system defaults. If the user wants to change the screen background colors, you need to communicate these into the container application.
Helper apps like the ones Colin described have to be handled. Firefox launches evince, openoffice, and random desktop plugins like flashplugin and bjplugin. If Firefox and Thunderbird both use the same plugin, will the user need to download it multiple times? If I launch ooffice to view a .doc file from Firefox, do I want that ooffice container to be the same ooffice container that currently contains the Coke Secret Recipe, or should it be a separate ooffice container so my secrets cannot be hijacked?
There are lots of potential problems if you use full container isolation like Docker on the desktop. Using something that does more sharing of /usr into the container might make this easier.
As I found when I wrote the SELinux Sandbox, the Linux desktop is a "cesspool" of communication, and attempting to sandbox apps will have unexpected consequences.
On Thu, Dec 04, 2014 at 05:10:32AM -0500, Daniel J Walsh wrote:
As I found when I wrote the SELinux Sandbox, the Linux desktop is a "cesspool" of communication, and attempting to sandbox apps will have unexpected consequences.
But we don't have to start with the muck at the bottom. :) We can containerize the things that are easy, decompose the things which aren't as easy and still ship them as modular components, either just running them or building up whatever light sandboxing makes sense, and then move things to be more _actually_ containerized as that becomes possible.
On Thu, Dec 4, 2014 at 6:25 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Thu, Dec 04, 2014 at 05:10:32AM -0500, Daniel J Walsh wrote:
As I found when I wrote the SELinux Sandbox, the Linux desktop is a "cesspool" of communication, and attempting to sandbox apps will have unexpected consequences.
But we don't have to start with the muck at the bottom. :) We can containerize the things that are easy, decompose the things which aren't as easy and still ship them as modular components, either just running them or building up whatever light sandboxing makes sense, and then move things to be more _actually_ containerized as that becomes possible.
Right. I didn't mean to suggest it should be all containers or nothing. I meant we should be able to do a layered approach to providing things, however that makes sense now, and then move towards more sandboxing/containers over time. The benefit and focus would be to prevent 3 products from doing the same work 3 times. Create a base, add the product layers, profit (or in our case maybe "reduce technical debt" or some other fancy catchphrase).
josh
On Wed, Dec 3, 2014, at 05:00 PM, Josh Boyer wrote:
I don't really know, I thought about all of this for like 30 seconds.
I've spent a bit longer myself...after I joined Red Hat in 2004, I looked at using SELinux for this: http://selinuxsymposium.org/2005/presentations/session3/3-1-walters.pdf
Later Dan Walsh made sandbox-x: https://www.redhat.com/promo/summit/2010/presentations/summit/whats-next/thu...
But neither really started to make any of the changes necessary in the toolkit, for issues like the MIME database or inter-app IPC.
The topic has come up at GUADEC again more recently via the KDBus effort, which will help with a more secure IPC channel for everything besides Wayland. But that's only a foundational infrastructure piece for the changes that would be needed in the toolkit and apps.
Aren't containers supposed to be the magic solution these days?
Server apps tend to be designed to be distributed, and run by operations people who can understand the setup. Desktop apps, not so much.
QubesOS doesn't try - you have to make isolated desktops manually.
I wasn't expecting it to work without effort, but I also wasn't expecting "no that can't be done" to be the answer either.
It's somewhere between those extremes, but it is a *lot* of work. Probably someone should make a wiki page with links to the different efforts.
By the way, I assume that Alex will see this thread tomorrow and respond here; as far as I know, he's done the most recent work on GNOME tooling in preparation for this.
Also worth linking to Allan's design thoughts: http://blogs.gnome.org/aday/2014/07/10/sandboxed-applications-for-gnome/ http://blogs.gnome.org/aday/2014/07/23/sandboxed-applications-for-gnome-part...
And I forgot to mention, parallel to the rpm-ostree discussion, shipping apps as containers would also require fundamental redesign work in PackageKit.
I also forgot to mention fonts, codecs, and input methods before.