Hi,
here are some brainstorming notes about what might come next regarding Cockpit apps. Any feedback highly appreciated!
Cockpit can discover and install add-ons that come as RPMs or DEBs. Cockpit uses the regular AppStream mechanism for this that is also used by GNOME Software to discover and install Desktop "add-ons".
Cockpit reads the "AppStream collection metadata" to discover available add-ons, and the RPMs and DEBs install small "AppStream upstream metadata" files into /usr/share that Cockpit reads to figure out what is installed.
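To make the "what is installed" part concrete, here is a minimal sketch (Python, not Cockpit's actual code) of scanning those upstream metainfo files. The two directories are the usual AppStream locations; the "org.cockpit-project.cockpit" extends-ID is my assumption for what a Cockpit add-on would declare.

    # Rough sketch (not Cockpit's actual code): list installed add-ons by
    # scanning AppStream upstream metainfo files.
    import glob
    import xml.etree.ElementTree as ET

    def installed_addons():
        addons = []
        for path in (glob.glob("/usr/share/metainfo/*.xml") +
                     glob.glob("/usr/share/appdata/*.xml")):
            try:
                root = ET.parse(path).getroot()
            except ET.ParseError:
                continue
            # Upstream metainfo has a single <component type="..."> root.
            if root.tag != "component" or root.get("type") != "addon":
                continue
            extends = [e.text for e in root.findall("extends")]
            if "org.cockpit-project.cockpit" in extends:  # assumed ID
                addons.append(root.findtext("id"))
        return addons

    print(installed_addons())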
We also want to discover and install Cockpit add-ons that are in container images. The Desktop world has Flatpaks for this, and they are working on shipping Flatpaks as standard OCI container images. Flatpak also uses AppStream to describe individual images. Owen is working on "metastore", which should give us the equivalent of the "AppStream collection metadata" for container image registries. Once pulled, we can look at the labels and annotations of the images/containers to find the "AppStream upstream metadata".
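For the "look at the labels" part, something like the sketch below could work once an image has been pulled. The label name is purely hypothetical; whatever metastore / Flatpak-in-OCI ends up standardizing on would go there instead.

    # Rough sketch: after pulling an image, read its labels and look for
    # one that points at (or embeds) the AppStream upstream metadata.
    # The label name below is purely hypothetical.
    import json
    import subprocess

    def appstream_label(image):
        out = subprocess.check_output(["docker", "inspect", image])
        config = json.loads(out)[0].get("Config") or {}
        labels = config.get("Labels") or {}
        return labels.get("org.example.appstream.metainfo")

    print(appstream_label("registry.example.com/cockpit-addon:latest"))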
But how does an add-on in a container work?
I would try this:
- A container needs to be running before Cockpit takes note of it. Having just the image pulled to local storage means nothing.
- When starting a session, the shell will find all running containers with a special label and "exec" a bridge in them (if the logged-in user has enough permissions); see the sketch after this list.
- Implementation-wise, such a container is like a remote machine. URLs refer to it explicitly. Its packages don't need to be distinct from those of other containers.
- UI-wise, their manifests and iframes are merged with those from the bridge that runs outside of any container. We could start by only allowing "dashboard"s in containers; that ought to be quite simple.
- We can even extend this merging to real remote machines so that you get all dashboards from all your machines merged in your browser window.
- In the "Applications" tool, we don't talk about "installing" applications that are in containers, but about "enabling" them. Enabling such a container means running it in its default way, and making sure it gets started on every boot. Somehow.
- We could extend this terminology to RPM/DEB apps as well. "Enabling" an app makes its entry appear in Cockpit, "disabling" makes it go away. If that needs some more rpms or images or something else, that's an implementation detail.
- The process of enabling something could talk more clearly about what will happen and let the user confirm this.
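Here is the sketch promised in the second bullet (finding labelled containers and exec'ing a bridge in them), using the docker CLI just for illustration. The marker label is an assumption, and the real shell would of course do this natively and speak Cockpit's message protocol over the resulting pipe.

    # Rough sketch of the "find labelled containers, exec a bridge" step.
    import subprocess

    MARKER_LABEL = "org.example.cockpit-addon"  # hypothetical marker label

    def addon_containers():
        out = subprocess.check_output(
            ["docker", "ps", "--filter", "label=" + MARKER_LABEL,
             "--format", "{{.ID}}"])
        return out.decode().split()

    def exec_bridge(container_id):
        # stdin/stdout of the exec'd bridge become the channel transport
        return subprocess.Popen(
            ["docker", "exec", "-i", container_id, "cockpit-bridge"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    bridges = [exec_bridge(c) for c in addon_containers()]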
Does this make sense?
On 01/18/2018 03:48 AM, Marius Vollmer wrote:
> Hi,
> here are some brainstorming notes about what might come next regarding Cockpit apps. Any feedback highly appreciated!
> Cockpit can discover and install add-ons that come as RPMs or DEBs. Cockpit uses the regular AppStream mechanism for this that is also used by GNOME Software to discover and install Desktop "add-ons".
> Cockpit reads the "AppStream collection metadata" to discover available add-ons, and the RPMs and DEBs install small "AppStream upstream metadata" files into /usr/share that Cockpit reads to figure out what is installed.
> We also want to discover and install Cockpit add-ons that are in container images. The Desktop world has Flatpaks for this, and they are working on shipping Flatpaks as standard OCI container images. Flatpak also uses AppStream to describe individual images. Owen is working on "metastore", which should give us the equivalent of the "AppStream collection metadata" for container image registries. Once pulled, we can look at the labels and annotations of the images/containers to find the "AppStream upstream metadata".
> But how does an add-on in a container work?
> I would try this:
> - A container needs to be running before Cockpit takes note of it. Having just the image pulled to local storage means nothing.
> - When starting a session, the shell will find all running containers with a special label and "exec" a bridge in them (if the logged-in user has enough permissions).
I was thinking that containers should add a JSON file to the Cockpit machines directory. Privileged containers can do that automatically; others have to be specifically enabled in Cockpit (part of the "Applications" tool?), which would then write the needed JSON for them.
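As a rough illustration of the non-privileged case, "enabling" could boil down to writing something like the following (assuming the /etc/cockpit/machines.d layout; the exact keys may well differ).

    # Rough sketch of what "enabling" a non-privileged container could
    # write.  Path and key names are my guess at the machines.d format
    # and may not match exactly.
    import json

    def enable_container(name, container_id):
        entry = {
            name: {
                "address": container_id,  # however the bridge is reached
                "visible": True,
                "label": name,
            }
        }
        with open("/etc/cockpit/machines.d/%s.json" % name, "w") as f:
            json.dump(entry, f, indent=2)

    enable_container("my-addon", "3f4c9d2a8b10")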
> - Implementation-wise, such a container is like a remote machine. URLs refer to it explicitly. Its packages don't need to be distinct from those of other containers.
> - UI-wise, their manifests and iframes are merged with those from the bridge that runs outside of any container. We could start by only allowing "dashboard"s in containers; that ought to be quite simple.
We need to teach the machine dialogs about these machines, so that the troubleshooting dialog only shows when dealing with an SSH connection.
Then, once we have dashboards working, we should also be able to have containers contribute links to the actual machine navigation.
> - We can even extend this merging to real remote machines so that you get all dashboards from all your machines merged in your browser window.
I don't know that we want to do this; a dashboard on the secondary machine might reference the primary machine. Only loading dashboards from the primary machine and its containers makes sense to me.
> - In the "Applications" tool, we don't talk about "installing" applications that are in containers, but about "enabling" them. Enabling such a container means running it in its default way, and making sure it gets started on every boot. Somehow.
> - We could extend this terminology to RPM/DEB apps as well. "Enabling" an app makes its entry appear in Cockpit, "disabling" makes it go away. If that needs some more rpms or images or something else, that's an implementation detail.
> - The process of enabling something could talk more clearly about what will happen and let the user confirm this.
> Does this make sense?