Hi,
There's been some discussion about having the Atomic trees be smaller; has anyone done any prototyping work on Cockpit as a privileged Docker container?
On 31.07.2014 17:14, Colin Walters wrote:
Hi,
There's been some discussion about having the Atomic trees be smaller; has anyone done any prototyping work on Cockpit as a privileged Docker container?
I've heard this concept slung around, but never saw it in real life. What does a Docker privileged container look like and how does it work? Any documentation? A trivial google search doesn't seem to turn up anything definitive.
Stef
On Thu, Jul 31, 2014 at 10:38:59PM +0200, Stef Walter wrote:
I've heard this concept slung around, but never saw it in real life. What does a Docker privileged container look like and how does it work? Any documentation? A trivial google search doesn't seem to turn up anything definitive.
"Normal" containers run with a munged network (e.g. a 172.* address), dropped kernel capabilities, and under a limited SELinux security context (e.g. system_u:system_r:svirt_lxc_net_t:s0:c712,c869). Docker containers started with the "--privileged" option still run with a munged network but have fewer (maybe even none, I'd have to check the source) kernel capabilities dropped, and run under a more lenient SELinux context, something like system_u:system_r:docker_t:s0. I can't remember exactly off the top of my head.
To give a concrete example: normal containers probably can't access the /dev/ pseudo-filesystem the way Cockpit (I assume) needs to. I would expect that a "--privileged" container could.
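A minimal way to see the difference Trevor describes is to read the kernel's own bookkeeping under /proc from inside the container; this is a generic Linux sketch, not anything Docker-specific:

```shell
# The capability bounding set (CapBnd) shrinks in a default container and
# is typically full under --privileged.
grep Cap /proc/self/status

# On an SELinux-enabled kernel the process context is readable the same
# way; fall back gracefully where SELinux is absent.
cat /proc/self/attr/current 2>/dev/null || echo "no SELinux context available"
```

Running this once on the host, once under plain docker run, and once under docker run --privileged and diffing the output makes the containment differences concrete.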
_Trevor
On 01.08.2014 07:21, Trevor Jay wrote:
"Normal" containers run with a munged network (e.g. a 172.* address), dropped kernel capabilities, and under a limited SELinux security context (e.g. system_u:system_r:svirt_lxc_net_t:s0:c712,c869). Docker containers started with the "--privileged" option still run with a munged network but have fewer (maybe even none, I'd have to check the source) kernel capabilities dropped, and run under a more lenient SELinux context, something like system_u:system_r:docker_t:s0. I can't remember exactly off the top of my head.
To give a concrete example: normal containers probably can't access the /dev/ pseudo-filesystem the way Cockpit (I assume) needs to. I would expect that a "--privileged" container could.
Interesting.
Cockpit needs to do stuff like this on the main host:
- Access the file system (e.g. the journal, and much more)
- Access the D-Bus system bus and activate things there, like udisksd, NetworkManager, systemd parts ... also some of Cockpit's dependencies, like storaged, are activated on the main system bus.
- Read access to the cgroup tree
- Connect to the docker socket
- Run host commands like shutdown
- Authenticate against the host PAM stack and user database
The actual networking in use for running Cockpit isn't *that* important, as we would connect out to NetworkManager anyway to do configuration. But we would need to be able to ask the kernel about the throughput of the various interfaces.
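Asking the kernel about interface throughput does not need a privileged API; the counters are plain files, so any container that shares (or bind-mounts) the host's /sys can read them. A rough sketch:

```shell
# Print cumulative rx/tx byte counters for every interface the kernel
# knows about; actual throughput would be computed by sampling twice.
for dev in /sys/class/net/*; do
    printf '%s rx=%s tx=%s\n' "$(basename "$dev")" \
        "$(cat "$dev/statistics/rx_bytes")" \
        "$(cat "$dev/statistics/tx_bytes")"
done
```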
Also when you connect out remotely to have Cockpit look at multiple-machines, it does so via SSH. So we would need to somehow add that SSH subprocess into an appropriate privileged container. Or perhaps stop using SSH for this purpose ...
In addition cockpit starts a real PAM/systemd/audit session once logged in, and the logged in processes run under unconfined_t selinux context (similar as you would for a shell). So the semantics of this would need to be figured out.
Lots of work. Would be interested in the results if you end up playing with this.
Would there be a way to *only* add a file system namespace containing the entire host file system, with the cockpit data/binaries bind mounted in? This would still run into a few of the problems above, but many of them would just work.
Stef
On 08/01/2014 01:39 AM, Stef Walter wrote:
On 01.08.2014 07:21, Trevor Jay wrote:
"Normal" containers run with a munged network (e.g. a 172.* address), dropped kernel capabilities, and under a limited SELinux security context (e.g. system_u:system_r:svirt_lxc_net_t:s0:c712,c869). Docker containers started with the "--privileged" option still run with a munged network but have fewer (maybe even none, I'd have to check the source) kernel capabilities dropped, and run under a more lenient SELinux context, something like system_u:system_r:docker_t:s0. I can't remember exactly off the top of my head.
Yes, a --priv container can be thought of as a container with NO security containment. The SELinux transition is to unconfined_t, and no capabilities are dropped.
The problem with them is that you are still in "Process Containment" via namespaces.
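The "process containment" here is visible directly in /proc: every process carries a set of namespace links, and two processes share a namespace exactly when the links match. A quick generic-Linux check (comparing against PID 1 usually needs root):

```shell
# Each entry is a symlink naming the namespace this shell lives in,
# e.g. "mnt:[4026531840]". A containerized process shows different
# inode numbers than host processes for the unshared namespaces.
ls -l /proc/self/ns/
readlink /proc/self/ns/mnt
```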
We have kicked around the idea of a "Super Priv container" where you could switch to a limited number of namespaces. The idea would be to only switch the mnt namespace and then maintain the current namespaces of the host. Then mount all file systems under say /sysimage. Something like
docker run --privileged --namespace-add=all --namespace-drop=mnt -v /:/sysimage cockpit
Then the cockpit daemon could run and see all of /. Theoretically, if it was statically linked, it could just chroot /sysimage inside the container. Or understand that everything is offset in /sysimage.
Bottom line, the container processes would be allowed to see the host's /proc and communicate with the fifo files in /run to talk to docker. It should be able to communicate with systemd over D-Bus ...
Problem is we don't have this yet. We have brought it up with docker and they like the idea, but there could be complications.
--net="host"
I think this will not change the network and UTS namespaces (i.e. the container keeps the host's).
None of the others have been implemented.
sh-4.3# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.199.0.1      0.0.0.0         UG        0 0          0 wlan0
10.0.0.0        0.0.0.0         255.0.0.0       U         0 0          0 tun0
10.5.30.160     0.0.0.0         255.255.255.255 UH        0 0          0 tun0
10.11.5.19      0.0.0.0         255.255.255.255 UH        0 0          0 tun0
10.199.0.0      0.0.0.0         255.255.240.0   U         0 0          0 wlan0
172.16.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0
209.132.183.55  10.199.0.1      255.255.255.255 UGH       0 0          0 wlan0
sh-4.3# hostname
redsox.boston.devel.redhat.com
sh-4.3# docker run --rm -ti -v /usr/bin/netstat:/usr/bin/netstat --net=host fedora /bin/sh
sh-4.2# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.199.0.1      0.0.0.0         UG        0 0          0 wlan0
10.0.0.0        0.0.0.0         255.0.0.0       U         0 0          0 tun0
10.5.30.160     0.0.0.0         255.255.255.255 UH        0 0          0 tun0
10.11.5.19      0.0.0.0         255.255.255.255 UH        0 0          0 tun0
10.199.0.0      0.0.0.0         255.255.240.0   U         0 0          0 wlan0
172.16.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0
209.132.183.55  10.199.0.1      255.255.255.255 UGH       0 0          0 wlan0
sh-4.2# hostname
redsox
On Fri, Aug 01, 2014 at 06:26:03AM -0400, Daniel J Walsh wrote:
We have kicked around the idea of a "Super Priv container" where you could switch to a limited number of namespaces.
If the number of namespaces we want to support switching to is limited, and --privileged containers run with a different type (docker_t) from others (svirt_lxc_net_t), do you really need upstream Docker support? Terrible idea: couldn't we take the same approach we do with services and have an executable for each namespace we want? The only purpose of these "entry points" would be to provide a transition point from docker_t. Combined with:
-v /var/..:/host --net="host"
such a container would only have to know where to expect the mount point and the directory of "namespace entry points".
Taking this approach, I was able to get a container with an unmunged network, access to the host /, and able to spawn processes running as unconfined_t. That (or less) seems sufficient for cockpit given a few modifications.
Granted, real --namespace-add and --namespace-drop support would be less of a sin against god and man... but this would allow for experimentation right now.
_Trevor
On 08/01/2014 09:45 AM, Trevor Jay wrote:
Taking this approach, I was able to get a container with an unmunged network, access to the host /, and able to spawn processes running as unconfined_t. That (or less) seems sufficient for cockpit given a few modifications.
That looks good, except you don't have /proc shared.
This works for me on newer docker.
docker run --rm -v /:/host -ti --net=host --privileged fedora /bin/sh
I was thinking of doing this with /sysimage, but maybe /host is a better name.
On 08/04/2014 03:34 PM, Daniel J Walsh wrote:
docker run --rm -v /:/host -ti --net=host --privileged fedora /bin/sh
Of course you can chroot /host and then be in a container on the host. But you lose the view of your executables in your own mount.
On Mon, Aug 04, 2014 at 03:34:11PM -0400, Daniel J Walsh wrote:
-v /var/..:/host --net="host"
That looks good, except you don't have /proc shared.
Right. The container can only access /proc and friends if you also use the policy/entrypoint hack to allow it to become unconfined_t. Like I said, this is just a dirty simulation of your future feature.
Speaking of that: you mentioned a "set" collection of namespaces/privileges to choose from at container launch time. How clear are those at this point? It would be good if we could whip up roughly equivalent types now so that the cockpit guys could begin seeing what they'd need to adjust.
_Trevor
On 08/04/2014 10:57 PM, Trevor Jay wrote:
Right. The container can only access /proc and friends if you also use the policy/entrypoint hack to allow it to become unconfined_t. Like I said, this is just a dirty simulation of your future feature.
Well, /proc is not only being blocked by SELinux; you are also still entering a different PID namespace. We have a patch working its way upstream that will allow users to specify an alternate SELinux context or to disable SELinux confinement for the container.
docker run --selinux-opt=disabled rhel7 ...
Or
docker run --selinux-opt=type:mytype_t rhel7 ...
Speaking of that: you mentioned a "set" collection of namespaces/privileges to choose from at container launch time. How clear are those at this point? It would be good if we could whip up roughly equivalent types now so that the cockpit guys could begin seeing what they'd need to adjust.
Not really sure what you mean. What exactly are you expecting, can you give me an example?
On Tue, Aug 05, 2014 at 09:14:59AM -0400, Daniel J Walsh wrote:
Well /proc is not only being blocked by SELinux, but also you are still entering a different PID namespace.
Fair enough, but with greater SELinux permissions /proc still gives you a lot of process monitoring options.
On 08/04/2014 10:57 PM, Trevor Jay wrote:
Speaking of that: you mentioned a "set" collection of namespaces/privileges to choose from at container launch time. How clear are those at this point? It would be good if we could whip up roughly equivalent types now so that the cockpit guys could begin seeing what they'd need to adjust.
Not really sure what you mean. What exactly are you expecting, can you give me an example?
Right now Docker containers run with either the svirt_lxc_net_t or docker_t types, and we've provided pre-existing policies for those types. For example, we provide the svirt_sandbox_file_t type and policy scaffolding so that it's easy to give Docker containers access to files.
With --selinux-opt=type:X opening up the possibility of running containers as more types, I'm asking whether we intend for our standard policy to provide more "canned" types for users, or whether we expect users to always roll their own?
_Trevor
On 08/06/2014 04:57 AM, Trevor Jay wrote:
Right now Docker containers run with either the svirt_lxc_net_t or docker_t types, and we've provided pre-existing policies for those types. For example, we provide the svirt_sandbox_file_t type and policy scaffolding so that it's easy to give Docker containers access to files.
With --selinux-opt=type:X opening up the possibility of running containers as more types, I'm asking whether we intend for our standard policy to provide more "canned" types for users, or whether we expect users to always roll their own?
Funny you should ask, since I just wrote an example of how you would do this for apache, for the pull request in question.
We could start adding alternate types, but it might be nicer to give people some tooling to be able to write policy quickly for those new types.
The example I attached does not have all of the capabilities you might need to run a container as apache.
sesearch -A -s svirt_apache_t | grep cap
allow svirt_apache_t svirt_apache_t : process { fork sigchld sigkill sigstop signull signal getsched setsched getpgid setpgid getcap setcap getattr setrlimit } ;
allow svirt_apache_t svirt_apache_t : capability net_bind_service ;
I only give it net_bind_service, but if the apache process starts as root and becomes non-root, it would probably need setuid and setgid.
Not sure what else it would need.
If we had a group of examples, I think it would be a nice idea.
In sandbox -X we wrote a few examples, but in a server world this will quickly expand.
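The setuid/setgid requirement anticipated above is the standard privilege-drop sequence of a root-started daemon. A sketch of what that looks like in shell (util-linux's setpriv is assumed to be available; the "apache" user and the httpd command line are illustrative, not taken from the policy above):

```shell
# Bind privileged ports as root first (needs net_bind_service), then
# re-execute the worker as the unprivileged user. Changing the real uid
# and gid here is exactly what needs the setuid and setgid capabilities.
setpriv --reuid apache --regid apache --init-groups \
    /usr/sbin/httpd -DFOREGROUND
```

Inside a confined container this call would be expected to fail with EPERM unless the container's type allows those capabilities.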
On 06.08.2014 15:37, Daniel J Walsh wrote:
So to summarize this... Do you know of anyone who's tried this stuff out with Cockpit? Interested in the results in any case, and open to patches that help make it work.
Stef
On Fri, Aug 8, 2014, at 11:57 AM, Stef Walter wrote:
So to summarize this... Do you know of anyone who's tried this stuff out with Cockpit? Interested in the results in any case, and open to patches that help make it work.
I don't offhand. One way to approach this might be to mount the host file system at /sysroot or something in the container. Cockpit would have to conditionalize itself and say: am I in a container? Look at /sysroot/proc.
Though that might get untenable for things like systemd APIs that are basically just wrappers around looking at files in /run.
And for that matter, having to find the system bus at /sysroot/run/dbus/system_bus_socket.
It's messy - at least while trying to preserve the traditional deployment too.
Maybe flip it around and try to have cockpit-in-container have its data all isolated in /usr/lib/cockpit (including the binaries).
On the other hand - if we made Cockpit work in this pattern, I'd say it would work for any management agent / config system / etc.
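For the bus-socket problem above there is at least a standard escape hatch: libdbus and GDBus honor the DBUS_SYSTEM_BUS_ADDRESS environment variable, so a container seeing the host filesystem under a prefix could be pointed at the real bus without patching every caller. The /sysroot prefix here is just the convention discussed in this thread, not a shipped standard:

```shell
# Redirect all system-bus connections in this environment at the
# host's socket as seen through the bind-mounted prefix.
export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/sysroot/run/dbus/system_bus_socket
```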
On 09.08.2014 01:04, Colin Walters wrote:
On Fri, Aug 8, 2014, at 11:57 AM, Stef Walter wrote:
So to summarize this... Do you know of anyone who's tried this stuff out with Cockpit? Interested in the results in any case, and open to patches that help make it work.
I don't offhand. One way to approach this might be to mount the host file system at /sysroot or something in the container. The cockpit would have to conditionalize itself and say: Am I in a container?
Is there a standard way to do this on Linux?
Look at /sysroot/proc.
So I guess you're mostly talking about cockpit-agent here, although cockpit-ws would also need to somehow trick PAM into looking at the /sysroot/etc/pam.d path ... but I guess that could be done via a symlink.
I guess that also when we connect to such a system via ssh, we would have to run the cockpit-agent command at a different path? What would that path be?
Though that might get untenable for things like systemd APIs that are basically just wrappers around looking at files in /run.
We would probably have to symlink /run to /sysroot/run
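The symlink idea can be dry-run in a scratch directory to check the shape of the layout (in a real image the roots would be / and /sysroot, not a tmpdir):

```shell
root=$(mktemp -d)
mkdir -p "$root/sysroot/run"
# Use a relative target so the link stays valid wherever the tree
# ends up mounted.
ln -s sysroot/run "$root/run"
readlink "$root/run"   # prints "sysroot/run"
```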
In fact the only interesting parts of the cockpit container file system would be /usr/libexec/cockpit-* and /usr/share/cockpit.
Maybe flip it around and try to have cockpit-in-container have its data all isolated in /usr/lib/cockpit (including the binaries).
On the other hand - if we made Cockpit work in this pattern, I'd say it would work for any management agent / config system / etc.
Right. So how does this work in real life (for example, with Docker)? Is there a way to just remount / with a bind mount into the container at / and then remount the container file system in an alternate place?
Stef
On 08/10/2014 05:02 AM, Stef Walter wrote:
Is there a standard way to do this on Linux?
We are working to add an environment variable that says container=docker in all Fedora/RHEL images. We are hoping to get this ability upstream soon. If your question was about /sysroot for the host, I guess the closest thing to a standard is anaconda and libguestfs. But those two do not agree. I think someone in your group suggested /host.
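Assuming that variable lands as described, detection from a shell could look like this (the /.dockerenv check is a Docker implementation detail, added here only as a fallback assumption):

```shell
in_container() {
    # Prefer the explicit container=docker variable; fall back to the
    # marker file Docker leaves at the container root.
    [ "${container:-}" = "docker" ] || [ -f /.dockerenv ]
}

if in_container; then
    echo "running inside a container"
else
    echo "running on the host"
fi
```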
In fact the only interesting parts of the cockpit container file system would be /usr/libexec/cockpit-* and /usr/share/cockpit.
Well, stuff in /usr/lib64 is also probably used, plus any parts of coreutils you use. Potentially also some configuration you might want shipped within the container.
Right. So how does this work in real life (for example with Docker). Is there a way to just remount / with a bind mount into the container at / and then remount the container file system in an alternate place?
You could mount / at / theoretically, but I believe as soon as you did this your app would lose its shared libraries and could start acting strange.
On 12.08.2014 15:51, Daniel J Walsh wrote:
Well, stuff in /usr/lib64 is also probably used, plus any parts of coreutils you use. Potentially also some configuration you might want shipped within the container.
Cockpit has two kinds of dependencies:
- Some basic libraries it uses to run.
- The stuff it actually uses to manage the system, like NetworkManager, udisks, docker, etc...
The latter would probably need to run outside the privileged container ... or be privileged-container capable itself.
You could mount / at / theoretically, but I believe as soon as you did this your app would lose its shared libraries and could start acting strange.
Could intelligent use of LD_LIBRARY_PATH be used to solve this?
Stef
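As a sanity check of the LD_LIBRARY_PATH idea: the dynamic linker does consult LD_LIBRARY_PATH before the default search paths, so binaries from a bind-mounted cockpit tree could in principle be launched with it pointing at their own bundled libraries. A trivial demonstration (the directory is a throwaway; a real setup would point at something like the container image's /usr/lib64):

```shell
extra=$(mktemp -d)
# An extra (empty) search directory must not break normal resolution.
LD_LIBRARY_PATH="$extra" /bin/true && echo "linker still resolves fine"
```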
On 12.08.2014 16:22, Stef Walter wrote:
I've started a Trello card here ... and listed some of the issues that came to mind:
https://trello.com/c/GQNcQGni/44-privileged-container
Stef
Adding Scott Collier and William Henry, to help with this effort.
Stef, can we get the list of packages needed for Cockpit?
BTW is this getting backported to Fedora 20? How are we getting this into RHEL 7 today?
On 12.08.2014 21:26, William Henry wrote:
Stef can we get the list of packages needed for Cockpit?
Currently it looks like this:
Direct dependencies (link time):
accountsservice-libs >= 0.6.35
glib >= 2.24
glib-networking >= 2.24
gudev >= 165
json-glib >= 0.14.0
libssh >= 0.6.0
libsystemd-journal >= 187
libsystemd-daemon
lvm2
keyutils
krb5
pam
polkit-agent-1 >= 0.105
udisks2 >= 2.1.0
Run time dependencies on system being managed (outside of a theoretical superpriv container):
dbus
docker (optional)
geard (optional)
firewalld (optional)
polkit
lvm2
systemd
mdadm
NetworkManager
realmd
storaged
udisks2
Other (non-standard) build dependencies
libgsystem
jsl
pkg-config
xsltproc
docbook
libxslt
perl-Locale-PO
perl-JSON
BTW is this getting backported to Fedora 20?
Yes, it's been available in Fedora 20 updates-testing since February.
$ sudo yum install --enablerepo=updates-testing cockpit
$ sudo systemctl enable cockpit.socket
$ xdg-open http://localhost:1001
How are we getting this into RHEL 7 today?
So far people can try this out in RHEL 7 Atomic. It's not yet available in RHEL 7 proper.
Stef
On 13.08.2014 08:43, Stef Walter wrote:
Direct dependencies (link time):
accountsservice-libs >= 0.6.35
glib >= 2.24
Make that 2.34
William Henry whenry@redhat.com writes:
Stef can we get the list of packages needed for Cockpit?
This is the current list in Fedora 20 updates-testing:
accountsservice-libs-0.6.35-4.fc20.x86_64
bash-4.2.47-3.fc20.x86_64
cockpit-0.20-1.fc20.x86_64
cockpit-assets-0.20-1.fc20.noarch
dbus-1.6.12-9.fc20.x86_64
glib2-2.38.2-2.fc20.x86_64
glibc-2.18-12.fc20.x86_64
glib-networking-2.38.2-1.fc20.x86_64
json-glib-0.16.2-1.fc20.x86_64
keyutils-libs-1.5.9-1.fc20.x86_64
libgcc-4.8.3-1.fc20.x86_64
libgudev1-208-21.fc20.x86_64
libssh-0.6.3-1.fc20.x86_64
libudisks2-2.1.2-2.fc20.x86_64
lvm2-2.02.106-1.fc20.x86_64
mdadm-3.3-7.fc20.x86_64
pam-1.1.8-1.fc20.x86_64
policycoreutils-2.2.5-4.fc20.x86_64
polkit-0.112-2.fc20.x86_64
realmd-0.14.6-5.fc20.x86_64
selinux-policy-3.12.1-179.fc20.noarch
selinux-policy-targeted-3.12.1-179.fc20.noarch
storaged-0.2.0-1.fc20.x86_64
systemd-208-21.fc20.x86_64
systemd-libs-208-21.fc20.x86_64
udisks2-2.1.2-2.fc20.x86_64
BTW is this getting backported to Fedora 20? How are we getting this into RHEL 7 today?
Cockpit is available for Fedora 20 from the updates-testing repository, and the goal is to have it installed by default in Fedora 21 Server. I don't think it is available for RHEL 7 yet.
I have opened an Etherpad page to define a superpriv container. I think we need to use it to discuss this before we bring it up with docker.
Please add comments to the pad. When we have come to some agreement, we can move this to a trello card and start working on a patch to implement it.
Daniel J Walsh dwalsh@redhat.com writes:
I have opened an Etherpad page to define a superpriv container.
Would "pallet" be a suitable name for this? You know, a container without walls..
cockpit-devel mailing list
cockpit-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/cockpit-devel