Hi,
I was working on a KCM server for SSSD for some time already in parallel with the files provider and had some discussions with Simo as well. Of course my intent wasn't to implement a feature secretly without a design review, but to have a prototype to base a proper design on :)
However, it makes sense to have a peer-reviewed design page now, also because of Fedora's move towards Kerberos and the KDC proxy, which leads to questions on the Fedora lists about ccache renewals and so on -- so I think it's time to pitch the design to Fedora at least.
Here is the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/KCM and here is the code of the KCM responder so far: https://github.com/jhrozek/sssd/tree/kcm
Time-wise, I would like to return to working on KCM now that the files provider is more or less done and tested.
For your convenience, the design page text is included below as well.
= KCM server for SSSD =
Related ticket(s):
 * https://fedorahosted.org/sssd/ticket/2887

External links:
 * [http://k5wiki.kerberos.org/wiki/Projects/KCM_client MIT wiki KCM documentation]
=== Problem statement ===
This design page describes adding a new SSSD responder, called `sssd_kcm`. This component would manage Kerberos credential caches and store them in SSSD's secrets storage.
=== Use cases ===
 * A sysadmin needs to deploy applications in containers without worrying about applications clobbering each other's credential caches in a kernel keyring, as keyrings are not namespaced
 * A user wants her Kerberos ticket renewed automatically, regardless of whether the ticket was acquired through a PAM conversation with SSSD or from the command line with kinit
 * A system admin wants to leverage a collection-aware credential cache for most applications on their systems, yet enable a legacy application that can only work with a FILE-based ccache to interoperate with them
=== Overview of the solution ===
Over time, both libkrb5 and SSSD have used different credential cache types to store Kerberos credentials - going from simple file-based storage (`FILE:`) to a directory (`DIR:`) and most recently a kernel-keyring-based cache (`KEYRING:`).
Each of these caches has its own set of advantages and disadvantages. The `FILE` ccache is very widely supported, but does not support multiple primary caches. The `DIR` cache does, but creating and managing the directories, including proper access control, can be tricky. The `KEYRING` cache is not well suited for cases where multiple semi-isolated environments might share the same kernel. The lifetime of credential caches is not managed automatically by any of these cache types; that is only possible with the help of a daemon like SSSD.
An interesting credential cache type that might solve the issues mentioned above is `KCM`. With KCM, the Kerberos caches are not stored in a "passive" store, but managed by a daemon. In this setup, the Kerberos library (typically used through an application such as `kinit`) is a "KCM client" and the daemon is referred to as a "KCM server".
Having the Kerberos credential caches managed by a daemon has several advantages:
 * the daemon is stateful and can perform tasks like Kerberos credential cache renewals or reaping old ccaches. Some tasks, like renewals, are possible already with SSSD, but only for tickets that SSSD itself acquired (typically via a login through `pam_sss.so`) and tracks. Tickets acquired otherwise, most notably through kinit, wouldn't be tracked and renewed.
 * since the process runs in userspace, it is subject to UID namespacing, [http://www.projectatomic.io/blog/2014/09/yet-another-reason-containers-don-t... unlike the kernel keyring]
 * unlike the kernel keyring-based cache, which is entirely dependent on the UIDs of the callers and in a containerized environment is shared between all containers, the KCM server's entry point is a UNIX socket which can be bind-mounted to only some containers
 * the protocol between the client and the server can be extended for custom operations such as dumping a cache in a different format to a specific location. This would be beneficial for applications that only understand a certain Kerberos ccache type - for example, some legacy applications only know how to deal with a FILE-based cache, thus preventing the use of cache collections
Only the Heimdal Kerberos implementation currently implements a KCM server, but both Heimdal and MIT implement the client-side operations (in libkrb5) to manage KCM-based Kerberos ccaches. This design page describes adding a KCM server to SSSD. While it is of course possible to create a completely standalone daemon that implements a KCM server, doing so in the context of SSSD has several advantages, notably:
 * easy access to SSSD's authentication provider, which already has existing and tested code to renew Kerberos credentials on the user's behalf
 * SSSD already has a D-Bus API that could publish information about Kerberos tickets and, for example, emit signals that a graphical application can consume
 * SSSD has a 'secrets provider' to store data at rest. It makes sense to leverage this component to store Kerberos ccaches persistently
=== Implementation details ===
A new SSSD responder will be added. Since accessing the Kerberos credentials is quite an infrequent operation, the responder will be socket-activated.
This responder would implement the same subset of the KCM protocol that the MIT client libraries implement. Contrary to Heimdal's KCM server, which just stores the credential caches in memory, the SSSD KCM server would store the ccaches in the secrets database through the sssd-secrets responder's [https://jhrozek.fedorapeople.org/sssd/1.14.2/man/sssd-secrets.5.html public REST API].
For user credentials, the KCM server would use a secrets responder URI like `/kcm/users/1234/X`, where 1234 is the user ID and X is the residual. The client then gets assigned a KRB5CCNAME of `KCM:1234:X`. Internally, the secrets responder will store the credential caches under a new base DN `cn=kcm`.
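The mapping between the ccache name handed to the client and the secrets responder path could be sketched as follows. This is an illustrative helper, not actual SSSD code; the URI layout is taken from the paragraph above.

```python
def ccache_to_secrets_path(ccname):
    """Map a KRB5CCNAME such as 'KCM:1234:X' to the secrets
    responder path '/kcm/users/1234/X' described above."""
    scheme, uid, residual = ccname.split(":", 2)
    if scheme != "KCM":
        raise ValueError("not a KCM ccache name: %r" % ccname)
    return "/kcm/users/%s/%s" % (uid, residual)

print(ccache_to_secrets_path("KCM:1234:X"))  # prints /kcm/users/1234/X
```

The residual is kept opaque here, mirroring how the residual part of a ccache name is normally treated by libkrb5.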
The secrets responder's quota on secrets must be made modular to allow a different number of secrets per base DN (so, effectively, a different quota for secrets and for credentials). What to do when the quota is reached is debatable - we should probably first remove service (non-TGT) tickets belonging to still-valid TGTs, and if that's not possible, just fail. A failure in this case would be no different from a failure when the disk is full while storing a FILE-based ccache.
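The eviction policy suggested above (drop service tickets first, otherwise fail) could look roughly like this. The data model is made up for illustration and is not the secrets responder's actual storage format.

```python
def make_room(creds, quota):
    """Make room for one new credential: evict service (non-TGT)
    tickets until the cache is below quota; fail if only TGTs remain,
    analogous to a 'disk full' error for a FILE-based ccache."""
    creds = list(creds)
    while len(creds) >= quota:
        victim = next((c for c in creds if not c["is_tgt"]), None)
        if victim is None:
            raise OSError("ccache quota reached")
        creds.remove(victim)
    return creds
```

Service tickets are cheap to evict because libkrb5 can transparently re-fetch them from the KDC as long as a valid TGT remains in the cache.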
The KCM responder would renew the user credentials by starting a tevent timer which would then contact the SSSD Data Provider for the given UID and principal, asking for the credentials to be renewed. Another tevent timer would reap and remove a ccache that reaches the end of its lifetime.
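How the renewal timer might pick its next firing time can be sketched as below. The half-of-remaining-lifetime heuristic is an assumption for illustration, not SSSD's actual renewal policy, and the real implementation would compute this in C inside a tevent timer callback.

```python
def next_renewal_delay(now, endtime, min_delay=60):
    """Seconds until the next renewal attempt: half of the ticket's
    remaining lifetime, but no sooner than min_delay seconds from now.
    Returns None for an expired ticket, which the reaper timer should
    remove instead of renewing."""
    remaining = endtime - now
    if remaining <= 0:
        return None
    return max(remaining / 2.0, min_delay)
```

Halving the remaining lifetime on each attempt naturally spaces retries out early and tries more often as expiry approaches, without ever busy-looping thanks to the floor.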
In the future, SSSD-specific operations such as writing out a FILE-based ccache might be added. The SSSD D-Bus interface would also be extended to publish information about credential activity (such as a ticket being acquired or renewed).
=== Configuration changes ===
No KCM-specific configuration options will be added. The SSSD KCM responder would use the same common options as other SSSD services, such as the idle timeout.

We can add a configurable KCM socket location later, if needed, but to start with it's fine to default to `/var/run/.heim_org.h5l.kcm-socket`, mostly because that's what MIT defaults to as well.

'''Q''': Should we add an option to explicitly enable ccache renewals and default to no renewals? I don't think this would add any security though; the attacker can just run 'kinit -R' on behalf of the user anyway.
=== How To Test ===
In order for the admin to start using the KCM service, the sssd-kcm responder's systemd service must be enabled. Then, libkrb5 must also be configured to use KCM as its default ccache type in `/etc/krb5.conf`:
{{{
[libdefaults]
    default_ccache_name = KCM:
}}}
After that, all common operations like kinit, kdestroy or login through pam_sss should just work and store their credentials in the KCM server.
The KCM server must implement access control correctly, so that even trying to access another user's KCM credentials by setting KRB5CCNAME to `KCM:1234:RESIDUAL` would not work (except for root).
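On Linux, this UID-based check can be done with `SO_PEERCRED` on the responder's UNIX socket: compare the caller's UID against the UID embedded in the requested ccache name. The sketch below is an assumption about how such a check could look, not the responder's actual code (which would be C on top of tevent).

```python
import os
import socket
import struct

def peer_uid(conn):
    """Return the UID of the process on the other end of a UNIX
    socket, read from the Linux SO_PEERCRED option (struct ucred:
    pid, uid, gid as three native ints)."""
    ucred = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", ucred)
    return uid

def may_access(conn, ccache_uid):
    """Allow access if the peer owns the ccache or is root."""
    uid = peer_uid(conn)
    return uid == 0 or uid == ccache_uid

# Demonstration with a socketpair; the peer is this very process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
print(peer_uid(b) == os.getuid())  # prints True on Linux
a.close()
b.close()
```

Because the kernel fills in the peer credentials, a client cannot forge its UID by merely crafting a different KRB5CCNAME.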
The tickets must persist across a restart of the KCM server or a reboot of the machine.
As far as automated unit and integration testing is concerned, we need to make sure that MIT's test suite passes with the Kerberos ccache defaulting to KCM and the SSSD KCM daemon running. In SSSD upstream, we should write integration tests that run an MIT KDC under socket_wrapper to exercise the KCM server.
=== How To Debug ===
The SSSD KCM server would use the same DEBUG facility as other SSSD services. In order to debug the client-side operations, setting the `KRB5_TRACE` environment variable might come in handy.
When debugging the setup, the admin might also inspect the SSSD secrets database (if permissible by the SELinux policy) to see what credential caches have been stored by SSSD.
=== Authors ===
 * Jakub Hrozek jhrozek@redhat.com
 * Simo Sorce simo@redhat.com
Some thoughts inline:
On 11/22/2016 02:51 AM, Jakub Hrozek wrote:
...
=== Implementation details ===
A new SSSD responder will be added. Since accessing the Kerberos credentials is quite an infrequent operation, the responder will be socket-activated.

This responder would implement the same subset of the KCM protocol that the MIT client libraries implement. Contrary to Heimdal's KCM server, which just stores the credential caches in memory, the SSSD KCM server would store the ccaches in the secrets database through the sssd-secrets responder's [https://jhrozek.fedorapeople.org/sssd/1.14.2/man/sssd-secrets.5.html public REST API].

For user credentials, the KCM server would use a secrets responder URI like `/kcm/users/1234/X`, where 1234 is the user ID and X is the residual. The client then gets assigned a KRB5CCNAME of `KCM:1234:X`. Internally, the secrets responder will store the credential caches under a new base DN `cn=kcm`.

The secrets responder's quota on secrets must be made modular to allow a different number of secrets per base DN (so, effectively, a different quota for secrets and for credentials). What to do when the quota is reached is debatable - we should probably first remove service (non-TGT) tickets belonging to still-valid TGTs, and if that's not possible, just fail. A failure in this case would be no different from a failure when the disk is full while storing a FILE-based ccache.

The KCM responder would renew the user credentials by starting a tevent timer which would then contact the SSSD Data Provider for the given UID and principal, asking for the credentials to be renewed. Another tevent timer would reap and remove a ccache that reaches the end of its lifetime.
OK, so the service is only semi-socket-activated? If we're keeping tevent timers around for renewals and reaping, the service won't be exiting unless all tickets have expired and been reaped.
Would it be possible to look into setting non-persistent systemd timer units instead that would "wake up" the sssd_kcm service at appropriate times to do renewal and expiration?
systemd provides the CreateTransientUnit() method on its public API that we could use for this purpose. If we did it this way, then the service would only need to be activated whenever a ticket was actually being retrieved from the collection.
In the future, SSSD-specific operations such as writing out a FILE-based ccache might be added. The SSSD D-Bus interface would also be extended to publish information about credential activity (such as a ticket being acquired or renewed).
This should be a high priority, since it will benefit tools such as GNOME Online Accounts greatly (right now, they have to poll the kernel keyring and are not happy about it; with FILE and DIR caches, they at least could get notifications via inotify)
=== Configuration changes ===
No KCM-specific configuration options will be added. The SSSD KCM responder would use the same common options as other SSSD services, such as the idle timeout.

We can add a configurable KCM socket location later, if needed, but to start with it's fine to default to `/var/run/.heim_org.h5l.kcm-socket`, mostly because that's what MIT defaults to as well.

'''Q''': Should we add an option to explicitly enable ccache renewals and default to no renewals? I don't think this would add any security though; the attacker can just run 'kinit -R' on behalf of the user anyway.
If the user asked for a renewable ticket in the first place (and the server permitted it), then I don't see any reason not to just go ahead and renew it by default.
=== How To Test ===
In order for the admin to start using the KCM service, the sssd-kcm responder's systemd service must be enabled. Then, libkrb5 must also be configured to use KCM as its default ccache type in `/etc/krb5.conf`:
{{{
[libdefaults]
    default_ccache_name = KCM:
}}}

After that, all common operations like kinit, kdestroy or login through pam_sss should just work and store their credentials in the KCM server.

The KCM server must implement access control correctly, so that even trying to access another user's KCM credentials by setting KRB5CCNAME to `KCM:1234:RESIDUAL` would not work (except for root).

The tickets must persist across a restart of the KCM server or a reboot of the machine.

As far as automated unit and integration testing is concerned, we need to make sure that MIT's test suite passes with the Kerberos ccache defaulting to KCM and the SSSD KCM daemon running. In SSSD upstream, we should write integration tests that run an MIT KDC under socket_wrapper to exercise the KCM server.
How do containers access the KCM? Would they have to run their own copy internal to the container? Would we bind-mount the /var/run/.heim_org.h5l.kcm-socket and then work some namespacing magic in the host?
You call out in the introduction that this will help address container use-cases, but then don't describe that implementation. This seems like an important piece of the puzzle that should be added to the page (or made more clear, since if it's in there, I can't spot it).
On Tue, 2016-11-22 at 09:23 -0500, Stephen Gallagher wrote:
Some thoughts inline:
On 11/22/2016 02:51 AM, Jakub Hrozek wrote:
...
=== Implementation details ===
A new SSSD responder will be added. Since accessing the Kerberos credentials is quite an infrequent operation, the responder will be socket-activated.

This responder would implement the same subset of the KCM protocol that the MIT client libraries implement. Contrary to Heimdal's KCM server, which just stores the credential caches in memory, the SSSD KCM server would store the ccaches in the secrets database through the sssd-secrets responder's [https://jhrozek.fedorapeople.org/sssd/1.14.2/man/sssd-secrets.5.html public REST API].

For user credentials, the KCM server would use a secrets responder URI like `/kcm/users/1234/X`, where 1234 is the user ID and X is the residual. The client then gets assigned a KRB5CCNAME of `KCM:1234:X`. Internally, the secrets responder will store the credential caches under a new base DN `cn=kcm`.

The secrets responder's quota on secrets must be made modular to allow a different number of secrets per base DN (so, effectively, a different quota for secrets and for credentials). What to do when the quota is reached is debatable - we should probably first remove service (non-TGT) tickets belonging to still-valid TGTs, and if that's not possible, just fail. A failure in this case would be no different from a failure when the disk is full while storing a FILE-based ccache.

The KCM responder would renew the user credentials by starting a tevent timer which would then contact the SSSD Data Provider for the given UID and principal, asking for the credentials to be renewed. Another tevent timer would reap and remove a ccache that reaches the end of its lifetime.
OK, so the service is only semi-socket-activated? If we're keeping tevent timers around for renewals and reaping, the service won't be exiting unless all tickets have expired and been reaped.
Would it be possible to look into setting non-persistent systemd timer units instead that would "wake up" the sssd_kcm service at appropriate times to do renewal and expiration?
systemd provides the CreateTransientUnit() method on its public API that we could use for this purpose. If we did it this way, then the service would only need to be activated whenever a ticket was actually being retrieved from the collection.
I am trying to think whether this would gain us anything. What would you use as a reasonable timeout to decide to exit? There are other events we may want to detect in the future. For example, we may want to destroy all of a user's ccaches if all their sessions go away; this requires active probing, though, but I guess it could also be a timer, or maybe systemd has a way to notify and start a process when this happens too?
In the future, SSSD-specific operations such as writing out a FILE-based ccache might be added. The SSSD D-Bus interface would also be extended to publish information about credential activity (such as a ticket being acquired or renewed).
This should be a high priority, since it will benefit tools such as GNOME Online Accounts greatly (right now, they have to poll the kernel keyring and are not happy about it; with FILE and DIR caches, they at least could get notifications via inotify)
That's the main target, but they will still need to keep the old code for now, until we make sssd-kcm mandatory for gnome sessions.
=== Configuration changes ===
No KCM-specific configuration options will be added. The SSSD KCM responder would use the same common options as other SSSD services, such as the idle timeout.

We can add a configurable KCM socket location later, if needed, but to start with it's fine to default to `/var/run/.heim_org.h5l.kcm-socket`, mostly because that's what MIT defaults to as well.

'''Q''': Should we add an option to explicitly enable ccache renewals and default to no renewals? I don't think this would add any security though; the attacker can just run 'kinit -R' on behalf of the user anyway.
If the user asked for a renewable ticket in the first place (and the server permitted it), then I don't see any reason not to just go ahead and renew it by default.
You might be surprised at what regulations may require in this area, but I tend to agree that it is ok as a default.
=== How To Test ===
In order for the admin to start using the KCM service, the sssd-kcm responder's systemd service must be enabled. Then, libkrb5 must also be configured to use KCM as its default ccache type in `/etc/krb5.conf`:
{{{
[libdefaults]
    default_ccache_name = KCM:
}}}

After that, all common operations like kinit, kdestroy or login through pam_sss should just work and store their credentials in the KCM server.

The KCM server must implement access control correctly, so that even trying to access another user's KCM credentials by setting KRB5CCNAME to `KCM:1234:RESIDUAL` would not work (except for root).

The tickets must persist across a restart of the KCM server or a reboot of the machine.

As far as automated unit and integration testing is concerned, we need to make sure that MIT's test suite passes with the Kerberos ccache defaulting to KCM and the SSSD KCM daemon running. In SSSD upstream, we should write integration tests that run an MIT KDC under socket_wrapper to exercise the KCM server.
How do containers access the KCM? Would they have to run their own copy internal to the container? Would we bind-mount the /var/run/.heim_org.h5l.kcm-socket and then work some namespacing magic in the host?
Deployment specific, I can see either way as an option depending on what you are doing.
You call out in the introduction that this will help address container use-cases, but then don't describe that implementation. This seems like an important piece of the puzzle that should be added to the page (or made more clear, since if it's in there, I can't spot it).
What would you want to call out exactly ?
Simo.
On 11/22/2016 09:38 AM, Simo Sorce wrote:
On Tue, 2016-11-22 at 09:23 -0500, Stephen Gallagher wrote:
OK, so the service is only semi-socket-activated? If we're keeping tevent timers around for renewals and reaping, the service won't be exiting unless all tickets have expired and been reaped.
Would it be possible to look into setting non-persistent systemd timer units instead that would "wake up" the sssd_kcm service at appropriate times to do renewal and expiration?
systemd provides the CreateTransientUnit() method on its public API that we could use for this purpose. If we did it this way, then the service would only need to be activated whenever a ticket was actually being retrieved from the collection.
I am trying to think whether this would gain us anything. What would you use as a reasonable timeout to decide to exit? There are other events we may want to detect in the future. For example, we may want to destroy all of a user's ccaches if all their sessions go away; this requires active probing, though, but I guess it could also be a timer, or maybe systemd has a way to notify and start a process when this happens too?
Well, as far as a timeout to exit, I'd probably go with a minute or two (since you may have a short flurry of activity, such as when a user first connects to a VPN).
As far as notification of a user signing in or out, we *could* attach a helper unit to the systemd user session default.target (and have an ExecStop command for handling when it gets shut down too).
How do containers access the KCM? Would they have to run their own copy internal to the container? Would we bind-mount the /var/run/.heim_org.h5l.kcm-socket and then work some namespacing magic in the host?
Deployment specific, I can see either way as an option depending on what you are doing.
OK, but the document doesn't describe how that might be done. We should identify the set of supported approaches up-front and include them in the design.
You call out in the introduction that this will help address container use-cases, but then don't describe that implementation. This seems like an important piece of the puzzle that should be added to the page (or made more clear, since if it's in there, I can't spot it).
What would you want to call out exactly ?
Describe a couple use-cases and the expected user experience for setting them up and using them. If we bind-mount the host's KCM into the container, would the user namespacing be handled "magically" by the kernel or do we need to keep track of which cgroup our client is and sort it into its own section of the database? (Just for example).
On Tue, Nov 22, 2016 at 09:49:52AM -0500, Stephen Gallagher wrote:
On 11/22/2016 09:38 AM, Simo Sorce wrote:
On Tue, 2016-11-22 at 09:23 -0500, Stephen Gallagher wrote:
OK, so the service is only semi-socket-activated? If we're keeping tevent timers around for renewals and reaping, the service won't be exiting unless all tickets have expired and been reaped.
Would it be possible to look into setting non-persistent systemd timer units instead that would "wake up" the sssd_kcm service at appropriate times to do renewal and expiration?
systemd provides the CreateTransientUnit() method on its public API that we could use for this purpose. If we did it this way, then the service would only need to be activated whenever a ticket was actually being retrieved from the collection.
I am trying to think whether this would gain us anything. What would you use as a reasonable timeout to decide to exit? There are other events we may want to detect in the future. For example, we may want to destroy all of a user's ccaches if all their sessions go away; this requires active probing, though, but I guess it could also be a timer, or maybe systemd has a way to notify and start a process when this happens too?
Well, as far as a timeout to exit, I'd probably go with a minute or two (since you may have a short flurry of activity, such as when a user first connects to a VPN).
I think the systemd transient units would be workable, but I also think it's not the biggest priority. For starters, we could make the service exit only when there are no credentials to be renewed or kept track of. So unless you disagree, I would file a ticket about this and proceed without any elaborate shutdown logic first. There is still a lot of work to do even without this additional functionality :)
As far as notification of a user signing in or out, we *could* attach a helper unit to the systemd user session default.target (and have an ExecStop command for handling when it gets shut down too).
Yes, this is something to look into. We already have https://fedorahosted.org/sssd/ticket/2551
How do containers access the KCM? Would they have to run their own copy internal to the container? Would we bind-mount the /var/run/.heim_org.h5l.kcm-socket and then work some namespacing magic in the host?
Deployment specific, I can see either way as an option depending on what you are doing.
OK, but the document doesn't describe how that might be done. We should identify the set of supported approaches up-front and include them in the design.
You call out in the introduction that this will help address container use-cases, but then don't describe that implementation. This seems like an important piece of the puzzle that should be added to the page (or made more clear, since if it's in there, I can't spot it).
What would you want to call out exactly ?
Describe a couple use-cases and the expected user experience for setting them up and using them. If we bind-mount the host's KCM into the container, would the user namespacing be handled "magically" by the kernel or do we need to keep track of which cgroup our client is and sort it into its own section of the database? (Just for example).
I added (and tested!) two, both concern containers because the single host use-case is IMO quite clear. You can find them described in detailed steps here: https://fedorahosted.org/sssd/wiki/DesignDocs/KCM#Use-case:separatingccaches... and here: https://fedorahosted.org/sssd/wiki/DesignDocs/KCM#Use-case:separatingccaches...
Writing up these cases was actually a nice exercise to see if the current version in my branch already covers what we wanted :)
Feel free to ask if you'd like me to test and document more cases.
On Tue, Nov 22, 2016 at 09:23:22AM -0500, Stephen Gallagher wrote:
Some thoughts inline:
On 11/22/2016 02:51 AM, Jakub Hrozek wrote:
...
=== Implementation details ===
A new SSSD responder will be added. Since accessing the Kerberos credentials is quite an infrequent operation, the responder will be socket-activated.

This responder would implement the same subset of the KCM protocol that the MIT client libraries implement. Contrary to Heimdal's KCM server, which just stores the credential caches in memory, the SSSD KCM server would store the ccaches in the secrets database through the sssd-secrets responder's [https://jhrozek.fedorapeople.org/sssd/1.14.2/man/sssd-secrets.5.html public REST API].

For user credentials, the KCM server would use a secrets responder URI like `/kcm/users/1234/X`, where 1234 is the user ID and X is the residual. The client then gets assigned a KRB5CCNAME of `KCM:1234:X`. Internally, the secrets responder will store the credential caches under a new base DN `cn=kcm`.

The secrets responder's quota on secrets must be made modular to allow a different number of secrets per base DN (so, effectively, a different quota for secrets and for credentials). What to do when the quota is reached is debatable - we should probably first remove service (non-TGT) tickets belonging to still-valid TGTs, and if that's not possible, just fail. A failure in this case would be no different from a failure when the disk is full while storing a FILE-based ccache.

The KCM responder would renew the user credentials by starting a tevent timer which would then contact the SSSD Data Provider for the given UID and principal, asking for the credentials to be renewed. Another tevent timer would reap and remove a ccache that reaches the end of its lifetime.
OK, so the service is only semi-socket-activated? If we're keeping tevent timers around for renewals and reaping, the service won't be exiting unless all tickets have expired and been reaped.
Would it be possible to look into setting non-persistent systemd timer units instead that would "wake up" the sssd_kcm service at appropriate times to do renewal and expiration?
systemd provides the CreateTransientUnit() method on its public API that we could use for this purpose. If we did it this way, then the service would only need to be activated whenever a ticket was actually being retrieved from the collection.
All good points, I 'just' need to look into these systemd timers.
In the future, SSSD-specific operations such as writing out a FILE-based ccache might be added. The SSSD D-Bus interface would also be extended to publish information about credential activity (such as a ticket being acquired or renewed).
This should be a high priority, since it will benefit tools such as GNOME Online Accounts greatly (right now, they have to poll the kernel keyring and are not happy about it; with FILE and DIR caches, they at least could get notifications via inotify)
OK, noted.
=== Configuration changes ===
No KCM-specific configuration options will be added. The SSSD KCM responder would use the same common options as other SSSD services, such as the idle timeout.

We can add a configurable KCM socket location later, if needed, but to start with it's fine to default to `/var/run/.heim_org.h5l.kcm-socket`, mostly because that's what MIT defaults to as well.

'''Q''': Should we add an option to explicitly enable ccache renewals and default to no renewals? I don't think this would add any security though; the attacker can just run 'kinit -R' on behalf of the user anyway.
If the user asked for a renewable ticket in the first place (and the server permitted it), then I don't see any reason not to just go ahead and renew it by default.
=== How To Test ===
In order for the admin to start using the KCM service, the sssd-kcm responder's systemd service must be enabled. Then, libkrb5 must also be configured to use KCM as its default ccache type in `/etc/krb5.conf`:
{{{
[libdefaults]
    default_ccache_name = KCM:
}}}

After that, all common operations like kinit, kdestroy or login through pam_sss should just work and store their credentials in the KCM server.

The KCM server must implement access control correctly, so that even trying to access another user's KCM credentials by setting KRB5CCNAME to `KCM:1234:RESIDUAL` would not work (except for root).

The tickets must persist across a restart of the KCM server or a reboot of the machine.

As far as automated unit and integration testing is concerned, we need to make sure that MIT's test suite passes with the Kerberos ccache defaulting to KCM and the SSSD KCM daemon running. In SSSD upstream, we should write integration tests that run an MIT KDC under socket_wrapper to exercise the KCM server.
How do containers access the KCM? Would they have to run their own copy internal to the container? Would we bind-mount the /var/run/.heim_org.h5l.kcm-socket and then work some namespacing magic in the host?
Both cases are valid and depend on what the application in the container needs. If the container needs its own isolated KCM server, it can do so. If the ccaches are supposed to be shared between the container and the host, you can bind-mount the socket from the host.
Or, you can even bind mount a KCM socket from one container to another and share the credential caches this way.
You call out in the introduction that this will help address container use-cases, but then don't describe that implementation. This seems like an important piece of the puzzle that should be added to the page (or made more clear, since if it's in there, I can't spot it).
One way KCM helps is that you don't actually /need/ to give the container access to the KCM socket, unlike the kernel keyring which is always shared between all containers.
Then there are user namespaces. I'm actually not quite sure myself what the state of user namespaces is, at least as far as Fedora is concerned (IIRC they were disabled for some time at least in RHEL), but my understanding is that you can remap a container's root into a subordinate namespace and thus separate different 'container roots' from one another.
On Tue, Nov 22, 2016 at 08:51:10AM +0100, Jakub Hrozek wrote:
Hi,
I was working on a KCM server for SSSD for some time already in parallel with the files provider and had some discussions with Simo as well. Of course my intent wasn't to implement a feature secretly without a design review, but to have a prototype to base a proper design on :)
However, it makes sense to have a peer-reviewed design page now, also because of Fedora's move towards Kerberos and the KDC proxy, which leads to questions on the Fedora lists about ccache renewals and so on -- so I think it's time to pitch the design to Fedora at least.
Here is the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/KCM and here is the code of the KCM responder so far: https://github.com/jhrozek/sssd/tree/kcm
Hi,
I moved the design doc to pagure: https://docs.pagure.org/SSSD.sssd/design_pages/kcm.html
It's been cleaned up and reconciled with the implementation.
On Wed, 2017-04-05 at 21:02 +0200, Jakub Hrozek wrote:
On Tue, Nov 22, 2016 at 08:51:10AM +0100, Jakub Hrozek wrote:
Hi,
I was working on a KCM server for SSSD for some time already in parallel with the files provider and had some discussions with Simo as well. Of course my intent wasn't to implement a feature secretly without a design review, but to have a prototype to base a proper design on :)
However, it makes sense to have a peer-reviewed design page now, also because of Fedora's move towards Kerberos and the KDC proxy, which leads to questions on the Fedora lists about ccache renewals and so on -- so I think it's time to pitch the design to Fedora at least.
Here is the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/KCM and here is the code of the KCM responder so far: https://github.com/jhrozek/sssd/tree/kcm
Hi,
I moved the design doc to pagure: https://docs.pagure.org/SSSD.sssd/design_pages/kcm.html
It's been cleaned up and reconciled with the implementation.
It looks really nice with the docs formatting/font/style :-)
.. and the content LGTM too.
Simo.
sssd-devel@lists.fedorahosted.org