Simo, Gunther, others,
There has been a recent discussion on sssd-devel regarding whether sssd's AD-GPO effort should support additional logon rights (in addition to the InteractiveLogonRight that we currently plan to support). As you may know, there are five Windows logon rights:
* InteractiveLogonRight: Allows a user to log on locally at the computer's keyboard.
* RemoteInteractiveLogonRight: Allows logon through RDP/Terminal Services.
* NetworkLogonRight: Determines which users are allowed to connect to the computer over the network.
* BatchLogonRight: Allows a user to log on by using a batch-queue facility.
* ServiceLogonRight: Allows a security principal to log on as a service; services can be configured to run under dedicated service accounts.
There is some confusion about the NetworkLogonRight. My initial assumption was that the NetworkLogonRight referred to logging in to a Windows computer over the network (e.g. by using ssh). With this assumption in mind, I thought it would be very useful to additionally support the SeNetworkLogonRight in order to distinguish between these common use cases (network logon vs. local logon); this would require us to map the various PAM service names into either the "network" bucket or the "local" bucket (probably via an ad-gpo-specific option with reasonable defaults).
However, since Windows users typically use RDP (not ssh) to perform remote network logon, and since the separate RemoteInteractiveLogonRight covers that case, it is unclear what the NetworkLogonRight actually refers to. Some web sites indicate that NetworkLogon refers to connecting to a shared folder on a Windows computer from elsewhere on the network. If NetworkLogon refers to accessing SMB shares, then I think the case for supporting NetworkLogonRight is less compelling. In that case, perhaps we should stick with supporting only the InteractiveLogonRight policy.
This patch replaces strerror with sss_strerror in some places.
I think it would be OK to always use sss_strerror, but to keep
the patch relatively small I left strerror in places where
we directly print the value of errno or the return value of third-party
functions that do not (and never will) return our specific error codes.
The patch is attached. It may look big (111 files changed), but
there are only a few insertions and deletions in each.
The attached two patches are not strictly related to tokenGroups
processing, but it's very easy to reproduce the problem that way. The
issue is only confusing DEBUG messages, but it has already cost me
several hours when processing logs from an SSSD user, so I think a fix is
due, at least for master.
See the patches and the commit messages for more details.
The attached (unpolished, see my question below) patches fix
also known as:
Let me explain the problem first -- if SSSD starts before messagebus is
up, the InfoPipe responder fails to start and doesn't retry, so the
system bus service is simply not there.
A simple solution would be to start messagebus before SSSD. But I don't
think that is a robust solution, because the messagebus configuration
can reference user names, which SSSD provides. So by the time
messagebus is up, those identities should already be resolvable -- which
means the NSS responder and the back ends must be up first.
The attached patches take advantage of the bus activation that
messagebus provides. If the interface InfoPipe provides is not
registered on the bus when requested, messagebus signals sssd, which
tells the IFP responder to retry the system bus connection.
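For reference, D-Bus bus activation is driven by a .service file installed under /usr/share/dbus-1/system-services/. A sketch of what such a file could look like is below; the Exec helper path is hypothetical (it stands in for the helper binary discussed later), though org.freedesktop.sssd.infopipe is the well-known bus name the IFP responder claims:

```ini
# /usr/share/dbus-1/system-services/org.freedesktop.sssd.infopipe.service
# Illustrative sketch of a D-Bus activation file; Exec path is hypothetical.
[D-BUS Service]
Name=org.freedesktop.sssd.infopipe
Exec=/usr/libexec/sssd/sss_signal
User=root
```

When a client calls a method on that bus name and nothing owns it, the bus daemon runs Exec, which in this scheme would poke the monitor rather than start the responder directly.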
Currently, the WIP patches use sss_debuglevel, which sends HUP to the
sssd process, but I think USR2 (aka "go online") would be better. So the
final patch version would include a helper binary that does nothing
but signal the monitor.
I have one question to discuss, though: is it OK to use signals for the
IPC? An alternative might be to let IFP spawn a client socket and implement
only a single 'command' to retry the connection, but that seems like
overkill to me. The disadvantage of the signal is that it is also used to
reset the online status, so in theory there might be some timeouts in the
offline case.
In the 1.13 timeframe, we will be implementing socket-based activation.
I think it would be nice to make the IFP responder bus-activated as part
of that effort. But in the traditional scheme where the monitor manages
all the processes, the changes needed to make InfoPipe bus-activated
instead of managed by the monitor would be too invasive (I've tried to
do that).
Please see the attached patch.
This patch was previously written for BZ 1059423, but it now seems that
more detailed logging information is generally useful for the issues
that have been emerging from this area lately.