Hello, please see the trivial patch attached.
While I was investigating a case I found that, to access the value of the
'ldap_purge_cache_timeout' option, I need to use the enum value
SDAP_CACHE_PURGE_TIMEOUT. I consider this to be a bad name (the swap of
'cache' and 'purge') as it took me additional time to find this out. I
think that the proposed name is better.
Unless somebody feels strongly against the patch, I think it could be
reviewed by our new colleague.
This is the initial version of my patch which adds Smartcard
authentication to SSSD. I'm still working on a design page which will
explain everything in more detail, so I will only give a short version here.
The main job will be done by a new child process called p11_child. Since
the Smartcard support in GDM is based on NSS, I used NSS for the first
version of p11_child as well. But since all PKCS#11 (the API used to talk
to Smartcards) related code is in this child process, adding support for
other PKCS#11 frameworks like p11-kit would be straightforward (in fact
I already started on the p11-kit version). Using NSS here means you have
to add the PKCS#11 module for your Smartcard reader to /etc/pki/nssdb
(the NSS DB GDM uses as well) with modutil or pk11install from the
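For example, registering such a module with modutil could look like this (the module name and library path below are just an illustration for an OpenSC-based setup, not something the patch prescribes):

```shell
# Example only: register a PKCS#11 module in the NSS DB that GDM uses.
# Module name and library path depend on your Smartcard reader setup.
modutil -dbdir /etc/pki/nssdb -add "OpenSC" \
        -libfile /usr/lib64/opensc-pkcs11.so

# List the registered modules to verify:
modutil -dbdir /etc/pki/nssdb -list
```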
The PAM configuration does not need to be changed so far. pam_sss will do
a pre-auth request, similar to the OTP case, to find a suitable
authentication method for the user. The PAM responder then checks if
Smartcard authentication is enabled (pam_cert_auth = True in the [pam]
section of sssd.conf), if the service is a local one, and if a valid
certificate can be found which is available in the user's LDAP entry as
well. If all these checks pass, pam_sss will ask the user for a PIN and
then SSSD tries to validate that the PIN, public key and private key all
relate to each other. If no Smartcard is found for the user, the standard
password prompt is displayed.
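For illustration, the pre-auth decision described above can be sketched roughly like this (all names here are illustrative stand-ins, not actual SSSD internals):

```python
# Rough sketch of the pre-auth checks described above.
# Function and parameter names are illustrative, not SSSD internals.

def choose_auth_method(pam_cert_auth_enabled, service_is_local,
                       smartcard_cert, ldap_certs):
    """Return 'pin' if Smartcard auth should be offered, else 'password'."""
    if not pam_cert_auth_enabled:
        return "password"     # pam_cert_auth not set in the [pam] section
    if not service_is_local:
        return "password"     # only local services may use the Smartcard
    if smartcard_cert is None:
        return "password"     # no Smartcard found for the user
    if smartcard_cert not in ldap_certs:
        return "password"     # certificate not in the user's LDAP entry
    return "pin"              # all checks passed: prompt for the PIN
```

If any check fails, the user simply falls back to the standard password prompt, so an unchanged PAM stack keeps working.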
With some valuable input from Christian Heimes I think I found a way to
test the Smartcard support even without real hardware, but I still have
to work out some of the details. I will add instructions to the design
page, together with better and more unit tests.
Any comments and suggestions are welcome.
This patch fixes an issue with two-factor authentication. When the user
is prompted to enter the long-term password (first factor) and the
one-time component (second factor) separately, the first factor may
remain on the PAM stack so that other modules can use the long-term
password.
This is a first version of an integration test for the memory cache.
The main purpose of this mail is to get some comments on the first version.
cwrap doesn't properly support initgroups, therefore I used the ctypes
module for calling initgroups from libnss_sss.so.
I'm aware of some issues in this patch: the wildcard import, the function
for calling sssd initgroups could be moved to a separate module ...
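The ctypes pattern involved looks roughly like this; for a self-contained demo I call strlen() from libc here, while the actual test loads libnss_sss.so and calls the sssd initgroups entry point with the appropriate argument types:

```python
import ctypes

# Illustration of the ctypes pattern used in the test: load a shared
# library and call a C function with explicit argument/return types.
# The real test loads libnss_sss.so instead; strlen() from the C
# library just keeps this demo self-contained.
libc = ctypes.CDLL(None)  # symbols of the running process, libc included
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"initgroups"))  # → 10
```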
You might need the patch from the thread "[SSSD] [PATCH] intg: Invalidate
memory cache before removing files" to have all tests green.
I'm also attaching two tests which fail. They cover known bugs with the
memory cache and are just for showing that the attached patch is really
testing something :-)
Please provide any comments which can improve these tests.
One of our users ran into an interesting problem -- her AD server was
different from the DNS server. Because, by default, we perform the update
against the server we're connected to, the DNS update failed.
Per Simo's suggestion, I've implemented a new option that allows the
administrator to override the DNS server used for DNS updates.
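For illustration, assuming the option ends up being called dyndns_server (the option name and server below are my placeholders, not taken from the patch), the sssd.conf fragment would look like:

```ini
[domain/example.com]
# placeholder names: override the server used for dynamic DNS updates
dyndns_update = True
dyndns_server = dns-master.example.com
```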
The attached two patches should fix
https://fedorahosted.org/sssd/ticket/2731 . If an object is looked up by
a POSIX UID or GID, we always assume a multi-domain search and only have
a global negative cache, i.e. one for all domains.
Doing a multi-domain search makes sense because there is no such thing
as a fully-qualified UID, and using fixed ranges for every domain might
not be possible in the general case, e.g. in AD forests where the POSIX
IDs are managed by AD. We might want to use some information we have in
the IPA case, but I think this way some backend data would leak into the
responders, and there are better ways to fix the general case.
For the typical POSIX calls like getpwuid() and getgrgid() there already
is no issue, because if a matching object is found it is added to the
memory cache and for some time no searches hit the responders.
For other calls, especially the SID-by-ID call which is exposed by
libwbclient-sssd to Samba and used regularly there, where there is no
memory cache (yet), the current processing is pretty expensive. If the
ID is not found in the cache of the first domain, SSSD will ask the
backend of the first domain, which causes an LDAP request. If the ID is
still not found, the second domain (or sub-domain) is checked, first in
the cache and, if not found there, via the backend. If sooner or later a
matching ID is found, it is saved in the cache of the corresponding
domain. But the next request for the same ID would cause the same
sequence again, because we call the backend before checking the caches
of the other domains.
The proper solution would be a memory cache for the SID-related requests,
which is tracked in https://fedorahosted.org/sssd/ticket/2727 . As a
short-term fix I made the negative cache for UIDs and GIDs domain-aware.
Now a second request which comes shortly after the first will see the ID
in the negative cache of the other domains and will find the cached entry
of the right domain without calling the backends.
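A minimal sketch of the lookup flow with a domain-aware negative cache (plain dicts and a callable stand in for the real sysdb, negative cache and backend structures):

```python
# Sketch of the ID lookup flow with a per-domain negative cache.
# Dicts stand in for the real sysdb/negative-cache structures; the
# backend_lookup callable stands in for the LDAP request to a backend.

def lookup_by_id(uid, domains, cache, negcache, backend_lookup):
    """Return (domain, entry) for a POSIX ID, or None if no domain has it."""
    for dom in domains:
        if uid in negcache[dom]:          # known miss: skip cache and backend
            continue
        if uid in cache[dom]:             # cached hit: no backend call needed
            return dom, cache[dom][uid]
        entry = backend_lookup(dom, uid)  # expensive: one LDAP request
        if entry is not None:
            cache[dom][uid] = entry       # save in the corresponding domain
            return dom, entry
        negcache[dom].add(uid)            # remember the miss per domain
    return None
```

With this, a second request shortly after the first skips the backends of the domains that already answered negatively and goes straight to the cached entry of the right domain.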