Starting SSSD without root
by Tero Saarni
Hi,
I'm trying to run SSSD inside a Docker container as a non-root user. The
container runs in an OpenShift cluster, which does not allow running as root
inside containers.
SSSD requires root and checks for this specifically.
Is there any workaround for this?
I believe the limitation is implemented for security reasons, so that only the
most critical parts run as root and privileges are dropped for the other
parts, but it now completely blocks using SSSD in the above environment.
--
Tero
2 weeks, 3 days
sssd failing due to self-signed certificates--but that's not what openssl says
by Johnnie W Adams
Hi, folks,
So I've got a very puzzling situation. Just today, when I look at sssd
with systemctl status, I get this error: *Could not start TLS encryption.
error:1416F086:SSL routines:tls_process_server_certificate:certificate
verify failed (self signed certificate in certificate chain)*
However, when I run openssl s_client -showcerts -connect
ldap.example.com:636, it shows a completely valid, not-self-signed
certificate chain.
This is happening on RHEL7 through 9. I'm puzzled. Anyone else have
ideas?
Thanks,
John A
--
John Adams
Senior Linux/Middleware Administrator | Information Technology Services
+1-501-916-3010 | jxadams(a)ualr.edu | http://ualr.edu/itservices
*UA Little Rock*
Reminder: IT Services will never ask for your password over the phone or
in an email. Always be suspicious of requests for personal information that
come via email, even from known contacts. For more information or to
report suspicious email, visit IT Security
<http://ualr.edu/itservices/security/>.
2 months
Warning for cached password expiration
by John Doe
Hello
I'm wondering if there's any way for a regular user to access the
informational message about password expiration that is shown at login when
using cached credentials. With pam_verbosity = 2 in sssd.conf, the following
informational message is given:
"Authenticated with cached credentials, your cached password will expire at
Sat Apr 20 15:41:18 2024"
Now, I know I can calculate the expiration time myself by taking the
'offline_credentials_expiration' value from sssd.conf and adding it to the
cache entry's last update timestamp reported by 'sudo sssctl
user-show $USER', but both of these require root access. I need to get the
expiration timestamp as a regular user.
The reason is that we have a large number of external developers who are
given laptops with the company Linux image applied and who log in using their
Active Directory credentials. They do have VPN access, but given the nature
of the projects they work on, they seldom need to be connected to our
network :-(
I was thinking I could create a little script/application that notifies
them a few days ahead of password expiration to remind them to connect to
the VPN.
I was thinking of 'sss_cache', as that can run as a regular user, but it
can't give me the timestamp :-(
Worst case, I can perhaps write something in Python, but that depends on the
availability of APIs, and maybe it would still require root access.
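For illustration, a minimal sketch of the calculation described above. It assumes the "Cache entry last update time" label from sssctl's output; the timestamp format in the strptime pattern is an assumption and should be adjusted to whatever your sssctl actually prints:

```python
import re
from datetime import datetime, timedelta

def expiry_from_sssctl(output: str, offline_days: int) -> datetime:
    """Compute the cached-password expiry: the cache entry's last update
    time (parsed from `sssctl user-show` output) plus
    offline_credentials_expiration (configured in days)."""
    # The label matches sssctl's output; the timestamp format below is an
    # assumption -- adjust the strptime pattern for your locale/version.
    m = re.search(r"Cache entry last update time:\s*(.+)", output)
    if not m:
        raise ValueError("timestamp not found in sssctl output")
    last_update = datetime.strptime(m.group(1).strip(), "%m/%d/%Y %H:%M:%S")
    return last_update + timedelta(days=offline_days)
```

Feeding it the output of `sudo sssctl user-show $USER` would reproduce the timestamp the pam message reports, but as noted this still requires root to run sssctl.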
Thanks!
2 months
Internal credentials cache error while getting initial credentials
by Albert Szostkiewicz
Hey,
Need some help here; I am unable to log in. When trying to use kinit for my user, I get an error:
kinit: Failed to store credentials: Internal credentials cache error while getting initial credentials
sssd is running. The log shows:
Oct 13 20:32:59 user.mydomain.com krb5_child[4846]: Internal credentials cache error
sssd_kcm.log states:
* (2023-10-13 21:17:43): [kcm] [local_db_check_peruid_number_of_secrets] (0x0040): [CID#8708] Cannot store any more secrets for this client (basedn cn=1907400001,cn=persistent,cn=kcm) as the maximum allowed limit (66) has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [sss_sec_update] (0x0040): [CID#8708] local_db_check_number_of_secrets failed [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [sec_update] (0x0040): [CID#8708] Cannot write the secret [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [sss_sec_update] (0x0040): [CID#8708] local_db_check_number_of_secrets failed [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [sec_update] (0x0040): [CID#8708] Cannot write the secret [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [kcm_ccdb_mod_done] (0x0040): [CID#8708] Failed to create ccache [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [kcm_op_set_kdc_offset_mod_done] (0x0040): [CID#8708] Cannot modify ccache [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [kcm_ccdb_mod_done] (0x0040): [CID#8708] Failed to create ccache [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [kcm_op_set_kdc_offset_mod_done] (0x0040): [CID#8708] Cannot modify ccache [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [kcm_cmd_done] (0x0040): [CID#8708] op receive function failed [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [kcm_cmd_request_done] (0x0040): [CID#8708] KCM operation failed [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [kcm_cmd_done] (0x0040): [CID#8708] op receive function failed [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [kcm_cmd_request_done] (0x0040): [CID#8708] KCM operation failed [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
KRB5_TRACE=/dev/stderr ipa --debug ping
ipa: DEBUG: importing plugin module ipaclient.plugins.trust
ipa: DEBUG: importing plugin module ipaclient.plugins.user
ipa: DEBUG: importing plugin module ipaclient.plugins.vault
ipa: DEBUG: trying https://workstation.mydomain.com/ipa/json
ipa: DEBUG: Created connection context.rpcclient_140066561958480
ipa: DEBUG: raw: ping(version='2.252')
ipa: DEBUG: ping(version='2.252')
ipa: DEBUG: [try 1]: Forwarding 'ping/1' to json server 'https://workstation.mydomain.com/ipa/json'
ipa: DEBUG: New HTTP connection (workstation.mydomain.com)
ipa: DEBUG: HTTP connection destroyed (workstation.mydomain.com)
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 644, in get_auth_info
response = self._sec_context.step()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/_utils.py", line 165, in check_last_err
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/_utils.py", line 131, in catch_and_return_token
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/sec_contexts.py", line 584, in step
return self._initiator_step(token=token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/sec_contexts.py", line 606, in _initiator_step
res = rsec_contexts.init_sec_context(self._target_name, self._creds,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "gssapi/raw/sec_contexts.pyx", line 188, in gssapi.raw.sec_contexts.init_sec_context
gssapi.raw.exceptions.MissingCredentialsError: Major (458752): No credentials were supplied, or the credentials were unavailable or inaccessible, Minor (2529639053): No Kerberos credentials available (default cache: KCM:)
During the handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 697, in single_request
self.get_auth_info()
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 646, in get_auth_info
self._handle_exception(e, service=service)
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 603, in _handle_exception
raise errors.CCacheError()
ipalib.errors.CCacheError: did not receive Kerberos credentials
ipa: DEBUG: Destroyed connection context.rpcclient_140066561958480
ipa: ERROR: did not receive Kerberos credentials
I'd appreciate it if anyone has some ideas. Thank you!
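For context, the limit in the backtrace looks like sssd-kcm's per-UID ccache cap; if so, clearing stale ccaches with `kdestroy -A` may help, or the cap can be raised. A hypothetical sssd.conf fragment (verify the option against sssd-kcm(8) for your version):

```ini
# sssd.conf -- assumption: sssd-kcm honours max_uid_ccaches here
[kcm]
max_uid_ccaches = 128   # default is 64; the log above shows a limit of 66
```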
2 months
getent group stop working
by Eric Doutreleau
Hi
I'm using sssd-2.9.1 on Rocky Linux 9 and I have strange behaviour with
group enumeration.
I stop sssd, remove the cache, and start sssd.
When I run getent group, I only get the local groups.
I set the nss service to debug level 9
and got the following output in the sssd_nss log file:
(2024-02-13 15:30:40): [nss] [sysdb_enumgrent_filter_with_views]
(0x0040): [CID#10] sysdb_enumgrent failed.
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING
BACKTRACE:
* (2024-02-13 15:30:40): [nss] [cache_req_send] (0x0400): [CID#7]
CR #65: REQ_TRACE: New request [CID #7] 'Group by ID'
* (2024-02-13 15:30:40): [nss] [cache_req_select_domains] (0x0400):
[CID#7] CR #65: Performing a multi-domain search
* (2024-02-13 15:30:40): [nss] [cache_req_search_domains] (0x0400):
[CID#7] CR #65: Search will check the cache and check the data provider
* (2024-02-13 15:30:40): [nss] [cache_req_validate_domain_type]
(0x2000): [CID#7] Request type POSIX-only for domain ibfj-evry.fr type
POSIX is valid
* (2024-02-13 15:30:40): [nss] [cache_req_set_domain] (0x0400):
[CID#7] CR #65: Using domain [ibfj-evry.fr]
* (2024-02-13 15:30:40): [nss] [cache_req_search_send] (0x0400):
[CID#7] CR #65: Looking up GID:533@ibfj-evry.fr
* (2024-02-13 15:30:40): [nss] [cache_req_search_ncache] (0x0400):
[CID#7] CR #65: Checking negative cache for [GID:533@ibfj-evry.fr]
* (2024-02-13 15:30:40): [nss] [sss_ncache_check_str] (0x2000):
[CID#7] Checking negative cache for [NCE/GID/ibfj-evry.fr/533]
* (2024-02-13 15:30:40): [nss] [sss_ncache_check_str] (0x2000):
[CID#7] Checking negative cache for [NCE/GID/533]
* (2024-02-13 15:30:40): [nss] [cache_req_search_ncache] (0x0400):
[CID#7] CR #65: [GID:533@ibfj-evry.fr] is not present in negative cache
* (2024-02-13 15:30:40): [nss] [cache_req_search_cache] (0x0400):
[CID#7] CR #65: Looking up [GID:533@ibfj-evry.fr] in cache
[... lines removed ...]
* (2024-02-13 15:30:40): [nss] [cache_req_search_dp] (0x0400): [CID#10]
CR #78: Looking up [Groups enumeration] in data provider
* (2024-02-13 15:30:40): [nss] [sss_dp_get_account_send] (0x0400):
[CID#10] Creating request for [ibfj-evry.fr][0x2][BE_REQ_GROUP][*:-]
* (2024-02-13 15:30:40): [nss] [sbus_dispatch] (0x4000): Dispatching.
* (2024-02-13 15:30:40): [nss] [sss_domain_get_state] (0x1000):
[CID#10] Domain ibfj-evry.fr is Active
* (2024-02-13 15:30:40): [nss] [cache_req_search_cache] (0x0400):
[CID#10] CR #78: Looking up [Groups enumeration] in cache
* (2024-02-13 15:30:40): [nss] [sysdb_enumgrent_filter] (0x1000):
[CID#10] Searching timestamp cache with [(objectCategory=group)]
* (2024-02-13 15:30:40): [nss] [sysdb_cache_search_groups]
(0x2000): [CID#10] Search groups with filter:
(&(objectCategory=group)(objectCategory=group))
* (2024-02-13 15:30:40): [nss] [sysdb_enumgrent_filter_with_views]
(0x0040): [CID#10] sysdb_enumgrent failed.
********************** BACKTRACE DUMP ENDS HERE
*********************************
(2024-02-13 15:30:40): [nss] [cache_req_search_cache] (0x0020): [CID#10]
CR #78: Unable to lookup [Groups enumeration] in cache [5]: Input/output
error
I have looked at the cache with the ldb tools and I can read it:
export LDB_URL=/var/lib/sss/db/cache_ibfj-evry.fr.ldb
gives me the content of the cache without problems.
Here is the content of my sssd.conf file
[nss]
filter_groups = root,wheel
debug_level = 3
filter_users = root,nrpe
reconnection_retries = 9
[pam]
offline_credentials_expiration = 0
debug_level = 3
reconnection_retries = 3
[domain/ibfj-evry.fr]
ldap_user_name = sAMAccountName
krb5_canonicalize = false
ldap_user_home_directory = unixHomeDirectory
cache_credentials = true
ldap_group_object_class = group
ldap_account_expire_policy = ad
chpass_provider = ad
entry_cache_timeout = 7200
id_provider = ad
auth_provider = ad
ldap_id_mapping = false
ldap_referrals = false
ldap_search_base = dc=ibfj-evry,dc=fr
debug_level = 3
dyndns_update = False
krb5_realm = IBFJ-EVRY.FR
enumerate = true
ldap_user_principal = userPrincipalName
ldap_group_search_base = dc=ibfj-evry,dc=fr
ldap_sasl_mech = GSSAPI
ldap_user_search_base = dc=ibfj-evry,dc=fr
[sssd]
debug_level = 3
reconnection_retries = 3
sbus_timeout = 30
domains = ibfj-evry.fr
services = nss, pam
config_file_version = 2
I don't know where to look next.
2 months, 1 week
commands to work with password cache (sssd-ad)
by sobek
I cannot find a solution in the documentation or by experimenting. I am stuck and need help.
The environment:
Fedora Workstation 38 notebook has been joined with "realm join" (sssd-ad) to Microsoft Active Directory (ADDS).
The user account details are stored for offline login support.
The notebook is used outside of ADDS network and does not have a connection to it.
Commands run on notebook:
$ rpm -q sssd sssd-ad
sssd-2.9.4-1.fc38.x86_64
sssd-ad-2.9.4-1.fc38.x86_64
$ authselect current
Profile ID: sssd
Enabled features:
- with-mkhomedir
- with-ecryptfs
- with-mdns4
The problem:
Password changes, expiration date changes, and account status changes (disable/enable) in ADDS are not propagated to the notebook. Depending on the situation:
- the user can still log in to the notebook, even when the password was set to expired or the account was disabled
- the user cannot log in to the notebook anymore, because the notebook is not aware of the new password expiration date
- after a password change, the old password must be used to log in to the notebook
Workaround:
The user brings the notebook to the ADDS network and connects it with a network cable. The user logs in to the notebook once, then disconnects and leaves.
The workaround is sometimes not possible.
My questions:
1)
Which command can read the ADDS account's expiration date from SSSD's cache?
Does the command also return the status of the account, i.e. disabled or enabled?
## For an account whose expiration date is in the far future and which can log in to the notebook, "sssctl" prints (NAME and TIMESTAMP are redacted values):
$ sudo sssctl user-show NAME
Name: NAME
Cache entry creation date: TIMESTAMP
Cache entry last update time: TIMESTAMP
Cache entry expiration time: Expired
Initgroups expiration time: Expired
Cache in InfoPipe: No
2)
Which command should I use to force SSSD to forget about a specific user, so that the account can no longer log in without a connection to the ADDS network to refresh its data?
I started the notebook and from TTY2 ran, as root (no login as NAME), "sss_cache -u NAME" and "sss_cache -E" (even with "systemctl restart sssd.service"). Afterwards, NAME could still log in.
These commands (executed as root) prevented all users from login:
# systemctl stop sssd.service; rm -rf /var/lib/sss/db/*; systemctl start sssd.service
3)
How do I force retrieval of an updated expiration date and/or changed password on the command line, assuming the notebook has a connection to the ADDS network?
Either a wireless network connection when on-site.
Or remotely via an SSH or VPN connection (the root user and/or the user connects to an SSH gateway (CentOS Stream 9), which might or might not be part of ADDS but has access to the ADDS network).
Or the user establishes a VPN connection. I do not know if a VPN connection can be established from GDM if the account is already expired according to SSSD's cache.
Thanks,
René
2 months, 2 weeks
Integrate DMZ clients (sssd) to Active Directory through proxy
by Horváth Szabolcs
Hi,
I'd like to integrate our servers sitting in a DMZ with Active Directory
(the domain controllers are located inside), without a direct network
connection between the parties.
The security policy says we have to use some kind of intermediate party
(e.g. a layer-7 proxy).
A few years ago I had a project where the clients and the proxy server were
all RHEL7, and the solution was slapd as an LDAP proxy with pam_ldap+nslcd
on the client side (https://wiki.samba.org/index.php/OpenLDAP_as_proxy_to_AD).
Time has passed: we no longer have RHEL7, RHEL/SUSE no longer ship the
openldap-servers package (although slapd is back in EPEL9), and we no longer
have nslcd+pam_ldap on SLES15.
It looks like everyone prefers the 389 Directory Server project.
Red Hat has an excellent article about the problem:
https://www.redhat.com/en/blog/identity-management-systems-dmz
Basically my options are:
1. local authentication (that's the starting point from which we want to
move on)
2. connect directly to AD (there are no working layer7 proxy solutions as
far as I know, so we rejected this option)
3. expose read-only AD replicas in the DMZ (so-called RODCs or 389-ds -> a very
complex solution; I don't want to go in this direction)
4. separate AD/IdM domain in DMZ (see above, it does not reduce complexity)
5. kdcproxy - https://github.com/latchset/kdcproxy
On the client side, I would stick to sssd (SLES 15 and RHEL8+, plus a very
limited number of older versions).
On the proxy side, I tested two solutions, neither of which really works:
1. slapd (from EPEL9) as an LDAPS proxy between the clients and AD. sssd (I
tested 2.5.2) detects the presence of the proxy and fails:
---
id_provider = ldap
ldap_uri = ldaps://ldapproxy.test.local
ldap_search_base = OU=TESTLAB,DC=test,DC=local
ldap_schema = AD
[...]
auth_provider = ldap
---
* (2024-01-28 20:08:27): [be[test.local]] [sdap_process_result]
(0x2000): Trace: sh[0x55bb0dca1b10], connected[1], ops[0x55bb0dd0c900],
ldap[0x55bb0dbd53e0]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_entry] (0x1000):
OriginalDN: [].
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [objectClass]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [namingContexts]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [supportedControl]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [supportedExtension]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [supportedFeatures]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_parse_range] (0x2000):
No sub-attributes for [supportedLDAPVersion]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_process_result]
(0x2000): Trace: sh[0x55bb0dca1b10], connected[1], ops[0x55bb0dd0c900],
ldap[0x55bb0dbd53e0]
* (2024-01-28 20:08:27): [be[test.local]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
* (2024-01-28 20:08:27): [be[test.local]]
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no
errmsg set
* (2024-01-28 20:08:27): [be[test.local]] [sdap_op_destructor]
(0x2000): Operation 1 finished
* (2024-01-28 20:08:27): [be[test.local]] [sdap_get_rootdse_done]
(0x2000): Got rootdse
* (2024-01-28 20:08:27): [be[test.local]] [sdap_get_rootdse_done]
(0x2000): Skipping auto-detection of match rule
* (2024-01-28 20:08:27): [be[test.local]]
[sdap_get_server_opts_from_rootdse] (0x0020): ldap_rootdse_last_usn
configured but not found in rootdse!
********************** BACKTRACE DUMP ENDS HERE
*********************************
Basically, the rootDSE does not expose much of the information that sssd
depends on:
# ldapsearch -h 127.0.0.1 -D
"CN=_svc_ldapquery,OU=Users,OU=TESTLAB,DC=test,DC=local" -b '' -s base
'(objectclass=*)'
#
dn:
objectClass: top
objectClass: OpenLDAProotDSE
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
2. kdcproxy as a kerberos proxy: https://github.com/latchset/kdcproxy
The main problem is that sssd can use krb5 for auth_provider and
access_provider, but it still needs a separate id_provider, usually
id_provider=ldap, which brings me back to my previous problem: I couldn't
proxy the LDAP protocol.
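For what it's worth, MIT krb5 (1.13+) can reach a KDC through an MS-KKDCP proxy such as kdcproxy by pointing krb5.conf at an HTTPS URL. A hypothetical fragment (hostnames are placeholders):

```ini
# /etc/krb5.conf -- sketch: route KDC traffic through a kdcproxy instance
[realms]
TEST.LOCAL = {
    kdc = https://kdcproxy.test.local/KdcProxy
    kpasswd_server = https://kdcproxy.test.local/KdcProxy
}
```

That would cover the Kerberos leg only; the id_provider=ldap leg still needs an LDAP path to AD, which is the unsolved part described above.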
Do you have any suggestions for this problem that I haven't considered?
Thanks!
Szabolcs
2 months, 3 weeks