sssd not finding sudoRule objects in ldap
by John Snowdon
Hi,
I have recently installed sssd 2.6.3 on Ubuntu 22.04.3 LTS (sssd-ldap, sssd-tools, libsss-sudo packages). I have a very simple OpenLDAP (2.5.16) server running with a basic schema (core, cosine, nis and the sudo.schema from sudo-ldap; the package isn't installed, only the schema from it).
Everything other than sudoers is working fine with sssd on my test client. Here's my sssd.conf:
[sssd]
config_file_version = 2
domains = test
services = nss, pam, ssh, sudo
[sudo]
[nss]
[pam]
[ssh]
[domain/test]
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
sudo_provider = ldap
cache_credentials = False
enumerate = False
ldap_uri = ldap://ldap
ldap_search_base = ou=users,dc=ldap
ldap_group_search_base = ou=groups,dc=ldap
ldap_sudo_search_base = ou=sudoers,dc=ldap
ldap_netgroup_search_base = ou=netgroups,dc=ldap
ldap_id_use_start_tls = True
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/sssd/ca.crt
ldap_group_object_class = posixGroup
ldap_sudorule_object_class = sudoRule
nsswitch.conf is correctly set to 'sss files' for everything that I care about (passwd, group, shadow, sudoers). User lookups work, group lookups work, logins work, netgroup lookups work. All is fine, except that sudo rules are not found.
My LDAP tree is bare bones, with four OU's:
ou=users,dc=ldap
ou=groups,dc=ldap
ou=netgroups,dc=ldap
ou=sudoers,dc=ldap
ou=users,dc=ldap has ONE posixAccount in it (nuser1). This test account works correctly: it can log in, and its home directory and password are all correct.
ou=groups,dc=ldap has ONE posixGroup in it (ldapgroup). This is the primary group of the account above. It is found and correctly resolves the textual name of the user's GID, all good.
ou=netgroups,dc=ldap has ONE nisNetgroup in it (sshhosts); this is intended to map sudo rules to groups of servers, but isn't being used yet.
ou=sudoers,dc=ldap has ONE sudoRule in it (ssh), which is as follows:
dn: cn=ssh,ou=sudoers,dc=ldap
objectClass: top
objectClass: sudoRole
sudoCommand: ALL
description: Access to root role on any host in the Interactive SSH Servers netgroup
sudoUser: nuser1
sudoHost: testclient # <--- the name of the single host I have temporarily configured for this test, would normally be +sshhosts
sudoRunAs: root
cn: ssh
My problem is that this rule is never found by sssd when it starts up and attempts to scrape all of the rules for the host it is on. This is what the sssd log says when I enable debugging:
[sdap_search_bases_ex_next_base] (0x0400): Issuing LDAP lookup with base [ou=sudoers,dc=ldap]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(objectClass=sudoRule)(|(&(!(sudoHost=*))(cn=defaults))(sudoHost=ALL)(sudoHost=testclient)(sudoHost=testclient)(sudoHost=192.168.2.168)(sudoHost=192.168.0.0/16)(SNIP SNIP SNIP LOADS OF IPV6 ADDRESSES HERE)(sudoHost=+*)))][ou=sudoers,dc=ldap].
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoCommand]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoHost]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoUser]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoOption]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoRunAs]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoRunAs]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoRunAsGroup]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoNotBefore]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoNotAfter]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sudoOrder]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_search_bases_ex_done] (0x0400): Receiving data from base [ou=sudoers,dc=ldap]
sssd_test.log:(2023-10-29 8:38:39): [be[test]] [sdap_sudo_load_sudoers_done] (0x0200): Received 0 sudo rules
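As an aside, the filter in that ldap_search_ext line has a fixed shape. A minimal sketch (not sssd's actual code; the host and address values are taken from the log above, the rest is an assumption) of how such a sudoHost filter is assembled:

```python
def build_sudo_filter(object_class, hosts):
    """Assemble an sssd-style sudoers filter: match the cn=defaults entry
    (which has no sudoHost), sudoHost=ALL, each literal hostname/address
    of this machine, and any netgroup reference (sudoHost=+*)."""
    host_terms = "".join(f"(sudoHost={h})" for h in hosts)
    return (
        f"(&(objectClass={object_class})"
        f"(|(&(!(sudoHost=*))(cn=defaults))(sudoHost=ALL){host_terms}(sudoHost=+*)))"
    )

# Values from the log above; sssd inserts every hostname, address and
# network it can determine for the client.
flt = build_sudo_filter("sudoRule", ["testclient", "192.168.2.168", "192.168.0.0/16"])
```

Whatever objectClass name goes into that filter (here taken from ldap_sudorule_object_class) has to be one the server actually knows, or the search returns zero entries.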
More bizarrely, if I take that ldap_search_ext filter and paste it into my LDAP browser (Apache Directory Studio), I still get no results, even though the damn tree is open, in front of my very eyes! Even if I change the filter so that it is only objectClass=sudoRule, I get no results:
#!SEARCH REQUEST (61) OK
#!CONNECTION ldap://boxxy-ldap:389
#!DATE 2023-10-29T08:50:34.262
# LDAP URL : ldap://boxxy-ldap:389/ou=sudoers,dc=ldap?objectClass?sub?(objectClass=sudoRule)
# command line : ldapsearch -H ldap://ldap:389 -ZZ -x -D "cn=admin,dc=ldap" -W -b "ou=sudoers,dc=ldap" -s sub -a always -z 1000 "(objectClass=sudoRule)" "objectClass"
# baseObject : ou=sudoers,dc=ldap
# scope : wholeSubtree (2)
# derefAliases : derefAlways (3)
# sizeLimit : 1000
# timeLimit : 0
# typesOnly : False
# filter : (objectClass=sudoRule)
# attributes : objectClass
#!SEARCH RESULT DONE (61) OK
#!CONNECTION ldap://ldap:389
#!DATE 2023-10-29T08:50:34.264
# numEntries : 0
.... but cn=ssh,ou=sudoers,dc=ldap (with objectClass=top, objectClass=sudoRule) is there, open, in front of me.
Has anyone any recent experience of implementing sudo.schema in a recent version of OpenLDAP and utilising it from sssd? It feels like slapd doesn't know what a sudoRule object class is... even though I'm doing an "include sudo.schema" in slapd.conf (and without it, the slapadd to import the directory clearly falls over, not knowing sudoHost, sudoUser, sudoRunAs etc.). I don't think I have anything wrong in my slapd.conf either, but am willing to be proven wrong:
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/sudo.schema
pidfile /var/run/slapd/slapd.pid
argsfile /var/run/slapd/slapd.args
loglevel 0
tlscacertificatefile /etc/ldap/ca.crt
tlscertificatekeyfile /etc/ldap/ldap.key
tlscertificatefile /etc/ldap/ldap.crt
security tls=1
access to dn.base=""
by * read
access to attrs=userPassword
by self write
by anonymous auth
by users none
access to * by * read
modulepath /usr/lib/ldap
moduleload back_mdb.la
database mdb
directory /var/lib/ldap
suffix dc=ldap
maxsize 1073741824
rootdn cn=admin,dc=ldap
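A quick sanity check that can be run offline (a hypothetical snippet; the definition string is the stock objectclass from sudo.schema as shipped with sudo, abridged): the NAME token in that definition is the class name slapd registers, and therefore the only name an equality filter will match.

```python
import re

# Abridged stock objectclass definition from sudo.schema; the NAME token
# is what slapd registers when the schema is included.
definition = (
    "( 1.3.6.1.4.1.15953.9.2.1 NAME 'sudoRole' SUP top STRUCTURAL "
    "DESC 'Sudoer Entries' MUST ( cn ) "
    "MAY ( sudoUser $ sudoHost $ sudoCommand $ sudoRunAs $ description ) )"
)

registered_name = re.search(r"NAME '([^']+)'", definition).group(1)
```

The same check works against a live server by reading the objectClasses attribute of cn=Subschema with ldapsearch.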
John
reliability of mounting shares during login
by Johannes Maier
Hi @all,
I have some problems when using pam_mount.conf.xml to mount shares via Kerberos (and also with NTLM) regarding the reliability of the mount. I have tested the issue in two different environments: two Microsoft domain controllers plus a separate fileserver, with Ubuntu 18.04 or 22.04 as clients; and one Microsoft Server 2019 (as both domain controller and fileserver), with Ubuntu 22.04 as client.
The login with my configuration works reliably every time, but sometimes the shares do not get mounted. I have read a ton of documentation but cannot figure out where the problem really is.
I have also tried the kernel cache, but that seems to make the problem even worse.
Steps to reproduce (client side):
- Microsoft Server 2019 as Domain Controller
- Install Ubuntu 22.04
- configure domain name in /etc/krb5.conf
- join the domain with realm -v join -U Administrator
- install krb5-user package
- restart sssd (systemctl restart sssd)
- make the necessary entries in pam_mount.conf.xml
Most of the time the mounting works during login, but after a restart it can sometimes happen that the shares are not mounted.
The relevant syslog is here:
=========================================
Oct 11 22:45:32 pc-jm kernel: [ 13.725094] FS-Cache: Loaded
Oct 11 22:45:32 pc-jm kernel: [ 13.752265] Key type cifs.spnego registered
Oct 11 22:45:32 pc-jm kernel: [ 13.752272] Key type cifs.idmap registered
Oct 11 22:45:32 pc-jm kernel: [ 13.752483] CIFS: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 on mount.
Oct 11 22:45:32 pc-jm kernel: [ 13.752484] CIFS: Attempting to mount \\srv-dc01.example.localnet\Daten$
Oct 11 22:45:32 pc-jm cifs.upcall: key description: cifs.spnego;0;0;39010000;ver=0x2;host=srv-dc01.example.localnet;ip4=192.168.0.36;sec=krb5;uid=0x14163c77;creduid=0x14163c77;user=tester;pid=0xaa8
Oct 11 22:45:32 pc-jm cifs.upcall: ver=2
Oct 11 22:45:32 pc-jm cifs.upcall: host=srv-dc01.example.localnet
Oct 11 22:45:32 pc-jm cifs.upcall: ip=192.168.0.36
Oct 11 22:45:32 pc-jm cifs.upcall: sec=1
Oct 11 22:45:32 pc-jm cifs.upcall: uid=337001591
Oct 11 22:45:32 pc-jm cifs.upcall: creduid=337001591
Oct 11 22:45:32 pc-jm cifs.upcall: user=tester
Oct 11 22:45:32 pc-jm cifs.upcall: pid=2728
Oct 11 22:45:32 pc-jm cifs.upcall: get_cachename_from_process_env: pathname=/proc/2728/environ
Oct 11 22:45:32 pc-jm cifs.upcall: get_cachename_from_process_env: cachename = FILE:/tmp/krb5cc_337001591
Oct 11 22:45:32 pc-jm cifs.upcall: get_existing_cc: default ccache is FILE:/tmp/krb5cc_337001591
Oct 11 22:45:32 pc-jm kernel: [ 13.764725] CIFS: VFS: Verify user has a krb5 ticket and keyutils is installed
Oct 11 22:45:32 pc-jm kernel: [ 13.764728] CIFS: VFS: \\srv-dc01.example.localnet Send error in SessSetup = -126
Oct 11 22:45:32 pc-jm kernel: [ 13.764733] CIFS: VFS: cifs_mount failed w/return code = -126
Oct 11 22:45:32 pc-jm cifs.upcall: krb5_get_init_creds_keytab: -1765328174
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:68): Messages from underlying mount program:
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:72): mount error(126): Required key not available
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:72): Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Oct 11 22:45:32 pc-jm sddm[2274]: (pam_mount.c:522): mount of Daten$ failed
Oct 11 22:45:32 pc-jm cifs.upcall: Exit status 1
Oct 11 22:45:32 pc-jm kernel: [ 13.771412] CIFS: Attempting to mount \\srv-dc01.example.localnet\Home$
Oct 11 22:45:32 pc-jm cifs.upcall: key description: cifs.spnego;0;0;39010000;ver=0x2;host=srv-dc01.example.localnet;ip4=192.168.0.36;sec=krb5;uid=0x14163c77;creduid=0x14163c77;user=tester;pid=0xabb
Oct 11 22:45:32 pc-jm cifs.upcall: ver=2
Oct 11 22:45:32 pc-jm cifs.upcall: host=srv-dc01.example.localnet
Oct 11 22:45:32 pc-jm cifs.upcall: ip=192.168.0.36
Oct 11 22:45:32 pc-jm cifs.upcall: sec=1
Oct 11 22:45:32 pc-jm cifs.upcall: uid=337001591
Oct 11 22:45:32 pc-jm cifs.upcall: creduid=337001591
Oct 11 22:45:32 pc-jm cifs.upcall: user=tester
Oct 11 22:45:32 pc-jm cifs.upcall: pid=2747
Oct 11 22:45:32 pc-jm cifs.upcall: get_cachename_from_process_env: pathname=/proc/2747/environ
Oct 11 22:45:32 pc-jm cifs.upcall: get_cachename_from_process_env: cachename = FILE:/tmp/krb5cc_337001591
Oct 11 22:45:32 pc-jm cifs.upcall: get_existing_cc: default ccache is FILE:/tmp/krb5cc_337001591
Oct 11 22:45:32 pc-jm cifs.upcall: krb5_get_init_creds_keytab: -1765328174
Oct 11 22:45:32 pc-jm cifs.upcall: Exit status 1
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:68): Messages from underlying mount program:
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:72): mount error(126): Required key not available
Oct 11 22:45:32 pc-jm sddm[2274]: (mount.c:72): Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Oct 11 22:45:32 pc-jm sddm[2274]: (pam_mount.c:522): mount of Home$ failed
=========================================
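One detail worth decoding from that log (a small illustrative snippet; that %d expands to /tmp here is an inference from the paths shown, not something the log states directly): the hex uid in the cifs.spnego key description, the decimal uid in the upcall lines, and the ccache path are all the same identity.

```python
# uid/creduid from the key description line, in hex.
uid = int("14163c77", 16)

# The default ccache path cifs.upcall resolves; the krb5cc_<uid> suffix
# matches the krb5_ccname_template = FILE:%d/krb5cc_%U setting in sssd.conf.
ccache = f"FILE:/tmp/krb5cc_{uid}"
```

So the upcall is looking in the right place; mount error(126) (Required key not available) means no usable credential was in that cache at the instant the mount ran.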
This is my sssd configuration:
=========================================
[sssd]
domains = example.localnet
config_file_version = 2
services = nss, pam
[domain/example.localnet]
krb5_ccname_template=FILE:%d/krb5cc_%U
ad_gpo_access_control = enforcing
ad_gpo_map_remote_interactive = +xrdp-sesman
default_shell = /bin/bash
krb5_store_password_if_offline = True
cache_credentials = True
krb5_realm = EXAMPLE.LOCALNET
realmd_tags = manages-system joined-with-adcli
id_provider = ad
fallback_homedir = /home/%u
ad_domain = example.localnet
use_fully_qualified_names = False
ldap_id_mapping = True
access_provider = ad
=========================================
This is my pam_mount.conf.xml:
=========================================
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<!--
See pam_mount.conf(5) for a description.
-->
<pam_mount>
<!-- debug should come before everything else,
since this file is still processed in a single pass
from top-to-bottom -->
<debug enable="0"/>
<!-- Volume definitions -->
<!-- pam_mount parameters: General tunables -->
<!--
<luserconf name=".pam_mount.conf.xml" />
-->
<!-- Note that commenting out mntoptions will give you the defaults.
You will need to explicitly initialize it with the empty string
to reset the defaults to nothing. -->
<mntoptions allow="nosuid,nodev,loop,encryption,fsck,nonempty,allow_root,allow_other"/>
<!--
<mntoptions deny="suid,dev" />
<mntoptions allow="*" />
<mntoptions deny="*" />
-->
<mntoptions require="nosuid,nodev"/>
<!-- requires ofl from hxtools to be present -->
<logout wait="0" hup="no" term="no" kill="no"/>
<!-- pam_mount parameters: Volume-related -->
<mkmountpoint enable="1" remove="true"/>
<volume fstype="cifs" server="srv-dc01.example.localnet" path="Daten$" mountpoint="/media/%(USER)/Daten" options="iocharset=utf8,nosuid,nodev,echo_interval=15,sec=krb5i,cruid=%(USERUID)," uid="5000-999999999"/>
<volume fstype="cifs" server="srv-dc01.example.localnet" path="Home$" mountpoint="/media/%(USER)/Home" options="iocharset=utf8,nosuid,nodev,echo_interval=15,sec=krb5i,cruid=%(USERUID)," uid="5000-999999999"/>
</pam_mount>
=========================================
Any ideas?
Thanks, majojoe
Internal credentials cache error while getting initial credentials
by Albert Szostkiewicz
Hey,
Need some help here: I am unable to log in. When trying to use kinit for my user, I am getting an error:
kinit: Failed to store credentials: Internal credentials cache error while getting initial credentials
sssd is running. The log shows:
Oct 13 20:32:59 user.mydomain.com krb5_child[4846]: Internal credentials cache error
sssd_kcm.log states:
* (2023-10-13 21:17:43): [kcm] [local_db_check_peruid_number_of_secrets] (0x0040): [CID#8708] Cannot store any more secrets for this client (basedn cn=1907400001,cn=persistent,cn=kcm) as the maximum allowed limit (66) has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [sss_sec_update] (0x0040): [CID#8708] local_db_check_number_of_secrets failed [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [sec_update] (0x0040): [CID#8708] Cannot write the secret [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [sss_sec_update] (0x0040): [CID#8708] local_db_check_number_of_secrets failed [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [sec_update] (0x0040): [CID#8708] Cannot write the secret [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [kcm_ccdb_mod_done] (0x0040): [CID#8708] Failed to create ccache [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [kcm_op_set_kdc_offset_mod_done] (0x0040): [CID#8708] Cannot modify ccache [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [kcm_ccdb_mod_done] (0x0040): [CID#8708] Failed to create ccache [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [kcm_op_set_kdc_offset_mod_done] (0x0040): [CID#8708] Cannot modify ccache [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-10-13 21:17:43): [kcm] [kcm_cmd_done] (0x0040): [CID#8708] op receive function failed [1432158289]: The maximum number of stored secrets has been reached
(2023-10-13 21:17:43): [kcm] [kcm_cmd_request_done] (0x0040): [CID#8708] KCM operation failed [1432158289]: The maximum number of stored secrets has been reached
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2023-10-13 21:17:43): [kcm] [kcm_cmd_done] (0x0040): [CID#8708] op receive function failed [1432158289]: The maximum number of stored secrets has been reached
* (2023-10-13 21:17:43): [kcm] [kcm_cmd_request_done] (0x0040): [CID#8708] KCM operation failed [1432158289]: The maximum number of stored secrets has been reached
********************** BACKTRACE DUMP ENDS HERE *********************************
KRB5_TRACE=/dev/stderr ipa --debug ping
ipa: DEBUG: importing plugin module ipaclient.plugins.trust
ipa: DEBUG: importing plugin module ipaclient.plugins.user
ipa: DEBUG: importing plugin module ipaclient.plugins.vault
ipa: DEBUG: trying https://workstation.mydomain.com/ipa/json
ipa: DEBUG: Created connection context.rpcclient_140066561958480
ipa: DEBUG: raw: ping(version='2.252')
ipa: DEBUG: ping(version='2.252')
ipa: DEBUG: [try 1]: Forwarding 'ping/1' to json server 'https://workstation.mydomain.com/ipa/json'
ipa: DEBUG: New HTTP connection (workstation.mydomain.com)
ipa: DEBUG: HTTP connection destroyed (workstation.mydomain.com)
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 644, in get_auth_info
response = self._sec_context.step()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/_utils.py", line 165, in check_last_err
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/_utils.py", line 131, in catch_and_return_token
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/sec_contexts.py", line 584, in step
return self._initiator_step(token=token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/gssapi/sec_contexts.py", line 606, in _initiator_step
res = rsec_contexts.init_sec_context(self._target_name, self._creds,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "gssapi/raw/sec_contexts.pyx", line 188, in gssapi.raw.sec_contexts.init_sec_context
gssapi.raw.exceptions.MissingCredentialsError: Major (458752): No credentials were supplied, or the credentials were unavailable or inaccessible, Minor (2529639053): No Kerberos credentials available (default cache: KCM:)
During the handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 697, in single_request
self.get_auth_info()
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 646, in get_auth_info
self._handle_exception(e, service=service)
File "/usr/lib/python3.11/site-packages/ipalib/rpc.py", line 603, in _handle_exception
raise errors.CCacheError()
ipalib.errors.CCacheError: did not receive Kerberos credentials
ipa: DEBUG: Destroyed connection context.rpcclient_140066561958480
ipa: ERROR: did not receive Kerberos credentials
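For reference, the per-client quota that sssd_kcm.log complains about is tunable: sssd-kcm(8) documents a max_uid_ccaches option in the [kcm] section of sssd.conf (the value below is only an illustrative sketch), and kdestroy -A clears the invoking user's accumulated caches:

```ini
[kcm]
max_uid_ccaches = 128
```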
I'd appreciate it if anyone has some ideas. Thank you!
Re: Is there anything in the sssd RHEL server OS settings that performs LDAP binds or connections to AD every 30 mins?
by Spike White
So Trellix did not accept this as a bug in their healthcheck script. We
put in an RFE with them to do this healthcheck invocation using setpriv or
su -c, which doesn't trigger the LDAP queries.
Now we have an open case with RH Tech Support on this. Basically, when
sudo is invoked as root and we have early in the /etc/sudoers file:
root ALL=(ALL) ALL
and then later on in /etc/sudoers file we have:
## Read drop-in files from /etc/sudoers.
#includedir /etc/sudoers.d
then sudo should not be making group membership queries to enumerate all
the various AD groups in the /etc/sudoers.d/* files. That enumeration is
what triggers multiple LDAP queries on thousands of servers, all on the
hour and half-hour.
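For contrast, the kind of drop-in that forces those lookups is an AD group reference in a sudoers.d file: to decide whether a rule applies, sudo must resolve the invoking user's membership in every such group (the group name below is hypothetical):

```
# /etc/sudoers.d/ad-admins (hypothetical example)
%linux-admins@example.com ALL=(ALL) ALL
```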
Spike
On Fri, Oct 6, 2023 at 12:16 PM Larkin, Patrick <Patrick.Larkin(a)sabre.com>
wrote:
> On 10/6/23, 11:52, "Sam Morris" <sam(a)robots.org.uk> wrote:
> ______________________________________________________________________
> On 04/10/2023 17:02, Spike White wrote:
> > We see in other places in this McAfee script that they run this command
> > using 'su' instead of 'sudo'.
> >
> > su -s /bin/sh -c "LD_LIBRARY_PATH=... ${PROGROOT}/bin/macmnsvc
> > status" mfe
> …
> > Anyway, it's McAfee's problem to fix now. We'll report it and I'm sure
> > they'll figure out a solution.
>
> If they are root and want to drop privileges then they would be better
> served by runuser or setpriv. …
>
>
>
> …or start out as non-root user to begin with…
>
> (It’s a peeve of mine when security companies don’t follow best practice
> of elevating only if absolutely necessary.)
>
>
>
> --
>
> Pat Larkin | Manager – LinuxIMO
>
> Sabre TEO | Texas USA
>
>
> _______________________________________________
> sssd-users mailing list -- sssd-users(a)lists.fedorahosted.org
> To unsubscribe send an email to sssd-users-leave(a)lists.fedorahosted.org
> Fedora Code of Conduct:
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
> https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahoste...
> Do not reply to spam, report it:
> https://pagure.io/fedora-infrastructure/new_issue
>
Two domains, same users - how to access files
by Francis Augusto Medeiros-Logeay
Ok, this is a bit complicated, but I'll try to explain:
We have two domains; let's call them A and B. Some people have users on both domains. The usernames, uids and gids are totally different across the domains.
There’s a desire to allow the users on domain B to mount shares from domain A.
Reading SSSD’s documentation, it seems trivial that one machine can be configured for two domains.
But suppose my user is francaug@domainB on the B domain, and francis@domainA on the A domain. Let's say I want to mount my_dir, exported over NFSv4 from domain A. I could most likely get Kerberos tickets and use NFSv4 to mount it on a domainB machine.
Will I, as francaug@domainB, be able to actually use (read, write, delete) these files, since our posix attributes are completely different? Any other way to solve it here, such as by using NFSv4 ACL attributes?
Or is there any alternative, such as using regex rules so that users are matched? Or translating/mapping uids and gids?
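One of the mapping routes above can be sketched as a static NFSv4 id-mapping translation in idmapd.conf on the domainB client (the names are the hypothetical ones from this example; see idmapd.conf(5) for the Method values):

```ini
[Translation]
Method = static

[Static]
francis@domainA = francaug
```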
Right now I don’t know exactly what to focus on - the only vague requirement for this task is that a person who has a user on domain B and is logged to a domainB-bound machine should be able to mount a share from domain A. I have the feeling that mount is trivial, but access is going to bite…
Any tips?
Best,
Francis
Is there anything in the sssd RHEL server OS settings that performs LDAP binds or connections to AD every 30 mins?
by Spike White
All,
Is there anything in sssd's RHEL and RHEL-like Linux server OS settings
that performs LDAP binds or connections to AD every 30 minutes?
What our AD team is seeing is that all of the DCs in our biggest AMER AD
site peak with LDAP sessions for about 10 minutes at the top of the hour,
then again at the bottom of the hour. No other AD site in the world appears
to see this behavior, not even other AD sites in this metro area.
The reason they noticed is that our non-AMER DCs in this biggest AD site
hit their 5k LDAP client session limit during those 10 minutes every 30
minutes, meaning any clients attempting to establish an LDAP session past
5000 are dropped by the DC. After digging through some LDAP log samples
pulled from these DCs, they see thousands of LDAP binds by RHEL Linux
servers against two specific non-AMER AD DCs in a short period of time.
In this major AD site, we have dozens and dozens of AMER AD DCs, so
there are enough preferred AD DCs to spread the load. But typically for the
non-AMER regions, the AD team puts 2 of each region's DCs in a site. For
instance, for APAC they would put two APAC DCs in this major AMER site.
Thus all AMER RHEL servers in this site would randomly hit dozens of AMER
DCs, but concentrate on these two preferred APAC DCs (preferred because
they're in this location).
I know our older AD integration product used to hit AD every 30 mins to
check GPOs, but we're not implementing GPOs with sssd.
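For what it's worth, sssd's AD access provider does GPO evaluation on its own unless told otherwise; if GPOs really are unused, sssd-ad(5) allows switching it off explicitly (the domain name below is a placeholder), though whether that is the source of the half-hourly traffic would need confirming:

```ini
[domain/example.com]
ad_gpo_access_control = disabled
```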
Spike
Preserving kerberos tickets stored in KCM when sudo'ing
by Francis Augusto Medeiros-Logeay
Hi,
We had a mechanism to allow users to mount their directory by using a user systemd service that runs mount (with sudo).
Since we use Kerberos for that operation, we'd add
Defaults env_keep += "KRB5CCNAME"
to a sudoers.d file.
This worked pretty well, but then we moved to KCM: instead of FILE:.
Is there a way we can preserve access to KCM tickets for a user when they use sudo?
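For context, the two cache types are addressed differently, which is why env_keep alone stops being enough. A tiny sketch of the name shapes (the uid value is hypothetical):

```python
uid = 1000

# FILE: caches are addressed by path; sudo can keep using one if KRB5CCNAME
# is preserved and file permissions permit.
file_cc = f"FILE:/tmp/krb5cc_{uid}"

# KCM: caches are served by sssd-kcm and selected per requesting uid; after
# sudo the effective uid changes, so the original user's cache is no longer
# the default, and access is subject to the KCM server's own checks.
kcm_cc = f"KCM:{uid}"
```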
Best,
Francis