Hi Simo,
I know that root over NFS is generally frowned upon, and I don't enable it on all the machines. I enable it temporarily as required (no_root_squash in exports) and remove it afterwards, but I would really like to get it working so that I can use it when needed.
After hours of experimenting, I found this:
https://bugzilla.redhat.com/show_bug.cgi?id=1559185
The result of which was this:
https://access.redhat.com/articles/4040141
Since root doesn't have a Kerberos identity, I need to (I think) identify as the AD machine account, then somehow map the machine account to root:
j1# kinit -k 'J1$'
j1# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: J1$@AD.EECS.YORKU.CA

Valid starting       Expires              Service principal
11/10/2020 14:47:34  11/11/2020 00:47:34  krbtgt/AD.EECS.YORKU.CA@AD.EECS.YORKU.CA
        renew until 11/17/2020 14:47:34
But when I write a file on an NFS export with no_root_squash (which works fine with sec=sys, of course), I get:
j1# cd /mnt
j1# touch rootfile
j1# ls -al rootfile
-rw-r--r-- 1 nfsnobody nfsnobody 0 Nov 10 14:49 rootfile
So I need identity J1$@AD.EECS.YORKU.CA to be mapped to "root".
I placed the lines into /etc/krb5.conf on the NFS server... Obviously, in normal operation I wouldn't map *everything* to root, but it's just a test after all in a VM:
AD.EECS.YORKU.CA = {
    auth_to_local = RULE:[2:$1/$2@$0](.*)s/.*/root/
    auth_to_local = DEFAULT
}
... but it's not working. I can't even find a way to determine how Kerberos is processing those auth_to_local lines. I suspect it's not.
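As a sanity check on the syntax, below is a rough Python model of how an auth_to_local RULE line is evaluated, based on the format documented for krb5.conf (this is purely illustrative, not MIT's actual parser; the real library anchors the selection regexp, which is modeled here with fullmatch). Per the documentation, the [n:...] part only applies to principals with exactly n components, so a [2:...] rule can never match the single-component machine account J1$@AD.EECS.YORKU.CA:

```python
import re

def apply_rule(rule, principal):
    """Toy model of one MIT-style auth_to_local RULE.

    rule looks like:      "[2:$1/$2@$0](regexp)s/pattern/replacement/"
    principal looks like: "comp1/comp2@REALM"
    Returns the mapped local name, or None if the rule does not apply.
    Illustration of the documented syntax only, not the real parser.
    """
    m = re.fullmatch(r'\[(\d+):([^\]]*)\]\(([^)]*)\)s/([^/]*)/([^/]*)/g?', rule)
    if not m:
        raise ValueError("unparsable rule: " + rule)
    n, fmt, sel, pat, repl = int(m.group(1)), m.group(2), m.group(3), m.group(4), m.group(5)

    name, realm = principal.rsplit('@', 1)
    comps = name.split('/')
    if len(comps) != n:           # rule applies only to principals with exactly n components
        return None

    s = fmt.replace('$0', realm)  # $0 = realm, $1..$n = principal components
    for i, c in enumerate(comps, start=1):
        s = s.replace('$%d' % i, c)

    if not re.fullmatch(sel, s):  # selection regexp must match the formatted string
        return None
    return re.sub(pat, repl, s, count=1)  # apply once, like sed without /g

# The [2:...] rule matches the two-component host principal and maps it to root...
print(apply_rule('[2:$1/$2@$0](.*)s/.*/root/',
                 'host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA'))           # -> root
# ...but it can never match the single-component machine account J1$:
print(apply_rule('[2:$1/$2@$0](.*)s/.*/root/', 'J1$@AD.EECS.YORKU.CA'))  # -> None
# A hypothetical [1:...] variant would be needed for that:
print(apply_rule('[1:$1@$0](J1\\$@AD.EECS.YORKU.CA)s/.*/root/',
                 'J1$@AD.EECS.YORKU.CA'))                                # -> root
```

So even if the server were processing the rules, a [2:$1/$2@$0] rule would skip the machine account entirely; mapping J1$ would need a [1:...] rule.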
On both the client and the server, I updated /etc/gssproxy/gssproxy.conf which contained only "[gssproxy]", so I added:
[gssproxy]
debug = true
debug_level = 3
I restarted gssproxy.
I re-mounted the share on the client. gssproxy logs:
Nov 10 15:04:51 j1 gssproxy[6099]: [2020/11/10 20:04:51]: Client connected (fd = 11) (pid = 980) (uid = 0) (gid = 0)
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: [status] Handling query input: 0x556fc251d640 (116)
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: [status] Processing request [0x556fc251d640 (116)]
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: [status] Executing request 6 (GSSX_ACQUIRE_CRED) from [0x556fc251d640 (116)]
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "nfs-client", euid: 0, socket: (null)
Nov 10 15:04:51 j1 gssproxy[6099]: GSSX_ARG_ACQUIRE_CRED( call_ctx: { "" [ ] } input_cred_handle: <Null> add_cred: 0 desired_name: <Null> time_req: 4294967295 desired_mechs: { { 1 2 840 113554 1 2 2 } } cred_usage: INITIATE initiator_time_req: 0 acceptor_time_req: 0 )
Nov 10 15:04:51 j1 gssproxy[6099]: gssproxy[6100]: (OID: { 1 2 840 113554 1 2 2 }) Unspecified GSS failure.  Minor code may provide more information, Client 'host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA' not found in Kerberos database
Nov 10 15:04:51 j1 gssproxy[6099]: GSSX_RES_ACQUIRE_CRED( status: { 851968 { 1 2 840 113554 1 2 2 } 2529638918 "Unspecified GSS failure.  Minor code may provide more information" "Client 'host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA' not found in Kerberos database" [ ] } output_cred_handle: <Null> )
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: [status] Returned buffer 6 (GSSX_ACQUIRE_CRED) from [0x556fc251d640 (116)]: [0x7fd4a000a470 (232)]
Nov 10 15:04:51 j1 gssproxy[6099]: [CID 11][2020/11/10 20:04:51]: [status] Handling query output: 0x7fd4a000a470 (232)
Nov 10 15:04:51 j1 gssproxy[6099]: [2020/11/10 20:04:51]: [status] Handling query reply: 0x7fd4a000a470 (232)
Nov 10 15:04:51 j1 gssproxy[6099]: [2020/11/10 20:04:51]: [status] Sending data: 0x7fd4a000a470 (232)
Nov 10 15:04:51 j1 gssproxy[6099]: [2020/11/10 20:04:51]: [status] Sending data [0x7fd4a000a470 (232)]: successful write of 232

(the identical GSSX_ACQUIRE_CRED request and "not found in Kerberos database" reply then repeat a second time)
gssproxy repeatedly logs that "host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA" was not found in the Kerberos database, even though it is in the keytab:
# klist -k /etc/krb5.keytab | grep -i host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 host/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 restrictedkrbhost/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 restrictedkrbhost/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 restrictedkrbhost/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 restrictedkrbhost/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
   1 restrictedkrbhost/j1.ad.eecs.yorku.ca@AD.EECS.YORKU.CA
This may have something to do with the fact that j1.eecs.yorku.ca is joined to AD.EECS.YORKU.CA, but otherwise NFS seems to work just fine, because in krb5.conf I list:
[domain_realm]
    ad.eecs.yorku.ca = AD.EECS.YORKU.CA
    .ad.eecs.yorku.ca = AD.EECS.YORKU.CA
    eecs.yorku.ca = AD.EECS.YORKU.CA
    .eecs.yorku.ca = AD.EECS.YORKU.CA
I also have rdns = false in [libdefaults].
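For context on why both the bare and dot-prefixed [domain_realm] entries are needed: per the krb5.conf documentation, a bare entry matches only that exact hostname, while a dot-prefixed entry matches any host below that domain, with the most specific match winning. A rough sketch of that lookup (an illustrative model of the documented behavior, not libkrb5 code):

```python
# Toy model of [domain_realm] host-to-realm resolution (illustrative only).
DOMAIN_REALM = {
    "ad.eecs.yorku.ca": "AD.EECS.YORKU.CA",
    ".ad.eecs.yorku.ca": "AD.EECS.YORKU.CA",
    "eecs.yorku.ca": "AD.EECS.YORKU.CA",
    ".eecs.yorku.ca": "AD.EECS.YORKU.CA",
}

def host_realm(hostname, mapping=DOMAIN_REALM):
    """Exact hostname entry first, then successively shorter
    parent domains (the dot-prefixed entries)."""
    host = hostname.lower()
    if host in mapping:               # bare entry: matches only this exact host
        return mapping[host]
    labels = host.split('.')
    for i in range(1, len(labels)):   # try ".ad.eecs.yorku.ca", ".eecs.yorku.ca", ...
        suffix = '.' + '.'.join(labels[i:])
        if suffix in mapping:
            return mapping[suffix]
    return None

print(host_realm("j1.eecs.yorku.ca"))     # matches ".eecs.yorku.ca"    -> AD.EECS.YORKU.CA
print(host_realm("j1.ad.eecs.yorku.ca"))  # matches ".ad.eecs.yorku.ca" -> AD.EECS.YORKU.CA
```

This is why a host joined under eecs.yorku.ca can still resolve service principals into the AD.EECS.YORKU.CA realm.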
Anyway, when I try to write a file as a regular system user (user "jas" group "tech") gssproxy logs:
Nov 10 15:12:39 j1 gssproxy[6099]: [2020/11/10 20:12:39]: Failed to get peer's SELinux context (92:Protocol not available)
Nov 10 15:12:39 j1 gssproxy[6099]: [2020/11/10 20:12:39]: Client connected (fd = 11) (pid = 980) *(uid = 1004) (gid = 1000)*
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: [status] Handling query input: 0x556fc251d640 (116)
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: [status] Processing request [0x556fc251d640 (116)]
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: [status] Executing request 6 (GSSX_ACQUIRE_CRED) from [0x556fc251d640 (116)]
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "nfs-client", euid: 1004, socket: (null)
Nov 10 15:12:39 j1 gssproxy[6099]: GSSX_ARG_ACQUIRE_CRED( call_ctx: { "" [ ] } input_cred_handle: <Null> add_cred: 0 desired_name: <Null> time_req: 4294967295 desired_mechs: { { 1 2 840 113554 1 2 2 } } cred_usage: INITIATE initiator_time_req: 0 acceptor_time_req: 0 )
Nov 10 15:12:39 j1 gssproxy[6099]: gssproxy[6100]: (OID: { 1 2 840 113554 1 2 2 }) Unspecified GSS failure.  Minor code may provide more information, No credentials cache found
Nov 10 15:12:39 j1 gssproxy[6099]: GSSX_RES_ACQUIRE_CRED( status: { 851968 { 1 2 840 113554 1 2 2 } 2529639107 "Unspecified GSS failure.  Minor code may provide more information" "No credentials cache found" [ ] } output_cred_handle: <Null> )
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: [status] Returned buffer 6 (GSSX_ACQUIRE_CRED) from [0x556fc251d640 (116)]: [0x7fd4a000a470 (176)]
Nov 10 15:12:39 j1 gssproxy[6099]: [CID 11][2020/11/10 20:12:39]: [status] Handling query output: 0x7fd4a000a470 (176)
Nov 10 15:12:39 j1 gssproxy[6099]: [2020/11/10 20:12:39]: [status] Handling query reply: 0x7fd4a000a470 (176)
Nov 10 15:12:39 j1 gssproxy[6099]: [2020/11/10 20:12:39]: [status] Sending data: 0x7fd4a000a470 (176)
Nov 10 15:12:39 j1 gssproxy[6099]: [2020/11/10 20:12:39]: [status] Sending data [0x7fd4a000a470 (176)]: successful write of 176

(the same GSSX_ACQUIRE_CRED request and "No credentials cache found" reply then repeat a second time)
Sure enough, the uid is 1004 and the gid is 1000, and the file shows that it worked:
j1% cd /mnt
j1% touch testfile1
j1% ls -al testfile1
-rw------- 1 jas tech 0 Nov 10 15:12 testfile1
Now I do the same thing as root on j1... ah... I get nothing in the log from gssproxy:
j1# touch rootfile1
j1# ls -al rootfile1
-rw-r--r-- 1 nfsnobody nfsnobody 0 Nov 10 15:16 rootfile1
But if I write the file as "jas" again, I do get the log output...
Hopefully, this will provide more detail.
Thanks,
Jason.
On 11/10/2020 2:41 PM, Simo Sorce wrote:
In which direction do you care for translation ?
GSS-Proxy does indeed use only krb5.conf for authentication mapping purposes, but what you show below seems to be an id -> name translation, perhaps in order to set permissions?
Are you having issues at authentication time? With gss-proxy debug at level 3 you should see the full data exchanged as part of a GSS Accept_Sec_Context call; in the reply you should see what name was used.
Also keep in mind that NFS servers by default squash root to nobody; not sure if this is a factor. (In general, using root over NFS is discouraged.)
Simo.
On Tue, 2020-11-10 at 10:03 -0500, Jason Keltz wrote:
Hi.
I have a system configured to use krb5 NFS mounts on CentOS 7 along with gss-proxy. It's a bit of a different setup because the Kerberos server is part of a Samba AD, but NFS mounts are nonetheless working perfectly. The one thing I can't figure out is NFS root. I've seen instructions online for adding translations to /etc/idmapd.conf and tried that, but it didn't work. I then found instructions from Red Hat to do this type of setup in /etc/krb5.conf instead:
[realms]
…
EXAMPLE.COM = {
    …
    auth_to_local = RULE:[2:$1/$2@$0](host/nfsclient.example.com@EXAMPLE.COM)s/.*/root/
    auth_to_local = DEFAULT
}
This also doesn't seem to solve the problem. These lines, I'm assuming, go on the NFS server.
I even tried changing "host/nfsclient.example.com@EXAMPLE.COM" to .* for my test setup to see if anything would happen, but it didn't. It's not clear how I can verify whether those lines are being processed.
It wasn't even clear whether the "auth_to_local" lines were required in addition to the NFS translation lines, or instead of them.
I did my research and discovered what those lines mean. I also discovered those lines are required instead of the NFS translation lines when using gss-proxy.
The problem is, I can't really figure out how to debug this issue. With rpc.idmapd debugging I was seeing the following:
rpc.idmapd: nfsdcb: authbuf=gss/krb5 authtype=user
rpc.idmapd: nfs4_uid_to_name: calling nsswitch->uid_to_name
rpc.idmapd: nfs4_uid_to_name: nsswitch->uid_to_name returned 0
rpc.idmapd: nfs4_uid_to_name: final return value is 0
rpc.idmapd: Server : (user) id "0" -> name "root@eecs.yorku.ca"
rpc.idmapd: nfsdcb: authbuf=gss/krb5 authtype=group
rpc.idmapd: nfs4_gid_to_name: calling nsswitch->gid_to_name
rpc.idmapd: nfs4_gid_to_name: nsswitch->gid_to_name returned 0
rpc.idmapd: nfs4_gid_to_name: final return value is 0
rpc.idmapd: Server : (group) id "0" -> name "root@eecs.yorku.ca"
rpc.idmapd: nfsdcb: authbuf=gss/krb5 authtype=user
rpc.idmapd: nfs4_uid_to_name: calling nsswitch->uid_to_name
rpc.idmapd: nfs4_uid_to_name: nsswitch->uid_to_name returned 0
rpc.idmapd: nfs4_uid_to_name: final return value is 0
rpc.idmapd: Server : (user) id "65534" -> name "nfsnobody@eecs.yorku.ca"
rpc.idmapd: nfsdcb: authbuf=gss/krb5 authtype=group
rpc.idmapd: nfs4_gid_to_name: calling nsswitch->gid_to_name
rpc.idmapd: nfs4_gid_to_name: nsswitch->gid_to_name returned 0
rpc.idmapd: nfs4_gid_to_name: final return value is 0
rpc.idmapd: Server : (group) id "65534" -> name "nfsnobody@eecs.yorku.ca"
(with no NFS translation lines in place)..
It makes sense that root@eecs.yorku.ca is not a valid Kerberos user. root is just a local user.
Might someone have a suggestion on how to debug the processing of the auth_to_local lines?
I've turned on gssproxy debugging level 2, but not really seeing anything.
Thanks,
Jason.
gss-proxy mailing list -- gss-proxy@lists.fedorahosted.org
To unsubscribe send an email to gss-proxy-leave@lists.fedorahosted.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/gss-proxy@lists.fedorahosted.or...