I should add that user authentication with NFS works fine with my original config. It is
only root executing mount that stopped working.
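For context, the fstab entry behind mount -a looks roughly like this (a sketch; the export path and mount point are hypothetical, only the server name and sec=krb5 come from this thread):

```
# /etc/fstab -- sketch; export and mount point are placeholders
nfs.example.com:/export/home  /home  nfs4  sec=krb5  0  0
```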
John
On 6/24/21 3:03 PM, John Bazik wrote:
> I wondered about euid, but gssproxy.conf(5) says:
>
> The "euid" parameter is imperative, any section without it will be discarded.
>
> And when I delete those two lines from the configuration, gssproxy does not run:
>
> Jun 24 14:50:32 client gssproxy[596]: [2021/06/24 18:50:32]: Debug Enabled (level: 2)
> Jun 24 14:50:32 client gssproxy[596]: [2021/06/24 18:50:32]: Config file(s) not found!
> Jun 24 14:50:32 client gssproxy: Option 'euid' is missing from [service/nfs-client].
> Jun 24 14:50:32 client gssproxy: Error reading configuration 22: Invalid argument
> Jun 24 14:50:32 client systemd[1]: gssproxy.service: Control process exited, code=exited, status=1/FAILURE
> Jun 24 14:50:32 client systemd[1]: gssproxy.service: Failed with result 'exit-code'.
>
> This is how I test that it's working (when gssproxy is running):
>
> # su -c 'ls /home/jbazik' jbazik
>
> So, you are right, I'm not interested in having root impersonate users. I just
> want users to get keytab-authenticated.
>
> John
>
> On 6/24/21 2:21 PM, Simo Sorce wrote:
>> So think more about your config: given what you want to achieve, you
>> should probably remove euid = 0 and trusted = yes.
>>
>> rpc.gssd changes the uid used for the process, and this tells gss-proxy
>> what user (and hence what keytab) to use. However, euid = 0 forces it
>> to accept only connections from root, and trusted says root can
>> impersonate anyone, but you do not want that.
>>
>> I do not know if this will help the actual problem, but it is worth
>> getting everything cleared up so we do not chase ghosts.
>>
>> Simo.
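For reference, a sketch of the [service/nfs-client] section with those two lines dropped, as suggested above. This is untested; note that gssproxy.conf(5) documents euid as a mandatory parameter for a service section, which matches the startup failure John reports elsewhere in this thread.

```ini
# /etc/gssproxy/gssproxy.conf -- sketch of the suggested change (untested);
# gssproxy.conf(5) marks euid as mandatory, so gssproxy may refuse to
# start with this section as written
[service/nfs-client]
  mechs = krb5
  cred_store = keytab:/etc/krb5.keytab
  cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
  cred_store = client_keytab:/var/local/keytabs/%u.keytab
  cred_usage = initiate
  allow_any_uid = yes
```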
>>
>> On Thu, 2021-06-24 at 13:58 -0400, John Bazik wrote:
>>> So, no, there is no 0.keytab, and there is no AD account for root, so it
>>> would make no sense to have one.
>>>
>>> We normally rely on root using machine credentials to do mounts. I don't
>>> understand why that stops working when gssproxy is running.
>>>
>>> John
>>>
>>> On 6/24/21 12:41 PM, Simo Sorce wrote:
>>>>
>>>> Do you have a keytab named /var/local/keytabs/0.keytab ?
>>>>
>>>> It looks like gss-proxy attempts to acquire creds but uses
>>>> host/client.zz.example.com(a)ZZ.EXAMPLE.COM to try to obtain a TGT, but AD
>>>> KDCs are picky and do not allow the SPN to be used as the initiator; they
>>>> want to see a request from client$(a)AD.EXAMPLE.COM instead.
>>>>
>>>> So gss-proxy returns an error, and then rpc.gssd falls back and tries to
>>>> directly obtain a credential and succeeds (i.e., the creds are not
>>>> obtained through gss-proxy in this case).
>>>>
>>>> I do not know whether this is actually a problem, because you are
>>>> not trying to use impersonation; you are trying to use actual keytabs
>>>> for users. So you should try to walk into a mount point as a user and post
>>>> the gss-proxy/rpc.gssd errors when that happens.
>>>>
>>>> Root is probably squashed and is generally not a good user for
>>>> debugging, as rpc.gssd falls back to using machine credentials for
>>>> root.
>>>>
>>>> Simo.
>>>>
>>>>
>>>> On Thu, 2021-06-24 at 12:09 -0400, John Bazik wrote:
>>>>> Sure, here's the klist output:
>>>>>
>>>>> Ticket cache: FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM
>>>>> Default principal: CLIENT$(a)AD.EXAMPLE.COM
>>>>>
>>>>> Valid starting       Expires              Service principal
>>>>> 06/24/2021 11:53:49  06/24/2021 21:53:49  krbtgt/AD.EXAMPLE.COM(a)AD.EXAMPLE.COM
>>>>>         renew until 07/01/2021 11:53:49
>>>>> 06/24/2021 11:53:49  06/24/2021 21:53:49  nfs/nfs.example.com(a)AD.EXAMPLE.COM
>>>>>         renew until 07/01/2021 11:53:49
>>>>>
>>>>> And here is larger snippet from syslog with gssproxy debug = 2:
>>>>>
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: #012handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 ' (nfs/clnt2fe)
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: krb5_use_machine_creds: uid 0 tgtname (null)
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Full hostname for 'nfs.example.com' is 'nfs.example.com'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Full hostname for 'client.zz.example.com' is 'client.zz.example.com'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: No key table entry found for client$(a)AD.EXAMPLE.COM while getting keytab entry for 'client$(a)AD.EXAMPLE.COM'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Success getting keytab entry for 'CLIENT$(a)AD.EXAMPLE.COM'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624586029
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624586029
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Connection matched service nfs-client
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "nfs-client", euid: 0,socket: (null)
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_ARG_ACQUIRE_CRED( call_ctx: { "" [ ] } input_cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 } ] [ ....p..w.z....o.... ] 0 } add_cred: 0 desired_name: <Null> time_req: 4294967295 desired_mechs: { { 1 2 840 113554 1 2 2 } } cred_usage: INITIATE initiator_time_req: 0 acceptor_time_req: 0 )
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_RES_ACQUIRE_CRED( status: { 0 { 1 2 840 113554 1 2 2 } 0 "" "" [ ] } output_cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 } ] [ ....p..w.z....o.... ] 0 } )
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: creating tcp client for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: DEBUG: port already set to 2049
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: creating context with server nfs(a)nfs.example.com
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Connection matched service nfs-client
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: gp_rpc_execute: executing 8 (GSSX_INIT_SEC_CONTEXT) for service "nfs-client", euid: 0,socket: (null)
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_ARG_INIT_SEC_CONTEXT( call_ctx: { "" [ ] } context_handle: <Null> cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 [ { [ krb5.set.allowed... ] [ ................... ] } ] } ] [ ....p..w.z....o.... ] 0 } target_name: "nfs(a)nfs.example.com" mech_type: { 1 2 840 113554 1 2 2 } req_flags: 2 time_req: 0 input_cb: <Null> input_token: <Null> [ { [ sync.modified.cr... ] [ 64656661756c740 ] } ] )
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Credentials allowed by configuration
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_RES_INIT_SEC_CONTEXT( status: { 851968 { 1 2 840 113554 1 2 2 } 2529638972 "Unspecified GSS failure. Minor code may provide more information" "KDC returned error string: FINDING_SERVER_KEY" [ ] } context_handle: <Null> output_token: <Null> )
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: WARNING: Failed to create krb5 context for user with uid 0 for server nfs(a)nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: WARNING: Failed to create machine krb5 context with cred cache FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: WARNING: Machine cache prematurely expired or corrupted trying to recreate cache for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Full hostname for 'nfs.example.com' is 'nfs.example.com'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Full hostname for 'client.zz.example.com' is 'client.zz.example.com'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: No key table entry found for client$(a)AD.EXAMPLE.COM while getting keytab entry for 'client$(a)AD.EXAMPLE.COM'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: Success getting keytab entry for 'CLIENT$(a)AD.EXAMPLE.COM'
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624586029
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624586029
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Connection matched service nfs-client
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "nfs-client", euid: 0,socket: (null)
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_ARG_ACQUIRE_CRED( call_ctx: { "" [ ] } input_cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 } ] [ ....p..w.z....o.... ] 0 } add_cred: 0 desired_name: <Null> time_req: 4294967295 desired_mechs: { { 1 2 840 113554 1 2 2 } } cred_usage: INITIATE initiator_time_req: 0 acceptor_time_req: 0 )
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_RES_ACQUIRE_CRED( status: { 0 { 1 2 840 113554 1 2 2 } 0 "" "" [ ] } output_cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 } ] [ ....p..w.z....o.... ] 0 } )
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: creating tcp client for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: DEBUG: port already set to 2049
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: creating context with server nfs(a)nfs.example.com
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Connection matched service nfs-client
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: gp_rpc_execute: executing 8 (GSSX_INIT_SEC_CONTEXT) for service "nfs-client", euid: 0,socket: (null)
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_ARG_INIT_SEC_CONTEXT( call_ctx: { "" [ ] } context_handle: <Null> cred_handle: { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" [ { "host/client.zz.example.com(a)ZZ.EXAMPLE.COM" { 1 2 840 113554 1 2 2 } INITIATE 36000 0 [ { [ krb5.set.allowed... ] [ ................... ] } ] } ] [ ....p..w.z....o.... ] 0 } target_name: "nfs(a)nfs.example.com" mech_type: { 1 2 840 113554 1 2 2 } req_flags: 2 time_req: 0 input_cb: <Null> input_token: <Null> [ { [ sync.modified.cr... ] [ 64656661756c740 ] } ] )
>>>>> Jun 24 11:54:08 client gssproxy[32163]: [CID 9][2021/06/24 15:54:08]: Credentials allowed by configuration
>>>>> Jun 24 11:54:08 client gssproxy[32163]: GSSX_RES_INIT_SEC_CONTEXT( status: { 851968 { 1 2 840 113554 1 2 2 } 2529638972 "Unspecified GSS failure. Minor code may provide more information" "KDC returned error string: FINDING_SERVER_KEY" [ ] } context_handle: <Null> output_token: <Null> )
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: WARNING: Failed to create krb5 context for user with uid 0 for server nfs(a)nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: WARNING: Failed to create machine krb5 context with cred cache FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: ERROR: Failed to create machine krb5 context with any credentials cache for server nfs.example.com
>>>>> Jun 24 11:54:08 client rpc.gssd[6512]: doing error downcall
>>>>>
>>>>> John
>>>>>
>>>>> On 6/24/21 10:19 AM, Simo Sorce wrote:
>>>>>> Ok, two points:
>>>>>> can you raise the debug level of gssproxy and see what it prints?
>>>>>>
>>>>>> Also, can you klist the contents of /tmp/krb5ccmachine_AD.EXAMPLE.COM ?
>>>>>>
>>>>>> Thanks,
>>>>>> Simo.
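For anyone following along, gssproxy's debug level can be raised in the global section of its config file (a sketch; the option name is per gssproxy.conf(5), and the output shown later in this thread corresponds to level 2):

```ini
# /etc/gssproxy/gssproxy.conf -- global section sketch
[gssproxy]
  debug_level = 2
```

Restart the service afterwards (e.g. `systemctl restart gssproxy`), and the machine cache itself can be inspected with `klist FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM`.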
>>>>>>
>>>>>> On Wed, 2021-06-23 at 23:46 -0400, John Bazik wrote:
>>>>>>> I've recently switched from using k5start to gssproxy to allow my users
>>>>>>> to access NFSv4 mounts with sec=krb5, using keytabs I manage for them.
>>>>>>> I have just one service configured in gssproxy:
>>>>>>>
>>>>>>> [service/nfs-client]
>>>>>>> mechs = krb5
>>>>>>> cred_store = keytab:/etc/krb5.keytab
>>>>>>> cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
>>>>>>> cred_store = client_keytab:/var/local/keytabs/%u.keytab
>>>>>>> cred_usage = initiate
>>>>>>> allow_any_uid = yes
>>>>>>> trusted = yes
>>>>>>> euid = 0
>>>>>>>
>>>>>>> I thought everything was working great, but now I find that I can't
>>>>>>> mount remote filesystems when gssproxy is running. If I stop gssproxy,
>>>>>>> mount works. If I change sec=krb5 to sec=sys, mount works. It seems
>>>>>>> clear that gssproxy is preventing mount from working. When I run
>>>>>>> mount -a, I get errors like this:
>>>>>>>
>>>>>>> mount.nfs: access denied by server while mounting [...]
>>>>>>>
>>>>>>> When I add -vvv to rpc.gssd, this is what I see in syslog (anonymized):
>>>>>>>
>>>>>>> rpc.gssd[6512]: WARNING: Machine cache prematurely expired or corrupted trying to recreate cache for server nfs.example.com
>>>>>>> rpc.gssd[6512]: Full hostname for 'nfs.example.com' is 'nfs.example.com'
>>>>>>> rpc.gssd[6512]: Full hostname for 'client.zz.example.com' is 'client.zz.example.com'
>>>>>>> rpc.gssd[6512]: No key table entry found for client$(a)AD.EXAMPLE.COM while getting keytab entry for 'client$(a)AD.EXAMPLE.COM'
>>>>>>> rpc.gssd[6512]: Success getting keytab entry for 'CLIENT$(a)AD.EXAMPLE.COM'
>>>>>>> rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624541464
>>>>>>> rpc.gssd[6512]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM' are good until 1624541464
>>>>>>> rpc.gssd[6512]: creating tcp client for server nfs.example.com
>>>>>>> rpc.gssd[6512]: DEBUG: port already set to 2049
>>>>>>> rpc.gssd[6512]: creating context with server nfs(a)nfs.example.com
>>>>>>> rpc.gssd[6512]: WARNING: Failed to create krb5 context for user with uid 0 for server nfs(a)nfs.example.com
>>>>>>> rpc.gssd[6512]: WARNING: Failed to create machine krb5 context with cred cache FILE:/tmp/krb5ccmachine_AD.EXAMPLE.COM for server nfs.example.com
>>>>>>> rpc.gssd[6512]: ERROR: Failed to create machine krb5 context with any credentials cache for server nfs.example.com
>>>>>>> rpc.gssd[6512]: doing error downcall
>>>>>>>
>>>>>>> I'm running version 0.8.0, as distributed with Debian Buster (I worked
>>>>>>> around the systemd ordering-cycle bug in that version by using the
>>>>>>> upstream unit file). The fileserver is run by a different group, and
>>>>>>> the Kerberos KDC is AD.
>>>>>>>
>>>>>>> Googling for answers, I found others describing similar problems, but
>>>>>>> no solutions that make sense to me. Help!
>>>>>>>
>>>>>>> John
>>>>>>> _______________________________________________
>>>>>>> gss-proxy mailing list -- gss-proxy(a)lists.fedorahosted.org
>>>>>>> To unsubscribe send an email to gss-proxy-leave(a)lists.fedorahosted.org
>>>>>>> Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
>>>>>>> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
>>>>>>> List Archives: https://lists.fedorahosted.org/archives/list/gss-proxy@lists.fedorahosted...
>>>>>>> Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure
>>>>>>
>>>>
>>