Hi Wouter,
On 11 August 2017 at 15:14, <wouter.hummelink@kpn.com> wrote:
I've used shared keytabs before to create a load-balanced squid
instance.
This way you don't even need to use sticky balancing, since all nodes that
have the key material will be able to decrypt TGSs for the shared service.
Be sure to use the -r option with ipa-getkeytab, otherwise the secret will
be reset. Alternatively, you can just copy the keytab entries.
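A minimal sketch of both approaches (hedged: -r is only present in newer
ipa-getkeytab builds, the ktutil steps assume the copied keytab is moved over
a secure channel, and the hostnames are just the ones from this thread):

```
# Retrieve the existing key without resetting it (-r requires a
# sufficiently new ipa-getkeytab; older builds reject the flag).
ipa-getkeytab -r -s lithium.eng.example.com \
    -p lsf/digitalmob.eng.example.com \
    -k /etc/krb5.keytab

# Alternative: copy the existing entries from a node that already
# holds the key, using ktutil:
#   ktutil:  rkt /etc/krb5.keytab    # read the keytab in
#   ktutil:  delent <slot>           # drop entries you don't want to share
#   ktutil:  wkt /tmp/lsf.keytab     # write the remaining entries out
# then transfer /tmp/lsf.keytab securely and merge it on the new node.
```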
Thank you. I got to test it this afternoon and have some follow-up
questions/comments. For one, I am not able to use the -r option/flag.
Very odd, it doesn't even show up in the man page. This is the error I
am getting:
[root@plutonium ~]# ipa-getkeytab -r -s lithium.eng.example.com -p lsf/digitalmob.eng.example.com -k /etc/krb5.keytab
Usage: ipa-getkeytab [-qP?] [-q|--quiet] [-s|--server Server Name]
[-p|--principal Kerberos Service Principal Name] [-k|--keytab
Keytab File Name]
[-e|--enctypes Comma separated encryption types list]
[--permitted-enctypes] [-P|--password]
[-D|--binddn DN to bind as if not using kerberos] [-w|--bindpw
password to use if not using kerberos]
[-?|--help] [--usage]
[root@plutonium ~]#
I ended up running it without the -r flag. However, it didn't seem to
reset the secret; at least I see all the service principals I
expected. Do you know how I can verify that the secret hasn't been
reset?
ipa-getkeytab -s lithium.eng.example.com -p lsf/digitalmob.eng.example.com -k /etc/krb5.keytab
[root@plutonium ~]# klist -ke /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   1 host/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   1 host/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes128-cts-hmac-sha1-96)
   1 host/plutonium.eng.example.com@ENG.EXAMPLE.COM (des3-cbc-sha1)
   1 host/plutonium.eng.example.com@ENG.EXAMPLE.COM (arcfour-hmac)
   1 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   1 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes128-cts-hmac-sha1-96)
   1 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (des3-cbc-sha1)
   1 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (arcfour-hmac)
   2 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   2 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes128-cts-hmac-sha1-96)
   2 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (des3-cbc-sha1)
   2 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (arcfour-hmac)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (aes128-cts-hmac-sha1-96)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (des3-cbc-sha1)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (arcfour-hmac)
[root@plutonium ~]#
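One way to check: the KDC bumps the key version number (KVNO) whenever a
secret is regenerated, so if retrieving the keytab on another node had reset
the key, the lsf/ entries would show a higher KVNO afterwards. A small sketch
that pulls the highest KVNO per principal out of `klist -ke`-style output
(fed here from a captured sample so it runs anywhere; in real use, pipe
`klist -ke /etc/krb5.keytab` in instead):

```shell
# Captured sample of `klist -ke` output; replace the sample with a
# live pipe from `klist -ke /etc/krb5.keytab` in real use.
klist_sample='KVNO Principal
---- ----------------------------------------------------------------
   2 nfs/plutonium.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   3 lsf/digitalmob.eng.example.com@ENG.EXAMPLE.COM (aes128-cts-hmac-sha1-96)'

# Skip the two header lines, keep the highest KVNO seen for each
# principal, and print one "KVNO principal" line per principal.
printf '%s\n' "$klist_sample" |
  awk 'NR > 2 { if ($1 + 0 > max[$2] + 0) max[$2] = $1 }
       END { for (p in max) print max[p], p }' |
  sort
```

If every node reports the same KVNO for lsf/digitalmob.eng.example.com and it
doesn't jump after a retrieval, the shared secret wasn't reset. The
authoritative check is simply that
`kinit -kt /etc/krb5.keytab lsf/digitalmob.eng.example.com` succeeds on each
node, proving the stored key still matches the KDC.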
Lastly, it looks like the clients will be aware of all the servers,
just not the load balancer. This doesn't seem like a great idea to me.
For example, if one server is down, the load balancer would avoid
sending traffic to it, but since the load balancer's FQDN now resolves to
all the IP addresses involved, the clients can still connect to a system
that is currently down. How did you get around that?
[root@carbon ~]# dig digitalmob.eng.example.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.47.rc1.el6 <<>> digitalmob.eng.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45959
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;digitalmob.eng.example.com. IN A
;; ANSWER SECTION:
digitalmob.eng.example.com. 86400 IN A 192.168.20.42
digitalmob.eng.example.com. 86400 IN A 192.168.20.14
digitalmob.eng.example.com. 86400 IN A 192.168.20.65
;; AUTHORITY SECTION:
eng.example.com. 86400 IN NS lithium.eng.example.com.
eng.example.com. 86400 IN NS hydrogen.eng.example.com.
;; ADDITIONAL SECTION:
hydrogen.eng.example.com. 1200 IN A 192.168.20.1
lithium.eng.example.com. 1200 IN A 192.168.20.3
;; Query time: 1 msec
;; SERVER: 192.168.20.1#53(192.168.20.1)
;; WHEN: Fri Aug 11 18:26:22 2017
;; MSG SIZE rcvd: 172
[root@carbon ~]#
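For what it's worth, the usual way around this is to stop publishing the
backend addresses entirely: give digitalmob.eng.example.com a single A record
pointing at a virtual IP, and let a failover layer own that IP and
health-check the backends. Kerberos keeps working because every backend holds
the shared lsf/digitalmob key. A hedged keepalived fragment as one way to do
this (the VIP, interface name, and router ID are assumptions, not from this
thread):

```
vrrp_instance digitalmob {
    state MASTER
    interface eth0           # assumed interface name
    virtual_router_id 51     # arbitrary, must match across nodes
    priority 100
    virtual_ipaddress {
        192.168.20.100       # hypothetical VIP for digitalmob.eng.example.com
    }
}
```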
Thanks again for your help. Appreciate it.
Regards,
William