On 04.11.22 15:32, Brendan Kearney via FreeIPA-users wrote:
If you don't own the DNS service and records, then I am willing to bet
you don't own the load balancers and their configs, either. So the
hurdle to overcome, engaging another team/department when needing a
change, probably still exists.
DNS is actually managed by a different department, but luckily we are
allowed to modify the records.
Depending on the autonomy you are given over your servers, you may have
the ability to stop advertising a route via the routing daemon in order
to take the anycasted KDC service "out of the mix".
If you are able to run Quagga, FRR, BIRD, or some other dynamic routing
daemon, it can inject the route to your KDC when it is operational, and
remove the route for maintenance, etc. This puts you in control of your
own destiny and makes the change control process entirely internal to
your team/department. Plus, it allows you to stand up a new KDC at will,
with little more than some collaboration with the team that manages the
routing for your organization.
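The inject/withdraw idea above could be sketched with FRR's BGP daemon; the anycast address, AS numbers, and neighbor IP below are invented placeholders, and a real deployment would use whatever peering your network team sets up:

```
# /etc/frr/frr.conf -- hypothetical sketch, not a tested config.
# 192.0.2.88/32 is an example anycast KDC address, also bound to a
# loopback alias on every KDC that should receive traffic.
router bgp 64512
 neighbor 198.51.100.1 remote-as 64500
 address-family ipv4 unicast
  network 192.0.2.88/32
 exit-address-family
!
# For maintenance, withdraw the announcement from vtysh:
#   vtysh -c 'conf t' -c 'router bgp 64512' \
#         -c 'address-family ipv4 unicast' \
#         -c 'no network 192.0.2.88/32'
```

Once the route is withdrawn, traffic reconverges to the next-closest KDC that is still announcing the same address, with no DNS or client changes.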
I used to manage the caching forward proxies used for internet access by
my org, and was kicking around the idea of using anycast for the VIP on
the load balancer, with normal load balancing behind the box for the
proxies. The specifics are a bit different, but the theory holds for KDC
just as for HTTP, and you could likely do the same depending on how
"enterprise grade" your load balancers are.
To me, the idea that names and IPs would never have to change for the
KDC services is the selling point for anycast. I brought up the idea of
anycast when the org I work for was moving data centers and we didn't
want to disrupt DNS services or require thousands of servers to be
updated. For the proxies, it would have brought high availability
across multiple data centers, and failure recovery would happen as fast
as the network could reconverge. Given that apps usually use proxy host
and proxy port configs, having the same name point to physically,
geographically disparate footprints, with inherent prioritization by the
underlying network, would have saved a lot of mean-time-to-repair (MTTR)
minutes.
I definitely should take a closer look at anycast, as my knowledge is at
a very basic level.
In Simo's blog, the idea of using only one SPN is what I have gone with
in nearly all my configs, but I have added the ability to use alternate
ports in order to access a single individual host running the service. I
still have the single name/IP on the VIP, but alternate ports dictate
which backend server you hit. I did this for the proxies at work, and
do it with MariaDB, Apache, OpenLDAP, and Squid at home. They are all
kerberized services, and I can talk to the pool of servers for scaling
and performance, or talk to an individual daemon for diagnostics or
other specific intentions, all the while authenticating properly with
Kerberos tickets.
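A rough sketch of the one-VIP-plus-alternate-ports layout, using HAProxy syntax as an example (the addresses, ports, and backend names are invented for illustration, not taken from any real setup):

```
# hypothetical HAProxy sketch: clients resolve one name to the VIP;
# port 389 load-balances across the pool, port 10391 pins one host
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend ldap_pool
    bind 192.0.2.10:389
    default_backend ldap_all

frontend ldap_host1
    bind 192.0.2.10:10391
    default_backend ldap_one

backend ldap_all
    balance roundrobin
    server ldap1 10.0.0.11:389 check
    server ldap2 10.0.0.12:389 check

backend ldap_one
    server ldap1 10.0.0.11:389 check
```

Because clients always use the single VIP name, the single SPN keeps working whether they hit the pool port or a host-specific diagnostic port.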
I am glad to see that someone already does what I was thinking about
only in theory. As we do have an IPA test instance, I will definitely
try that out.
Cheers,
Ronald