default local DNS caching name server

Simo Sorce simo at redhat.com
Sun Apr 13 17:45:05 UTC 2014


On Sun, 2014-04-13 at 16:39 +0930, William Brown wrote:
> On Sun, 2014-04-13 at 02:53 -0400, Simo Sorce wrote:
> > On Sun, 2014-04-13 at 16:10 +0930, William Brown wrote:
> > 
> > > A system wide resolver I am not opposed to. I am against a system wide
> > > *caching* resolver. 
> > 
> > > In this case, a cache *is* helpful, as is DNSSEC. But for the other 6, a
> > > cache is a severe detriment. 
> > 
> > About the above 2, can you explain *why*?
> > A bunch of people here feel that it would be a great improvement; you
> > keep saying it is doomsday, yet I haven't seen a concise explanation of
> > why that would be (maybe I overlooked it; apologies if so).
> > 
> > 
> > > I disable the DNS cache in firefox with developer tools. 
> > 
> > So you will be able to do the same by setting 1 configuration option in
> > unbound, or you could disable the resolver entirely.
> > 
> > Can you tell why *everybody* should have the cache disabled by default ?
> > 
> > > Additionally, a short TTL is good, for this situation, but it can't fix
> > > everything. 
> > 
> > Paul mentioned the single configuration option needed to make your
> > resolver tweak the TTL locally; what else do you need? And again, why
> > should your preference be the default? What compelling arguments can
> > you make?
> > 
> > Simo.
> 
> Internal and external zone views in a business. These records may
> differ, and so would need flushing between network interface state
> changes.

As mentioned, unbound does flush the specific domains, so I will write
this one off. We can discuss the fine-tuning for sure, but it is not a
general concern.
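
For reference, such a flush can also be triggered by hand, or from a
NetworkManager dispatcher hook that fires on interface changes; a rough
sketch (the script path and zone name are just examples):

    # e.g. /etc/NetworkManager/dispatcher.d/50-flush-dns
    unbound-control flush_zone corp.example.com
    unbound-control flush_requestlist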

> Additionally, local DNS caches may cause issues and delay diagnosis.

As with just about everything, I will write this one off as well: I've
seen the lack of caching cause the same kind of difficult-to-diagnose
issues (two collaborating applications getting different IPs for the
same service, with inconsistencies between the two endpoints causing
odd results).

So I do not think this is a generally valid concern, in the sense that
the benefits outweigh the potential issues IMO.

> It's also not *needed* in a lot of setups. The business cases were to
> show that these caching layers already exist on these networks. It would
> be duplication of effort.

Not really: it would reduce unnecessary traffic, and give you a bit of
server affinity for free, in general a good thing for most cases.
Whether it is *needed* or not depends on the situation; however, my
experience over many years is that a cache brings more benefits than
not. I have had a lot more issues with flaky networks on machines
without a cache than on those with one. Browsing in particular becomes
erratic on networks with high packet loss or a flaky DNS server when
you do not have a cache, as UDP packets are easily lost while TCP
connections can retry and recover more quickly, so DNS is what causes
the most issues and delays for the browser.

> In businesses, it's also commonplace to have a low-ish TTL (say, 5
> minutes), and when a system is migrated, they swap the A/AAAA records
> to the new system. The DNS servers on the network are updated, but the
> workstation has the old record cached.

If the TTL is 5 minutes, the cache will expire in 5 minutes too.

> Without a local cache, they would query the local server again, which is relatively cheap.

And they will do the same with a local cache; local caches *will*
respect TTLs, of course!
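
This is easy to verify against the local resolver with dig: the cached
answer's TTL counts down, and the record is fetched again from upstream
once it hits zero. For example (names and values are made up):

    $ dig +noall +answer example.com @127.0.0.1
    example.com.    300    IN    A    192.0.2.10
    $ sleep 60; dig +noall +answer example.com @127.0.0.1
    example.com.    240    IN    A    192.0.2.10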

>  I.e., it keeps
> users happier even if they only needed to wait 5 minutes. Some people
> like things to be instant.

And some people want unicorns; nobody prevents those people from
disabling this default. My personal experience is that these are rare
events and not that important, and they can be properly handled by
admins lowering the TTLs in advance of a planned outage, bringing them
down to very short timeouts or even 0.
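
For example (a sketch in BIND-style zone file syntax, with made-up
names and addresses): shorten the TTL well before the migration window,
then swap the record at migration time:

    ; well before the migration: shorten the TTL
    www.example.com.    60    IN    A    192.0.2.10
    ; at migration time: swap the address (raise the TTL again later)
    www.example.com.    60    IN    A    192.0.2.20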

> It's certainly not the end of the world, but it's adding more
> complexity, and a potential source of issues. 

And also a source of benefits; as always it is a matter of balance, and
with the advent of DNSSEC I personally think the balance has definitely
tipped in favor of a *default* local resolver cache. (Note the
*default*: it is not bolted on, you can easily replace it like you do
other services; for example, the first thing I do on my machines is
throw sendmail out the window and bring in postfix. It's not a big
deal, and kickstart makes it trivial too.)

> There is, additionally, some confusion: it sounds like Paul wants to
> add the resolver to only forward queries for the local domain name to
> the local name servers. But it is impossible to discover all the
> local domain names that are available.

I think the default will be to forward everything to the DHCP-provided
nameservers, and to forward specific domains to specific servers for
resources like VPNs (unless otherwise configured; there are already
knobs for that).
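
Roughly, in unbound.conf terms (the addresses and the VPN domain here
are just illustrative):

    forward-zone:
        name: "."
        forward-addr: 192.0.2.53        # resolver learned from DHCP

    forward-zone:
        name: "corp.example.com"        # domain pushed by the VPN
        forward-addr: 10.8.0.1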

> tl;dr - DNSSEC I believe is a good thing (even if it's rare). I don't
> think there are "benefits" to caching except in a small number of cases
> where existing DNS caching mechanisms aren't in place. We are adding a
> layer of caching complexity that doesn't solve a real problem.

I guess you do not travel much with a laptop on unreliable networks;
there, a local cache makes a big difference. I think it is a great
default for workstations, and it would be debatable for servers if it
weren't for DNSSEC, which again makes it a good default for servers
too.

If you do not want machines on your network to cache much locally, just
have your local DNS servers serve TTLs of 0 or a very low value;
problem solved.
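
For example (a sketch with made-up records): set a very low default TTL
in the zone, and/or clamp cached entries on the clients with unbound's
cache-max-ttl option:

    ; in the zone file: cache nothing
    $TTL 0
    www    IN    A    192.0.2.10

    # or client-side, in unbound.conf under "server:":
    cache-max-ttl: 30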

HTH,
Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


