default local DNS caching name server

William Brown william at firstyear.id.au
Mon Apr 14 15:01:30 UTC 2014


> 
> >> unbound does not really care about transparent proxies on port 53. As
> >> long as they don't break DNS (and DNSSEC). If they redirect port 53 to
> >> some broken DNS server, unbound will try to work around it. If port 53
> >> is broken it will attempt DNS over port 80 of various fedoraproject DNS
> >> servers, or DNS over TLS on port 443.
> >
> > How do you setup DNS over TLS?
> 
> Unbound has this capability already built in. It is activated via
> unbound-control (currently by dnssec-triggerd, in the future by NM)
> using the keywords tcp-upstream or ssl-upstream.

I meant for, say, bind, but okay.
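
For anyone else following along, my understanding of the unbound side
is roughly the following. This is a sketch only: the upstream address
is a placeholder, and I haven't verified it is exactly what
dnssec-triggerd drives unbound with.

    # unbound.conf: force upstream queries over TLS; the @443 is
    # unbound's addr@port syntax for a non-53 port
    server:
        ssl-upstream: yes
    forward-zone:
        name: "."
        forward-addr: 192.0.2.1@443    # placeholder TLS upstream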


> 
> > It's not as much the case, which makes me happier, but I want to know
> > the conditions on which you decide a DNS server is "dodgy" or not.
> 
> For a detailed list you will have to check the source code. But it
> includes things like DNSSEC records, proper wildcard NSEC(3) records,
> CNAME support, EDNS0 support, packet sizes, etc. It also covers the
> known bugs in older versions of common DNS software, and cases the
> IETF actually experienced in the wild.

I.e., if I have an out-of-the-box bind9 setup with a few zones, or even
100s of zones, these cases should never be triggered. I would hate to
see the "dodgy DNS" check giving false positives on networks that are
actually sane ... such checks need to be conservative in their triggers
IMO.
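
For reference, these are the kinds of probes I imagine are involved; a
sketch only, since the authoritative list is in the dnssec-trigger
source as you say, and the forwarder address is a placeholder:

    # EDNS0 + DO bit: a capable forwarder should return RRSIG records
    dig @192.168.1.1 +dnssec fedoraproject.org A

    # proper NSEC(3) denial of existence for a non-existent name
    dig @192.168.1.1 +dnssec doesnotexist.fedoraproject.org A

    # larger packet sizes via the EDNS0 buffer size
    dig @192.168.1.1 +dnssec +bufsize=4096 org DNSKEY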

> 
> >>> * If a forwarder exists on the network, unbound uses it for all queries.
> >>
> >> Yes, but not for open wifi. Only for physical wire and secured wifi.
> >
> > Okay. Can this point be made clear on the proposal page? Also the
> > conditions for physical wire and secured wifi?
> 
> Yes, we can do that.

Thanks.
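
For concreteness, my understanding is that this amounts to something
like the following unbound-control calls from dnssec-triggerd; the
address is a placeholder and I haven't traced the exact code path:

    # DHCP-provided resolver passed the checks: forward "." to it
    unbound-control forward 192.168.1.1

    # checks failed, or open wifi: drop the forward and recurse
    # directly (or fall back to DNS over TCP/TLS as above)
    unbound-control forward off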

> > Okay, but let's combine these two points. My ISP mucks with the TTL
> > of some website from, say, 300 to 30000000. Unbound would respect
> > this up to that amount, or up to the TTL max (which is still 86400
> > IIRC). If you aren't flushing the cache between networks you could
> > end up with:
> >
> > * Suboptimal routes causing a poor user experience.
> > * Incorrect cached zone data moving between networks with different
> > DNS views of the world.
> 
> If we believe that artificial increase of TTL is a common manglement, we
> can have dnssec-trigger (or the NM integrated version of that) check for
> such mangling. I'm reluctant to try and solve every _imaginable_ problem
> out there. If your ISP's badness causes suboptimal routes, then that's
> not the end of the world, and you have your ISP to blame. One ISP
> shouldn't be responsible for every fedora user flushing caches all the
> time. Let's deal with this problem when we actually find it is a real
> world problem.

It actually is quite common with certain Australian ISPs ... especially
the "cheap" ones (you get what you pay for ...)

Even if we ignore the TTL mangling, the first issue of incorrect cached
zone data moving between networks is a real world issue IMO. As
previously mentioned: split view business networks. I believe you have
said this is solved by flushing the "." forwarder cache between
networks that are "secure".


> > * On an open (insecure) access point, unbound bypasses the local
> > forwarder, except for names listed in the single-valued attribute
> > "options domain-name" from DHCP
> 
> No, we cannot do that. As I said, a rogue hotspot could give the
> domain-name "corp.paypal.com" to fool me into thinking I'm connecting
> to my internal corporate network. We cannot automatically insert those
> forwards on open wifi, unless the user manually performs an override.

Okay, this is another point to make clear on the wiki. I thought this
was what you were saying was the case on open wifi.
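
If a user does want to override manually, I assume it would look
something like this; the zone name and address here are hypothetical:

    # unbound.conf (or at runtime: unbound-control forward_add,
    # with +i to mark the zone insecure if it isn't DNSSEC signed)
    forward-zone:
        name: "corp.example.com"    # internal split-view zone
        forward-addr: 10.0.0.53     # internal resolver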

> 
> > * On a secure network (Encrypted wifi, lan) unbound will use the
> > forwarders as provided by DHCP.
> 
> Provided they are functional (e.g. they don't break DNSSEC).

Again, can you detail the "functional" requirements on the wiki?


The reason I ask that these be documented is so that when other network
admins (like myself) come along, you have already had the argument and
provided the justification and detailed explanation of these "edge
cases".


> 
> > * Unbound will flush the cache between authenticated networks. (If I
> > read your last point correctly)
> 
> If we did a "." forward, yes.

Moved ...


> > 
> > > Ignoring the TTL change, lets just look at flushing between network
> > > state change. This would solve both the dot points listed. You only need
> > > to rebuild the cache on first network reconnect meaning:
> > 
> > "only rebuild"? You are asking everyone else to do hundreds of queries for
> > each time to join their 3G network. Remember, when validating, you don't
> > just have one record for a queried A record. Since you need to recurse
> > and do all the intermediate queries too because otherwise you don't have
> > the records to do full DNSSEC validation. It's not a reasonably thing to
> > flush the cache. We are working hard on ensuring the user _hits_ their
> > cache and gains speed up (including pre-fetching).  Waiting on various
> > roundtrips for DNS over 3G is going to cause a lot more delays than a
> > "suboptimal route". Your workaround will actually be detrimental to the
> > user experience.
> > 
> > Note, I'm trying to optimise that path too, see:
> > http://tools.ietf.org/html/draft-ietf-dnsop-edns-chain-query-00


These two statements really seem to contradict each other. On one hand
you say that, moving between secure networks, the "." forwarder gets
flushed. But then you say the whole point is that it isn't flushed!

On my 3G tether and at work, both would be secure wifi, so according to
this both flush (which, really, I like :) ). But according to what you
are saying they shouldn't do that, yet they do?
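
(As an aside, I assume the pre-fetching mentioned above is unbound's
prefetch option, which refreshes entries shortly before they expire.
That helps keep a warm cache warm, but does nothing for a cache that
was just flushed:

    server:
        # re-fetch cache entries when a query arrives in the last
        # 10% of their TTL, so popular names never go cold
        prefetch: yes
)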


Really, it seems like the only time the cache *won't* flush is when I
move from a secure wifi to an insecure wifi. What happens when I move
from the insecure wifi back? I would like to argue that, given not all
domains have DNSSEC yet, you can't "trust" the records from the
insecure wifi, so at the least, when the insecure wifi interface goes
down, you should flush the non-DNSSEC cached records.
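
As far as I can tell, unbound-control has no "flush only the records
that weren't DNSSEC validated" operation; the closest commands I know
of are:

    # drop everything at and below a name; "." empties the cache
    unbound-control flush_zone .

    # drop only records that *failed* validation, which is not the
    # same thing as unsigned records
    unbound-control flush_bogus

So the selective keep I argue for below may need new unbound support.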

Collecting all this, the current functional state seems to be:

Secure to secure network     -> Flush "." cache
Secure to insecure network   -> Keep cache
Insecure to insecure network -> Keep cache
Insecure to secure network   -> Keep cache

I think in a perfect world, assuming that insecure networks really are
insecure, shouldn't it be:

Secure to secure network     -> Flush "." cache
Secure to insecure network   -> Keep cache
Insecure to insecure network -> Keep DNSSEC cache only
Insecure to secure network   -> Keep DNSSEC cache only

But considering split-horizon private secure networks etc., shouldn't
it really be:

Secure to secure network     -> Keep DNSSEC cache only
Secure to insecure network   -> Keep DNSSEC cache only
Insecure to insecure network -> Keep DNSSEC cache only
Insecure to secure network   -> Keep DNSSEC cache only
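
Since "keep DNSSEC cache only" isn't expressible with today's
unbound-control (as above), a crude approximation would be to flush on
every transition from a NetworkManager dispatcher hook. An entirely
hypothetical script, path and all:

    #!/bin/sh
    # /etc/NetworkManager/dispatcher.d/90-flush-dns (hypothetical)
    # NM passes the interface as $1 and the action as $2.
    case "$2" in
        up|down)
            unbound-control flush_zone . >/dev/null 2>&1
            ;;
    esac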

The only records you can really guarantee to be the same in all network
views are the ones signed with DNSSEC. On the secure networks you may
have internal views, and on the insecure networks you can't trust
unsigned records. And IIRC there are DNSSEC split-view functions too,
which may not even let you cache the DNSSEC records anyway ...


It gets messy very quickly when you start talking about moving a cache
with specific, limited views of the DNS world between sites ;)

Sure, this won't affect "every person ever who uses fedora", but it
will really annoy the ones who are affected by it. Some people will
know how to diagnose it; others won't.

Sincerely,

-- 
William Brown <william at firstyear.id.au>