F21 System Wide Change: Default Local DNS Resolver

Simo Sorce simo at redhat.com
Wed Apr 30 13:29:28 UTC 2014


On Wed, 2014-04-30 at 08:49 +0200, Alexander Larsson wrote:
> On tis, 2014-04-29 at 11:24 -0400, Simo Sorce wrote:
> > On Tue, 2014-04-29 at 17:15 +0200, Alexander Larsson wrote:
> > > On tis, 2014-04-29 at 14:15 +0200, Jaroslav Reznik wrote:
> > > > = Proposed System Wide Change:  Default Local DNS Resolver = 
> > > > https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver
> > > > 
> > > > Change owner(s): P J P <pjp at fedoraproject.org>, Pavel Šimerda 
> > > > <pavlix at pavlix.net>,	 Tomas Hozza <thozza at redhat.com>
> > > > 
> > > > To install a local DNS resolver trusted for the DNSSEC validation running on 
> > > > 127.0.0.1:53. This must be the only name server entry in /etc/resolv.conf.
> > > 
> > > This is gonna conflict a bit with docker, and other users of network
> > > namespaces, like systemd-nspawn. When docker runs, it picks up the
> > > current /etc/resolv.conf and puts it in the container, but the container
> > > itself runs in a network namespace, so it gets its own loopback device.
> > > This will mean 127.0.0.1:53 points to the container itself, not the
> > > host, so dns resolving in the container will not work.
> > > 
> > > Not sure how to fix something like that though...
> > 
> > Any way we can redirect the connection to the host ?
> > 
> > On the host we cannot listen on 0.0.0.0 so we cannot make unbound
> > available through normal routing on a different interface.
> > 
> > However we can perhaps make it listen on a special virtual interface
> > dedicated to let containers talk to other processes on the host maybe ?
> > (could even be other privileged containers). There is a question of what
> > addresses to use though ...
> 
> I don't see any nice way to make this "just work" in docker (i.e.
> without changes to docker). Docker as well as the host sets up
> 127.0.0.1/8 for the loopback device, so any 127.0.0.* will get routed to
> the local loopback. 

Yep, seen that.

> The only ways to have an IP available to both the host and the container
> are to either have an IP not in the 127.0.0.x range (thus this will be
> forwarded to the default gw, i.e. the host) or to set up some kind of
> forwarding of a port in 127.0.0.x (i.e. use iptables). The latter needs
> to be done by docker, as it's what sets up the network interfaces for the
> container.

I thought as much; would it be really bad to have docker forward via
iptables? I guess the question really is, *how* do you do that?
The local resolver listens on 127.0.0.1:53 *only* on the host, so it is
not like we can use iptables to forward to a routable address. Although
clearly we are on the same machine ... but I guess iptables is
namespaced, so the one configured in the container has no way to see the
host's loopback?
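For the record, the kind of redirect I have in mind would be something
like the following on the host. This is only a sketch: the `docker0`
bridge name and the 172.17.42.1 bridge address are assumptions about
docker's default setup, and the whole thing hinges on the kernel's
route_localnet knob (available since 3.6).

```shell
# Sketch: DNAT container DNS traffic arriving on the docker bridge to
# the host's loopback resolver. By default the kernel drops packets
# destined for 127.0.0.0/8 that arrive on a non-loopback interface,
# so route_localnet must be enabled on the bridge first.
sysctl -w net.ipv4.conf.docker0.route_localnet=1

# Redirect DNS queries (UDP and TCP) from containers to 127.0.0.1:53.
iptables -t nat -A PREROUTING -i docker0 -p udp --dport 53 \
         -j DNAT --to-destination 127.0.0.1:53
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 53 \
         -j DNAT --to-destination 127.0.0.1:53
```

These rules live in the host's namespace, so they sidestep the problem
of the container's own iptables not being able to see the host loopback.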

> So, without changes to docker the option seems to be to set up another
> local interface with an address range different from 127.0.0.1 and have
> the DNS server listen on that.

And here come the problems (actually two):
1. The local caching resolver is meant to listen exclusively on
127.0.0.1:53 in the normal case, although I guess docker could be
allowed to poke at the configuration.
2. What address are you going to steal? Just pull one out of the hat
like libvirt does for the default VM network, and take possession
of another address in 192.168.X.0/24?
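To illustrate point 1, unbound's listening addresses are fixed in its
configuration, so docker would have to be allowed to edit something like
the following (the 169.254.53.x addresses here are purely hypothetical,
a sketch of what an extra container-facing interface might look like):

```
server:
    # the normal case: listen only on the loopback
    interface: 127.0.0.1
    # hypothetical extra interface for containers to reach
    interface: 169.254.53.1
    # refuse queries from anywhere but localhost and the container range
    access-control: 127.0.0.0/8 allow
    access-control: 169.254.53.0/24 allow
    access-control: 0.0.0.0/0 refuse
```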

Sounds like we should try to define some "standard" network address for
things like this instead. Would it make sense to use the IPv4 link-local
network, which Microsoft popularized with APIPA and which is now
reserved (RFC 3927) for exactly this kind of local-only use?
That reservation covers 169.254.0.0/16, a range where clients can
auto-assign themselves an IP address and try to communicate with
neighbours on the same link. I wonder if we should perhaps hijack a
subnet of that network, so we can avoid stealing another /24 from
192.168?
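Concretely, claiming such a subnet on the host could look like this
(again just a sketch; the `cdns0` interface name and the 169.254.53.0/24
subnet are picked out of thin air, not anything standardized):

```shell
# Create a dummy interface carrying a link-local address that a local
# resolver could bind to; 169.254.53.0/24 is an arbitrary pick from
# the 169.254.0.0/16 link-local range.
ip link add cdns0 type dummy
ip addr add 169.254.53.1/24 dev cdns0
ip link set cdns0 up
```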

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


