On 11/15/20 4:31 PM, Lennart Poettering wrote:
> On Sun, 15.11.20 10:18, Marius Schwarz (fedoradev(a)cloud-foo.de) wrote:
>> On 11.11.20 at 16:58, Lennart Poettering wrote:
>>> So if you configure 4 DNS servers then each will still get roughly
>>> 1/4th of your requests? That's still quite a lot of info.
>> the more you use, and i did, the better it protects against tracking by the
>> dns cache owners.
Use stubby. 1/4 is not enough; more resolvers are necessary. Or find a
trusted ISP and hide in its resolver crowd.
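For what it's worth, the 1/N arithmetic is easy to check with a toy
simulation (hypothetical code, nothing to do with resolved or stubby
internals): randomizing each lookup over four resolvers still hands each
of them roughly a quarter of the query history.

```python
import random
from collections import Counter

def distribute(queries, servers, rng):
    """Send each query to a uniformly random server; count per-server totals."""
    counts = Counter()
    for _ in range(queries):
        counts[rng.choice(servers)] += 1
    return counts

counts = distribute(10_000, ["A", "B", "C", "D"], random.Random(0))
# Each of the four resolvers still observes roughly 2500 of the 10000
# queries: randomization dilutes the tracking surface but does not
# remove it.
```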
>> How about putting this as a feature request in resolved?
> Please file an RFE issue on github:
> Implementing this does not come without drawbacks though: right now
> resolved tries hard to use the same server if at all possible, since
> we want to use newer DNS features if possible, but many DNS servers
> (wifi routers, yuck) tend to support them quite badly. This means
> resolved has an elaborate scheme to learn about the feature set of the
> DNS servers it contacts. And that can be slow, in particular on
> servers where we step-by-step have to downgrade to the most minimal of
> DNS protocols. This learning phase is run only when first contacting
> some server (and after some grace period).
Understood: learning about server features, especially on weird servers
that drop requests instead of refusing them, can take some time. The
result should definitely be cached for a while.
> If we'd switch servers all the time, for every single lookup, then
> we'd start from zero every time, not knowing what the server
> supports, and thus having to learn about it over and over again.
> This would hence make all, *every*single* transaction pretty slow.
> And that sucks.
This is ridiculous. It might be a limitation of the current
systemd-resolved implementation, but it is not inherent. All DNS
servers track information about the remotes they use and the features
they have detected. Even dnsmasq does, and it is not a full recursive
server, just like systemd-resolved. It can learn about each configured
server and keep that information cached for some time, just like the
TTL of cached records. It can also flush such a cache on interface
configuration changes, network errors from the server, etc. But it does
not have to relearn everything about a server just because the active
one was switched. If it has to, try to find a way to store detected
server features per server IP, not per link.
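As a rough illustration of the per-IP idea (a minimal sketch; the class
name, TTL default, and feature-level strings are invented, not
resolved's or dnsmasq's actual data structures):

```python
import time

class ServerFeatureCache:
    """Cache detected feature levels per server IP, with a TTL, so that
    switching the active server does not restart feature probing."""

    def __init__(self, ttl=3600.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._entries = {}  # ip -> (feature_level, expiry)

    def store(self, ip, feature_level):
        self._entries[ip] = (feature_level, self.clock() + self.ttl)

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None:
            return None
        level, expiry = entry
        if self.clock() >= expiry:   # expired, just like a record TTL: re-probe
            del self._entries[ip]
            return None
        return level

    def flush(self):
        """Drop everything, e.g. on interface reconfiguration or
        persistent network errors from a server."""
        self._entries.clear()
```

Keyed by IP rather than by link, the learned feature level survives a
server switch; `flush()` covers the interface-reconfiguration case
mentioned above.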
> It might be something to add as opt-in, and come with the warning
> that you better list DNS servers that aren't crap if you want to use
> that, so that we never have to downgrade protocol level, and thus the
> learning phase is short.
Sure enough, many router DNS implementations are bad or ugly. If
resolved can choose between a full-featured, validating ISP resolver
and a crappy router implementation, it should prefer the one with
better features. Most likely that one is much better maintained as
well.
However, some people rely on the order of the servers in resolv.conf.
The first server might know some local names that the secondary backup
does not know about. Such a situation is impossible to detect
automatically, but it is not too uncommon. I miss a way to force the
first server to always be tried first when possible, something like
dnsmasq's --strict-order.
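The --strict-order semantics amount to a very small loop. A sketch,
assuming a query callback that raises OSError on failure (the server
addresses and the lookup table below are hypothetical):

```python
def resolve_strict_order(name, servers, query):
    """Try servers strictly in configured order (like dnsmasq --strict-order):
    the first server always gets the query; later ones are fallbacks only.
    `query(server, name)` returns an answer or raises OSError."""
    last_error = None
    for server in servers:
        try:
            return query(server, name)
        except OSError as err:
            last_error = err  # this server failed; fall back to the next one
    raise last_error or OSError("no DNS servers configured")

# Hypothetical backend: the first (local) server knows "printer.lan",
# the public fallback does not.
TABLE = {
    "10.0.0.1": {"printer.lan": "10.0.0.9", "example.com": "93.184.216.34"},
    "8.8.8.8": {"example.com": "93.184.216.34"},
}

def fake_query(server, name):
    if server == "down":
        raise OSError("timeout")
    if name not in TABLE[server]:
        raise OSError("NXDOMAIN")  # simplified: treat NXDOMAIN as failure
    return TABLE[server][name]
```

With this policy the local-names server is always consulted first, and
the backup only ever sees traffic when the first server fails.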
> (There have been suggestions to probe ahead-of-time, i.e. already
> before we have to dispatch the first lookup request to it, i.e. at
> the time the DNS server address is configured. However that is a
> privacy issue, too, since it means systems would suddenly start
> contacting DNS servers, even without anyone needing it.)
>> It should of course use encrypted protocols first.
It is questionable how much encrypted protocols are needed. Of course,
an ISP can monitor all your traffic, and they can usually monitor all
your queries. But if you seek protection from that, why don't you
change your ISP? Thanks to the GDPR, they cannot just sell information
about your actions without your consent, and they cannot force you to
give that consent either. If you connect to the ISP's server, they can
see your queries anyway, even encrypted ones. If you don't, they can
see the TLS metadata, which usually leaks plaintext hostnames too. As a
last resort, they can see the target IPs and deduce a lot from them. In
short, if you don't trust them, use a full VPN or change ISPs
altogether. Most of us living in the free world can do that.
> It has supported DoT for quite a while. It is currently opt-in on
> Fedora though, but we should change that.
> DoT becomes efficient when we can reuse the established TCP/TLS
> connection for multiple lookups. But if we'd switch servers all the
> time, then of course there's no reuse of TCP/TLS connections
> possible.
I don't see a reason for that here either. It should be perfectly
possible to have 3 connections to one server and 2 to another one, and
to randomize queries across those connections. Reuse of TLS connections
is definitely a wanted feature. Again, this might be a limitation of
the current implementation, but it is solvable. I admit that
maintaining multiple connections is much harder to implement
(properly).
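The "3 connections to one server and 2 to another" scheme could be
sketched like this (a toy model only; `connect` stands in for
establishing a persistent TCP/TLS session, and none of these names come
from resolved):

```python
import random

class ConnectionPool:
    """Keep persistent (e.g. TLS) connections open per server and pick
    one at random for each query, so that connection reuse and query
    spreading can coexist."""

    def __init__(self, conns_per_server, connect, rng=random):
        # conns_per_server: {server: count}, e.g. {"9.9.9.9": 3, "1.1.1.1": 2}
        self.rng = rng
        self.conns = [connect(server)
                      for server, n in conns_per_server.items()
                      for _ in range(n)]

    def send(self, query):
        conn = self.rng.choice(self.conns)  # reuse an established connection
        return conn(query)
```

Picking a random already-established connection per query spreads
lookups across servers while still amortizing the TLS handshake cost.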
> Or in other words: adding this conflicts with other (and I think
> more important) goals here. Thus if we add this, only as an option,
> I figure. It's not suitable as a sensible default.
Red Hat, http://www.redhat.com/