DNS over HTTPS in Firefox
Posted Jun 4, 2018 10:43 UTC (Mon)
by fratti (guest, #105722)
In reply to: DNS over HTTPS in Firefox by roc
Parent article: DNS over HTTPS in Firefox
Do you have any data backing that performance claim? Because I seriously doubt it.
In almost all default router configs I've seen so far, one of two things happens:
A) the router gives out the ISP's DNS server over DHCP, or
B) the router gives out itself as the DNS server over DHCP, forwarding requests to the ISP's servers and caching the results.
While Cloudflare's peerings are quite good, they do not beat an ISP-local route. They also do not beat the LAN, and they do not beat Windows's OS-wide local DNS cache. That's one huge flaw in your argument already.
Secondly, DNS can (and does) use UDP. UDP is a datagram protocol; it is connectionless and there is no handshake. A client makes a request, a server responds: two packets in total, if both fit below the common internet MTU of 1500 bytes. Furthermore, the kernel knows exactly when to send those packets, because a message starts and ends exactly where a packet starts and ends.
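To make that two-packet exchange concrete, here is a minimal sketch in Python. The resolver address 192.0.2.53 is a placeholder (192.0.2.0/24 is reserved for documentation), and the query is a hand-built wire-format A lookup:

```python
# One datagram out, one datagram in: the whole transaction.
import socket
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Hand-build a minimal DNS query: header, QNAME, QTYPE=A, QCLASS=IN."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(build_query("example.com"), ("192.0.2.53", 53))  # packet 1: the question
reply, _ = sock.recvfrom(1500)                               # packet 2: the answer
sock.close()
```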
Now compare this to HTTP over TLS over TCP:
1. We've got a TCP handshake
2. We've got a TLS handshake (which can partially overlap the TCP handshake, but since you built a bad idea on top of previous bad ideas, nobody knows whether real-world stacks will support this)
3. We've got an HTTP request, which rides on a stream protocol, so the kernel needs heuristics to figure out when to actually send a packet.
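As a sketch of those three layers using Python's standard library; cloudflare-dns.com and the RFC 8484-style /dns-query path are just example values, and the actual DNS payload is elided:

```python
# Each numbered step below costs at least one network round trip
# before a single byte of DNS data moves.
import base64
import socket
import ssl

host = "cloudflare-dns.com"
tcp = socket.create_connection((host, 443))            # 1. TCP handshake
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(tcp, server_hostname=host)       # 2. TLS handshake

# 3. Finally, the HTTP request (a real DNS wire-format message would
#    go where the placeholder bytes are).
payload = base64.urlsafe_b64encode(b"<raw DNS message>").rstrip(b"=")
tls.sendall(b"GET /dns-query?dns=" + payload + b" HTTP/1.1\r\n"
            b"Host: " + host.encode() + b"\r\n"
            b"Accept: application/dns-message\r\n\r\n")
response = tls.recv(4096)
tls.close()
```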
"but ooh, OOOH!" you shout, "we can just keep the connection open! Yes! Handshake only once!"
Well, but then what does that mean? You're going to be pinging keep-alive packets back and forth, keeping useless state on both machines, and essentially all you're doing is sending datagrams awkwardly wrapped in plain text over a stream protocol, making the kernel's network stack sad. You cannot get faster than request->response in one packet each. You do not get faster by adding more packets. This is not how protocols work. This is not how computers work. No, adding JSON and WebGL into the mix won't lubricate it with HTML5 awesomesauce.
Now that I've cast some serious doubt as to the performance claim, let's cast some serious doubt as to the "not centralised" claim.
DNS, in its current form, is decentralised by default. Your average consumer laptop will get its DNS server from DHCP; your average consumer router will get its upstream DNS server from its ISP. Resolution is spread across thousands of ISPs, with no single party seeing everyone's queries.
DoH is centralised by default. Cloudflare cackles as they get all the data because Mozilla buried the option to change the DoH "service" somewhere in about:config, and any alternative DoH service wouldn't have as good peerings as Cloudflare does.
And another question: where did you get the "for almost all users" statistic? Did those users agree to participate in your data collection? How big was the sample size? Could you tell me how your measurement setup was constructed?
Posted Jun 4, 2018 10:47 UTC (Mon)
by fratti (guest, #105722)
[Link]
Posted Jun 4, 2018 11:47 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link]
So the answer is they're collecting that data. I'm sure if the data they get says "This is slower and often doesn't work" they will accept that and re-evaluate their approach.
It's important to actually collect data; that's how we got Natural Philosophy (and thus Science) as a huge upgrade over earlier approaches to philosophy, which proceed, as you've demonstrated, by arguing from premises that seem to support the philosopher's position. Among the things philosophers felt confident they'd figured out by your method: the Earth is fixed at the centre of the universe, flies are spontaneously generated by the decay of living matter, and all numbers are rational.
Chrome too has programmes collecting data (although it seems Google is more comfortable just "volunteering" its own users without their knowledge); it has found all sorts of interesting things through such experiments, and since it often shares the results, that lets everybody avoid getting blind-sided. For example, the reason TLS 1.3 on the wire looks so silly today is that the earlier drafts (before Draft 23) were randomly enabled for Google Chrome users, and Google's metrics showed that a frighteningly high proportion failed. Even though your approach (write down axioms, scratch your beard, come to a conclusion) said those drafts were definitely backward-compatible with TLS 1.2, reality insisted otherwise. So from there, experimentation found a way to sneak past most of the garbage middleboxes that weren't compatible, and the draft with those changes (plus other language that doesn't change anything on the wire) is what we have today.
[ For TLS 1.2, nobody did that before publication; they relied on your approach. The standard sat on paper for _years_, largely unused, because if you tried to enable it for ordinary users, a huge proportion of them got a non-working browser and would just switch to a competitor who wasn't trying to do TLS 1.2. ]
Posted Jun 4, 2018 12:26 UTC (Mon)
by excors (subscriber, #95769)
[Link] (1 responses)
The latency of a single DNS request isn't hugely important in a web browser - the problem is when you need to sequentially make one request for the HTML page, then another to a different domain for the JS, then to a third domain for some JSON data, then to a fourth for an image, etc. Doing the handshake per request would be silly, but you can fix that by just keeping the DoH connection alive for a few seconds - you don't need an endless flood of keep-alive packets. If it times out and you need to restart the connection when the user clicks a link an hour later, that's fine.
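A sketch of that reuse, using Cloudflare's JSON API (application/dns-json) purely as an example endpoint: the TLS handshake happens once, and every lookup after that is a single request/response on the warm connection.

```python
import http.client
import json

conn = http.client.HTTPSConnection("cloudflare-dns.com")   # one handshake, up front
for name in ("example.com", "example.org", "example.net"):
    conn.request("GET", f"/dns-query?name={name}&type=A",
                 headers={"Accept": "application/dns-json"})
    body = conn.getresponse().read()                       # no new handshake per query
    print(name, json.loads(body).get("Answer"))
conn.close()
```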
https://bitsup.blogspot.com/2018/05/the-benefits-of-https... notes that DoH will be able to transition easily from HTTP/2-over-TCP to HTTP/2-over-QUIC-over-UDP, for more efficient transport and better handling of packet loss (which I assume is really bad in DNS-over-UDP), and there's the possibility of providing a speculative response before the request has even been sent (if they can work out how to preserve security/privacy/etc). Browsers and CDNs obviously want to improve HTTP performance anyway, so the protocols and implementations will already exist; and since DoH only requires coordination between a single browser and a single DNS provider (not every single ISP in the world) to provide the full benefit to the browser's users, it seems a reasonably straightforward approach.
Posted Jun 6, 2018 15:40 UTC (Wed)
by buchanmilne (guest, #42315)
[Link]
This is hardly surprising; however, for a long time the p0 (best case) for an ICMP echo request to Cloudflare from my country was 170ms, while the p50 for DNS requests to my ISP's caching resolver was < 20ms.
And most users are unlikely to worry about a p75 DNS resolution time of 181ms if the quality of the p50 answers is 10x better (pointing at a CDN node 5ms away rather than 50ms away, or 20ms away rather than 200ms away).
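For anyone wanting to gather comparable numbers, a rough sketch: time repeated lookups through the OS-configured resolver and read off the quartiles (note that OS and resolver caches will flatter the later samples).

```python
import socket
import statistics
import time

samples = []
for _ in range(20):
    start = time.perf_counter()
    socket.getaddrinfo("example.com", None)   # resolve via the configured resolver
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

p25, p50, p75 = statistics.quantiles(samples, n=4)        # quartile cut points (3.8+)
print(f"p25={p25:.1f}ms p50={p50:.1f}ms p75={p75:.1f}ms")
```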
Posted Jun 4, 2018 13:34 UTC (Mon)
by roc (subscriber, #30627)
[Link] (1 responses)
https://bitsup.blogspot.com/2018/05/the-benefits-of-https...
Having said that: you're right. I overstated the case to say it's "actually faster for almost all users". Sorry. It probably is for many users, but we don't have that data yet, and improvements like QUIC and push are probably needed to make it really fast.
> Cloudflare cackles as they get all the data because Mozilla buried the option to change the DoH "service" somewhere in about:config, and any alternative DoH service wouldn't have as good peerings as Cloudflare does.
Google probably does, and they're running a public DoH service: https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly...
Posted Jun 4, 2018 19:06 UTC (Mon)
by fratti (guest, #105722)
[Link]
Posted Jun 4, 2018 17:37 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Simply keeping a pool of open TLS connections is probably going to be a speed win.
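A minimal sketch of such a pool: one persistent TLS connection per resolver host, created lazily and reused, so the handshake cost is paid once (error handling and eviction omitted).

```python
import http.client

_pool = {}  # host -> http.client.HTTPSConnection

def get_connection(host: str) -> http.client.HTTPSConnection:
    """Return a warm TLS connection to host, creating it on first use."""
    conn = _pool.get(host)
    if conn is None:                   # the handshake happens only on this path
        conn = http.client.HTTPSConnection(host)
        _pool[host] = conn
    return conn
```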
Posted Jun 5, 2018 14:11 UTC (Tue)
by nilsmeyer (guest, #122604)
[Link]
This is also relevant: https://developers.google.com/speed/public-dns/docs/perfo...
The average ISP just isn't that good at managing this kind of service.
Posted Jun 8, 2018 20:27 UTC (Fri)
by mcmanus (guest, #4569)
[Link] (3 responses)
In the soft-fail (expected) mode, the resolver never waits for the DoH connection to be ready: if the HTTP/2 session isn't up, it uses the OS resolver. Hard-fail mode will block if that's what you want, but that won't be the default config. Honestly, the connection is pretty much always there unless you haven't been browsing at all lately; the browser looks up a surprising number of names. (Nothing about this project changes that.)
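The soft-fail logic amounts to something like the sketch below (not Firefox's actual code; doh_session and its query method are hypothetical stand-ins for a warm HTTP/2 session):

```python
import socket

doh_session = None   # set once an HTTP/2 session to the DoH server is up

def resolve(name: str):
    if doh_session is not None:            # warm connection: use DoH
        return doh_session.query(name)
    return socket.getaddrinfo(name, None)  # soft fail: OS resolver, no waiting
```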
The cache is interesting. Firefox has a cache of its own, meant to cover the lack of an OS cache. The cloud cache might be farther away than an ISP cache (but sometimes not), and it's probably better populated than most ISPs' caches, though not all. It will be interesting to watch.
Lots of ISPs, especially in Asia, just silently forward to quad-8 (8.8.8.8) anyhow, sitting in the middle. (Note that they don't hand the end user 8.8.8.8 end-to-end; one wonders why that is.)
And TCP absolutely can outperform one-UDP-packet-per-message some of the time, because DNS senders have absolutely terrible strategies for loss recovery and congestion control, problems TCP has been working on for a long time. Indeed, I've seen DNS stacks that actually induce their own losses through too much parallelism (and then crudely repair them with one-second timers), so there are some definite non-obvious tradeoffs here.
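That "crude repair" looks roughly like the sketch below: a fixed one-second timer and a blind retransmit, with no backoff and no congestion awareness (the packet argument stands in for any wire-format DNS query).

```python
import socket

def query_with_fixed_timer(packet: bytes, server: str, retries: int = 3) -> bytes:
    """Send a DNS query over UDP, retransmitting on a fixed 1 s timer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)                   # the fixed timer, never adapted
    try:
        for _ in range(retries):
            sock.sendto(packet, (server, 53))
            try:
                reply, _ = sock.recvfrom(4096)
                return reply
            except socket.timeout:
                continue                   # blind retransmit, no backoff
        raise TimeoutError("no response after fixed-timer retries")
    finally:
        sock.close()
```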
Posted Jun 8, 2018 23:09 UTC (Fri)
by fratti (guest, #105722)
[Link] (2 responses)
A lot of these arguments push DoH by pointing at the inferiority of real-world deployments of UDP DNS, not at any inferiority of the protocol itself. I don't think that's fair: of course, if you say "yes, we at Cloudflare and Mozilla are going to deploy this better," it'll be a better deployment in the real world. That, however, does not make DoH a good idea, because you didn't fix the existing stacks; you made an over-engineered new stack that is more performant at the implementation level, not the protocol level. If you're going to take over both the protocol and the implementation, why not do it properly in both areas? Why not DTLS? Why HTTP?
Posted Jun 8, 2018 23:57 UTC (Fri)
by excors (subscriber, #95769)
[Link]
https://bitsup.blogspot.com/2018/05/the-benefits-of-https... seems to answer that.
Posted Jun 9, 2018 4:57 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Well, it failed miserably. With DNSSEC you are routinely looking at replies larger than 1500 bytes. IPv4 fragmentation usually saves the day (though it's slowly getting more and more broken), but with IPv6 it's a complete non-starter.
There are two ways to fix it:
1) Make DNS great^W small again. ECC instead of RSA basically fixes it for _most_ cases, but not all.
2) Just forget about all this stateless nonsense and go full-metal-stateful. This way you can utilize all the advances made by browsers, in particular QUIC and TLS 1.3. They allow zero-RTT connection initiation, at the cost of stored data.
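One rough way to see the size problem for yourself, assuming the third-party dnspython package and 8.8.8.8 as an example server: ask for DNSKEY records with DNSSEC data and look at the reply size and truncation flag.

```python
import dns.flags     # pip install dnspython
import dns.message
import dns.query

q = dns.message.make_query("example.com", "DNSKEY",
                           want_dnssec=True, use_edns=0, payload=4096)
r = dns.query.udp(q, "8.8.8.8", timeout=2, ignore_trunc=True)
print(len(r.to_wire()), "bytes; truncated:", bool(r.flags & dns.flags.TC))
```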
Posted Jun 20, 2018 18:37 UTC (Wed)
by mstone_ (subscriber, #66309)
[Link]
My ISP is one of the largest in the US, and the latency to the nameservers they put in the DHCP replies is worse than the latency to 8.8.8.8... They also hijack NXDOMAIN, and they don't play particularly nicely with DNSSEC (which would probably break the revenue stream from NXDOMAIN hijacking).
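A quick stdlib probe for that kind of hijacking: resolve a random label that cannot exist, and see whether the resolver answers anyway.

```python
import socket
import uuid

probe = f"{uuid.uuid4().hex}.com"   # effectively guaranteed not to exist
try:
    addrs = socket.getaddrinfo(probe, 80)
    print(f"{probe} resolved to {addrs[0][4][0]}: NXDOMAIN is being rewritten")
except socket.gaierror:
    print("got NXDOMAIN, as expected")
```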