Fedora and fallback DNS servers
Systemd-resolved continues the systemd tradition of replacing venerable, low-level system components. It brings a number of new features, including a D-Bus interface that provides more information than the traditional gethostbyname() family (which is still supported, of course), DNS-over-TLS, LLMNR support, split-DNS support, local caching, and more. It is not exactly new; Ubuntu switched over in the 16.10 release. Fedora thus may not have lived up to its "first" objective with regard to systemd-resolved, but it did eventually make the switch.
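As a quick illustration of that D-Bus interface, a name can be resolved directly through the documented org.freedesktop.resolve1 API with busctl; this is just a sketch, and the exact output will vary from system to system:

    # Ask systemd-resolved to resolve a hostname over D-Bus.
    # Arguments: interface index (0 = any), name, address family
    # (0 = unspecified), and flags (0 = defaults).
    busctl call org.freedesktop.resolve1 /org/freedesktop/resolve1 \
        org.freedesktop.resolve1.Manager ResolveHostname \
        'isit' 0 example.com 0 0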
It is probably fair to say that most Fedora users never noticed that things had changed. Toward the end of 2020, though, Zbigniew Jędrzejewski-Szmek made a change that drew some new attention toward systemd-resolved: he disabled the use of fallback DNS servers. The fallback mechanism is intended to ensure that a system has a working domain-name resolver, even if it is misconfigured or the configured servers do not work properly. As a last resort, systemd-resolved will use the public servers run by Google and Cloudflare for lookup operations. On Fedora 33 systems, though, that fallback has been disabled as of the systemd-246.7-2 update, released at the end of 2020.
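Administrators who want the fallback behavior back can restore it locally. A minimal sketch (the server list here mirrors the upstream defaults, but any resolvers could be listed):

    # /etc/systemd/resolved.conf
    [Resolve]
    # Re-enable the fallback list that Fedora's package leaves empty;
    # these servers are used only when no other DNS server is known.
    FallbackDNS=1.1.1.1 8.8.8.8 1.0.0.1 8.8.4.4

    # Then restart the service:
    #   systemctl restart systemd-resolved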
Toward the end of February, Tadej Janež went to the fedora-devel mailing list to argue that this change should be reverted, saying: "On F33, this actually breaks a working vanilla cloud instance by removing the fallback DNS server list in a systemd upgrade, effectively leaving the system with no DNS servers configured". As one might expect, this was not the desired state of affairs. This post generated some discussion about the change, but it may not lead to an update to Fedora's policy.
One might wonder why a seemingly useful feature like automatic fallback was disabled. The reasoning, as described by Jędrzejewski-Szmek in this changelog, has to do with privacy and compliance with the European Union's General Data Protection Regulation (GDPR).
Janež suggested that the situation could be improved in one of two ways. Rather than disabling the fallback servers everywhere, Fedora could leave them enabled for cloud images, where, it seems, broken DNS configurations are more likely to happen and there tends not to be an individual user to identify in any case. Or Fedora could pick a "reputable DNS resolver" that is known to respect privacy and use it to re-enable the fallback for everybody. Jędrzejewski-Szmek replied that the first option might be possible, but rejected the second, saying that finding a provider that is acceptable worldwide would be a challenge at best.
Beyond privacy concerns, there was another reason cited in the discussion for the removal of the DNS fallbacks: they can hide problems in DNS configurations. Without the fallbacks, a broken configuration is nearly guaranteed to come to the user's attention (though said user may be remarkably unappreciative) and will, presumably, be fixed. With the fallbacks, instead, everything appears to work and the user may never know that there is a problem. So the configuration will not be fixed, leading to a worse situation overall.
Lennart Poettering, though, described this view as "bogus and very user unfriendly". It is better, he said, to complain loudly and fall back to a working setup than to leave the system without domain-name service entirely. A lot of users do not know how to fix DNS themselves, and they won't even be able to ask for help on the net if DNS is not working for them.
Poettering also raised another issue: the privacy argument does not always make sense, because using the public DNS servers may well be the more privacy-respecting option anyway.
The changelog from Jędrzejewski-Szmek acknowledged this point as well, and noted that ISP-provided DNS servers may not have the user's best interests in mind either. He still concluded that they were the better option because "they are more obvious to users and fit better in the regulatory framework". In any case, nobody is proposing using the Google or Cloudflare servers in preference to those provided by the local network.
What will happen with Fedora's configuration is far from clear at this point. There seems to be some real resistance to enabling the fallback servers, even though the actual privacy and legal risk would appear to be small. Most Fedora users will probably never notice, but a subset may have to learn the details of using the resolvectl command to create a working DNS configuration by hand. Once again, they may be limited in their appreciation of this state of affairs.
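For those who do find themselves in that position, a minimal sketch of such a by-hand repair (the link name wlp3s0 and the server address are placeholders for illustration):

    # Point one link at a known-good server, then verify:
    resolvectl dns wlp3s0 192.0.2.53
    resolvectl query lwn.net
    # Show the global and per-link DNS configuration:
    resolvectl status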
Posted Feb 25, 2021 15:30 UTC (Thu)
by atnot (subscriber, #124910)
[Link] (54 responses)
There is another issue here, though. If one distro works by default and another doesn't, users will blame their distro, even if its behaviour is correct.
Posted Feb 25, 2021 16:06 UTC (Thu)
by ju3Ceemi (subscriber, #102464)
[Link] (36 responses)
First case: random non-technical people
Second case: servers
Last case: technical users who tweak their DNS configuration for some reason (maybe DNS routing partially across a couple of VPNs, or local zones, or whatever)
So I am asking: when is the fallback a good idea, exactly?
Posted Feb 25, 2021 17:39 UTC (Thu)
by gnu_lorien (subscriber, #44036)
[Link] (33 responses)
In your first case you assert that the fallback simply won't work; if that's the case, then why turn it off? It *could* work. If it does, then everything's fine. The random non-technical person has no control over the DNS configuration of any of the things that broke on them.
Somebody else made this point here too, but it bears repeating: if my Fedora laptop "breaks" in this situation and my Windows laptop "works," then, to the user, it was Fedora's fault that it broke, not the network's.
This example solidifies for me that the non-fallback way is the expert's configuration not the default. If I'm setting up or debugging this network then I need to turn the fallback off to make sure I configured DNS correctly. This setting is of no use to people that don't have control over the network configuration themselves.
In the second case it's essentially catastrophic to have no DNS. I've personally dealt with the scourge of having to hard-code IP addresses into bootstrap scripts for VMs at a cloud provider because we didn't hook up DNS properly. The problem was that VMs appeared to start and then just never registered. The workaround eventually burned us: when the IP addresses changed, everything lost access to its master resource. Suddenly every VM stopped registering because they weren't using DNS.
Posted Feb 25, 2021 21:42 UTC (Thu)
by pmb00cs (subscriber, #135480)
[Link] (29 responses)
Why don't they ask the person who set them up with a linux install then?
I find it very hard to believe there are swathes of users computer-literate enough to either change the default OS on their machine, dual-boot with Linux, or buy/build a computer with no OS and install Linux, who are also completely incapable of troubleshooting non-working DNS. Which then leaves us with users who cannot troubleshoot DNS issues, who must therefore have had their machines configured for them.
This argument that there exist people completely incapable of recognising or debugging broken config, who use what is a niche operating system preferred by more technically minded people, one that is unavailable by default on almost all commercially supplied computers, strikes me as an excuse by the developers of this system for forcing a technically poor choice on everyone. DNS issues aren't that hard to recognise, and although it can be tricky to resolve them without the wealth of information available on the internet, they aren't that difficult to resolve. Leaving a system working only by the grace of a "fall back" setting will mask failing networks, and other issues; that makes it far more dangerous to anyone who manages their own network than the pain it will save people who are non-technical, and yet still for some reason have no support for running the non-default OS on their computer.
Posted Feb 25, 2021 21:58 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link] (9 responses)
You don't (necessarily) need to understand your desktop computer to install Linux on it any more.
It helps, particularly if you have obnoxious hardware that needs proprietary blobs to function, but you don't need to.
Posted Feb 25, 2021 22:33 UTC (Thu)
by pmb00cs (subscriber, #135480)
[Link] (8 responses)
I'm not arguing that non-technical people can't install Linux. I'm arguing that they don't do so in sufficient numbers, absent any form of technical support, for their needs to outweigh the needs of the technical people for whom the fallback DNS servers are actively detrimental.
Posted Feb 26, 2021 6:50 UTC (Fri)
by roc (subscriber, #30627)
[Link] (7 responses)
Posted Feb 26, 2021 7:40 UTC (Fri)
by pmb00cs (subscriber, #135480)
[Link] (6 responses)
I'd also argue those users would be far more inconvenienced by the total collapse of their networking, which they were unable to see coming because the failure of their DNS setup was masked from them, so they didn't talk to their network provider to get it fixed early.
The needs of the technical users aren't mutually exclusive with the needs of the non-technical users.
Posted Feb 28, 2021 23:51 UTC (Sun)
by gnu_lorien (subscriber, #44036)
[Link] (5 responses)
I've been on corporate networks where this happened. I contacted the IT people who could fix the DNS and waited until I got a response to my ticket before I switched back to the internal DNS.
At least two times that I remember this happened on hotel networks. I never told them about it and certainly wasn't going to wait and hope that a hotel network was going to get fixed in any timely manner.
In each of these cases, at least one of the following things saved me:
If I hadn't had one of these three things then it wouldn't have been an inconvenience, it would have been completely broken. The fallback of using a custom DNS setting has worked for me over and over again. Enough that I have memorized these addresses.
I'm a living counter-example to the idea that the fallbacks are useless or that the problem of bad DNS is both rare and only an inconvenience. Even if the occurrence rate I mentioned here is considered rare I would have remained completely broken if not either for applying the same fallback that systemd-resolved seems to apply or switching to a different device.
I'm curious if you've ever been in the situation where you needed to try a DNS fallback. I'm curious why it didn't work or help you resolve the situation.
Posted Mar 1, 2021 0:30 UTC (Mon)
by pizza (subscriber, #46)
[Link]
Posted Mar 1, 2021 2:55 UTC (Mon)
by pabs (subscriber, #43278)
[Link]
Since then I switched to doing recursive DNS resolution on my laptop with a local unbound daemon, but that just introduced more issues. Networks where recursive resolving is too slow to work, ISPs that block outgoing DNS queries except to their own resolver, ISPs that strip DNSSEC results and so on.
Perhaps the right thing to do is to move the fallback DNS servers into the network configuration settings. Then, when you have issues on a particular network, you just reconfigure the corresponding network connection to choose one of the available public DNS servers. You could probably do better, though; if systemd-resolved detects DNS server issues (an ISP known to sell your data, a country without privacy regulation, DNS servers that don't support DoT/DoH, broken resolution, stripping DNSSEC, etc.) it could prompt the user in the GUI and give them the option to switch the configuration for the current network to one of several different public resolvers, with information about their country of origin, countries of deployment, privacy policies, and so on.
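Something close to that per-network override is already possible with NetworkManager; a hedged sketch (the connection name is made up for illustration):

    # Use a chosen public resolver on one network only, ignoring the
    # DNS servers handed out by that network's DHCP server:
    nmcli connection modify "Hotel Wi-Fi" \
        ipv4.dns 9.9.9.9 ipv4.ignore-auto-dns yes
    nmcli connection up "Hotel Wi-Fi"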
Posted Mar 1, 2021 8:15 UTC (Mon)
by pmb00cs (subscriber, #135480)
[Link] (2 responses)
When I have had to set DNS settings manually on end devices I've had mixed results. Sometimes it would have worked, and I carried on. Sometimes it would not, and I'd need to find another solution, or live without a network connection until the responsible party could fix it. This included in at least one case a public network with a captive portal that was so broken that I resolved the DNS issue but couldn't then connect to anything. (I know tunnelling over DNS is possible, but I have never actually tried it)
As to your hotel networks, if you didn't tell them about it, how do you expect them to fix it at all? They may not have fixed it in a timely manner, but it may have helped the next person with the same issue?
Posted Mar 1, 2021 11:25 UTC (Mon)
by pizza (subscriber, #46)
[Link]
Oh, that's easy; Linux isn't listed under "supported systems"
Posted Mar 1, 2021 19:14 UTC (Mon)
by gnu_lorien (subscriber, #44036)
[Link]
This is the case that systemd-resolved is implementing automatically for people that don't know how to set these manually or don't know which values to try.
"As to your hotel networks, if you didn't tell them about it, how do you expect them to fix it at all?"
That's not my problem. It's not my network. I'm not responsible for it.
"They may not have fixed it in a timely manner, but it may have helped the next person with the same issue?"
That's not my problem either. In this case I might suggest those other users use a GNU/Linux system with the default configured systemd-resolved fallbacks so that they're not at the whims of the broken DNS of a captive portal.
In the captive portal situation especially the economic incentive is the other way around. Any time I have to spend debugging their network and reporting this is time that I spent on their behalf where I'm paying them to fix their network. I happily give my labor free of charge to free systems. Proprietary ones do not get this privilege.
Posted Feb 26, 2021 9:07 UTC (Fri)
by intelfx (subscriber, #130118)
[Link] (15 responses)
There are lots of them.
People who find themselves using Linux for application-level reasons (computer scientists and CS students, for example) are literate enough to install and use Linux with varied degrees of success, but are not nearly as literate (or interested) in network administration or troubleshooting. Most of those probably never heard the term "DNS" (and do not wish to hear it, either).
Posted Feb 26, 2021 9:31 UTC (Fri)
by pmb00cs (subscriber, #135480)
[Link] (14 responses)
I do contend, however, that they are not "non-technical" users. Computer Scientists, by the nature of their job, are working at the bounds of what computers can do, and Computer Science students are studying that subject. How does that put them in the "non-technical" group? They may not have heard of DNS (really? in the CS field there are academics who don't know how networking works to the point that they've never heard of DNS? Because modern Computer Science has nothing at all to do with networking, does it?) but they will be technical enough to find roughly where the problem is, and report it to the responsible party if that is not them.
It's been mentioned elsewhere in the comments, but I'll bring it up here as well. What about the OSes that are installed by default? There is a regulatory burden on the OEM to provide support to the users, and therefore an actual financial incentive to make their use as easy as possible for non-technical users. How many of those have a fallback for broken DNS?
Let's face it: the only reason systemd-resolved has fallback DNS settings is because Lennart Poettering has probably had a DNS issue on a network that wasn't his own, and he is arrogant enough to believe that the rest of us are too stupid to fix that issue without his help, and he can't see how masking a DNS issue could come back to bite him. The rest of us, however, can extrapolate from our experiences: when we've had an issue masked from us it has come back to bite us, therefore masking a DNS issue will come back to bite us.
Let's not defend a poor decision by using arrogance to invoke "what about the people not as bright as us?" when those people either don't exist, or aren't as stupid as we want to think they are. We were all non-technical once, but we became technical by learning. I'm not suggesting that all Linux users are as technical as I am, or that I am the most technical Linux user. What I am contending is that the least technical Linux user is at least partly technical, and must have reached a level of technical ability that would allow them to do basic diagnostics BEFORE they became a Linux user, because in this day and age that is what you need to do before you would recognise that Linux is even an option.
Posted Feb 26, 2021 11:28 UTC (Fri)
by intelfx (subscriber, #130118)
[Link] (1 responses)
You probably intended a sarcastic meaning, but you are absolutely correct from first word to last.
(Source: first-hand experience.)
Posted Feb 26, 2021 11:35 UTC (Fri)
by peniblec (subscriber, #111147)
[Link]
I'd say "modern" is superfluous; the "computer science is no more about computers than astronomy is about telescopes" aphorism is at least 30 years old.
Posted Feb 28, 2021 8:35 UTC (Sun)
by jond (subscriber, #37669)
[Link]
Posted Mar 5, 2021 13:15 UTC (Fri)
by jschrod (subscriber, #1646)
[Link] (10 responses)
I hope that DNS is *not* taught in a CS course at any self-respecting university. There are more important things to teach: principles instead of specific techniques.
> Because modern Computer Science has nothing at all to do with networking does it?
Yes. (I studied CS, and did my PhD in this field.) CS is about structures and how we manipulate them. Similar to mathematics, which, in academia, isn't about math (as you know it from school) either. Or, as another poster wrote, astronomy is not the science of telescopes.
Posted Mar 8, 2021 13:54 UTC (Mon)
by LtWorf (subscriber, #124958)
[Link] (9 responses)
Yes, in network courses the teacher just goes: "it's all magic. Never use tcpdump and never try to understand anything. Also never learn about flow control, error correction, or the 3-way ack."
Sounds to me more like what would happen at the most terrible university.
Posted Mar 8, 2021 14:10 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (1 responses)
Indeed; a Computer Science course (not Computer Engineering) wouldn't bother with tcpdump. Flow control, error correction and 3 way ack algorithms would probably be described and discussed, but not in terms of the details of how they're applied in the TCP/IP stack - you're looking at them as abstract theory.
Computer Engineering probably would cover tcpdump, the TCP handshake (actually a 4 way handshake, with two steps combined into one packet), flow control in TCP and on network links, ECC as used in real networks etc.
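For anyone teaching (or learning) this, the handshake is easy to observe directly; a small sketch, with the interface name as an assumption:

    # Watch connections to port 443 on eth0; the [S], [S.], and [.]
    # flags in the output are the steps of the TCP handshake.
    tcpdump -n -i eth0 'tcp port 443'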
Posted Mar 9, 2021 3:56 UTC (Tue)
by deater (subscriber, #11746)
[Link]
As an aside, the networking class is getting hard to teach. With DNS moving to be tunneled over https, with https being encrypted (instead of plaintext), and with HTTP3 being QUIC which is custom-protocol-tunnelled through UDP, the analyzing-tcpdump exercises are becoming more or less useless.
Posted Mar 9, 2021 19:25 UTC (Tue)
by jschrod (subscriber, #1646)
[Link] (6 responses)
Excuse me, but we seem to have *very* different opinions what a university course is.
> Yes in network courses the teacher just goes "it's all magic. Never use tcpdump and never try to understand anything. Also never learn about flow control, error correction, 3 way ack."
Yes, that's all important -- but for a high-school course. More specifically, in my country (Germany) these topics are part of the computer science (Informatik) curriculum at the high-school level. For advanced courses, which are preparation for studying a topic at the college level, these topics are mandatory in the syllabus the teachers have to create.
I know that other countries distribute the curriculum differently. E.g., the US places these topics probably at the college level -- which starts earlier there and which often introduces topics like a 2nd (or even 1st) foreign language that are considered high-school topics in my country. Still other countries teach such topics in special engineering schools that are decidedly not geared towards an academic education.
This is the heart of my argument about *university courses*.
The goal of a university course in *computer science* is an *academic education* in that field. I take technical knowledge about specific protocols, as cited by you, as a sensible precondition. At my university, people had the opportunity to take "tutorial classes" (without credit points) in advance of a university course to fill up or refill their knowledge holes on the high-school/college level.
To repeat: high-school education cannot be the task of a course that is meant to teach you about the theory, research, and practice of networking at an academic level -- similar to an analysis course at university level, which won't repeat the "curve discussion" that we did in high school. (Well, at least my math courses at my university didn't do so. They did expect us to know this.)
To be more specific: I would demand that a network course at university level give the students the ability to read and understand current research articles in reviewed academic journals like ACM TOIN (or the network-specific ones in ACM TOCS), and to follow research papers in respective ACM and IEEE proceedings. I would expect it to give graduates enough knowledge to start their own research in that area if they do their master's or Ph.D. thesis there.
So, no: I stand by my opinion that it is not the task of a university to teach *high-school topics* like DNS or TCP-as-a-protocol. This is the task of a school, maybe of a college, but not of a university.
(As the other persons who answered you before me have noted: for Computer Engineering that's a bit different. I specifically mentioned *computer science* courses in my post.)
Posted Mar 9, 2021 19:53 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
Unfortunately, here (in the UK) we've pretty much abolished all "schools that are decidedly not geared towards an academic education." They were called Polytechnics.
30 years on, I think we're finally realising that was a big, BIG, mistake. (And now we're making another - we're turning Universities into Polytechnics, and wondering why nobody has academic *skills* any more.)
Cheers,
Posted Mar 11, 2021 6:29 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (4 responses)
In Italy you can sign up for any university course having done any high school. In fact most people signing up for computer science will typically have gone to a "liceo scientifico" rather than a "tecnico industriale informatico", and will have more of a focus on mathematics than on network protocols.
No-credit mathematics courses are offered to bring people up to speed on mathematics, but you are absolutely not expected to know the entire content of "Computer Networks" by Andrew S. Tanenbaum before you can even apply.
I can't really understand how learning about networks or computer architecture or operating systems makes it impossible to understand scientific papers. Does it make sense to talk about distributed algorithms without knowing how it all works and why a certain set of assumptions is made for the proof?
I also have no idea what you mean by college vs. university.
Is image manipulation computer science? Can it be mentioned that jpg saves more green information because camera sensors are built this way, because we see green better? Or is that off topic and forbidden?
I guess you are limiting "computer science" to what you learnt in your university and are excluding anything else as not relevant.
Posted Mar 11, 2021 14:02 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (3 responses)
The distinction between Computer Engineering (which is the application of Computer Science to real world problems) and Computer Science (which is all about the theory) is common in many countries. Some places do mix the two together, and call the resulting mixture Computer Science, but that is by no means the common outcome.
In Computer Engineering, you will absolutely have to deal with practical things like tcpdump, the TCP handshake, DNS, Ethernet frame structure, and more.
In Computer Science, you're looking at algorithms and how computation can be done usefully with them. So, for example, you will make certain assumptions about a distributed world, and those assumptions will be backed either by some handwaving about how a Computer Engineer can build a real system that meets those assumptions or by referencing some work by a Computer Engineer that shows that these assumptions are valid given a system that has been built.
To give an example of how this separates out; a Computer Scientist will make some assumptions about how routers in a network could be made to work (messages passed to neighbouring routers, neighbours forward packets towards their destination, there is a time delay between sending a packet and its reception), and look at how you could guarantee that packets go through the network to their destination efficiently. If they pull in Dijkstra's SPF algorithm, they'll describe something that works a lot like OSPF, but without all the little practicalities that make OSPF work in real networks.
In contrast, a computer engineer will look at things like the reliability of multicast, practical packet formats, MTU limitations, and build you something that works like OSPF.
It sounds to me like you have been through a system that blends Computer Engineering with Computer Science, and calls it Computer Science; this does happen in many institutions, but is not the most common case.
Posted Mar 11, 2021 15:21 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Where I went to college [1], CompE was a specialized form of Electrical Engineering, focusing more on digital circuits and the logical building blocks that go into computer hardware. In other words, the physical layer.
Their Computer Science program was originally an offshoot of Mathematics, focused on computational theory and algorithms, although you could get quite a lot of real-world practicalities in the various specializations and electives -- and I recall one course that specifically covered the design principles behind TCP/IP, DNS, and so forth.
> It sounds to me like you have been through a system that blends Computer Engineering with Computer Science, and calls it Computer Science; this does happen in many institutions, but is not the most common case.
"The common case" is clearly not as common as one would think...
[1] Georgia Institute of Technology, widely considered to be a tier-1 STEM school in the US
Posted Mar 11, 2021 16:47 UTC (Thu)
by excors (subscriber, #95769)
[Link] (1 responses)
But also the computer engineer might not realise that some of the implementation details violate the assumptions made in the mathematical proofs of Dijkstra's algorithm, so in rare edge cases their implementation fails to find a correct routing solution, and they can't understand the research paper that explains the problem precisely with six pages of algebra.
I think that's a significant challenge for Computer Science as a field - there's often a lack of connection between theory and practice. CS isn't like pure maths which can often be considered valuable in its own right; it's more like theoretical physics in that it's only successful when it gets applied to the real world. It's fine if it takes decades of speculative theoretical work before finding an application, but there should be a reasonable expectation that it will eventually find one. An unimplementable computer science concept is like an untestable physics theory - it's not really CS/physics any more, it's just an inefficient way to do maths.
But a lot of CS in academia doesn't really understand real-world computer engineering, because it's had no exposure to environments outside a university, so it fails to identify real problems that need solving; and a lot of computer engineering doesn't understand or care much about academic CS, so it keeps discovering and inventing bad fixes for problems that *have* been solved properly.
It's good for people to specialise but I think it's important to have at least some people who are comfortable with both sides, to keep them connected and working productively on the same problems. There are many cases where that is happening - see e.g. decades of programming language research which was only implemented in niche languages, while real software was written in C, but that research is now being adopted by mainstream production-quality languages thanks to people working to bridge the gap - but I suspect it's far less common than it should be.
Posted Mar 11, 2021 21:03 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link]
Anyway, it turned out that I knew 2 people doing their theses in 2 different research groups, and it was basically the same topic. Low-power algorithms, if I remember. It was years ago.
Anyway, the 2 had not met each other and had no idea that in the same building there was another person working on the same project from a different angle.
Posted Feb 27, 2021 1:34 UTC (Sat)
by rgmoore (✭ supporter ✭, #75)
[Link] (2 responses)
I don't find this hard to believe at all. I've been running Fedora since Fedora Core 1 (and Red Hat before that), and I've never had to learn how to troubleshoot non-working DNS. I doubt I would have a lot of luck learning on a computer that couldn't connect to the network somehow so I could look for instructions. More to the point, I think the attitude that most Linux users are experts, so there's no reason to make a system that's easy for novices, is foolish. A system with sensible default behavior may be most important for novices, but it's helpful for everyone.
Posted Feb 28, 2021 8:37 UTC (Sun)
by jond (subscriber, #37669)
[Link]
Posted Mar 8, 2021 13:57 UTC (Mon)
by LtWorf (subscriber, #124958)
[Link]
All other broken DHCP configurations will still make you unable to connect to anything; the default DNS only prevents one of thousands of ways to break it.
At this point, is it worth the privacy implications for something that already almost never happens, and that, when it does happen, breaks networking on every OS?
Don't get sidetracked about technical vs non technical.
Posted Feb 26, 2021 0:24 UTC (Fri)
by jkingweb (subscriber, #113039)
[Link] (1 responses)
Does Windows have similar built-in fallbacks? Does macOS? As far as I know they don't. Chrome OS and Android might, I'll grant.
Posted Feb 26, 2021 4:05 UTC (Fri)
by ttuttle (subscriber, #51118)
[Link]
Posted Feb 26, 2021 15:19 UTC (Fri)
by ju3Ceemi (subscriber, #102464)
[Link]
Well, does Windows have such a fallback DNS system?
If the network is broken, both Fedora and Windows will be broken.
Having some fallback is useless at best, in any case.
Posted Feb 25, 2021 19:26 UTC (Thu)
by ezuck (subscriber, #59641)
[Link] (1 responses)
Posted Feb 25, 2021 23:04 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Cheers,
Posted Feb 25, 2021 17:13 UTC (Thu)
by warrax (subscriber, #103205)
[Link] (1 responses)
That is absolutely not true when it comes to security. Postel's law is widely regarded (at least by security experts) as misguided in a hostile world like today's Internet. (It was probably a good idea at the time, but no longer. Of course, in *some* cases it might still be a good idea.)
Posted Feb 25, 2021 17:42 UTC (Thu)
by jkingweb (subscriber, #113039)
[Link]
Posted Feb 25, 2021 17:23 UTC (Thu)
by excors (subscriber, #95769)
[Link] (13 responses)
E.g. HTML4 parsing (as implemented in the real world) had the tolerance but not the specification. Early browsers made up their own rules for error handling in an attempt at DWIM, web sites started accidentally relying on those rules, then new browsers would break on those sites and (in order to remain competitive and stop users defecting) had to reverse-engineer and emulate the old browsers' behaviour. Similar for HTTP headers and other parts of the web platform. That led to many interoperability issues, and to security and privacy issues (because nobody could understand how the browsers actually behaved, so they couldn't work out a coherent security model and verify whether the browsers followed it).
XHTML didn't have the tolerance - browsers would completely refuse to display an ill-formed document. But a vanishingly small minority of sites actually used XHTML properly (virtually everyone sent it with Content-Type: text/html which meant it got parsed as HTML4 instead, relying on the browsers' error handling to cope with e.g. "<br/>" which is not valid HTML4). And almost every dynamically-generated site that used XHTML properly (as application/xhtml+xml) could be broken by e.g. users posting a comment containing a U+FFFF, which is not allowed in XML but the sites didn't realise and would happily print it out again, thus completely breaking the page for every user (and, in some cases, also breaking the admin pages that were needed to delete the offending comment).
I suspect the main reason that browsers didn't implement a tolerant XHTML parser (which would make the browser significantly more usable on many sites, giving it a competitive advantage and attracting more users) is that nobody used XHTML so it wasn't worth the bother. It's not a good case study for the benefits of rejecting invalid inputs.
HTML5 precisely specified the HTML4 error-handling behaviour. You can pass /dev/urandom into any browser and it should get parsed the same way, and that way is based on the original browsers' DWIM behaviour. There are test suites to verify that, and browser developers care about following the specification, and there's little competitive advantage in violating the specification and DWIMing differently, so the implementations converged and that has greatly increased interoperability. And then it became possible to analyse security/privacy issues by looking at the specification (which is much easier than untangling the logic from source code) and verifying that it follows some proposed security model - it doesn't automatically solve the issues but it makes it possible to reason about them and begin to address them comprehensively. A similar process has happened with HTTP etc.
(Then the web added a zillion more features, and the sheer quantity and complexity means that interoperability is very hard again. But at least it's been largely solved in specific areas.)
I think that lesson applies to most protocol-like technologies for large-scale communication between independent implementations. It doesn't apply to e.g. programming languages, where the person who writes the invalid input can immediately see a fatal error message and fix it themselves. It may not apply to configuration files, since interoperability isn't particularly relevant there, though there's possibly a similar dynamic between the person providing the invalid input (misconfiguring the DHCP/DNS server or whatever) and the person who just wants to get their work done and who doesn't have a good way to convince the first person to fix the issue and will eventually switch to a different distro that doesn't keep getting in their way.
Posted Feb 25, 2021 17:54 UTC (Thu)
by Sesse (subscriber, #53779)
[Link] (5 responses)
“Should” is the word. I've read HTML5 parsers full of comments like “the spec says this, but Firefox does it differently, so we have to oblige”.
Posted Feb 26, 2021 6:49 UTC (Fri)
by roc (subscriber, #30627)
[Link] (4 responses)
The guy who owns the Gecko HTML5 parser is VERY diligent about avoiding this sort of thing. I can let him know.
Posted Feb 26, 2021 7:29 UTC (Fri)
by Sesse (subscriber, #53779)
[Link] (1 responses)
Posted Feb 26, 2021 11:10 UTC (Fri)
by roc (subscriber, #30627)
[Link]
Posted Feb 26, 2021 14:25 UTC (Fri)
by jkingweb (subscriber, #113039)
[Link] (1 responses)
Maybe Sesse was referring to code which predated the parsing test suite, however.
Posted Feb 26, 2021 15:57 UTC (Fri)
by excors (subscriber, #95769)
[Link]
I'm sure it wasn't perfect and there were still bugs, and probably things have changed a lot since I last looked at it seriously (a worryingly large number of years ago), but my impression at the time was that it was very successful at achieving interoperability across all the browsers and several non-browser parser implementations. (And it was enormously more successful than HTML4's approach of "here's the specification of a valid document, and how browsers should handle it. Huh, invalid document? Why would anyone do that? Just fix your document" and XHTML's approach of "Invalid document? YELLOW SCREEN OF DEATH".)
(Of course parsing is only a tiny part of the web platform, and probably one of the easiest parts for this kind of comprehensive specification and testing because it's a nice self-contained platform-independent linear transformation from bytes to a tree of elements (ignoring fiddly bits like document.write). But similar principles were applied with some success to other parts of the platform too, and I think the lesson is that it's a significant improvement over Postel's law.)
Posted Feb 25, 2021 18:49 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (3 responses)
To clarify: That technically *is* valid HTML4, it's just the wrong HTML4. Formally, it's equivalent to <br>> (i.e. the tag ends at the slash, as part of a more general <foo/bar/ syntax which is allegedly "easier" than writing <foo>bar</foo>), but I think approximately three people in the entire history of the universe have actually wanted it to be interpreted that way, so all the browsers cheated and ignored the slash. Then XHTML came along and said "actually, you need the slash" and made everything even worse (because as you say, everyone was serving XHTML with text/html and it was then getting parsed as HTML4).
Then HTML5 came along and had to fix this mess. So they decided to specify that the slash is optional and has no semantic meaning (i.e. <br> and <br/> are exactly equivalent), and while they were going to the trouble of doing that, they also specified that </br> is illegal (the tag is always empty, so no need to close it), as is <p/> (arbitrary self-closing tags are not supported). Both of those misfeatures had been legal in XHTML, but approximately nobody had been using them, so it was reasonably safe to just yank them from the spec before they could turn into an attractive nuisance.
Posted Feb 25, 2021 22:31 UTC (Thu)
by pbonzini (subscriber, #60935)
[Link] (2 responses)
Posted Feb 26, 2021 0:42 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
If you do include the slash, it just strips it out and emits a parse error, so you would end up with an unclosed <td>. But as discussed in the previous paragraph, that <td> will probably close itself anyway, and so it hardly matters.
Incidentally, you can also do this with <p>, meaning you can write prose like this:
    <p>One paragraph
    <p>Another paragraph, with the first one's end tag implied
This is also considered well-formed HTML5.
Posted Feb 26, 2021 1:20 UTC (Fri)
by jkingweb (subscriber, #113039)
[Link]
This has been a design feature of HTML from its earliest days. Many end tags are optional, as are some start tags, including those for html, body, and tbody.
That last is perhaps lesser-known: in an HTML (but not XHTML) document, <tr> is never a child of <table>; there is always an implicit tbody (or explicit thead or tfoot) element in between.
Posted Feb 25, 2021 23:14 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Many moons ago, in the days back when ISPs actually knew what they were doing, our ISP upgraded their email servers, and they started sending "250 EHLO". Our MS Mail Gateway threw a hissy fit, and the PHB demanded that our ISP "fix" their sendmail, despite MS Mail completely ignoring the SMTP spec, namely that you MUST NOT quit in response to a command you don't recognise.
Cheers,
Posted Feb 26, 2021 6:51 UTC (Fri)
by roc (subscriber, #30627)
[Link]
Posted Mar 6, 2021 17:58 UTC (Sat)
by anton (subscriber, #25547)
[Link]
Posted Mar 4, 2021 13:34 UTC (Thu)
by davecb (subscriber, #1574)
[Link]
Logically, Postel's advice should be considered as part of an iterative process for robust software development, not as a magic trick that is necessary, sufficient and always correct.
First, interpret the specification narrowly when writing and broadly when reading
Posted Feb 25, 2021 16:19 UTC (Thu)
by clugstj (subscriber, #4020)
[Link] (4 responses)
Posted Feb 25, 2021 16:37 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
Why? There is nuance allowed. You can permit it when it is strictly required and limit usage that doesn't serve user needs.
Posted Feb 25, 2021 18:47 UTC (Thu)
by clugstj (subscriber, #4020)
[Link] (1 responses)
Posted Feb 25, 2021 20:51 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link]
https://src.fedoraproject.org/rpms/systemd/c/14b2fafb3688...
Posted Mar 5, 2021 18:47 UTC (Fri)
by miquels (guest, #59247)
[Link]
The GDPR does not limit transmission of IP addresses. It does, however, require that whoever provides this service does not use the information gathered for any other purpose than providing the service. Do Google and/or Cloudflare have a GDPR statement somewhere? If so, fine. No problem. If not, why not?
I think that in general, in the EU it's better to use your ISPs DNS servers. In the US, it's probably better to use Google / Cloudflare DNS as your own ISP is probably worse, both quality-wise and privacy-wise.
Mike.
Posted Feb 25, 2021 16:26 UTC (Thu)
by jkingweb (subscriber, #113039)
[Link]
Posted Feb 25, 2021 16:27 UTC (Thu)
by dskoll (subscriber, #1630)
[Link] (19 responses)
Compiled-in fallback DNS servers seem like a terrible idea to me. We all know that Google and Cloudflare are much too big to fail, but what if the unthinkable happens and 8.8.8.8 or 1.1.1.1 end up being owned by someone whose motives might not be the purest, unlike those two companies?
If someone wants fallback DNS servers, it's easy enough to configure them when provisioning a new installation. Or heck, just implement a recursive resolver in systemd. It has the rest of the kitchen, so why not add the sink?
Posted Feb 25, 2021 17:13 UTC (Thu)
by jafd (subscriber, #129642)
[Link] (9 responses)
Posted Feb 25, 2021 20:14 UTC (Thu)
by zdzichu (subscriber, #17118)
[Link] (4 responses)
Posted Feb 25, 2021 20:26 UTC (Thu)
by corbet (editor, #1)
[Link] (3 responses)
Thank you.
Posted Feb 26, 2021 4:07 UTC (Fri)
by JoeBuck (subscriber, #2330)
[Link] (2 responses)
I like Ars Technica's system, it seems to produce high quality discussions most of the time. There are other good ones.
Posted Feb 26, 2021 23:58 UTC (Fri)
by jrn (subscriber, #64214)
[Link] (1 responses)
It may be that additional moderation features are also needed (though I've been coping okay with the killfile equivalent) but I don't want to see this other tool for good go away.
Posted Mar 5, 2021 22:40 UTC (Fri)
by flussence (guest, #85566)
[Link]
We don't have an endemic unchecked plague of trolls here partly because it doesn't present a UI up front that sets expectations that they're part of the system. I can guarantee the second something with countable numbers were to be added, there'd be crowds trying to gamify it in all directions — it's already bad enough when I see a large user ID or reply count and brace for the worst.
(Here's where I'd apologise for veering so far off topic, but I think arguing over software-political DNS hijacking is a horse that's already been flogged into dust.)
Posted Feb 25, 2021 21:16 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link]
My conclusion is that the IP address in the headers is not relevant to the attack vector which you describe (hostile network/router, active MitM attacks, etc.), except perhaps for cases where an attacker can reroute by IP address but not by port. This should be rare, but given how frequently we see ridiculous BGP leaking/hijacking, I wouldn't put it past them...
Posted Feb 26, 2021 10:22 UTC (Fri)
by smurf (subscriber, #17840)
[Link] (2 responses)
Posted Mar 5, 2021 12:09 UTC (Fri)
by kpfleming (subscriber, #23250)
[Link] (1 responses)
Posted Mar 5, 2021 12:13 UTC (Fri)
by zdzichu (subscriber, #17118)
[Link]
Posted Feb 25, 2021 22:51 UTC (Thu)
by patrakov (subscriber, #97174)
[Link] (8 responses)
Posted Feb 26, 2021 7:25 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link] (7 responses)
Apparently everybody in this thread pays a lot of attention to their DNS configuration, so I'm sure everybody here is using TLS, right?
The British government's old white paper (before it was repeatedly back-burnered and effectively scrapped) described DNS-based filtering censorship as the practical way forward. I remember reading it at the same time IETF 101 London happened; I remember because of the irony.
At that point what is now "Encrypted Client Hello" was only a napkin sketch, but DPRIVE and TLS 1.3 were essentially done. DNS-based filtering was thus a dead man walking. Fast forward three years, and it's irrelevant. If your teenager wants to read Oglaf then an ISP filter won't stop them.
Posted Feb 26, 2021 13:34 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (6 responses)
I'm intrigued as to how you run DNS over UDP port 53 with TLS. Please enlighten...
Sure, there's DNSSEC, but it's not widely used at all.
Posted Feb 26, 2021 17:00 UTC (Fri)
by johannbg (guest, #65743)
[Link] (5 responses)
As far as I can tell DNSSEC usage is skyrocketing...
Posted Feb 26, 2021 18:02 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 26, 2021 19:14 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (3 responses)
What percentage of domains (not TLDs, actual registered domains) use DNSSEC? I suspect it's under 5%. A quick check shows no DS records for biggies like google.com, microsoft.com, facebook.com, amazon.com, netflix.com, apple.com, or oracle.com. There is one for whitehouse.gov, though, which is good.
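For anyone who wants to repeat that quick check (the results will, of course, change over time):

    # An empty answer means no DS record is published for the domain:
    dig +short DS google.com
    dig +short DS whitehouse.gov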
Posted Feb 26, 2021 19:56 UTC (Fri)
by johannbg (guest, #65743)
[Link] (2 responses)
Given the rate at which this is being adopted, now that cloud providers offer it, I'm pretty sure Microsoft will have completed their adoption, at least for the Office 365 platform, by the end of this year.
NIST provides statistics on IPv6 and DNSSEC adoption within the US government here [1].
1. https://fedv6-deployment.antd.nist.gov/
Posted Feb 28, 2021 19:01 UTC (Sun)
by dskoll (subscriber, #1630)
[Link] (1 responses)
Thanks for the link. As this page shows, DNSSEC adoption is very limited.
Posted Feb 28, 2021 19:59 UTC (Sun)
by johannbg (guest, #65743)
[Link]
Posted Feb 25, 2021 17:09 UTC (Thu)
by madscientist (subscriber, #16861)
[Link] (11 responses)
What's needed, instead, is better support for handling DNS problems!!! At the moment DNS is so much in the "background" that people don't even realize that it's there. This is great when it works, but DNS is also one of the more complicated things we have especially these days where people are using VPN regularly; maybe even multiple VPNs simultaneously!
We need to make it obvious what is wrong and provide easy-to-understand ways to fix the problem. And clearly, that has to be part of the base installation since as Lennart rightly says, without DNS you can't get help to fix DNS. Today there's no help available locally: THAT is the problem.
We could do better at the command line, for sure, but more importantly we need to do better at the desktop. I don't know if this falls into NetworkManager, or some separate GUI utility, but DNS troubleshooting and problem resolution must be made straightforward, and it must be installed by default. When the network is first set up, or when a user first logs in, it should be standard to check DNS connectivity and provide some kind of troubleshooting tool if it doesn't work.
Maybe we should also have the tool invoked automatically if a DNS issue is detected (obviously exactly what "is detected" means needs to be carefully considered, since people fat-finger hostnames all the time).
It cannot be that hard to create a troubleshooting tool that clearly shows which DNS servers you have configured, whether they are responding or not, and asks the user to choose one of a few different options to resolve it. It should be possible to easily explain the DNS server info: if the server IP is on the local LAN you know that's a DNS server being provided by your local router for example. If you have multiple routes (for VPN split tunneling) you can map the DNS server to one of them, and show which ones are associated with which VPN. One of the options for a solution surely would be "use default public DNS servers". And when that option is chosen the ramifications of that MUST be made clear in simple English, so people understand that when they choose this option they won't be able to see their local hosts, or if they're using VPN to get to work they won't be able to see their internal corporate hosts.
There's no question that DNS issues are some of the most frustrating, opaque, and obscure kinds of network problems we have today. I work with a group of amazingly smart software developers and many times they have no idea what is going on or how to fix it when DNS is problematic.
But, making DNS even more magic and inscrutable than it already is is NOT the way forward. Instead we need to be raising DNS issues up to the user and giving them the information and tools they need to fix it.
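As a very rough sketch of the check such a tool could start from (it assumes dig is installed; the parsing of resolvectl output is fragile and purely illustrative):

    #!/bin/sh
    # Test each DNS server systemd-resolved knows about by querying
    # it directly, bypassing the local stub resolver.
    for srv in $(resolvectl status | awk '/DNS Servers:/ {print $NF}'); do
        if dig +time=2 +tries=1 +short @"$srv" example.com >/dev/null 2>&1; then
            echo "$srv: responding"
        else
            echo "$srv: NOT responding"
        fi
    done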
Posted Feb 25, 2021 17:26 UTC (Thu)
by IanKelling (subscriber, #89418)
[Link]
Simple solution to that problem: preinstall tor browser. Use it if the system dns breaks.
Posted Feb 25, 2021 21:49 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (9 responses)
To the best of my understanding, most of those things use mDNS, or mDNS-like-technologies, which is completely independent of "real" DNS.
(Basically, devices regularly send broadcast packets that say "Hi, my name is..." and everything that cares listens for those packets. It's kind of horrible, but it has the advantage of nearly always working, no matter how incompetently the network is administered. It does not involve the use of real DNS on port 53 at all, and can't be broken by an invalid DNS configuration or other such nonsense.)
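As an aside, that announce-and-listen traffic can be inspected from the shell; a sketch assuming the avahi-utils package is installed and using a made-up printer name:

    # Browse everything announcing itself on the local network:
    avahi-browse -a -t
    # Resolve a single mDNS name to an address:
    avahi-resolve -n printer.local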
> It cannot be that hard to create a troubleshooting tool that clearly shows which DNS servers you have configured, whether they are responding or not, and asks the user to choose one of a few different options to resolve it. It should be possible to easily explain the DNS server info: if the server IP is on the local LAN you know that's a DNS server being provided by your local router for example. If you have multiple routes (for VPN split tunneling) you can map the DNS server to one of them, and show which ones are associated with which VPN. One of the options for a solution surely would be "use default public DNS servers". And when that option is chosen the ramifications of that MUST be made clear in simple English, so people understand that when they choose this option they won't be able to see their local hosts, or if they're using VPN to get to work they won't be able to see their internal corporate hosts.
The thing I think you are missing is that non-technical users don't want to understand the problem at all. They want to go on Facebook. That's all they ever wanted to do. They don't want to learn something, they don't want to figure out why it's broken, they just want to go on Facebook.
They will dismiss the wizard as soon as it comes up, and then they will find that they still can't go on Facebook. And then they will find my number in their phone, and call me, and I will have to spend 2+ hours first figuring out what is broken (they won't even mention the wizard, so I'm debugging from first principles here) and then explaining to them that they need to relaunch the wizard and here's how to do that.
And at the end of the whole charade, I will probably tell them to click the "use default public DNS servers" button anyway, because in practice, it will probably get them onto Facebook (and out of my hair) faster.
Posted Feb 25, 2021 22:25 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Exactly. And for those non-technical users, the "$bigtech will know what web sites you're looking at" argument is rather ludicrous when $bigtech is the entire point of them "getting online"
Posted Feb 25, 2021 23:27 UTC (Thu)
by nivedita76 (subscriber, #121790)
[Link] (1 responses)
Posted Feb 26, 2021 0:13 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link]
Posted Feb 26, 2021 7:55 UTC (Fri)
by madscientist (subscriber, #16861)
[Link] (5 responses)
I'm quite familiar with zeroconf / mDNS / Rendezvous. I worked writing software for routers and switches for the first half of my career. However, they are not used everywhere for everything. In fact I just replaced my home router last month and had this exact issue, where systems on my home network were not being seen / not available due to DNS problems.
> The thing I think you are missing is that non-technical users don't want to understand the problem at all. They want to go on Facebook. That's all they ever wanted to do. They don't want to learn something, they don't want to figure out why it's broken, they just want to go on Facebook.
You're missing my point. I surely understand that people want things to just work without having to mess with it. _I_ want things to just work. The question is what the best thing is to do when they DON'T work.
If it were the case that there was a simple solution with no downsides in behavior then selecting that behavior by default without checking with the user would be a good option. The problem is that choosing 8.8.8.8 makes some things work but not other things, and by using it automatically you make it even harder to figure out what is really wrong.
Please note I am NOT arguing about privacy here. I'm talking about correct behavior.
If the system magically chooses a partly-successful attempt to fix things without the user even knowing there's an issue, now you have a system that mostly works, but not completely, and figuring out why it doesn't completely work will be much much harder than it was otherwise. If someone comes to you and says "whatever host I try to reach I get an error 'hostname not found'", you pretty much know what's going on. If someone says "I can't reach the remote storage drive attached to my router from Linux but it works from my Mac", or "I can't talk to my printer from my Linux system, but it works fine from my Windows system", etc., well, it's going to take you quite a while to figure out that the reason for that is DNS related.
Automatically choosing a default is better than what we have today, which is that you're totally dead in the water and you're basically having to resort to browsing FAQs on your phone. But a much better solution than choosing a default would be to provide a simple troubleshooting facility.
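Until such a troubleshooting facility exists, the manual version of that triage looks something like this (the server address is a placeholder):

    resolvectl status          # what is configured, and on which link?
    resolvectl query lwn.net   # does resolution work through resolved?
    dig @192.0.2.53 lwn.net    # does a specific server answer directly?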
Posted Feb 28, 2021 7:19 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link] (4 responses)
Really? You have devices on your home network that are, completely automatically and with no human intervention, registering .home.arpa addresses, and the router has a DNS server which is letting them do that? That's amazing. I thought[1] this sort of thing was still just an IETF pipe dream. I hadn't realized people were actually going around implementing it.
[1]: I formed this impression by skimming RFC 7368, until I eventually realized that it is not a spec at all, but just a vague listing of the IETF's general aspirations for home networking on IPv6.
Posted Feb 28, 2021 13:33 UTC (Sun)
by madscientist (subscriber, #16861)
[Link] (3 responses)
I said that local systems could not see other systems on the local network because DNS was misconfigured to not use the local server, exactly as this proposal suggests should become the fallback behavior.
Posted Feb 28, 2021 18:25 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
My assumption is that we start from the premise of "make it easy for non-technical users, and possible to configure for technical users." But perhaps you have a different set of priorities and if so, I don't think we have any common ground to debate.
Posted Mar 1, 2021 15:16 UTC (Mon)
by madscientist (subscriber, #16861)
[Link] (1 responses)
> I believe this proposal is "use the DHCP DNS if one is provided, and only fall back to the public servers if DHCP gives us nothing usable."
That's not clear to me: it would be interesting to know exactly WHAT the removed behavior is. The article uses terms like "fallback mechanism" and "last resort", but without actually defining what these mean. Does that mean that if there's no DNS server _configured_ then the fallback is used, so if you have configured servers but they are wrong or don't work you're back to no DNS? Or does it mean if there's no DNS server _available_ (either no configured servers OR none of the configured servers respond to DNS requests) the fallback is used? If the latter, when is this checked?
Either way, things can still go wrong.
> My assumption is that we start from the premise of "make it easy for non-technical users, and possible to configure for technical users."
The premise we start from is "DNS is not working". If DNS does not work, because we don't get the right configuration via DHCP or for some other reason, what is the best thing to do?
Posted Mar 5, 2021 10:12 UTC (Fri) by cortana (subscriber, #24596)
> Does that mean that if there's no DNS server _configured_ then the fallback is used, so if you have configured servers but they are wrong or don't work you're back to no DNS?

Yes, see resolved.conf(5): the FallbackDNS= servers are used only if no other DNS server information is known, so configured-but-broken servers mean no fallback at all. See also systemd-resolved(8) for a general description of the query routing logic it applies to unicast DNS traffic.
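For those who want to turn the fallback back on (or make sure it is off), the knob is the FallbackDNS= setting in /etc/systemd/resolved.conf; here is a minimal sketch, with the addresses chosen purely as examples:

    [Resolve]
    # Used only when no servers are configured statically or learned
    # from the network. An empty value disables the compiled-in
    # fallback list, which is roughly what Fedora's change did.
    FallbackDNS=1.1.1.1 8.8.8.8

A "systemctl restart systemd-resolved" afterward makes the change take effect.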
Posted Feb 26, 2021 21:43 UTC (Fri) by thoughtpolice (subscriber, #87455)
Tor can only anonymize and keep you private when you're actively working with it under specific assumptions. Throwing users into the network and making some trivial claim to privacy isn't just low-effort nerd cop-out red-herring BS (yes, that's exactly what it is); it's actively misleading to the user about what they can expect.
Posted Feb 25, 2021 20:24 UTC (Thu) by logang (subscriber, #127618)
I disagree with this statement. It's easy for people to understand the risks and privacy implications of using free internet at a cafe. Most of my non-technical friends understand this is a concern and will adjust their behaviour accordingly, even if they don't know the specific risks.
There's a much bigger privacy implication in sending all of your DNS queries (encrypted or otherwise) to a single company, so that it knows every website you go to, whether at home or at the cafe. Add the fact that it can correlate this with your account information when you visit that company's services, and it becomes even more egregious: they know exactly who you are and every website you have ever visited. And to make matters worse, the non-technical person may never know they are giving up all this information, because some random person misconfigured something and their system silently fell back to using a third party's services without their knowledge.
The focus on DoT misses the point entirely. Yes, it would be nice if more servers provided this extra security, but it's beside the privacy issue. I'd much rather have my ISP hold this information (it largely has it anyway, since it sits between me and the internet) than give it to a big tech company; and I'm more willing to risk having someone intercept the data between me and my ISP than to give away all my information freely to a single party.
But, yes, a fallback is fine *if* you complain loudly, so the user knows that something bad has happened and can perhaps seek help. The message needs to be visible (not hidden in some log somewhere) and needs to say which third party is being given what information.
Posted Feb 25, 2021 21:38 UTC (Thu) by pizza (subscriber, #46)
This argument falls flat once you consider that most folks already send "all of their DNS queries" to "a single company" -- namely their home ISP -- and the historical record is full of examples of ISPs (and especially hotspot operators) being much less trustworthy (and less reliable) than the likes of Google or Cloudflare.
This whole discussion comes down to a question of "fail closed" versus "fail open" -- or, alternatively, two points on the usability-versus-security curve. Which one is appropriate is _entirely_ context-dependent. And, to be honest, for those scenarios where "fail closed" is appropriate, this default is just one of many things that need changing for that particular deployment environment. For most everyone/everything else, having a sane fallback is a GoodThing(tm), because the alternative is not "working" at all.
> But, yes, a fall back is fine *if* you complain loudly so the user can know that something bad has happened and can perhaps seek help.
Sure, though it's not entirely clear what mechanism could be used to do this complaining.
Posted Feb 25, 2021 23:56 UTC (Thu) by gdt (subscriber, #6284)
It makes sense that a law about privacy -- such as the EU's GDPR -- becomes involved and makes that approval a firm requirement rather than just a good practice.
From a practical point of view, DNS is a useful leverage point to detect and control botnets and some sources of malware. Again, flipping out of that security environment into another with no approval from the user is not a great idea. And again, that applies both to moving from the ISP's servers to Google's, and from Google's servers to the ISP's.
Posted Feb 26, 2021 0:34 UTC (Fri) by pizza (subscriber, #46)
Sure, and those laws say "the provider can collect whatever they want and do whatever they want with it". As does the contract, incidentally.
> It makes sense that a law about privacy -- such as the EU's GDPR -- becomes involved and makes that approval that a firm requirement rather than just a good practice.
So why limit this to your "upstream" DNS resolver? What about authoritative DNS server operators, TLD operators, and DNS root operators? (Since my household utilizes a private resolving DNS server, my "private" IP address and what I'm trying to resolve get leaked to all of them, and there is *nothing* I can do short of not using DNS at all. Though I shouldn't have to point out that these DNS lookups are triggered by my explicit action; isn't that actually my giving informed consent that my DNS lookups will have to leak out, by design?)
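That fan-out is easy to see with dig's trace mode, which walks the same path a private recursive resolver does (the name is just an example, and the output is elided):

    $ dig +trace www.example.com
    ; the root servers answer for "."
    ; the gTLD servers answer for "com."
    ; example.com's authoritative servers answer for www.example.com.

Each hop sees the querying resolver's address and the name being resolved, which is exactly the leak being described.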
Posted Feb 26, 2021 3:30 UTC (Fri) by wahern (subscriber, #37304)
Some jurisdictions do prohibit ISPs from selling user data. And some ISPs are genuinely good netizens. People in these situations (a not insubstantial number, even in the U.S.) accidentally failing over to Google or Cloudflare are objectively in a *worse* situation.
Furthermore, small choices that push the entire Internet ecosystem into reliance on Google, Cloudflare, etc. mean it becomes increasingly difficult to significantly improve the situation for everyone. It's not politically difficult (at least not in many jurisdictions outside the U.S.) to justify restrictions on ISPs collecting and leveraging personal data. But try to do that for Google and Cloudflare once a majority of the internet is relying on them to provide "free" DNS service, and you'll find that you've burned all your bridges (port 53 is blocked everywhere except to Google and Cloudflare) and no longer have any real leverage. They can just take their ball and go home, and then your citizens or clients will complain, "what use is privacy if I can't perform the activities I was interested in at all?"
Look, it's a difficult problem juggling these competing demands--convenience vs. privacy, security, etc. No doubt about it. But there's a difference between taking a path when we're not quite sure where it leads, and taking a path that very clearly leads to an undesirable end, even if it's slightly better than the status quo. Anyhow, the latter path isn't ever going away: Google and Cloudflare want you to use their DNS services because it not only makes them more money, it promises even greater dividends down the road as more people become reliant on them. That's true today and it will remain true for the foreseeable future.
Anyhow, if convenience is your primary objective, the solution is easy: just run a local recursing resolver. NLnet Labs' unbound is one of the most popular local resolvers in FOSS systems (perhaps second only to systemd-resolved). Its reputation is unimpeachable, it supports all the latest standards to a much greater degree than systemd-resolved (including DoT and DoH, client- *and* server-side), and it's a first-class recursing, caching resolver. Moreover, it's composed of a collection of well-documented APIs, meaning it's relatively easy to stitch together your own local resolver that transparently performs whatever fancy fallback magic you could ever want. OpenBSD does this: they provide unbound in the default install, but also provide their own bespoke "road warrior" resolver built on the unbound libraries. systemd could have decided to use these libraries if they had wanted to; it still can, in fact.
Conflation of the convenience and privacy issues is happening largely because of deficiencies in systemd-resolved itself. Only if you can't reliably perform recursive queries do you need to resort to choosing Google or Cloudflare as the fallback. And even then the options aren't mutually exclusive--you could first try the DHCP-declared server; if that doesn't work try recursing yourself; if that doesn't work fall back to Google over DoT/DoH. And to reiterate, libunbound puts all that within reach with a fraction of the effort that has gone into writing the systemd-resolved stack.[1]
[1] Not that I think the systemd-resolved stack is bad. I had no qualms relying on it to proxy upstream (to the DHCP-declared servers) for our clustering architecture.
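For the curious, here is a rough shell sketch of that tiered fallback, minus the recurse-yourself middle step (which would need a real recursive resolver such as unbound); the DHCP_DNS variable and the query name are illustrative:

    #!/bin/sh
    # Try the DHCP-provided server first; only if it is absent or
    # unresponsive, move on to the well-known public resolvers.
    for server in "$DHCP_DNS" 8.8.8.8 1.1.1.1; do
        [ -n "$server" ] || continue
        if addr=$(dig +short +time=2 +tries=1 @"$server" example.com A) &&
           [ -n "$addr" ]; then
            echo "resolved via $server: $addr"
            break
        fi
    done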
Posted Feb 26, 2021 4:36 UTC (Fri) by pizza (subscriber, #46)
Sure, some do. Many more don't.
Meanwhile, Google (and, for that matter, Cloudflare) has never "sold user data".
(Now, Google sells _advertising_ that uses that data to improve targeting. But so have my last two ISPs.)
And your ISP has some pretty detailed user activity data that many jurisdictions mandate be collected and retained, for "law enforcement" purposes. This sort of thing was a prime reason for the https-everywhere push. (Which led to even more intrusive middleboxes, which led browsers to pin certificates to catch data interception, and so forth...)
> Google and Cloudflare want you to use their DNS services because it not only makes them more money, it promises even greater dividends down the road as more people become reliant on them. That's true today and it will remain true for the foreseeable future.
...And also because plenty of middlemen routinely muck with end-users' DNS queries (and anything else that can be intercepted), leading to all manner of shenanigans, from the relatively benign (data collection) through the somewhat skeevy (injecting advertising) to the outright hostile (MITM attacks, credential harvesting).
(TBH I'd be quite surprised if Google and/or Cloudflare make any money off of their public DNS resolvers, much less enough to offset the cost of providing and maintaining the service...)
> Anyhow, if convenience is your primary objective, the solution is easy: just run a local recursing resolver.
Um, how is installing and appropriately configuring an additional software package "convenient" or "easy"?
If "convenience" is truly the primary objective, then systemd-resolved's upstream behaviour is ideal, as it will use whatever your ISP/etc hands you and only fall back to well-known public services if what you were handed doesn't work (or is nonexistent) for whatever reason.
(And I say that as someone who has private recursive resolvers set up for all of the networks I'm responsible for, and who has long made sure that "internal" DNS zones are publicly resolvable, due to corporate VPN clients overriding local resolver settings...)
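As an aside, checking what resolved is actually doing is a one-liner; resolvectl status reports the servers in use per link and, when one is set, the global fallback list (output abridged and illustrative; fields vary by version):

    $ resolvectl status
    Global
        Fallback DNS Servers: 1.1.1.1 8.8.8.8
    Link 2 (eth0)
        Current DNS Server: 192.168.1.1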
Posted Feb 26, 2021 10:40 UTC (Fri) by smurf (subscriber, #17840)
The systems running the public DNS resolvers are there anyway, they provide search / content acceleration. Data gained from them helps identify malicious users (if suddenly 100k random queries for random123.s0me0bscured0ma1n.com show up, something fishy may be going on) which helps both secure and/or run their other services. So I strongly suspect that their effect is net positive.
Posted Feb 27, 2021 6:40 UTC (Sat) by tialaramex (subscriber, #21167)
For now this aligns their interests and mine very well. In principle the Network might some day transition to a successor technology, and we could imagine Google and Cloudflare, if they still existed when that happens, fighting that change, like a 1990s telco (profiting from the previous iteration of the Network, the global PSTN) trying to stop the Internet rather than going with the flow. But if that happens it will be in the distant future, and I expect to be long dead.
Anyway, under this rationale offering public DNS unbreaks the Internet for some non-trivial fraction of users, which in turn drives up your profitability.
For Cloudflare in particular there's an extra bonus: the 1.1.1.1 server gets to choose which of several valid answers to give in response to queries, and so it can choose answers for Cloudflare services that reduce the RTT between origin and server, since it knows where they both are.
Historically there was an effort to help other servers do this in DNS, by telling them the first few octets of the asking client's IP address: EDNS Client Subnet. Unfortunately, of course, as we see in this thread, people consider their IP address private information and don't want it leaked. So Cloudflare does not use EDNS Client Subnet at all.
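dig can show what ECS looks like from the client side; this attaches only a /24 documentation prefix, not the full address (server and name are illustrative):

    $ dig +subnet=203.0.113.0/24 @8.8.8.8 example.com A
    ; a server that honors ECS echoes a CLIENT-SUBNET option in the
    ; OPT pseudosection of its reply; 1.1.1.1 simply ignores it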
Posted Feb 26, 2021 6:39 UTC (Fri) by tialaramex (subscriber, #21167)
Nope. The user is just a user. Perhaps somebody in their home has such a contract, and perhaps it's with a "home ISP", and perhaps that home ISP operates a DNS server upstream which somebody in that home actually chose to use, but likely not. In either case I can't see why you'd imagine this somehow creates a relationship between a user and an ISP, regulated by law, when the two don't even have or want a relationship.
I don't for one moment buy the theory that somehow the GDPR means the user needs to explicitly configure a protocol they've never heard of because of some tortured logic about IP addresses as identifiers. If your concern is that operators of big public DNS servers like 8.8.8.8 and 1.1.1.1 might invade your privacy, I have great news: unlike most ISPs, they've actually got good reasons not to, and policies saying they won't.
Posted Feb 27, 2021 8:22 UTC (Sat) by gdt (subscriber, #6284)
> In either case I can't see why you'd imagine this somehow creates a relationship between a user and an ISP regulated by law

Well, I can't speak to the USA, but in Australia that's precisely what the Telecommunications Act exists to do. The ISP is a "carriage service provider" or a "telecommunications provider" and thus has a black-letter list of the occasions when the content of the user's telecommunications can be disclosed, with other disclosures being criminal.

> If your concern is that operators of big public DNS servers like 8.8.8.8 and 1.1.1.1 might invade your privacy I have great news - unlike most ISPs they've actually got good reasons not to and policies saying they won't.

Whereas ISPs are controlled by telecommunications legislation rather than by self-interest. My point is that invisible failover between these two very different privacy scenarios is not desirable.
Posted Feb 26, 2021 3:19 UTC (Fri) by ibukanov (subscriber, #3942)
And I really do not understand the issue with cloud images. I suspect that the probability of a cloud provider misconfiguring DNS in production is much lower than for an ISP, and when it happens it is not only DNS that will be down. So, in practice, the nuisance case is a developer misconfiguring (or under-configuring) a VM or container engine on their laptop.
Posted Feb 26, 2021 9:16 UTC (Fri) by intelfx (subscriber, #130118)
Normally, resolved _generates_ resolv.conf for you...
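On a resolved-managed system that typically looks like the following, with legacy clients pointed at the local stub listener (these are the standard paths, though distributions vary):

    $ readlink /etc/resolv.conf
    ../run/systemd/resolve/stub-resolv.conf
    $ grep nameserver /etc/resolv.conf
    nameserver 127.0.0.53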
Posted Feb 27, 2021 12:44 UTC (Sat) by abo (subscriber, #77288)
Of course the image may be to blame; perhaps the provider requires routing and DNS to be configured statically, but the person who made the image forgot to add the right resolver.
That guy starts his laptop with the default configuration, connects to wifi, and gets internet access with a working DNS server from DHCP.
If, somehow, DHCP gives him a wrong DNS setting, that means the whole network is broken. Should we really try to hide that? For consumers, I doubt this ever happens. When was the last time a cable company broke the DNS of millions of customers? For enterprises, there is something more: using a random DNS resolver from the internet will probably not work, rendering the fallback useless.
Basically, the same goes for non-technical people in an enterprise: useless.
In that last case, using any resolver from the internet will be broken too, because it won't have the specific required configuration (the local zone, custom routing, or whatever).
- I had another device to use
- I had the alternative DNS addresses memorized
- I knew how to change the DNS that had been given to me by the network
Windows: Nope
Android: Nope
MacOS: Nope
iOS: (I don't actually know, but I suspect not)
These operating systems, in use on BILLIONS of devices, don't have DNS fallback. Why? Because it simply isn't the genius idea it is being sold as: either it doesn't add value, or, where it does add value, it masks an issue, and that masking is far more damaging than whatever value it offers.
> Because modern Computer Science has nothing at all to do with networking, does it?

You probably intended a sarcastic meaning, but you are absolutely correct from first word to last.
Afterwards, I would expect them to have a grasp of queueing theory, to know about some important concepts like "time in a network", coined by Leslie Lamport, and maybe to reason about issues like bufferbloat in a scientific rather than a merely empirical way.
Where else should the graduates get that level of education from?
Wol
>
> In contrast, a computer engineer will look at things like the reliability of multicast, practical packet formats, MTU limitations, and build you something that works like OSPF.
I find it very hard to believe there are swathes of users computer-literate enough to change the default OS on their machine, dual-boot with Linux, or buy/build a computer with no OS and install Linux, who are also completely incapable of troubleshooting non-working DNS.
If not, what situation, exactly, are we talking about? Somehow, DHCP will send a broken setting to Fedora, but a good one to Windows?
If only Fedora is broken, then the user is right to blame it.
> When was the last time a cable company broke the DNS of millions of customers?

For me, recently (Feb 12).
Videotron, one of the major ISPs here in Quebec, was suffering outages that appear to have been due to DNS.
In my case, switching to Google (8.8.8.8) resolved things.
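On a resolved machine, that kind of switch is a two-command affair (eth0 stands in for whichever link is affected):

    $ sudo resolvectl dns eth0 8.8.8.8
    $ sudo resolvectl flush-caches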
Here is a paragraph of text...
<p>
Here is a second paragraph...
<p>
[and so on]
> It doesn't apply to e.g. programming languages, where the person who writes the invalid input can immediately see a fatal error message and fix it themselves.

If only programming languages guaranteed a fatal error message on invalid input. We have that for syntax and so-called "static semantics" (things beyond context-free grammars that are checked by the compiler). But then there are run-time errors, which may be seen by a different person. And then there is undefined behaviour, where a new version of the compiler that the code was tested with might compile the code differently than the old version did; or worse, the same version of a library might choose to behave differently on some hardware than on the tested hardware (as happened with memcpy).
Sidebar on robustness
Second, complain loudly (but perhaps with exponential backoff) when a narrow reading fails.
Finally, treat the results as a bug report against the spec. Fix the spec and go to step 1.
> But frankly, your comment is absurd and brings nothing to the discussion.
I would really like it if comment posters would stop attacking each other in this way.
If you disagree with the idea (as you evidently do) then explain your disagreement, but you do not need to insult the poster like this.
Please
Jon, it has been 20 years; it is time to look around for some mechanism to get comments under control. Simply treating comments as a tree, in the order they were submitted, as if they were ordinary articles might have been acceptable two decades ago, but it is way too easy for a discussion to be derailed, especially if the very first comment is trollish. There are some topics that just can't be discussed because of the problems with the comment system, and your occasional requests for civility just aren't effective.
The thing I think you are missing is that non-technical users don't want to understand the problem at all. They want to go on Facebook. That's all they ever wanted to do. They don't want to learn something, they don't want to figure out why it's broken, they just want to go on Facebook.
But is there any evidence that the fallback ever helps non-technical users? The bug report was for a cloud instance, where the fallback had previously been papering over the fact that cloud-init didn't configure DNS correctly.
No. I don't see where I said anything like that.