LWN: Comments on "An Illustrated Guide to the Kaminsky DNS Vulnerability" https://lwn.net/Articles/293382/ This is a special feed containing comments posted to the individual LWN article titled "An Illustrated Guide to the Kaminsky DNS Vulnerability". An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/294553/ https://lwn.net/Articles/294553/ job <div class="FormattedComment"><pre> TCP is not invulnerable to injection attacks. TCP sequence numbers work a lot like the numbers discussed in this attack, and have had (still have?) similar problems. </pre></div> Tue, 19 Aug 2008 07:37:16 +0000 Uncaching the bad guy https://lwn.net/Articles/293930/ https://lwn.net/Articles/293930/ man_ls I vaguely remembered something in the spec, and your response was intriguing enough. On a quick reading of <a href="http://www.netfor2.com/rfc1035.txt">RFC 1035</a>, it describes the TTL field as: <blockquote> a 32 bit unsigned integer that specifies the time interval (in seconds) that the resource record may be cached before it should be discarded. Zero values are interpreted to mean that the RR can only be used for the transaction in progress, and should not be cached. </blockquote> So the spec says when an answer should not be cached, but it does not force implementors to cache responses. However, later on it says: <blockquote> In general, we expect a resolver to cache all data which it receives in responses since it may be useful in answering future client requests. </blockquote> But there are certain exceptions: <blockquote> However, there are several types of data which should not be cached: [...] RR data in responses of dubious reliability. When a resolver receives unsolicited responses or RR data other than that requested, it should discard it without caching it. The basic implication is that all sanity checks on a packet should be performed before any of it is cached. </blockquote> It would seem to allow (maybe even encourage) behaviors such as the one you propose in so much detail. The next step would seem to be a proof-of-concept implementation :D Thu, 14 Aug 2008 08:56:24 +0000 Uncaching the bad guy https://lwn.net/Articles/293889/ https://lwn.net/Articles/293889/ jzbiciak <div class="FormattedComment"><pre> Refining my idea a bit... What if it were structured like this? Split the DNS cache into two caches: the primary cache and a secondary, tentative cache. When a request comes in, check the primary cache. If it hits, return the response from the primary cache. If it misses, check the tentative cache. If it hits, return the response from the tentative cache. If it misses both caches, send a request upstream. Place the reply in the tentative cache. If the reply survives in the tentative cache longer than some timeout (say 1-2 minutes), commit the information from that reply to the primary cache. If replies arrive for non-existent requests, put "negative entries" for those replies into the tentative cache. Give these negative entries a short timeout, perhaps 30 seconds, to prevent DoS attacks. Perhaps also put an upper bound on the storage associated with these negative entries. Not all information about the replies needs to be stored--rather, just a hash of the hostname and the corresponding IP address.
If replies arrive for elements already in the tentative cache, and the details disagree, remove the element from the tentative cache so that it does not get committed to the primary cache. If a reply comes back for an outstanding request, and it matches a hostname OR an IP address with an active negative entry, do not insert it into the tentative cache. Thus, the tentative cache tracks two things: recent bogus replies (factored apart and simplified), and recent requests that may or may not be bogus. If a request survives its quarantine in the tentative cache, then it can be committed to the primary cache. Hmmm.... </pre></div> Wed, 13 Aug 2008 22:13:43 +0000 Uncaching the bad guy https://lwn.net/Articles/293888/ https://lwn.net/Articles/293888/ jzbiciak <P>Philosophically, caching should only ever be a performance optimization. If a caching DNS server randomly fails and loses its cache, it should never cause a correctness problem. Therefore, dropping cached entries is always safe, theoretically speaking. This is true because the caches are just holding local copies of largely read-only information. There are never any updates that go upstream&mdash;updates always come from authoritative endpoints based on when the information is needed and not yet in the cache. The only "coherence protocol" is the time-to-live field, which states when a cached entry is supposed to expire.</P> <P>I say "philosophically" and "theoretically" because there <I>are</I> practical issues. We cache answers for a reason. One is to provide faster service to those downstream from us. This is the performance optimization part. The other is to reduce the load on those <I>upstream</I> from us, since they don't necessarily have the capacity to handle all of the requests for their domain name. They rely on caching to keep the load manageable. If we never cached anything, then all requests for a given domain would end up at the authoritative name servers, and all the intermediate hops would never offer any value.</P> <P>What I was thinking in my proposal above was that if we selectively dropped answers, it would be no different than if we hadn't seen the query to begin with. Host A asking Server B about Host C shouldn't rely on Host D having stopped by earlier to ask the same question, thereby causing the answer to be cached in Server B. My selective drop heuristic would allow someone who's flooding my DNS server to push recent requests out of my cache, and it wouldn't stop the occasional transient "poisoned answer" from leaking through. It would prevent the poisoned answer from being cached. It would also allow my cache to continue functioning for all other domain names. The additional upstream traffic that would result is strictly less than (by a couple orders of magnitude, I'd reckon) the amount of traffic you could generate on upstream servers with a direct flood.</P> <P>There are still a few attacks that are possible in such an environment, but this would raise the bar considerably, I would think. Now, instead of redirecting all comers for whatever the max TTL is after a week's worth of effort, you might be able to redirect one or two people.</P> Wed, 13 Aug 2008 21:47:55 +0000
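A minimal sketch of the two-tier cache idea described in the comments above, assuming a toy resolver that maps one name to one address. The class name, the timeouts, and the fixed promotion TTL are illustrative only; a real resolver would need per-record TTLs, eviction, and a bound on the storage used by negative entries.

```python
import time

QUARANTINE = 90       # seconds a reply must survive before promotion (the "1-2 minutes" above)
NEGATIVE_TTL = 30     # lifetime of a "recently saw a bogus flurry" marker

class TwoTierCache:
    """Toy model of the primary/tentative cache split sketched above."""

    def __init__(self):
        self.primary = {}     # name -> (address, expiry)
        self.tentative = {}   # name -> (address, first_seen)
        self.negative = {}    # name or address -> expiry of the negative entry

    def _expire(self, now):
        self.primary = {k: v for k, v in self.primary.items() if v[1] > now}
        self.negative = {k: v for k, v in self.negative.items() if v > now}

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        if name in self.primary:
            return self.primary[name][0]
        if name in self.tentative:
            addr, first_seen = self.tentative[name]
            if now - first_seen >= QUARANTINE:
                # survived quarantine: commit to the primary cache (illustrative fixed TTL)
                self.primary[name] = (addr, now + 3600)
            return addr
        return None   # caller queries upstream and feeds the reply back via handle_reply()

    def handle_reply(self, name, addr, solicited, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        if not solicited:
            # unsolicited reply: remember both the hostname and the address as suspect
            self.negative[name] = now + NEGATIVE_TTL
            self.negative[addr] = now + NEGATIVE_TTL
            self.tentative.pop(name, None)
            return
        if name in self.negative or addr in self.negative:
            return   # matches a recent bogus flurry: answer the client, but never cache it
        if name in self.tentative and self.tentative[name][0] != addr:
            del self.tentative[name]   # conflicting answers during quarantine: drop entirely
            return
        self.tentative.setdefault(name, (addr, now))
```

The upstream query itself is left to the caller; the sketch only decides where replies are allowed to land and when they graduate from the tentative cache to the primary one.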
Uncaching the bad guy https://lwn.net/Articles/293884/ https://lwn.net/Articles/293884/ man_ls Uhm, in that case the balance changes a bit -- one week does not seem like much to get hold of, say, google.com in a large network. One question though, for someone more knowledgeable in the DNS protocol than me: can you decline to cache certain answers and still be compliant with the protocol? Wed, 13 Aug 2008 21:24:57 +0000 Uncaching the bad guy https://lwn.net/Articles/293786/ https://lwn.net/Articles/293786/ jzbiciak <P>Is it <A HREF="http://tech.slashdot.org/article.pl?sid=08/08/09/123222">really true</A> that source port randomization is enough? From what I can tell, the change took an attack that required 10 seconds and made it cost about a week, and then someone showed that if your link is fast and near enough, it's actually less than a day. </P> <P>The real problem is that the DNS server's cache gets poisoned, so that a single successful bogus reply gets amplified into a long series of successful bogus replies by virtue of it being cached. This happens despite obvious evidence (the flurry of unsolicited replies carrying the same information) that the reply is bogus. Hence my suggestion. :-)</P> Wed, 13 Aug 2008 13:38:26 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293762/ https://lwn.net/Articles/293762/ ekj <div class="FormattedComment"><pre> No single technique is good enough. I think my bank is reasonable. The authentication works like this: There's a client-side certificate that you need to install the first time you visit the bank; due to how web browsers work, this certificate is only presented to the proper bank. (Unless there's a bug in your browser of course, but that's a different problem from phishing.) When you go to the bank, the browser presents the certificate, and the bank greets you, by name, and with the time of your last visit. A MITM attack could not replicate this. My browser would not present the certificate in the first place, since the domain is wrong, and even if it did, the certificate-authentication mechanism is secure against MITM attacks anyway. You enter your password. The usual problems of poorly selected passwords etc. apply. But here's the trick: when you do, an SMS is sent to your mobile phone with a 5-character random hex string that you need to enter. The end result is reasonably secure, and only a tiny bit more cumbersome than the basic username-password thing. To impersonate ME to the bank, an attacker would have to find out my username and password, get hold of my client certificate AND get access to read SMS messages sent to my phone (for example by stealing the phone). None of these is impossible (a trojan on my machine could even do the first two). But getting all three -- preferably without me even noticing -- is going to be hard. To impersonate the bank to ME, the attacker would have to hope I don't notice that I'm not greeted by name, like I normally am, in prominent letters on the login screen. And even if this succeeded, and he got my password, he'd still need my mobile phone AND my client certificate to be able to make any progress. Not perfect. But good enough, methinks. </pre></div> Wed, 13 Aug 2008 09:16:21 +0000 Uncaching the bad guy https://lwn.net/Articles/293755/ https://lwn.net/Articles/293755/ man_ls Seems like a good idea, but given that there is a good solution to the problem (random source port and random query ID), why bother? It would only add complexity to the current code. Wed, 13 Aug 2008 08:26:43 +0000
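A rough feel for where the "10 seconds versus about a week" figures above come from: a blind spoofer has to hit a 16-bit query ID, and with source port randomization also one of tens of thousands of ports, before the genuine answer arrives. The spoof rate below is an assumed, illustrative number, not a measurement.

```python
# Back-of-the-envelope only: how much does source port randomization raise the cost?
SPOOFED_REPLIES_PER_SEC = 3_000   # assumed effective rate of forged replies landing in the race window
TXID_SPACE = 2 ** 16              # 16-bit DNS query ID
PORT_SPACE = 60_000               # roughly the usable ephemeral source-port range

def expected_time(search_space, rate=SPOOFED_REPLIES_PER_SEC):
    # crude estimate: on average you cover about half the space before a hit
    return (search_space / 2) / rate

print(f"query ID only:          {expected_time(TXID_SPACE):10.1f} seconds")
print(f"query ID + source port: {expected_time(TXID_SPACE * PORT_SPACE) / 86400:10.1f} days")
```

With these assumptions the first figure lands near ten seconds and the second at roughly a week; a faster link (a higher assumed rate) pulls the second figure under a day, which is exactly the point made above: randomization raises the cost of forgery, it does not make forgery impossible.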
UDP vs TCP https://lwn.net/Articles/293756/ https://lwn.net/Articles/293756/ man_ls Probably because TCP is quite a bit more expensive than UDP, and everyone would have to scale up their DNS infrastructure. A good-enough solution with UDP seems preferable. Wed, 13 Aug 2008 08:25:54 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293690/ https://lwn.net/Articles/293690/ jzbiciak <div class="FormattedComment"><pre> Perhaps a stupid question: Right now caching servers drop unexpected replies. Why can't the DNS server use a flurry of unexpected replies that are "close" to a recent request as a hint that it shouldn't cache the corresponding response? (e.g. force its TTL to 0 if it was "recently" added, and don't hold the response when it does arrive if it was not yet added?) You might get the occasional successful bogus reply, but you would at least not poison the cache. What am I missing here? </pre></div> Tue, 12 Aug 2008 20:02:41 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293605/ https://lwn.net/Articles/293605/ muwlgr <div class="FormattedComment"><pre> I just wonder why not use TCP/53 connections to the DNS servers you need? It is certainly harder for a third party to inject arbitrary data into an established TCP connection than into a connectionless UDP packet stream. </pre></div> Tue, 12 Aug 2008 06:29:55 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293561/ https://lwn.net/Articles/293561/ branden Uh, right. <p>Name one company that has "suffered in the marketplace" due to their complicity with President Bush's illegal, warrantless domestic wiretapping. <p><em>And</em>, thanks to Congress, those <strong>companies</strong> have their retroactive immunity as well. Not the government, the companies. The government doesn't need immunity because the Department of Justice will never investigate the crimes. The next administration, no matter who wins the election, will not investigate it either, because people need to have "confidence" in their government. <p>You go right on pretending that Big Business is more trustworthy than Big Government. I hope you never learn the hard way how misplaced your trust is. Mon, 11 Aug 2008 17:17:13 +0000 Failed DNS https://lwn.net/Articles/293512/ https://lwn.net/Articles/293512/ brianomahoney <div class="FormattedComment"><pre> If the DNS has failed, or is flaky, you can always use the IP address if you know it, so a poisoned DNS server, or a group of them, is in fact a denial of service, far less serious than a break-in or the ability to mount a MITM attack. That is why it is important that clients, as well as servers, have certificates; but even certificate-less clients can take measures to prevent MITM attacks, since you can simply refuse to talk to the other end after checking the IP or the certificate. This is still iffy and inconvenient, though; it is much better to just adopt partner trust chains and challenge-response. Once this is the norm, even Joe Sixpack will learn it! </pre></div> Mon, 11 Aug 2008 14:08:02 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293513/ https://lwn.net/Articles/293513/ freemars <p><i> that's why you want the government to issue these, so that everybody has one. </i></p><p> You trust governments more than I do. If a private company does something illegal and word gets out, they suffer in the marketplace. If a government does something illegal and word gets out, they retroactively change the law.</p> Mon, 11 Aug 2008 14:04:55 +0000
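To make the TCP/53 suggestion above concrete: DNS uses the same wire format over TCP as over UDP, just prefixed with a two-byte length, so the cost is the handshake and per-connection state rather than any protocol change. A rough sketch, with the resolver address left as a placeholder and no attempt to parse the reply:

```python
import socket
import struct
import secrets

def build_query(name, qtype=1):
    """Build a minimal DNS query (type A, class IN) in wire format."""
    txid = secrets.randbits(16)
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)   # RD set, one question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

def _recv_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed before full reply")
        data += chunk
    return data

def query_over_tcp(server, name):
    msg = build_query(name)
    with socket.create_connection((server, 53), timeout=5) as s:
        s.sendall(struct.pack("!H", len(msg)) + msg)             # two-byte length prefix
        length = struct.unpack("!H", _recv_exact(s, 2))[0]
        return _recv_exact(s, length)

# e.g. raw_reply = query_over_tcp("192.0.2.53", "lwn.net")   # placeholder resolver address
```

The handshake round trip and the per-connection state on busy authoritative servers are the scaling cost mentioned in the UDP vs TCP comment above.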
One more thought https://lwn.net/Articles/293511/ https://lwn.net/Articles/293511/ drag <div class="FormattedComment"><pre> Well, to be brutally truthful about it, think about what you guys are asking: 1. Your solution to DNS cache poisoning is to initiate a worldwide effort to catalog, identify, and create a certificate for every single person on planet earth who would like to use a computer. This is equivalent to an international ID card, but instead of having to 'show your papers' to use the airport, you just have to register your computers with the government. 2. You're asking governments to figure out an international PKI infrastructure, which is only mildly successful and is insanely difficult and expensive to get working properly in a company of 10,000 employees... and you're asking the governments of the world to get together and work out a solution for potentially _billions_ of people?! This is not something that is going to scale. -------------------------------- And all of this to solve a DNS trick? -------------------------------- Personally I trust the people at Microsoft more than the people at my local DMV, much less the federal government. Why on earth the idea of handing over (what would essentially become) full control of the internet to them is a good thing is beyond my comprehension. Give me Verizon any day. At least I can sue them, tell them to f*k off, or ignore them. If I try to do that to my government they send armed men to my house and throw me in a cage for 20 years. I deal with a hell of a lot of government regulation dealing with computers every day. Except in a few small cases where you have to have real encryption for use in government, the laws and such do much more to codify bad practices and bad behavior than anything else. Almost everything has very little to do with improving the state of the art or providing good code quality or security, and much more to do with documentation requirements and jumping through hoops that some committee made up of minor-level political appointees thought sounded very technical one weekend 5 years ago. </pre></div> Mon, 11 Aug 2008 13:57:37 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293501/ https://lwn.net/Articles/293501/ tialaramex <div class="FormattedComment"><pre> "All is lost" because DNS is the name-to-address directory service. If your directory service is broken, then the fact that in theory you and your bank could still conduct business securely is no comfort, because you can't find the bank. For example, usually when discussing SSL someone will bring up certificate revocation services. Your SSL client should be able to periodically, or on each connection, verify whether the server cert offered is on a list of revoked certificates, perhaps because the physical hardware containing the certificate was stolen. To locate the certificate revocation service the client translates a human-readable name to an IP address. It does this with, you've guessed it, DNS. The directory service isn't and shouldn't be an alternative to securing the protocols themselves, but it must be secure and robust on its own. Today that just isn't true of DNS. We need to get a political solution for DNSSEC on the root, without compromising the integrity of the root operators. I couldn't care less whether Verisign mis-manages the com gTLD, but having the root unsecured means it's not practical to secure ccTLDs and gTLDs operated by responsible people.
For example, who here actually has DNSSEC enabled for Sweden? The Swedish ccTLD provides DNSSEC, but why would anyone want to manually edit their DNS server configuration for one country, right? </pre></div> Mon, 11 Aug 2008 09:47:37 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293433/ https://lwn.net/Articles/293433/ kjp <div class="FormattedComment"><pre> If there's a time to start using those reserved and zero bits in DNS queries, I'd say this is it - a 128-bit GUID would be nice. Only servers have to be able to support it. </pre></div> Sat, 09 Aug 2008 14:16:49 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293426/ https://lwn.net/Articles/293426/ ibukanov <div class="FormattedComment"><pre> <font class="QuotedText">&gt; where the good guys actually have lawyers</font> I second that, and wish that SRP-type technology would get strong backing to get through the patent mess, just as the Theora codec has finally got a chance after Mozilla paid the lawyers, decided to take the calculated risk, and included Theora in the forthcoming Firefox 3.1. </pre></div> Sat, 09 Aug 2008 13:30:51 +0000 One more thought https://lwn.net/Articles/293425/ https://lwn.net/Articles/293425/ brianomahoney <div class="FormattedComment"><pre> I just realized why I am so mad about this issue when, as another commenter has pointed out, there are clear solutions to all these problems which do not involve daft, brain-dead ideas like embedding CA lists in browsers -- an idea so obviously bad that how it happened must be a vital consideration. What happens is: the security consultant says this way is secure (independent academic peer review)... Marketing/business/management say: 1. Joe Sixpack will never understand it. 2. It's too costly. 3. It's too complicated. 4. It's not what the rest of our industry is doing. And a chorus of vested interests, tool merchants, snake-oil salesmen and Micro$oft join in, and a new, not-quite-secure system is rolled out, only to be broken a couple of weeks later to shouts of surprise and outrage from Marketing/business/management, while the public (a) see another security foobar and (b) lose further confidence, with good reason. Much as Sarbanes-Oxley is oppressive, until individuals in management are made civilly and criminally liable for these mistakes, the same bad arguments will continue to prevail. </pre></div> Sat, 09 Aug 2008 13:19:41 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293424/ https://lwn.net/Articles/293424/ ibukanov <div class="FormattedComment"><pre> <font class="QuotedText">&gt; just using a pin + SSL, which seems to be what SRP is.</font> This is wrong and just spreads more misunderstanding about SRP. The main idea behind SRP is to use a secret both to authenticate yourself to a third party and to verify the identity of the third party *without* ever leaking the secret on the wire. As a result of this handshake both parties get an arbitrary, random, huge number that they can use for symmetric encryption later. When the secret is just a password, it roughly corresponds to using both client and server SSL certificates. But a human can remember a strong passphrase, while it is not realistically possible to remember SSL certificates/private keys. When the secret is a special private key combined with a password, this is much stronger than a password-protected SSL client certificate.
If an attacker can get access to a computer that stores the SSL private key, he can brute-force the password. But with SRP this is useless, as both the key and the password are necessary for authentication. This also allows weak passwords like PIN codes, with the keys stored on smart cards, for very secure authentication and third-party verification. </pre></div> Sat, 09 Aug 2008 13:14:04 +0000
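A toy sketch of the kind of exchange being described: the server stores only a salt and a password-derived verifier, both sides end up with the same large number to key symmetric encryption, and the password itself never crosses the wire. The group parameters below are deliberately tiny stand-ins, and real SRP-6a (RFC 5054) adds safeguards this sketch omits, so treat it as an illustration of the idea rather than something to deploy.

```python
import hashlib
import secrets

# Toy modulus and generator; real SRP uses the large safe-prime groups from RFC 5054.
N = 2 ** 127 - 1   # a Mersenne prime, used here purely as a small demo modulus
g = 3

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int(h.hexdigest(), 16)

k = H(N, g)

# --- registration: the server stores (salt, verifier), never the password ---
password = "correct horse battery staple"
salt = secrets.randbits(64)
x = H(salt, password)
verifier = pow(g, x, N)

# --- login handshake ---
a = secrets.randbelow(N); A = pow(g, a, N)                        # client -> server: A
b = secrets.randbelow(N); B = (k * verifier + pow(g, b, N)) % N   # server -> client: salt, B
u = H(A, B)

# Client side: re-derives x from the password it knows.
x_c = H(salt, password)
S_client = pow((B - k * pow(g, x_c, N)) % N, a + u * x_c, N)

# Server side: uses only the stored verifier.
S_server = pow((A * pow(verifier, u, N)) % N, b, N)

assert S_client == S_server          # shared secret agreed without sending the password
session_key = H(S_client)
```

An eavesdropper sees only A, B and the salt; without the password (client side) or the verifier (server side) it cannot compute the shared secret, which is the property being contrasted above with plain "PIN over SSL".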
An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293423/ https://lwn.net/Articles/293423/ deleteme <div class="FormattedComment"><pre> Yes, but you have to authenticate actions as well. And most banks don't do that. The best I've seen, in order of security: 1. A calculator that lets you choose "send money" + 1200 Euro, which gives you a hash to enter on the bank website. 2. A calculator where you enter numbers the bank gives you, e.g. 9 + a random number for login, and 0000120000 for transferring 1200 Euro, which gives you a hash to enter on the bank website. 3. A one-time pad, doing no action verification at all. 4. A PIN to log in + a special SSL certificate in the browser. I have never seen any bank just using a PIN + SSL, which seems to be what SRP is. </pre></div> Sat, 09 Aug 2008 12:33:35 +0000 Restatement on SMART cards/government https://lwn.net/Articles/293420/ https://lwn.net/Articles/293420/ brianomahoney <div class="FormattedComment"><pre> The reason I want all governments to give their citizens a free X509 is to put shyster rip-off merchants, e.g. Verisign, out of business, since it would mean that (a) everyone had at least one X509, and (b) CAs could quickly authenticate other certificate applications. This should be enough to drive the X509 cost right down, at least for real people, and company/charity registrars could do the same for non-real people. Then, given the one X509, you create credentials by having your counterparty (counter-)sign the X509, a signature they can check; that means that if they send their challenge encrypted with your public key, you make mounting a MITM attack as hard as breaking the key pair, and quite impossible in real time. The point is that once I have one secure X509, I can use it to create an indefinite number of other secure credentials based on a self-signing CA which signs with _MY_ secret key. I can then write that credential on a CD or on paper using bar codes and send it to my counterparty without disclosing my secret key; they can countersign and return it using their secret key, without disclosing it, creating a credential that can be used both to secure against MITM attacks and to authenticate the parties to each other. This sort of thing is what should be on debit/credit cards etc. Given that the individual/corporate (birth) X509s are signed by a trusted authority (government, company registry, or public CA), the resulting self-signed secondary X509, e.g. a web server cert, could be dynamically checked by ssh and browsers ___WITHOUT___ requiring a trusted CA list in the browser. This list, and the whole CA system, are clearly a __BAD__ idea, which serves only to make money for CAs while countervailing against real security. All we really need is to simplify the trust chain and make it so it can be verified dynamically. </pre></div> Sat, 09 Aug 2008 12:20:04 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293421/ https://lwn.net/Articles/293421/ njs <div class="FormattedComment"><pre> Yes. But no major players have touched it for the last decade; it (and all related techniques) have been left to rot because of a bunch of FUD from various patent holders. (I.e., no one's even willing to claim that SRP *does* infringe on any patents, but the patent holders sort of twitch and growl occasionally, and apparently that's enough.) You might think that in this strange modern world where the good guys actually have lawyers, it would be possible to clear this up and get SRP into, say, OpenSSH, but AFAICT so far no one has even started doing the necessary work. Sigh. </pre></div> Sat, 09 Aug 2008 12:17:02 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293416/ https://lwn.net/Articles/293416/ kune <div class="FormattedComment"><pre> The problem is what is regarded as the endpoint. Security against man-in-the-middle attacks always requires the integrity of the endpoint. If you regard the system consisting of computer, user, smartcard reader and smartcard as the endpoint, you can say that the communication between that endpoint and the server endpoint can be secured by the cryptographic protocol against man-in-the-middle attacks, under the condition that both endpoints maintain their integrity. However, this condition is often violated under the conditions on the Internet, so others regard only the user as an endpoint and assume its integrity. (I will not discuss whether this is always the right assumption.) In this case the communication between smartcard and user (endpoint) is not secured against man-in-the-middle attacks. </pre></div> Sat, 09 Aug 2008 10:32:41 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293411/ https://lwn.net/Articles/293411/ ibukanov <div class="FormattedComment"><pre> The SRP protocol [1] makes it possible to prevent man-in-the-middle attacks using just a password. Moreover, the password is never leaked. Even if the attacker manages to gain full control of, say, the bank's computer, he still would not be able to guess the password. When combined with a PIN code and a smart card, it allows a system which not only verifies the identity of the other side without revealing either the PIN code or the card's key, but also makes the card completely useless without the PIN and the PIN useless without the card. [1] - <a href="http://srp.stanford.edu/">http://srp.stanford.edu/</a> </pre></div> Sat, 09 Aug 2008 09:31:01 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293410/ https://lwn.net/Articles/293410/ dd9jn <div class="FormattedComment"><pre> Whether a key is on a smartcard or on disk has nothing to do with mitigating MITM attacks. A smartcard protects your key against being stolen, but not even against somebody using your key (he just needs to convince you to enter the PIN, which is a simple social-engineering trick). The real problem is that it is too easy in a standard browser to click away all these warning dialogs. What banks should do is provide their customers with custom applications for online banking which don't allow the security checks to be bypassed, and educate them to use this and only this application for online banking. If a bank also wants to protect its customers against today's trojans, it would deliver a live CD-ROM - but that will probably be too long-winded for most users.
</pre></div> Sat, 09 Aug 2008 09:00:42 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293409/ https://lwn.net/Articles/293409/ mosfet <div class="FormattedComment"><pre> As far as I know, RSA smart-card-based encrypted public-key communication should be man-in-the-middle proof. If the bank followed protocol, the public keys are verified through another channel (telephone/mail). This is anything but "security theater". </pre></div> Sat, 09 Aug 2008 08:35:34 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293408/ https://lwn.net/Articles/293408/ bangert <div class="FormattedComment"><pre> If both parties have certificates you are secure against MITM attacks: <a href="http://en.wikipedia.org/wiki/Man-in-the-middle_attack#Defenses_against_the_attack">http://en.wikipedia.org/wiki/Man-in-the-middle_attack#Def...</a> That's why you want the government to issue these, so that everybody has one. In Denmark they actually did it, but all that the complex infrastructure ended up being used for was an OpenID alternative. The cryptography of the system is never really used, as it is too complex. The next version is not going to use certificates anymore.... </pre></div> Sat, 09 Aug 2008 08:33:42 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293406/ https://lwn.net/Articles/293406/ rgmoore <blockquote>One bank I use gives me a SMART card X509 certificate and a calculator which uses it; others, in the UK and USA, have crap security theater systems.</blockquote> <p>Sorry, but smart cards are a "crap security theater system", at least for this application. Smart cards are great at preventing somebody from copying your authentication and re-using it later. But that isn't the biggest danger anymore. The real danger is from man-in-the-middle attacks, which smart cards do nothing to prevent. Sat, 09 Aug 2008 06:48:53 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293405/ https://lwn.net/Articles/293405/ drag <div class="FormattedComment"><pre> Ya.. Plus I'd trust the government less than I would some anonymous stranger. Much less likely to get screwed over. ---------------------------------- We all knew that DNS was not secure. Hell, even IP addresses, which DNS just points at, are not trustworthy and can be spoofed. If we know that IP addresses can be faked, then how the hell are we supposed to depend on DNS?! The notion that IP-address-based security was dependable in any way was exploded as a myth years ago. The answer is: _We_Don't_. That's all. It's not dependable, it's a convenience. That's all it ever was, that's all it was ever meant to be. It never was secure in the past, it never will be secure in the future. That's _it_. As long as we all acknowledge this and realize the implications, then we can move on. This is why we have SSL/TLS. This is why we have cryptographically secure ways to identify SSH servers and VPN servers and so on and so forth. If DNS were ever considered secure, then why would I care what sort of hash my openssh returns when I try to connect to it? This is why we have GPG signatures in email. This is why there are protocols for the secure transfer of program packages. That's why none of this stuff depends on DNS, and none of this stuff depends on IP addresses. People figured this out decades ago. Etc etc etc. ----------------------------- Now you may say "Well, how do people know which sites are trustworthy?" Well, they _don't_.
Saying things like "Only download software from trustworthy websites" is bullshit advice. It's blind-leading-the-blind style advice. It ranks up there among the top "bad and misleading security advice of all time", alongside the likes of 'Only click on attachments from people you trust', or the classic lie, 'The anti-virus found some stuff, but it cleaned it up so you should be ok now', or the huge blunder 'Hey, let's stick the GPG keyring on the same server as the deb package.. that way we know the package is secure!!!' You can't trust websites. I can't trust websites. I can't trust that the content on them isn't going to infect my computer if my browser has a flaw. I never could have, I never will. That's why I update my browser every time I think about it, which is a lot. Unless I am talking to an SSL website and am able to verify its certificate, I can't trust that the person I thought made the site actually made the site. And even with strong and correct certs, I can't trust it any more than I can trust the certificate authority. (Which is to say, a little bit; enough to only make me slightly nervous.) And if 'Grandma' can't figure all that stuff out, then she can't trust any website. If you want to keep people safe, then develop a way for a light on top of the monitor to flash the bright words 'NO, this place is Bad' every time a user visits a website or gets email, and then flash the bright words 'Only Slightly Bad' when they get PGP-signed emails and visit https sites that have correct certs. </pre></div> Sat, 09 Aug 2008 06:35:22 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293397/ https://lwn.net/Articles/293397/ dlang <div class="FormattedComment"><pre> Even token authentication is not good enough. If the bad guy can trick you into going to his site, he can proxy your connections to your bank (so that you see the bank's challenge and the bank sees your response; it's just going through the bad guy's server). After you authenticate, the bad guy can then send all sorts of instructions to the bank in your name between the clicks that you make. True, comprehensive public-key certificates would address this problem, but only as long as you can trust all key signers and keys never get compromised (in other words, in a fantasy world, not the one we all live in). </pre></div> Sat, 09 Aug 2008 04:08:26 +0000 An Illustrated Guide to the Kaminsky DNS Vulnerability https://lwn.net/Articles/293391/ https://lwn.net/Articles/293391/ brianomahoney <div class="FormattedComment"><pre> Clearly, and while the security of the DNS is important for both convenience and security, to say "all is lost" is just FUD. The fact is that it is necessary, sadly, to use __ONLY__ secure protocols, e.g. ssh, SSL... in a hostile world. One bank I use gives me a SMART card X509 certificate and a calculator which uses it; others, in the UK and USA, have crap security theater systems. If I can cryptographically identify the counterparty, and have a secure conversation with her, the absolute security of the DNS does not matter, since the identity of both parties can be unambiguously secured. What we really need is convenient, cheap, secure X509s for the masses, e.g. issued by governments, like passports, and signed with the government's signing key, which can then be further signed by counterparties to create a shared, counterparty-specific X509 used to establish mutual trust and transmission security. </pre></div> Sat, 09 Aug 2008 00:26:45 +0000
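Several of the comments above converge on the same point: if both ends present certificates that the other side actually verifies, a man in the middle has nothing to work with, regardless of what DNS says. A minimal sketch of such a mutually authenticated TLS connection; every file name and host name here is a placeholder for whatever credentials the two parties have exchanged (by government issuance, counter-signing, or any other out-of-band channel):

```python
import socket
import ssl

# --- server side: demand and verify a client certificate ---
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("server-cert.pem", "server-key.pem")   # placeholder credentials
server_ctx.load_verify_locations("trusted-clients.pem")           # certs we have countersigned/trust
server_ctx.verify_mode = ssl.CERT_REQUIRED                        # refuse anonymous clients

def serve_once(port=8443):
    with socket.create_server(("", port)) as listener:
        conn, _ = listener.accept()
        with server_ctx.wrap_socket(conn, server_side=True) as tls:
            print("verified client:", tls.getpeercert()["subject"])
            tls.sendall(b"hello, mutually authenticated world\n")

# --- client side: present our certificate and verify the server's ---
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)              # hostname checking on by default
client_ctx.load_verify_locations("trusted-servers.pem")           # e.g. the bank's known certificate
client_ctx.load_cert_chain("client-cert.pem", "client-key.pem")

def connect_once(host="bank.example", port=8443):
    with socket.create_connection((host, port)) as sock:
        with client_ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.recv(1024)
```

Even if a poisoned DNS answer steers the client somewhere else, the handshake fails: the impostor can neither present a server certificate the client trusts nor satisfy the client-certificate check the real server performs. dlang's caveat still stands, though; all of this is only as strong as the trust anchors and the endpoints themselves.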