LWN: Comments on "Encouraging a wider view" http://lwn.net/Articles/567977/ This is a special feed containing comments posted to the individual LWN article titled "Encouraging a wider view". hourly 2 Encouraging a wider view http://lwn.net/Articles/569497/rss 2013-10-04T20:06:31+00:00 khim <blockquote><font class="QuotedText">No. The NSA just needs to store the encrypted packets and then decrypt them later at their leisure.</font></blockquote> <p>That's a totally different kind of attack. Almost undetectable, yes, but also millions of billions of times more computationally expensive.</p> <blockquote><font class="QuotedText">They've already admitted to doing this in many cases.</font></blockquote> <p>They've admitted that they keep encrypted sessions, but nobody knows how many of them they can actually decrypt.</p> <p>And if they can “decrypt them later at their leisure” it's <b>still</b> a “DoS attack against NSA” - just a somewhat less effective one.</p> <blockquote><font class="QuotedText">Let's not also forget that even with a MITM attack, they aren't routing all packets to their buildings for real-time decryption.</font></blockquote> <p>It's the only way to perform a MITM attack, sorry.</p> <blockquote><font class="QuotedText">They're still injecting the code to read the unencrypted traffic into the existing infrastructure (either at the end-points or at a common existing intermediary) and then streaming that data efficiently to their data stores.</font></blockquote> <p>There is <b>no</b> "common existing intermediary" if you, e.g., connect to Google from your home, and they need a court order to actually hack your computer. 
Yes, I know, they can hack Google itself and/or your computer, but now we are at the stage of “an asteroid can kill you at any time, thus it's pointless to watch traffic lights”.</p> Encouraging a wider view http://lwn.net/Articles/569487/rss 2013-10-04T18:01:04+00:00 elanthis <div class="FormattedComment"> <font class="QuotedText">&gt; And this needs to be done in real time, or Nsa's cover is blown. This is a DoS attack against Nsa.</font><br> <p> No. The NSA just needs to store the encrypted packets and then decrypt them later at their leisure. They've already admitted to doing this in many cases.<br> <p> Let's not also forget that even with a MITM attack, they aren't routing all packets to their buildings for real-time decryption. They're still injecting the code to read the unencrypted traffic into the existing infrastructure (either at the end-points or at a common existing intermediary) and then streaming that data efficiently to their data stores.<br> <p> There's nothing to DoS.<br> </div> Encouraging a wider view http://lwn.net/Articles/569117/rss 2013-10-02T10:05:19+00:00 intgr <div class="FormattedComment"> <font class="QuotedText">&gt; Unauthenticated encryption would make it more difficult to implement large-scale passive eavesdropping operations like the NSA's</font><br> <p> You're missing my point. It's not just "more difficult"; it's extremely unlikely that they could get away with it on a large scale.<br> <p> Even with unauthenticated encryption, they would have to perform active MITM attacks to eavesdrop on encrypted connections. There is no such thing as a passive MITM; they have to decrypt and re-encrypt all data passing through. If it's done on a large scale, it will eventually be detected by tech-savvy users, even if by accident. Since you can collect evidence about the traffic manipulation, you can demand explanations from your ISP -- unlike passive eavesdropping, which is undetectable. 
You can tell them to stop altering data sent via a connection you paid for, or lose your business.<br> <p> It would very likely force the NSA to limit eavesdropping only to people being targeted, instead of the ubiquitous surveillance we have now. Yes, it won't "guarantee privacy", but it would make the majority of us more secure.<br> <p> <font class="QuotedText">&gt; This presumes that you have a secure way of knowing what data was actually sent, on a channel not controlled by the MITM attacker</font><br> <p> Only in theory. In practice it's impossible to fool everyone all of the time about what traffic is being captured. Imagine tcpdump running over a MITMed SSH session; in order to fool the user, they would have to detect what packets are being printed out onto the console and substitute them with the ones the person is "supposed" to see. Or what about Wireshark running over VNC? The attacker would have to re-render whole images passing over the network to cover up what traffic is being captured.<br> <p> Not to mention that SSH does authentication (key pinning), so I don't think that's even on the table.<br> <p> <font class="QuotedText">&gt; Given the choice, you should always authenticate.</font><br> <p> Yes, nobody is arguing against that.<br> <p> </div> Encouraging a wider view http://lwn.net/Articles/569026/rss 2013-10-01T16:33:12+00:00 nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; I meant to say if you control *both* endpoints, then it's easy to detect altered traffic by simply comparing the data sent from one end and data received on the other end.</font><br> <p> This presumes that you have a secure way of knowing what data was actually sent, on a channel not controlled by the MITM attacker. If you have such a channel, why not just use that instead?<br> <p> Anyway, "useless" was perhaps a bit strong. 
Unauthenticated encryption would make it more difficult to implement large-scale passive eavesdropping operations like the NSA's, but it doesn't offer any actual privacy guarantees. It's more on the level of security through obscurity, since it won't protect you against a determined, active attack. Given the choice, you should always authenticate. Certainly you should never transmit any truly private data without authenticating the receiver--you could be handing your secrets directly to your worst enemy and you'd never know it until it was too late.<br> <p> If the information _isn't_ truly private, though, why bother encrypting it in the first place? Just raising the noise level to hide the real secrets?<br> </div> Encouraging a wider view http://lwn.net/Articles/568994/rss 2013-10-01T14:55:29+00:00 freemars <p> Exactly. An attacker (let's call her 'Nsa') needs to decrypt packets and re-encrypt them with her faked key. And this needs to be done in real time, or Nsa's cover is blown. This is a DoS attack against Nsa. </p> Encouraging a wider view http://lwn.net/Articles/568991/rss 2013-10-01T13:51:38+00:00 intgr <div class="FormattedComment"> <font class="QuotedText">&gt; they don't need to change any data as part of their MITM</font><br> <p> The what? You can't eavesdrop on any modern crypto protocol (including SSL, tcpcrypt) without modifying traffic. Even if the connection is unauthenticated.<br> <p> <font class="QuotedText">&gt; and the new cert that was being presented was 'valid'</font><br> <p> They had to swap the certificate. They modified the traffic. The modification was detected. 
Which was my point.<br> <p> </div> Encouraging a wider view http://lwn.net/Articles/568990/rss 2013-10-01T13:38:02+00:00 dlang <div class="FormattedComment"> they don't need to change any data as part of their MITM (the NSA wants to read the data, not change it), so just comparing what was sent with what was received would not help any.<br> <p> Ted discovered this because he had the particular cert pinned (i.e. he knew who the cert belonged to), so when he saw a different cert, he knew something was wrong.<br> <p> and the new cert that was being presented was 'valid', just not the correct one.<br> <p> so this seems like exactly the case where having encryption without knowing who the cert belongs to would have done no good.<br> </div> Encouraging a wider view http://lwn.net/Articles/568987/rss 2013-10-01T13:24:51+00:00 intgr <div class="FormattedComment"> <font class="QuotedText">&gt; this is only true if you can validate who the key belongs to</font><br> <p> I meant to say if you control *both* endpoints, then it's easy to detect altered traffic by simply comparing the data sent from one end and data received on the other end.<br> <p> And it doesn't matter that there are very few people actually checking it. Even if it's just some developers debugging SSL issues while developing an SSL library. If the NSA (or anyone) were MITMing a significant portion of Internet traffic, someone, somewhere would notice that things are amiss. Once detected and reported, ISPs and middlemen can be forced to cease or explain what's going on. 
Organizations like the EFF may get involved, as they did to detect BitTorrent throttling: <a href="https://www.eff.org/pages/switzerland-network-testing-tool">https://www.eff.org/pages/switzerland-network-testing-tool</a><br> <p> It's simply impossible to remain hidden if you do traffic manipulation on a large scale.<br> <p> Compare it to our current situation with the unencrypted Internet: nobody has any idea who is eavesdropping on what; there's no way to detect passive eavesdropping. Ignorance is bliss?<br> <p> <font class="QuotedText">&gt; report from Ted Tso about how he discovered a MITM aimed at the MIT IMTP server</font><br> <p> I can't find this report on Google, but it seems you're actually proving my point -- a MITM will be detected if it's done on a sufficiently large scale. If the traffic was in the clear, there's no way at all that anyone could detect it.<br> <p> The large-scale MITM attack in Iran against Gmail was also detected because encryption was used.<br> <p> </div> Encouraging a wider view http://lwn.net/Articles/568985/rss 2013-10-01T12:48:41+00:00 dlang <div class="FormattedComment"> <font class="QuotedText">&gt; And most importantly, MITM is very easy to detect by someone controlling the endpoints.</font><br> <p> this is only true if you can validate who the key belongs to; otherwise you have no way to detect the difference between a legitimate remote endpoint and a MITM endpoint<br> <p> Yes, doing encryption even without validating the remote endpoint will raise the cost of doing a MITM, but is it enough?<br> <p> I suggest that you take a look at the report from Ted Tso about how he discovered a MITM aimed at the MIT IMTP server. <br> </div> Encouraging a wider view http://lwn.net/Articles/568973/rss 2013-10-01T09:39:48+00:00 intgr <div class="FormattedComment"> <font class="QuotedText">&gt; FTFY. Encryption is pointless if you don't know who the decryption key belongs to.</font><br> <p> No! 
This is the fallacy that got us into the whole wiretapping problem in the first place.<br> <p> The reality is, eavesdropping on unencrypted traffic is simply too easy and too cheap. MITMing connections such that it doesn't break any clients or servers is complicated. And most importantly, MITM is very easy to detect by someone controlling the endpoints.<br> <p> While it's true that corporate and public access networks could get away with large-scale MITM, it would be unacceptable to do it to every end-user who pays for their own network bandwidth -- on the scale at which the NSA is currently eavesdropping. And it simply cannot be done on someone else's network that you have compromised without raising red flags all over.<br> <p> Further, if your protocol initiates encryption *before* opportunistic authentication, then any MITM risks breaking this authentication if the attacker doesn't know about it in advance. While the attacker could probe the server to check whether it provides certificates, there are edge cases that would still break. In the case of SSL, URL-specific client certificate authentication: the server renegotiates in the middle of an SSL session, after receiving the HTTP request for an authenticated URL.<br> <p> All of this was already anticipated in a tcpcrypt IETF draft in 2011, but sadly the project looks dead for now. 
(Though there was some discussion about resurrecting it recently: <a href="http://www.ietf.org/mail-archive/web/perpass/current/msg00305.html">http://www.ietf.org/mail-archive/web/perpass/current/msg0...</a>)<br> <p> </div> "A Good Crisis" http://lwn.net/Articles/568949/rss 2013-09-30T19:48:30+00:00 raven667 <div class="FormattedComment"> As a matter of history, I believe that quote is a bit older and should be attributed to Winston Churchill, although it was referenced more recently by Rahm Emanuel, causing a minor scandal, which is why Google associates it so strongly.<br> </div> "A Good Crisis" http://lwn.net/Articles/568946/rss 2013-09-30T19:01:29+00:00 jake <div class="FormattedComment"> <font class="QuotedText">&gt; Jake: you're staff, are you not? LWN is *supposed* to pay for you </font><br> <font class="QuotedText">&gt; to attend such things; I shouldn't think you need to 'disclose' it...</font><br> <p> I am indeed on the staff. When we started doing the disclosures, we figured we would go ahead and thank subscribers as well when they are footing the bill. iirc, there was an article without a disclosure at one point that someone asked about, so it heads off that question too ...<br> <p> jake<br> <p> <br> </div> "A Good Crisis" http://lwn.net/Articles/568941/rss 2013-09-30T18:48:27+00:00 Baylink <div class="FormattedComment"> Google attributes that quote, with relatively good confidence, to ex-Obama CoS Rahm Emanuel, which I am more inclined to believe.<br> <p> And I personally have no problem with the underlying view, either; you uses what you can gets. :-)<br> <p> Jake: you're staff, are you not? 
LWN is *supposed* to pay for you to attend such things; I shouldn't think you need to 'disclose' it...<br> </div> Warning about certificate changes doesn't work in today's world http://lwn.net/Articles/568800/rss 2013-09-29T06:33:50+00:00 ras <div class="FormattedComment"> <font class="QuotedText">&gt; In that context, the obvious solution seems to be to trust the next certificate up in the hierarchy of trust, surely?</font><br> <p> I had not thought about it that deeply - but now that you mention it - yes. When I installed it, that is what I thought it would do. It is, after all, the signing cert that you are trusting. The rest are irrelevant.<br> <p> That highlights another design flaw in the X509 PKI system, I guess. A single signing bit was a mistake. Something that allowed the owner of foo.com to sign anything under foo.com (but nothing else) would be far more useful.<br> <p> <font class="QuotedText">&gt; I think it mainly requires a new feature: the ability to say "remember this certificate, but remember the older ones too". That'll help with sites that keep switching between certificates. </font><br> <p> Funny. I thought it did that. Clearly I didn't think about it deeply enough, as it obviously doesn't.<br> <p> <font class="QuotedText">&gt; Does anyone know if certpatrol is still being developed? </font><br> <p> It was updated regularly until October 14, 2011. Then nothing. So no. But it is open source, so the world would be a better place if someone picked up the ball. :D<br> </div> Warning about certificate changes doesn't work in today's world http://lwn.net/Articles/568782/rss 2013-09-28T22:04:17+00:00 dark I think it mainly requires a new feature: the ability to say "remember this certificate, but remember the older ones too". That'll help with sites that keep switching between certificates. 
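The "remember the older ones too" feature amounts to a trust-on-first-use pin store that matches against a *set* of fingerprints per host rather than only the most recent one, so sites rotating between several certificates stop triggering warnings. A minimal sketch of the idea (all names here are hypothetical; this is not certpatrol's actual code):

```python
# Trust-on-first-use certificate store that remembers every fingerprint
# seen for a host, instead of only the latest one.

known_certs = {}  # host -> set of certificate fingerprints seen before

def check_cert(host, fingerprint):
    """Classify a connection as 'new-host', 'known', or 'changed'."""
    seen = known_certs.setdefault(host, set())
    if not seen:
        seen.add(fingerprint)
        return "new-host"      # first contact: trust on first use
    if fingerprint in seen:
        return "known"         # matches some previously seen cert
    seen.add(fingerprint)      # remember the new cert, but flag it
    return "changed"

print(check_cert("mail.example.com", "aa:bb"))  # new-host
print(check_cert("mail.example.com", "aa:bb"))  # known
print(check_cert("mail.example.com", "cc:dd"))  # changed
print(check_cert("mail.example.com", "aa:bb"))  # known (older cert kept)
```

A real extension would also need to expire stale entries and cope with CDN-scale rotation, but the core change is simply matching against a set instead of a single remembered certificate.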
<p> The problem with Google only really started when they started using two different roots, which is something they're doing on purpose to prevent API users from hardcoding their original root cert. <p> Does anyone know if certpatrol is still being developed? Warning about certificate changes doesn't work in today's world http://lwn.net/Articles/568739/rss 2013-09-28T08:54:52+00:00 peter-b <div class="FormattedComment"> In that context, the obvious solution seems to be to trust the next certificate up in the hierarchy of trust, surely? If Google are using that many certificates, surely they are deriving them from a "Google root certificate" rather than purchasing them all individually from the CA...?<br> </div> Warning about certificate changes doesn't work in today's world http://lwn.net/Articles/568738/rss 2013-09-28T08:00:54+00:00 ras <div class="FormattedComment"> <font class="QuotedText">&gt; Ts'o doesn't know how to solve the CA problem, but did have a selfish request: he would like to see certificates be cached and warnings be issued when those certificates change. </font><br> <p> I had the same lament as Ted a year or two ago, which I mentioned to a Mozilla developer. He pointed me to a Firefox plugin called Certificate Patrol that does just what Ted asks. I installed it immediately.<br> <p> After a few months I turned it off. You get flooded with warnings about certificates changing, particularly from Google sites.<br> <p> The problem is Content Delivery Networks. The page owners (quite rightfully) issue a different certificate to each machine that serves the same page. In Google's case the number of machines is so large there is no way you will meet them all in a reasonable time frame. Not only does Google own a large number of machines. 
They also serve their pages from a large number of domains, so just whitelisting domains doesn't work either.<br> <p> In the end, I concluded that checking for certificate changes is one of those seemingly straightforward ideas that just don't work.<br> <p> </div> Appraisal of DNSSEC-based certificate verification http://lwn.net/Articles/568707/rss 2013-09-27T20:01:22+00:00 Cyberax <div class="FormattedComment"> It's not really viable. While the NSA can push Verisign around, it has absolutely no control over the other top-level domains.<br> <p> For example, one can host a site in .ua (Ukraine). In this case, to surreptitiously intercept your traffic, the NSA would have to redirect the whole .ua top-level domain and use faked certificates - which is possible if they have full control over your pipe.<br> <p> But it can be beaten fairly easily - just use 'sticky' DNSSEC keys. Since there are just over 300 top-level domains and DNSSEC key rotation happens rarely, it's not that burdensome. Also, the mechanics of the redirection itself are quite complicated.<br> <p> This is way better than having 500 CAs, each of which can be used to create a certificate for ANY site.<br> </div> MarkMonitor http://lwn.net/Articles/568593/rss 2013-09-27T10:42:39+00:00 shane <div class="FormattedComment"> The article author seems to have correctly identified MarkMonitor's business (it's on their web site, so that's not a tricky bit of journalism). 
However, note that *every* record in .COM is *already* in the hands of one company - the .COM registry operator, Verisign.<br> <p> The US government has *already* issued DNS domain takedowns via this mechanism, so I'm not sure what additional problems companies using MarkMonitor could add:<br> <p> <a href="http://blog.easydns.org/2012/03/05/the-ramifications-of-us-government-domain-takedowns/">http://blog.easydns.org/2012/03/05/the-ramifications-of-u...</a><br> <p> As for why these varied companies are using this service, my guess is that they do a good job and have kick-ass sales people. We don't expect companies to do *everything* themselves... Google and Microsoft both use light bulbs, but we wouldn't worry about the NSA "turning the lights out" if they both bought them from Philips...<br> </div> Appraisal of DNSSEC-based certificate verification http://lwn.net/Articles/568590/rss 2013-09-27T10:35:04+00:00 jschrod <div class="FormattedComment"> I don't understand the hyperbole of that article.<br> <p> Yes, MarkMonitor is a registrar. Google and others need a registrar for their domain names. They chose one. The registrar could set up false name server delegation records. Well, news at 11. That's why we want DNSSEC.<br> <p> And while they might potentially be a CA at the same time, they're not an approved Firefox CA, AFAICS. 
I don't have Chrome or a current IE fired up right now; is MarkMonitor really an approved CA there?<br> </div> Appraisal of DNSSEC-based certificate verification http://lwn.net/Articles/568588/rss 2013-09-27T10:27:55+00:00 mpr22 <p>An interesting article, which I lack the energy to actively verify.</p> <p>It's a shame the author has chosen to post it to a site whose maintainers, at first glance, appear never to have met a conspiracy theory they didn't like.</p> Appraisal of DNSSEC-based certificate verification http://lwn.net/Articles/568584/rss 2013-09-27T09:56:22+00:00 error27 <div class="FormattedComment"> People are saying that the NSA has already prepared a workaround for that?<br> <p> <a href="http://ppjg.me/2012/05/11/one-company-to-rule-them-all/">http://ppjg.me/2012/05/11/one-company-to-rule-them-all/</a><br> </div> Appraisal of DNSSEC-based certificate verification http://lwn.net/Articles/568573/rss 2013-09-27T08:43:19+00:00 shane From the article: <blockquote><i> Many have claimed that DNSSEC is a solution, but Marlinspike has argued otherwise—the actors are different, but the economic incentives are the same. Instead of trusting a bunch of CAs, Verisign will have to be trusted instead. </i></blockquote> I don't think this is completely fair, because the current DNS-based solution need not be a <i>replacement</i> for the CA system, but can be used <i>in addition to</i> the CA system. So in order for an attacker to spoof a host, both the CA <b>and</b> someone in the DNS hierarchy above your domain would have to be compromised. It adds a little special sauce to your TLS security. 
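The "in addition to" argument — accept a host only when the CA check *and* a DNS-based check both pass — can be sketched in a few lines. The verifier callables below are hypothetical toy stand-ins for real CA-chain validation and DANE (RFC 6698) TLSA lookup, not any actual API:

```python
# AND-composition of independent trust checks: a spoofed host must
# defeat *every* verifier, not just one. The check functions are toy
# stand-ins for real CA-chain and DNSSEC/TLSA validation.

def verify_host(cert, checks):
    """Accept the certificate only if all independent checks pass."""
    return all(check(cert) for check in checks)

# Hypothetical toy verifiers over a dict-shaped "certificate":
ca_ok = lambda cert: cert.get("ca_signed", False)     # CA chain validates
dane_ok = lambda cert: cert.get("tlsa_match", False)  # TLSA record matches

legit = {"ca_signed": True, "tlsa_match": True}
rogue_ca = {"ca_signed": True, "tlsa_match": False}   # compromised CA alone

print(verify_host(legit, [ca_ok, dane_ok]))     # True
print(verify_host(rogue_ca, [ca_ok, dane_ok]))  # False
```

The design point is that the two trust hierarchies fail independently: a certificate from a rogue CA still fails the DNS check, and a hijacked DNS delegation still fails the CA check, so the attacker has to compromise both.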
Encouraging a wider view http://lwn.net/Articles/568481/rss 2013-09-26T17:57:51+00:00 mathstuf <div class="FormattedComment"> Convergence[1] might be worth looking into as well.<br> <p> [1]<a href="http://convergence.io/">http://convergence.io/</a><br> </div> Encouraging a wider view http://lwn.net/Articles/568462/rss 2013-09-26T17:02:41+00:00 nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; In the new httpe: the **MITM attacker** swears blind she generated the public and private keys and nobody else knows them (both).</font><br> <p> FTFY. Encryption is pointless if you don't know who the decryption key belongs to.<br> <p> <font class="QuotedText">&gt; It should be willing to carry a bit of water for the browser -- doing some Tor-style session forwarding to try to reduce man in the middle attacks.</font><br> <p> If you're using a Tor-style protocol in .onion mode (no outproxy) then you don't need any other encryption. The traffic is already secured end-to-end. You just need authentication, so that you know who you're talking to.<br> </div> Encouraging a wider view http://lwn.net/Articles/568451/rss 2013-09-26T16:30:03+00:00 freemars <p>re: <i>the CA Problem</i> </p><p> One problem is that our current https:// scheme confuses <i>authenticated</i> and <i>encrypted</i>, which aren't the same thing at all. I'm trying to push the need for a couple of new web standards. </p><p> In the familiar http<i>s</i>: some certificate authority -- possibly squirming under the thumb of a Three Letter Agency -- swears blind the web site is the one it claims to be. </p><p> In the new http<i>e</i>: the website owner swears blind she generated the public and private keys and nobody else knows them (both). The website owner may be squirming as well; the hope is the various TLAs will soon run out of thumbs. Also, it's a lot harder to keep several million national security letters secret than 50. 
The httpe: webserver should do a couple of other tricks to help security -- it should automatically generate new keys at irregular intervals (say daily-to-monthly, plus once per reboot) to make brute-force attacks less fruitful. It negotiates a session key with the browser client before a specific page is requested, reducing the amount of metadata that can be collected. It should be willing to carry a bit of water for the browser -- doing some Tor-style session forwarding to try to reduce man in the middle attacks. </p><p> My final new web version would be http<i>es</i>: where the web server and browser software establish an encrypted session first, then verify the web server has been blessed by some CA, and finally move on to send the page. This is the standard I would want my bank to use. </p> Encouraging a wider view http://lwn.net/Articles/568398/rss 2013-09-26T10:43:54+00:00 spender <div class="FormattedComment"> I've said many of the same things in public (<a href="https://twitter.com/grsecurity/status/380828847790227456">https://twitter.com/grsecurity/status/380828847790227456</a>) and at the 2010 LSS (<a href="http://grsecurity.net/spender_summit.pdf">http://grsecurity.net/spender_summit.pdf</a>, security != access control). Saying it to the LSS audience, though, I think means the message will fall on deaf ears. The kernel community has a problem of only being capable of learning lessons the hard way.<br> <p> -Brad<br> </div>