Langley: False Start's Failure
Posted Apr 13, 2012 9:48 UTC (Fri) by nmav (subscriber, #34036)
Scary SSL warnings
Posted Apr 13, 2012 10:28 UTC (Fri) by man_ls (subscriber, #15091)
I understand that this scheme helps the prime-numbers mafia (aka the certification authorities racket), which for $17/year rents you a prime number that takes a few milliseconds of CPU time to generate. But I don't appreciate it. At least it seems that modern browsers now include CACert as a root authority, so perhaps the situation will unravel in a few years.
Applying Hanlon's (or Heinlein's) razor, it must be easier to just pop up the warning than to worry about how to distinguish a spoofed certificate from a self-signed one.
Posted Apr 13, 2012 11:29 UTC (Fri) by Jan_Zerebecki (guest, #70319)
The default policy encoded in Firefox is like that for a good reason. Here is my attempt at explaining it, though it is a first try done in a hurry, so take it with a grain of salt: for the intended audience, insecure and unverified are the same thing. The browser should never show them a page that is not secure when an https link is clicked, because they are assumed to be unable to notice the difference themselves. Since the browser has no way to know whether the site you entered is intended to be unverified or is just presenting a spoofed self-signed certificate, it cannot differentiate between unverified and insecure.
Does that explain it sufficiently and make sense? I'm really interested, because your argument against what Firefox does comes up often.
Posted Apr 13, 2012 12:50 UTC (Fri) by ibukanov (subscriber, #3942)
However, this is a lesser problem now, given all the effort browsers have invested in annotating the address line according to verification status. For example, I tell everyone among family and friends to look at the color of the URL bar and just ignore the http/https difference. If one needs to do banking or enter a credit-card number, the page URL should be green in Firefox. If it is just blue, be suspicious!
So I think it is time for browsers to change the policy, handle self-signed pages by default the same way as plain http pages, and indeed stop that prime-number scam.
Posted Apr 13, 2012 13:22 UTC (Fri) by sorpigal (subscriber, #36106)
If browser vendors were really interested in communicating with users about whether they're safe, then non-encrypted http connections would be flagged with a scary red URL bar and a clickable information button describing the dangers of browsing unencrypted. Saying that "users expect https to be safe" is just making excuses; users expect "green" to mean safe, not the "s" in https.
Posted Apr 14, 2012 0:06 UTC (Sat) by josh (subscriber, #17465)
If browsers had started out by never showing users the protocol (http versus https) and treating https with an unverifiable certificate as identical security to http, then sure, we could have opportunistic encryption today that would at least protect against passive packet sniffing, and users would know to look for the verification information rather than the protocol string. But with the state of browsers today, allowing https with an unverifiable certificate would let users get exploited.
Posted Apr 14, 2012 12:27 UTC (Sat) by Jan_Zerebecki (guest, #70319)
Request 1 to https://example.com/ is verified to be signed by a trusted CA. This triggers the rainbows and padlocks in the UI; all went fine security-wise. The user fills out a form on the page and submits it, so request 2 is POSTed to the same URL. This time a self-signed certificate is used, because a man-in-the-middle attack happened. Under your proposed change there is nothing that lets the browser differentiate between a successful attack using a self-signed certificate and the correct certificate, because in the current scheme of things there is no client state about the identity of a site.
Poking holes in this argument is welcome.
HTTP Strict Transport Security changes this, but it doesn't have a mechanism for the initial distribution of the identity. One could put the HSTS information in DNSSEC (there are proposals for that). (Additionally, allowing just the public key or its hash in DNSSEC, without HSTS, is IMHO important.)
That would solve most of the problems I know of in the current system. It doesn't need a flag day and can be used by those who want it without cooperation from anyone else. It makes it easier for the untrained user to be more secure than the current mechanisms do.
But there are downsides: it is much more complicated for the administrators of the system. (Although it scales, i.e. one knowledgeable person can make it work for basically unlimited systems once the procedures and software are in place. And if you trust your DNS server operator, they could do this for you, and what is left is not really more complicated than it is now.) And the software, on both the client and the server, is not finished yet (never mind deployed).
What is left is the standardization, the implementation work, and convincing your browser vendor to ship such an implementation (not in that order). If that weren't enough work, I fully expect someone to find some legacy-related use case that would prevent all of this (just as with IPv6, HTTP pipelining, TCP ECN, etc.).
The main one is probably that, in the absence of saved HSTS records for a host, you must choose between being vulnerable to a downgrade to the current CA model and being disconnected from any HTTPS site in parts of the internet where DNSSEC queries are broken. That can happen where fully resolving from the root without an intermediary is somehow filtered outside of your control (i.e. DNS is blocked) and the resolver assigned to you (also outside of your control) strips or chokes on DNSSEC.
Does anyone know if that is a problem in practice and how widespread it might be? Or any other problems with this way of solving it?
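For reference, the HSTS policy discussed above is carried in an ordinary response header, `Strict-Transport-Security` (later standardized as RFC 6797). A minimal sketch of parsing one, assuming only the two basic directives (the function name and dict layout are mine, not from any browser):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value,
    e.g. 'max-age=31536000; includeSubDomains'."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in header.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            # How long (in seconds) the client should remember
            # that this host is HTTPS-only.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive.lower() == "includesubdomains":
            policy["include_subdomains"] = True
    return policy
```

A client that remembers this policy will refuse to downgrade to plain http for the host until `max_age` expires, which is exactly the client-side state the comment above says is missing from plain HTTPS.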
Posted Apr 16, 2012 13:53 UTC (Mon) by nye (guest, #51576)
Realistically, there should be.
Even aside from how you want to treat self-signed certificates, a browser should think something's up if the certificate for a given URL changes between two requests, unless the first certificate was right on the edge of its expiration date. Keeping a record of the certificate received on the last request would be an improvement even if you continue to treat self-signed certificates the same way.
Do any browsers currently do that?
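The record-keeping nye describes is essentially SSH-style trust on first use. A minimal sketch of the bookkeeping, assuming a simple in-memory fingerprint store (all names here are hypothetical, not any browser's actual API):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(host: str, der_cert: bytes, pins: dict) -> str:
    """Trust on first use: remember the first certificate seen for a
    host and flag any later change, as SSH does for host keys."""
    fp = fingerprint(der_cert)
    known = pins.get(host)
    if known is None:
        pins[host] = fp      # first contact: pin this certificate
        return "pinned"
    if known == fp:
        return "match"       # same certificate as last time
    return "changed"         # possible MITM -- warn the user
```

The awkward case remains legitimate certificate rotation, which is why the comment above carves out an exception for certificates near expiry.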
Posted Apr 13, 2012 20:09 UTC (Fri) by josh (subscriber, #17465)
"unverified" means "insecure"; if you don't verify certificates, a MITM attack can trivially substitute any other certificate and you'll accept it.
And these days you can get an SSL certificate for free that all browsers will accept (http://www.startssl.com/).
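The "unverified means insecure" point is visible directly in Python's ssl module: skipping verification is a two-line change, after which any certificate a MITM substitutes is accepted. A small sketch of the two client configurations:

```python
import ssl

# Default client context: verifies the peer certificate against the
# system trust store and checks that it matches the hostname.
verified = ssl.create_default_context()
assert verified.verify_mode == ssl.CERT_REQUIRED
assert verified.check_hostname is True

# "Unverified" context: hostname checking must be disabled before
# verification can be turned off. With CERT_NONE, any certificate --
# including one substituted by a man in the middle -- is accepted.
unverified = ssl.create_default_context()
unverified.check_hostname = False
unverified.verify_mode = ssl.CERT_NONE
```

Either context can then be used with `context.wrap_socket(...)`; only the first gives any guarantee about who is on the other end.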
Posted Apr 14, 2012 13:44 UTC (Sat) by Jan_Zerebecki (guest, #70319)
The Firefox implementation, as opposed to one that doesn't let you save the site identity, actually allows the user to tell the browser that they can distinguish unverified from verified sites: adding a permanent exception can be used as trust on first use, as known from SSH, because the exception is only valid for one host and certificate. (The implementation probably does not pin to only this verified key, but I didn't check; does anyone know? That could be solved by the host using HSTS.) But using this safely requires complicated special knowledge, which makes the scariness a necessity of good usability.
This means with Firefox you can have secure sites that are verified and different secure sites that are unverified, as long as you don't confuse them.
Posted Apr 14, 2012 20:27 UTC (Sat) by dkg (subscriber, #55359)
TLS operates over a bi-directional stream, so the identity of the other party (the peer) is critical.
"confidential" means "only myself and the peer can read this". But if you don't know who the peer is, you cannot have meaningful confidentiality. "only myself and some mystery party can read this" is not confidential. The mystery party could very well be an adversary.
"integrity-checked" means "only the peer could have written what I'm reading". Again, if you don't know who the peer is, you cannot have meaningful integrity-checking. "Only some mystery party could have written this" is not integrity-checked. The mystery party could very well be an adversary.
So without identity verification of the remote endpoint, your communications are neither integrity-checked nor confidential in any meaningful way.
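The classic illustration of this is a man in the middle against an unauthenticated key exchange. A toy Diffie-Hellman sketch (tiny parameters for illustration only; real TLS uses large groups or elliptic curves) shows that without identity verification, each side happily completes the protocol with the attacker:

```python
import random

# Toy Diffie-Hellman parameters -- illustration only, not secure.
p, g = 23, 5

def keypair():
    priv = random.randrange(2, p - 1)
    return priv, pow(g, priv, p)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, the man in the middle

# Mallory replaces each public value in transit with her own, so
# each honest party unknowingly keys a session with Mallory.
alice_secret = pow(m_pub, a_priv, p)  # Alice thinks this is shared with Bob
bob_secret = pow(m_pub, b_priv, p)    # Bob thinks this is shared with Alice
mallory_with_alice = pow(a_pub, m_priv, p)
mallory_with_bob = pow(b_pub, m_priv, p)

# Both sessions complete "successfully" -- with the adversary.
assert alice_secret == mallory_with_alice
assert bob_secret == mallory_with_bob
```

The encryption itself works perfectly in both legs; it is only the unverified identity of the peer that makes the result worthless, which is dkg's point above.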
This is not to say that modern certificate management practices (in browsers or elsewhere) are good -- they're horrible and need serious improvement. But we shouldn't pretend that we can have secure communications without some form of strong verification of the peer's identity.
Posted Apr 14, 2012 21:05 UTC (Sat) by man_ls (subscriber, #15091)
The real point is that security is only important when it is needed. "Secure and verified" means, in this context, "good enough that I can send credit-card information through it". "Secure but unverified" stands for "good enough as long as you don't send important information".
I don't need security to access https://puszcza.gnu.org.ua/. I need a little security for Facebook, although mostly I need some privacy: a MitM would only be a small worry (witness the Firesheep experiments, and how most people just continued browsing after a while). I need the strong version of security to connect to my bank -- and not the kind where my corporate network can MitM me. As long as we are stuck in the land of protocols, certificates, channels, and networks, we are losing sight of our real objective.
Posted Apr 14, 2012 22:01 UTC (Sat) by dkg (subscriber, #55359)
However, I disagree with your focus on commerce and financial transactions as the only things in need of confidentiality and integrity-checking. Surveillance, censorship, invasive advertising, infiltration of private groups, and other forms of social control pose real and growing potential for abuse as we move more of our society online. Ubiquitous confidential and integrity-checked communication would make it much harder for any would-be abuser to succeed.
Please don't encourage the idea that secured communications are only relevant for financial communications.
Posted Apr 14, 2012 22:20 UTC (Sat) by man_ls (subscriber, #15091)
It is a pity that governments don't treat online communications with the same degree of respect as traditional mail or even telephone communications. Those technologies were easily abused but quite protected by law. Nowadays, if we are not allowed our privacy by way of the law, we must seek technical measures.
But let's put things into perspective: I would not care much if most of my traffic were MitMed, even on sites such as lwn.net (except on those rarest of occasions when I enter my password or my credit-card info). Security is a set of trade-offs: if I were to support something like HTTPS everywhere, I would prefer that browsers act sensibly on self-signed certificates so websites did not rely on central authorities. Or that the projects you mention were practical alternatives to the prime-number racket that current certificate authorities are -- from recent comments here on LWN.net it seems they are not there yet, but advancing pretty fast.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds