
Chromium to start marking HTTP as insecure

The Chromium development team has posted a plan to start actively marking web pages served with HTTP as not being secure. "We know that people do not generally perceive the absence of a warning sign... Yet the only situation in which web browsers are guaranteed not to warn users is precisely when there is no chance of security: when the origin is transported via HTTP."

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 14:52 UTC (Sat) by gvy (guest, #11981) [Link]

Are these nice guys willing to warn people of their "secure" google/microsoft/apple/younameit traffic ending up on NSA storage?

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 17:14 UTC (Sat) by clopez (subscriber, #66009) [Link]

Why would they warn about that? Actually that storage is a (national) security thing.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 23:33 UTC (Sat) by gvy (guest, #11981) [Link]

Just to be honest, for example.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 2:54 UTC (Sun) by okusi (guest, #96501) [Link]

whose nation would this be?

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 4:38 UTC (Mon) by b7j0c (guest, #27559) [Link]

HTTPS makes no claims about what happens to data at the destination. It merely claims to attempt to protect that data while on its way to the destination.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 15:25 UTC (Sat) by nix (subscriber, #2304) [Link]

So the 90% of websites on which I just don't care about security at all (and neither do they, because they're using HTTP) are now going to have an annoying icon whining about insecurity? Gee, thanks, Chromium, way to devalue the annoying icon that appropriately pops up on sites that actually carry data I do care about but are messing up their security -- you know, the ones that are trying to use HTTPS and doing it wrong. The ones you warn about now.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 18:14 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

It depends on the implementation. Insecure HTTPS is obviously bad, and plain HTTP less so, but it is still bad. I'd like plain HTTP to get something like "yellow" rather than the "red" that bad HTTPS gives you. It just cannot be "green".

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 19:26 UTC (Sat) by josh (subscriber, #17465) [Link]

> and neither do they, because they're using HTTP

And that's precisely the reason to do this.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 20:17 UTC (Sat) by rodgerd (guest, #58896) [Link]

Because we want to train users to ignore browser security warnings?

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 20:27 UTC (Sat) by josh (subscriber, #17465) [Link]

Because right now users don't typically ignore browser security warnings, and enough users pay attention that any site serving over HTTP will raise plenty of user concerns, hopefully enough to get the site to stop.

It's not perfect; there's a reason why Chromium is outlining a long transition plan here. But I'd like to see us work towards a world where there simply aren't any plaintext protocols left.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 22:59 UTC (Sat) by jospoortvliet (subscriber, #33164) [Link]

Exactly. Secure-by-default, even if it isn't perfect security, increases the cost of blanket surveillance of the type the NSA engages in. And that's always a good thing ;-)

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 23:35 UTC (Sat) by gvy (guest, #11981) [Link]

Depends on who can get their hands on those certificates -- or present mis-issued ones.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 0:49 UTC (Sun) by gutschke (subscriber, #27910) [Link]

When requesting that an SSL certificate be signed, the private key should never be provided to anybody. It stays on the machine that runs the web server. Only the public part (the certificate signing request) gets signed.

This doesn't mean that it is impossible to steal the private key, but it requires a deliberate targeted attack on the web server. Attacking the CA is not sufficient to obtain the private key.
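To make that concrete, here is a rough sketch (assuming the pyca/cryptography Python package and a placeholder hostname) of CSR generation: the private key is written only to local disk, and the CSR -- which contains only public material -- is the sole thing sent to the CA for signing.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the key pair locally; the private half never leaves this machine.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a CSR containing only public material, signed with the key to prove possession.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.org")]))
        .sign(key, hashes.SHA256())
    )

    # This PEM blob is the only thing the CA ever sees.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())

    # The private key stays on the web server.
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))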

Being able to issue fake signed certificates is a real problem, but there are ways to mitigate this issue. Public audit records would be great, but that's a very new technology and I am not sure which CAs, if any, support them. In any case, Chrome has started noting the lack of audit records and I suspect at some point this will become an actual security warning.

Another good option is DANE, but unfortunately there currently is no support for it in any browsers that I am aware of. It also requires DNSSEC, which takes a bit of effort to set up the first time round.

Pinning would also work to detect compromised certificates, but I am unclear on how well that works at this time. Feel free to comment, if you know more.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:43 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

And then there were backup systems and other op needs…

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 12:46 UTC (Wed) by Tjebbe (subscriber, #34055) [Link]

Not only is DANE not supported, browsers currently have no plans to implement it either, since it does indeed mean DNSSEC support and that costs valuable microseconds in the resolution process.

So they go for certificate pinning, which imho is a half-baked DANE.

But let's hope they come around at some point in the near future.
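For anyone curious what the DNS side of DANE involves, here is a minimal sketch (assuming the dnspython 2.x package and a placeholder hostname that probably publishes no TLSA record). Real DANE use would also require DNSSEC-validating the answer and matching it against the certificate presented in the TLS handshake.

    import dns.resolver

    try:
        # TLSA records live under _<port>._<proto>.<hostname>.
        answers = dns.resolver.resolve("_443._tcp.www.example.org", "TLSA")
        for rdata in answers:
            # Each record carries usage/selector/matching-type plus the
            # certificate association data to compare against the TLS cert.
            print(rdata)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("no TLSA record published for this name")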

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 22:33 UTC (Wed) by flussence (subscriber, #85566) [Link]

Maybe browsers could spare a second to do a DNSSEC lookup for each site with a decentralized certificate, in lieu of those obnoxious interstitial scare pages that demand the user spend minutes of their time poring through X.509 technical jargon and filling out forms in triplicate before granting access.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 14:40 UTC (Sun) by edt (guest, #842) [Link]

As has been pointed out, with most sites there is no need for 'security'. It just does not matter. It does make a lot of sense to detect 'bad' HTTPS and flag it.

The yellow idea makes a little sense. It would be worth adding a link to the icon that explains what HTTP implies and why you might want to be aware that you are not protected by HTTPS (e.g. HTTP lets anyone who can see the data stream see what you are looking at).

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 16:09 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

Presumably you have a broad range of different kinds of sites in mind where it "doesn't matter" if all the content is put under the control of bad guys, and everything submitted is silently passed to bad guys.

So please give half a dozen different examples of such sites and why everybody should be fine with those two constraints for those sites and not ask for better.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 17:06 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

Blogs? LWN? Slashdot? Reddit?

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 17:32 UTC (Sun) by vapier (subscriber, #15768) [Link]

You don't mind bad guys (gov't/ISP/wifi provider) being able to inject ads/JavaScript/Flash into every connection you make? Or rewrite/filter content they don't like?

While some of this you might consider hyperbole, there are free wifi providers (e.g. coffee shops) today, as well as ISPs, that are actively injecting JavaScript into just about every HTTP connection.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 17:39 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

This practice should be banned. Besides, ISPs can very well do it with HTTPS by giving you choices like: "No Internet" and "Ignore Security Warnings".

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 18:26 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

So, in your view if it would remain even distantly _possible_ to do something bad then there's no point whatsoever in any technical intervention to make that more difficult or visible ?

And yet, a _legislative_ solution in the form of a ban and presumably new law enforcement powers in order to implement the ban, with all the associated infringement of people's normal rights that implies, is apparently very welcome even though it would almost certainly be ineffective.

I can't tell whether you just haven't thought about this very hard or whether you're actively in favour of leaving things as vulnerable as possible for some reason.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 21:03 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

It's quite simple - providers should not change the content. And I do want it codified in law, rather than simply worked around using HTTPS.

And I also totally hate TLS/SSL - it's an ugly protocol that should never have been born at all.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 21:54 UTC (Sun) by cesarb (subscriber, #6266) [Link]

> It's quite simple - providers should not change the content. And I do want it codified in law, rather than simply worked around using HTTPS.

Law is never simple. What's the definition of "provider"? If I open my home's guest wifi to a visitor, am I a "provider"?

Law is not always obeyed. The bigger the provider is, the easier it is to find out they're not following the law; but if every coffee shop is considered a "provider", how would they all be monitored for compliance?

Also, which law? I live in a country different from yours. A law forbidding providers from meddling with the content in my country would have no effect on your provider, and vice versa.

Finally, criminals do not obey the law. A criminal who takes over my ISP's router could make it rewrite the content; TLS protects against that.

And why not both? A law forbidding content manipulation and technical means to detect content manipulation complement each other. With TLS, if a provider tries to change the content, the client complains loudly, which would be perfect for the enforcement of your hypothetical law.

> And I also totally hate TLS/SSL - it's an ugly protocol that should never have been born at all.

Yes, TLS has its warts. It could have been made better; for instance, it could always use Diffie-Hellman to negotiate a shared key, like IKEv2 does. But not having been born at all? It or something like it was inevitable; it came from the desire to protect HTTP requests from eavesdroppers. If it hadn't been Netscape, it would have been Microsoft; if not Microsoft, then somebody else.

Since we have to live with it, it's more constructive to fix TLS's warts; developers are working on things like the Encrypt-Then-MAC extension, the TLS 1.3 work, and so on.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 23:26 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> Law is never simple. What's the definition of "provider"? If I open my home's guest wifi to a visitor, am I a "provider"?
Yes.

> Law is not always obeyed. The bigger the provider is, the easier it is to find out they're not following the law; but if every coffee shop is considered a "provider", how would they all be monitored for compliance?
You report that someone misbehaves to the FCC and they gladly fine the violator. Additionally, if the FCC shuts down companies that enable ad injection, that would be enough to stop it.

In reality, I haven't seen these problems with injected ads. However, I did see problems with HTTPS negotiations taking far longer than simple HTTP requests, especially on slow cellular connections.

> Yes, TLS has its warts. It could have been made better; for instance, it could always use Diffie-Hellman to negotiate a shared key, like IKEv2 does.
TLS can use ECDH, so that's not an issue. The issue is that the whole certificate system is braindead in the extreme, in all of its facets, starting from the certificate format itself.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 8:32 UTC (Mon) by spaetz (subscriber, #32870) [Link]

>> Law is never simple. What's the definition of "provider"? If I open my home's guest wifi to a visitor, am I a "provider"?
> Yes.

My country disagrees, and makes me responsible for all illegal stuff flowing through, in contrast to what it considers a provider....

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:42 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

Mine is a little different. You are responsible if you cannot identify the users of your access point in response to a judicial request. In practical terms, identifying them means setting up a captive portal to authenticate access. And captive portal handling is terminally broken in current browsers.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 10:10 UTC (Mon) by tialaramex (subscriber, #21167) [Link]

"You report that someone misbehaves to FCC and they gladly fine the violator. Additionally, if FCC shuts down companies that enable ad injection then it'd be enough to stop that."

They will gladly send you a response saying that, alas, they have limited resources available to investigate this sort of thing, but they've made a note of it.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 6:29 UTC (Mon) by filipjoelsson (guest, #2622) [Link]

It is already codified in law, and it's called copyright. In most jurisdictions, you are not allowed to alter a work and present it as the original.

To use copyright law to remove the ads, maybe the best way forward would be to contact someone who is very protective of their visual presentation (e.g. Disney or a high-profile artist), and show them the ads. Maybe even play stupid, and ask why they have the ads on their page?

The results could be rather interesting.

PS Am I the only one getting flashbacks to Geocities?

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 7:28 UTC (Mon) by osma (subscriber, #6912) [Link]

> It's quite simple - providers should not change the content.

In your thinking, does this apply to things like redirects to splash pages / login screens in public WiFi access points? They're changing the content too - taking you to a whole different page than the one you may have expected.

Just curious... I hate splash pages, but I understand the need for them in some scenarios.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 8:41 UTC (Mon) by josh (subscriber, #17465) [Link]

Yes, the current approach to captive portals is completely ridiculous, and the behavior they exhibit when the first page you try to visit is an HTTPS page is a fine example of that ridiculosity. (I've gotten in the habit of going to "example.org" when I expect a captive portal, specifically because *all* the sites I'd normally visit use HTTPS.)

There are two much better ways to handle that. First, there's a standard for a DHCP server to inform the client that it expects a visit to a specific web page. And second, for the large number of clients that don't yet understand that DHCP extension, just tell the client to visit a specified URL first; either print it in the instructions for accessing the Internet, or make the URL the ESSID.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 10:50 UTC (Mon) by osma (subscriber, #6912) [Link]

> There are two much better ways to handle that. First, there's a standard for a DHCP server to inform the client that it expects a visit to a specific web page. And second, for the large number of clients that don't yet understand that DHCP extension, just tell the client to visit a specified URL first; either print it in the instructions for accessing the Internet, or make the URL the ESSID.

I agree the current approach has its flaws, and I hope the DHCP extension starts getting implemented. But until about 99% of (mobile) devices support the new DHCP way, I don't think any access point owner will be in a hurry to implement it instead of the current "solution", especially if it means that users who don't receive, or don't understand, the instruction to visit a specific web page will be left with no working connection at all. At least the current approach mostly works and gives users the information they need without much effort.

My Nokia N9 does an HTTP request in the background to a known address as soon as it connects to a new network, and if it receives an HTTP redirect (presumably from a captive portal), it will open the browser showing the page it was redirected to.
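Something along these lines is easy to do by hand, too. A rough sketch (not Nokia's actual code; the probe host is a placeholder) of such a connectivity check, which just looks for a redirect instead of the expected direct answer:

    import http.client

    PROBE_HOST = "detectportal.example.com"  # placeholder well-known probe host

    conn = http.client.HTTPConnection(PROBE_HOST, 80, timeout=5)
    conn.request("GET", "/ok")
    resp = conn.getresponse()  # http.client never follows redirects itself

    if resp.status in (301, 302, 303, 307):
        # A redirect here almost certainly means a captive portal grabbed the request.
        print("captive portal detected, login page:", resp.getheader("Location"))
    else:
        print("direct connectivity, status", resp.status)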

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 17:58 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> if it receives an HTTP redirect (presumably from a captive portal), it will open the browser showing the page it was redirected to.

Automatically? I prefer the Android way (as of at least 4.2; probably 4.0) of a notification rather than interrupting what I was doing.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 13:59 UTC (Tue) by Wol (guest, #4433) [Link]

> At least the current approach mostly works and gives users the information they need without much effort.

Except it doesn't. I regularly forget to visit a web page when using access points like that, and spend ten minutes puzzling over why thunderbird et al don't work, before I have a "doh" moment and fix it. Or I've fixed it and everything suddenly stops working, because the access point has forgotten about me and I need to re-authenticate.

And with the move more and more towards mobile apps, this problem is going to get worse.

Cheers,
Wol

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 14:04 UTC (Mon) by nim-nim (subscriber, #34454) [Link]

The DHCP method is totally insecure. It's a "hijack-me" setting.

In fact there is no standard way to do secure proxying cleanly. Browser authors know it, but they've resisted attempts to fix the situation for years:
– Google fears better proxying support will help users deploy ad blockers. They've been working hard for years to capture user Internet access (first with Chrome, then with Android). Now that they have a huge deployed base they've started removing "play nice with others" features.
– Mozilla people think the web must be "free" (meaning they can decide whatever they want with site authors, without interference from users or network operators, and ignoring constraints that do not apply to Californian startups).
– Microsoft does not do standards. Custom insecure hacks like the DHCP one work for them (as long as it's in AD).
– Who knows what Apple thinks. If it does something it will be Apple-specific.

All of them hope that if they leave proxy support annoying enough, proxies and gateways will go away. They don't think twice about adding custom hacks to their web clients to support their own image recompression proxies. It's no longer a technical decision, it's a browser power play.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 21:34 UTC (Mon) by lsl (subscriber, #86508) [Link]

> In fact there is no standard way to do secure proxying cleanly.

Set up the clients to use your proxy and import the appropriate root CA cert? Doesn't this work anymore? What are the issues? Cert pinning?

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:38 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

This is another hack:
1. the certificate ends up in the general web site CA store, and the web client has no idea it's meant for use with one specific proxy only.
2. the web client needs to blindly trust the certificate for everything.
3. the web client does not see the original web site certificate anymore, so it effectively disables all its certificate security checks.
4. it's too complex for hotels and other transient hotspots.

1. + 2. may be sort of acceptable for fixed systems, but it's a disaster for computers that roam on other networks, even more so if they are guests that are then vulnerable to any problem in the handling of the original organization's secrets.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:58 UTC (Tue) by cesarb (subscriber, #6266) [Link]

It also won't work for non-browser clients which follow the recommendation of hard-coding the list of allowed CAs for the application's servers, or even hard-coding the server certificate itself, instead of allowing any two-bit CA to issue a certificate for the server's hostname.

I have seen such a recommendation in the wake of one of the CA hacks, and it makes sense: if I always get my certificate from Comodo, why should my client allow certificates from Diginotar? If I decide to change to another CA, it takes only a few hours to upload a new version of the client to Google Play.

And if my client is the only thing which accesses the server, why bother with the traditional CA model? Just hardcode the server certificate in the client, or, to be more sophisticated, hardcode a self-signed "CA certificate" which will only be used to sign the server certificate.
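A minimal sketch of that "hardcode the server certificate" idea, using nothing but Python's standard ssl module (the hostname and the pinned fingerprint are placeholders; a real client would ship the actual SHA-256 of its server's certificate):

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = bytes.fromhex("00" * 32)  # fingerprint baked into the client

    ctx = ssl.create_default_context()
    # CA-based validation is deliberately not used; trust comes from the pin alone.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("api.example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="api.example.org") as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).digest() != PINNED_SHA256:
                raise ssl.SSLError("server certificate does not match the pinned fingerprint")
            # Only talk to the server once the pin check has passed.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: api.example.org\r\nConnection: close\r\n\r\n")
            print(tls.recv(4096))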

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 19:23 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

well, in that case you are trusting your proxy's certificates (or CA), and the other checks that you are talking about can and should be implemented on the proxy.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 22:15 UTC (Tue) by cesarb (subscriber, #6266) [Link]

> well, in that case you are trusting your proxy's certificates (or CA), and the other checks that you are talking about can and should be implemented on the proxy.

My point is, client software which hardcodes the application server's certificate or CA will simply reject the proxy's certificates or CA, because it's different from what's hardcoded.

An example: suppose I create a service which shows an alert on your smartphone when your bus is late. This service has two components: a server I control, and a client I put for download on Google Play. To protect the user's login, I use TLS between the client (on the user's smartphone) and the server.

Since I control both the client and the server, I don't need the traditional CA model. Instead, I hardcode the server's certificate hash in the client application source code. If it ever changes, I'll hardcode a new server certificate and put a new version of the client on Google Play.

That model is safer than the traditional CA model, since no third-party CA can ever produce a valid certificate for my server. The only way to produce a valid certificate is to either extract the private key from the server, or produce a modified version of the client application which accepts the new certificate.

That model also completely breaks transparent MITM proxies; even if the MITM root CA is imported into the phone, the application will ignore it and reject the connection with an "invalid certificate" error. The application is correct, since it knows that the server certificate does not come from that new CA (there is no distinction between a MITM CA and a normal CA that is new enough to not be in the default root CA store).

And notice that, if the proxy verifies the server certificate, it will reject it as invalid, since it's not signed by any CA on its list! But it's in fact the correct certificate for this application (and only for this application).

----

The MITM proxy model is broken. It exploits one of the worst problems of the current CA model: that any of the hundreds of CAs in the root CA list is trusted to create a valid certificate for any server. When that problem is fixed or worked around (as in my example), MITM proxies stop working.

We already see problems with certificate pinning; browsers work around this by allowing any root CA which is on the root CA list but is *not* on the default root CA list (that is, any CA added by the user or by malware) to be trusted even for pinned domains (set security.cert_pinning.enforcement_level to 2 to disable that misfeature).

TLS enforces end-to-end security. MITM proxies, not being on either end, go against the TLS model. The correct solution would be to have a protocol where an explicitly configured client offloads TLS initiation to a trusted proxy, since it makes said proxy one of the ends; I don't think such a protocol exists at the moment, and even if it did exist, it wouldn't be implemented by most clients. If a client did implement it, it would be vulnerable to malware silently enabling the TLS offload, which would be a good reason for security-conscious developers to not implement such a protocol.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 22:40 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

well, by ignoring the CA stuff and just allowing specific certs, you are 'breaking' TLS as well. I'm not surprised that two different ways of 'breaking' TLS designed to achieve two different results don't interact well.

How would your theoretical security protocol differ from the clients trusting the CA of the proxy and the proxy validating the certs of remote systems? It looks to me like the two are pretty close to functionally identical. And one requires developing a new protocol and changing software to use it, the other works today with existing software.

As far as which is more secure, that's a good question and the answer will vary from situation to situation. It's also a question of what you mean by "more secure"

end-to-end encryption that can't have any MITM is better for the user who always makes the right security choices and keeps their system up to date.

The MITM proxy approach is better for a company that wants to protect their network and their users as it lets them prevent the users from doing some things that they try to do.

Some people believe that the only way the Internet should work is with true end-to-end connectivity, and that anything that tries to prevent that is EVIL. Others see that not all endpoints are equal: some are owned and controlled by their users, while others are owned and controlled by someone other than the person sitting at the keyboard.

proxies are valuable for the second case.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 0:48 UTC (Wed) by cesarb (subscriber, #6266) [Link]

> well, by ignoring the CA stuff and just allowing specific certs, you are 'breaking' TLS as well. I'm not surprised that two different ways of 'breaking' TLS designed to achieve two different results don't interact well.

The same applies to just allowing a specific CA. It would make for a more complicated example, so I used the simpler "specific cert" example.

Just allowing a specific CA or a specific cert only "breaks" TLS if the server cert is changed in a way which doesn't match. As long as the certificate is signed by that specific CA or is that specific cert, nothing breaks.

After all, there's nothing in the TLS protocol which says that every program has to trust the same global list of CAs. It's perfectly fine, from a TLS point of view, to have in the same computer program A which only trusts ICP-Brasil (because it's a tax form uploader from the government) and program B which doesn't trust ICP-Brasil (because it follows Mozilla's list and ICP-Brasil still isn't in it).

> How would your theoretical security protocol differ from the clients trusting the CA of the proxy and the proxy validating the certs of remote systems?

* It would be explicit, so the client would be expecting the interception; a transparent MITM is indistinguishable from an attack.
* The client would know the specific key or CA the proxy uses, and so be able to reject fake proxies, and also be able to reject bypassing the proxy (an attacker would not be able to pretend to be the proxy but connect directly to the server).
* The client could tell the proxy more information, like "this key is supposed to be pinned" or "I know for certain that this server use this specific CA, accept no others".
* The proxy could tell the client more information, like "my connection to this server used the following certificate signed by the following CA" or "I believe this connection is one of these green-bar-EV things".
* It would even be possible to do a "decrypt but do not modify" mode, where not only both the proxy and the client can see the contents, but also both the proxy and the client can validate the certificate and the data. That is, the client would not need to trust the proxy for integrity, only for confidentiality.

> As far as which is more secure, that's a good question and the answer will vary from situation to situation. It's also a question of what you mean by "more secure"

By "more secure", I meant that an unrelated CA can't create a fake certificate for it. If my certificate is from GeoTrust, then Diginotar shouldn't be able to create a certificate for it. Hardcoding or pinning the certificate or the CA achieves that. The current MITM proxy approach requires making the creation of bogus certificates possible, thus it weakens the security for the whole Internet.

Unless I set the hidden parameter I mentioned in the previous comment, the fact that I added the ICP-Brasil root certificate (to securely access government sites) means that ICP-Brasil could in theory create a certificate for any site, even *.addons.mozilla.{org,net} which are pinned by the browser. This bogus result exists only because Mozilla doesn't want to break MITM proxies.

> end-to-end encryption that can't have any MITM is better for the user who always makes the right security choices and keeps their system up to date.
>
> The MITM proxy approach is better for a company that wants to protect their network and their users as it lets them prevent the users from doing some things that they try to do.

You are only considering two points of view: a security-conscious user and a competent company. There are others.

For a programmer, end-to-end encryption without MITM is safer, since there are only two points where the connection can be intercepted or modified: in the client (but if the attacker controls the client, I already lose), or in the server (but if the attacker controls the server, I have bigger problems). With MITM, there's a middlebox modifying the traffic in potentially unspecified ways, and perhaps saving the cleartext content in possibly unsafe places - and that is the case where the middlebox is *not* compromised.

And there are more than security considerations. For me, the best feature of HTTPS is that it completely bypasses broken transparent proxies. All my experiences with transparent proxies have been negative; the sooner they cease being viable, the better. (With a non-transparent proxy, when it breaks one can simply disable the proxy and go direct until it's fixed again.)

Transparent proxies

Posted Dec 17, 2014 11:53 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

"All my experiences with transparent proxies have been negative"

HTTP access to LWN from the office where I sometimes work is via a corporate transparent proxy. It can't even correctly understand how HTTP works; once in a while I'll hit "Preview" and get a 405 error reporting that the proxy tried to perform the operation as a GET instead of a POST. After a few moments I can retry and it'll work.

Imagine the convoluted mess that must be inside that proxy to get this wrong. This is a major "enterprise grade" product, and it can't get technical fundamentals right. What are the chances this thing doesn't have security vulnerabilities that put the company at more risk ?

But of course the _real_ fundamental was to get big corporates to open their wallets unquestioningly, so from a practical point of view it was 100% successful.

Transparent proxies

Posted Dec 18, 2014 9:38 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

Actually, the convoluted mess is the proxy part of the HTTP spec, furthered by the "sod proxies" browser attitude. The HTTP spec states that proxy auth must be requested by inserting specific headers in the client traffic. And browsers will only honor those for specific traffic.

Thus, very often proxies must do very strange stuff just to convince browsers to authenticate themselves. All proxy vendors would gladly dump this if browsers provided a simple auth system that didn't require messing with the user traffic.

Transparent proxies

Posted Dec 18, 2014 10:11 UTC (Thu) by mchapman (subscriber, #66589) [Link]

> Thus, very often proxies must do very strange stuff just to convince browsers to authenticate themselves.

Is authentication with a supposedly "transparent" proxy at all a common scenario? I wouldn't expect that to work. Browsers *should* treat HTTP 407 Proxy Authentication Required responses as hard errors if they don't think they're talking to a proxy.

Transparent proxies

Posted Dec 18, 2014 11:56 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

The transparent proxies I'm familiar with don't send 407.

They impersonate the remote host, redirect to the proxy by IP address and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is expected to type their secret credentials.

That's right, to "secure" your company the big boys will expect employees to send their credentials in plaintext over HTTP to a site identified only by an arbitrary string of digits‡. Are you laughing yet?

‡ 10.2.83.1 is an internal transparent proxy, 10.43.2.1 is also such a proxy, but 102.6.9.3 may be a bad guy stealing your credentials. Aren't these transparent proxies just great?

Transparent proxies

Posted Dec 18, 2014 13:37 UTC (Thu) by mchapman (subscriber, #66589) [Link]

> The transparent proxies I'm familiar with don't send 407.

They shouldn't. That was the gist of my previous comment: that status code should only be used by real, non-transparent proxies.

> They impersonate the remote host, redirect to the proxy by IP address

At which point they are no longer "transparent".

> ... and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is expected to type their secret credentials.

There's nothing stopping that from being on a secure site, with a proper (presumably internal) domain name.

> ‡ 10.2.83.1 is an internal transparent proxy, 10.43.2.1 is also such a proxy, but 102.6.9.3 may be a bad guy stealing your credentials. Aren't these transparent proxies just great?

What you've described doesn't seem to me to be a problem with transparent proxying itself. If it's really and truly "transparent", then it is no less secure than any other HTTP traffic over the wider Internet. (If it's a *caching* proxy, then there are of course more security implications. But most of these seem to be solved well enough by the caching proxy software I've used.)

Nor does it seem to be a problem with the HTTP protocol. The caching and proxying sections of the HTTP specification are indeed complex, but that doesn't mean they're fundamentally broken.

The problem you've described instead seems to be with the people that run transparent proxies. Transparent proxy authentication is, as far as I can tell, a contradiction in terms.

Transparent proxies

Posted Dec 18, 2014 20:38 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

>> They impersonate the remote host, redirect to the proxy by IP address
>At which point they are no longer "transparent".

They don't particularly care about transparency; they're only using transparent properties because the result does not need specific configuration in browsers (and trying to keep a park of browsers configured is hell: users install new browsers all the time, IE sucks on the Internet, Firefox is not really manageable centrally, etc.)

>> ... and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is
>> expected to type their secret credentials.

> There's nothing stopping that from being on a secure site, with a proper (presumably
> internal) domain name.

Actually, that's one reason to run a captive portal: this way proxy auth can be protected by HTTPS, and you can reuse the same credentials as in other company apps. Now, the redirect dance to convince browsers to display the auth prompt is disgraceful, you need another one to put users back on their desired site afterwards, and it doesn't work on HTTPS, since browsers refuse HTTPS redirects unless you hack user TLS sessions.

All this just because the spec "forgot" proxy<->browser signaling (no, one in-clear header in an unrelated request does not count), so the proxy needs to modify the user traffic to show anything to the user (auth, internal policy message, whatever). Transparent MITM with breakage of user TLS sessions is just the logical and most effective way to force browsers to display proxy messages.

Transparent proxies

Posted Dec 18, 2014 21:54 UTC (Thu) by zlynx (subscriber, #2285) [Link]

> They don't particularly care about transparency; they're only using transparent properties because the result does not need specific configuration in browsers (and trying to keep a park of browsers configured is hell: users install new browsers all the time, IE sucks on the Internet, Firefox is not really manageable centrally, etc.)

Well, the approved method to force use of a web proxy is to block all outgoing HTTP and HTTPS that isn't going through the proxy.

Then users can install any browser they like. It just won't work until they configure the proxy.

Much better than non-transparent transparent proxies.

Transparent proxies

Posted Dec 19, 2014 9:43 UTC (Fri) by nim-nim (subscriber, #34454) [Link]

Broken by default means support calls will multiply. Not nice.

Transparent proxies

Posted Dec 19, 2014 7:53 UTC (Fri) by rodgerd (guest, #58896) [Link]

My experience is that companies with a degree of paranoia simply install extra certs and MITM all SSL over a regular HTTP/HTTPS proxy. Unless you use a browser that doesn't rely on the supplied environment's certs (e.g. Firefox and they've only deployed Windows certs) you'll never know that you're being MITMed.

Quite what the liability looks like if someone in the IT team decides to misuse their access to your traffic is an interesting question.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 19:22 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

#3 just means that the proxy deals with the certificate checks.

It can make it much harder to accept a certificate that doesn't pass the checks, but it's arguable that if you are deploying something like this, you really don't want the end users making that decision anyway.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 14:11 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

Actually, you want users to make their own checks on the traffic you authorize. Because it's safer this way, it keeps them happy, and no web connection is used in 100% professional ways nowadays (that's where BYOD took its inspiration from).

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 19:02 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

> Actually, you want users to make their own checks on the traffic you authorize.

having them make their own checks after you have made yours is one thing. Trusting the users to do all the checks against the raw Internet is something very different.

BYOD is appropriate for some things and not for others. It's a different topic.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 21:05 UTC (Sun) by drag (subscriber, #31333) [Link]

> So, in your view if it would remain even distantly _possible_ to do something bad then there's no point whatsoever in any technical intervention to make that more difficult or visible ?

I am guessing that his view is that energy should be spent solving actual problems instead of pushing technologies that only provide a false sense of security against theoretical issues.

Plus if you bombard users with missives about lack of http:// security they will just be trained to ignore them when it actually matters.

Hopefully the developers of Chromium take this into account and use good judgment about when it is important to warn users and when it is not.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 22:42 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

Where did you get the idea that Chromium proposes to "bombard users with missives" ? The point is that an HTTP URL is not "better" than an HTTPS URL with a self-signed certificate that expired six months ago. Both might well just be symptoms of someone's laziness. Both mean you could be subject to MitM or snooping. The trend is towards two things: mutating the UI to show that something is wrong (mostly for professionals to diagnose stuff with some hope that a fraction of users might actually take heed) and prohibiting some insecure things altogether (e.g. "mixed active content" now just doesn't work out of the box in new browsers).

It is conceivable that, at some long future date when http-only is a rare exception, browsers will come back and block it by default. I can't rule it out, but I'd suggest that it's further away from us now than say, the phasing out of IPv4 or Russia joining the EU.

As to your claim that this is about "theoretical issues" let me quote the article that you apparently didn't read:

"We know that active tampering and surveillance attacks, as well as passive surveillance attacks, are not theoretical but are in fact commonplace on the web."

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:59 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

HTTPS is a bit better than plain HTTP. It protects you from monitoring by the network nodes you go through, even if it gives you pretty much nothing about the destination node, given how broken the CA system is nowadays. However, since browsers refuse self-signed certificates, they are clearly not on the opportunistic-encryption side. And it has its own drawbacks (proxy caching, CPU use, TLS maintenance, etc.)

However, it is quite disingenuous of Google to "protect" us from HTTP given all the efforts they've expended to deploy badware under the user's radar (generalised JS injections for ads, cookies and supercookies, very complex systems to multiply callbacks to third-party sites without the browser showing anything, etc.). The old 'mixed content' warning is a joke nowadays; sure, it's all in HTTPS, but your browser may not be speaking to the site shown in the address bar anyway.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 13:02 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

*cough*RequestPolicy*cough*

Unfortunately, not something I can teach friends and family to use easily :( .

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 14:09 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

RequestPolicy (and Ghostery…) are indeed good ways to check for yourself how bad things are nowadays. And why what browsers show you in the address bar is a joke. Even when it's an HTTPS joke.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 16:08 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Well, Ghostery isn't FOSS and as such I only use it on my phone (where most of the extensions I use aren't available).

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 10:05 UTC (Wed) by jezuch (subscriber, #52988) [Link]

> Well, Ghostery isn't FOSS

Yeah, it's a bummer and I'm reluctant to use it too. Hopefully EFF's Privacy Badger[1] is/will be able to supplant it.

[1] https://www.eff.org/privacybadger

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 11:41 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

The "site showed in the adress bar" is what decides what you see.

Whether they aggregate some untrusted nonsense and paste it into their reply, or they stitch it in using dynamic techniques by running some Javascript provided by a third party, hardly matters: they decide what happens and they're on the hook for it.

It's basically the newspaper argument. The publisher may very well not have written everything in the newspaper, but the law says (at least in my country) that the publisher is responsible for whatever is published anyway. Readers are entitled to blame the publisher, not just some anonymous contributor, advertiser or whatever, for anything they read in the publisher's newspaper.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 14:08 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

However, if you talk to the technical people in charge of the web site, they'll tell you everything the site sources externally is not their problem, and you know it's not their problem since it's on a separate host. That's why referencing generic JS hosted elsewhere is so popular: not because of the "hardship" of copying a JS file, nor for performance reasons (you are effectively adding the latency of establishing a connection to a third-party site to the page load), but because web site devs consider that no one can blame them for the problems of externally hosted files.

So your argument is disproved by actual social behaviours.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 18:41 UTC (Mon) by nix (subscriber, #2304) [Link]

I at least am mostly thinking of websites that don't do POST at all, of which there are a great many. If no information is transmitted, nothing secret can be transmitted...

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:15 UTC (Mon) by nybble41 (subscriber, #55106) [Link]

Not POST != no information transmitted. It just means that you're not changing anything on the server. Sensitive data can be transmitted via query parameters in GET requests, not to mention the information present in the HTTP headers. Even the fact that you're visiting a certain page may be sensitive data in some circumstances, even when the content of the page is public. There is certainly no reason to share all that data with your ISP and anyone else who may be listening in.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 22:13 UTC (Mon) by rodgerd (guest, #58896) [Link]

SSL does not protect parameters in GET or HEAD requests.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 22:26 UTC (Mon) by cesarb (subscriber, #6266) [Link]

> SSL does not protect parameters in GET or HEAD requests.

SSL *does* protect parameters in GET or HEAD requests. SSL protects the whole HTTP request, including parameters and headers. The only thing SSL doesn't protect (yet) is the requested hostname.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 2:24 UTC (Tue) by rodgerd (guest, #58896) [Link]

An HTTPS proxy will reveal GET and HEAD query parameters without decrypting the body.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 4:27 UTC (Tue) by mchapman (subscriber, #66589) [Link]

> An HTTPS proxy will reveal GET and HEAD query parameters without decrypting the body.

Only if your user-agent is utterly brain-dead.

User agents should use the CONNECT method (https://tools.ietf.org/html/rfc2817) when doing HTTPS through a proxy. The CONNECT method ensures that the traffic between the user-agent and the proxy is encrypted. The "real" HTTP request containing the GET or HEAD query parameters is only sent over the TLS tunnel set up by the CONNECT method.

Of course, it's entirely possible for the tunnelled traffic to be decrypted by the proxy if the proxy is using a certificate trusted by the user-agent. But that is a completely different problem.
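A rough sketch of what that looks like from the client side with Python's standard library (the proxy address is a placeholder): the only thing the proxy sees in the clear is the CONNECT line naming host and port; the GET line with its query string travels inside the TLS tunnel.

    import http.client
    import ssl

    ctx = ssl.create_default_context()

    # Plain TCP to the proxy, then "CONNECT www.example.org:443" in the clear...
    conn = http.client.HTTPSConnection("proxy.internal", 3128, context=ctx)
    conn.set_tunnel("www.example.org", 443)

    # ...and only then the TLS handshake with the origin server, inside which the
    # request (path, query parameters, headers) is sent.
    conn.request("GET", "/search?q=something-private")
    resp = conn.getresponse()
    print(resp.status, resp.reason)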

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 4:35 UTC (Tue) by mchapman (subscriber, #66589) [Link]

> User agents should use the CONNECT method (https://tools.ietf.org/html/rfc2817) when doing HTTPS through a proxy. The CONNECT method ensures that the traffic between the user-agent and the proxy is encrypted.

I should probably clarify this. The intent here is for the traffic between the user agent and the *origin server* to remain encrypted.

Proxies ought to simply do minimal validation on the destination for the CONNECT request (e.g. check that it's requesting a connection to port 443), then simply pass TCP back and forth. The TLS connection is established between the user agent and the origin server.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:49 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

To perform malware checking (typically, botnets) the proxy needs the full URL. Because botnets do use HTTPS nowadays and they do use public clouds too (so they can end up on a generic amazon domain for example).

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 12:25 UTC (Tue) by mchapman (subscriber, #66589) [Link]

> To perform malware checking (typically, botnets) the proxy needs the full URL.

*If* you want a proxy to do that for you, then you can set up a trust relationship with that proxy. If your browser trusts the proxy's certificate, then the proxy can do whatever it likes with your "encrypted" traffic.

> Because botnets do use HTTPS nowadays and they do use public clouds too (so they can end up on a generic amazon domain for example).

I'm not sure what relevance that has.

OK, so there's one origin server under a generic Amazon domain, and a botnet also under that generic Amazon domain. If the botnet were to somehow intercept your traffic, what is it going to do with it? The only way it could decrypt that traffic, without you knowing, is if it had a signed and trusted certificate for the origin server's domain, or for the wildcard label under the generic domain. Both of these seem unlikely.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 14:15 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

> *If* you want a proxy to do that for you, then you can set up a trust
> relationship with that proxy.

Not possible with current http. Can only set up your web client to blindly trust a CA for everything.

> If your browser trusts the proxy's certificate, then the proxy can do
> whatever it likes with your "encrypted" traffic.

Change that into people with access to the certificate can do whatever they like even when you're not going through the proxy.

> I'm not sure what relevance that has.

CONNECT opens a tunnel to a specific host. In the cloud, lots of things (including badware) use the same generic hosts.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 16:17 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

So the plan I've had for a while but haven't gotten around to implementing in uzbl is to have uzbl-core talk to a private mitmproxy[1] as the http(s) proxy and a CA bundle of *only* the one being used by mitmproxy. mitmproxy would then be the one place to configure CA policies and such and where (AFAICT) much more control can be done than what is offered through WebKitGTK/libsoup's APIs. Unfortunately, untangling the "http_proxy is TCP" assumptions is not the easiest thing to do :( .

[1]The proxy being over AF_UNIX so that TCP ports aren't exhausted or open to others on the same machine.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 19:25 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

> Proxies ought to simply do minimal validation on the destination for the CONNECT request (e.g. check that it's requesting a connection to port 443), then simply pass TCP back and forth.

no, a good proxy should do more than blindly pass traffic after a CONNECT request.

A good proxy should watch to see if the traffic looks like a SSL/TLS handshake. The Sidewinder firewall will do this, but I don't know how many other proxies do this as well.
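Roughly, such a check only needs to peek at the first bytes the client sends after the CONNECT and see whether they form a plausible TLS handshake record. A toy sketch (it only raises the bar; a determined client can mimic this prefix):

    def looks_like_tls_client_hello(data: bytes) -> bool:
        if len(data) < 6:
            return False
        return (
            data[0] == 0x16                      # record type: handshake
            and data[1] == 0x03                  # SSL 3.0 / TLS 1.x version family
            and data[2] in (0x00, 0x01, 0x02, 0x03)
            and data[5] == 0x01                  # handshake message type: ClientHello
        )

    print(looks_like_tls_client_hello(bytes([0x16, 0x03, 0x01, 0x00, 0xc8, 0x01])))  # True
    print(looks_like_tls_client_hello(b"GET / HTTP/1.1\r\n"))                        # False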

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 21:22 UTC (Tue) by zlynx (subscriber, #2285) [Link]

Is there a point to checking, since anything that wants to escape can fake a handshake?

I suppose you could prevent malware from using it to send spam email, but that seems to be more of a job for IDS and/or perimeter firewalls.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 21:32 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

Faking a handshake requires that the server and client be written to do that. Not checking allows you to connect to any service that you can reach.

Yes, it's just raising the bar, but it's probably easier for something to actually do a TLS handshake than to fake the start of it (because they can just use an existing library)

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 22:08 UTC (Tue) by zlynx (subscriber, #2285) [Link]

Without the other parts of the security, raising the bar in the proxy does nothing because the machines can just connect out to the internet directly.

With the firewall rules in place forcing machines to use the proxy there doesn't seem to be much point in checking for TLS in the connect because it will either be a correct HTTPS request that the proxy can't read or it will be a connection request to a service that's blocked by the perimeter firewall systems.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 22:21 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

well yes, if you allow your clients to connect anywhere on any port it makes no sense to try and limit what they can do on SSL, but in that case, why are you bothering to run a proxy in the first place?

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 19:36 UTC (Wed) by lsl (subscriber, #86508) [Link]

> A good proxy should watch to see if the traffic looks like a SSL/TLS handshake.

I'm not sure about that. Even if the proxy developers get it right today, it will break horribly at some point in the future and, as people don't update their shit, serve as an obstacle for deployment of TLS-ng (whatever it'll be called) or even some totally unrelated other part of the networking stack. Firewalls breaking stuff is such a widespread problem nowadays that every suggestion for them to determine if something "looks like X" gives me the creeps.

But yeah, if all those middleboxes would at least adhere to your first suggestion and refrain from modifying things, that would be awesome. I can totally live with being denied access to some resource. It's clear and you can go and bug the person responsible for setting this up. Silently modifying traffic (and breaking it in subtle ways), though...this is the pestilence.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 20:30 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

Remember that the same policies you hate because they break future things that weren't known about are also what blocks the bad guys from doing things that break stuff because they weren't anticipated.

the firewall/proxy doesn't know if the unknown that is going through is some legit thing that's new enough that the firewall doesn't know about it, or a hacker trying to break things.

At that point it has the choice of just blocking things, or trying to 'clean' them up. Arguments can be made either way, both options will break users at different times in different ways. In any case, firewalls need to be maintained and updated frequently. If they aren't, not only will they break things that are too new for them to understand, they will fail to block things that need to be blocked.

Yes, firewalls break things that users try to do. But if you are trying to defend a company, you WANT to break some of the things that users try to do, because there are far too many users who don't have any idea what they are doing, and so they try to do things that you just don't want to allow.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 22:06 UTC (Wed) by zlynx (subscriber, #2285) [Link]

In my opinion it's ridiculous to trust a browser's Javascript security sandboxing over the host operating system's IP stack.

If your IT is so paranoid it won't allow ECN on TCP/IP then it certainly shouldn't allow Javascript of any sort.

Chromium to start marking HTTP as insecure

Posted Dec 17, 2014 22:38 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

> If your IT is so paranoid it won't allow ECN on TCP/IP then it certainly shouldn't allow Javascript of any sort.

When ECN was introduced, it was using bits that were unused prior to that point. It was perfectly reasonable for firewalls to not allow those bits to pass through unmodified. How could the firewall know that it was safe for these bits to be set, when they were unused at the time the firewall was written? It could easily have been an attack to take advantage of some bug in some vendor's TCP stack.

You can argue that they should have blocked any traffic with the bit set rather than zeroing the bit, but it's hard to say if that would have caused more grief or not.

Middleboxes (was: Chromium to start marking HTTP as insecure)

Posted Dec 18, 2014 10:31 UTC (Thu) by cesarb (subscriber, #6266) [Link]

> You can argue that they should have blocked any traffic with the bit set rather than zeroing the bit, but it's hard to say if that would have caused more grief or not.

Actually, zeroing the bits was the best response. It just made ECN behave as non-ECN. IIRC, I even suggested it to someone at one point as an alternative (they were concerned about OS fingerprinting, which is a completely bogus concern IMHO).
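As an illustration only, that kind of bit-zeroing is a one-liner on a Linux box sitting in the middle; the iptables ECN target lives in the mangle table and applies only to TCP:

iptables -t mangle -A FORWARD -p tcp -j ECN --ecn-tcp-remove   # strip the TCP ECN flags from forwarded packets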

Blocking ECN, on the other hand, completely breaks things. Their firewall was in front of a server; they had NO way of knowing whether some future, possibly experimental protocol (thus perhaps not even public yet) would use that bit as signaling, expecting it to be ignored if the server did not support the new protocol.

Note that you also have to zero out the negotiation. ECN seems to have grown a way to detect whether the server is cheating and pretending it didn't receive marked packets; if ECN is negotiated between two peers, erasing the marking will cause problems.

> How could the firewall know that it was safe for these bits to be set when they were unused at the time the firewall was written. It could easily have been an attack to take advantage of some bug in some vendor's TCP stack.

It could also easily be an attempt to take advantage of some feature in some vendor's TCP stack.

If a packet-validating firewall (as opposed to one which merely triggers on well-known fields like the TCP port) is in front of a server or client, it *must* know *all* the features which can possibly be used by that server or client's network stack, and the correct way to make it ignore unknown features at each part of the protocol. Otherwise, either a new feature introduced at both ends of the connection will make it get out of sync, or a new feature introduced at the uncontrolled (outside) end will cause connections to be incorrectly blocked.

New features to network protocols are almost always introduced in a backwards-compatible manner. Either or both endpoints set a flag or add an option to tell the other endpoint that they support the new feature. This allows new features to be introduced in an uncoordinated fashion, which is necessary for a decentralized architecture like the Internet. If the other endpoint doesn't support the new feature, it will just ignore it.

When you add a middlebox, this end-to-end model breaks. The middlebox is in front of one of the ends; if the other end is upgraded and now tries to negotiate a new feature, and the middlebox drops the connection in response, the communication between the endpoints breaks. If the middlebox doesn't drop the connection, things keep working, until the endpoint it's "protecting" is also upgraded; now both sides are speaking a new version of the protocol, and the middlebox gets out of sync. The only hope for the middlebox, therefore, is to scribble over the negotiation flags or options in both directions, so either end thinks the other end doesn't support the new feature and they keep speaking the old version of the protocol. But for the middlebox to do that, it has to know *every* place a feature negotiation flag or option can be found, and how to safely overwrite it with a NOP.

And that works only for non-security-sensitive protocols. Security-sensitive protocols tend to validate that an attacker didn't change any of the negotiation flags or options, because changing them could be used to force the connection to use a more vulnerable version of the protocol (a "downgrade attack"). On TLS, it's the Finished message, which checksums the whole protocol negotiation up to that point.

The problem with middleboxes is that they are from a different networking model. In what I could call a "hop-by-hop" or "gateway" model, for a computer at organization A to talk to a computer at organization B, it talks to organization A's gateway, which talks to organization B's gateway, which talks to the destination computer. Any feature negotiation is hop-by-hop; there's no chance of getting out of sync. The gateways are natural places to do all the protocol validation the organization desires.

The Internet, however, uses mostly an "end-to-end" model, where the computer at organization A talks directly to the computer at organization B. There is no place for middleboxes in that model. Any attempt is a kludge, which will sooner or later break.

The only valid solution, for those who insist in a "gateway" model, is to use an application-level gateway, that is, to take the middlebox out of the middle and make it be one of the ends. Instead of scrubbing packets to try to take out anything which could confuse the protected endpoint, make the connection terminate at the gateway and relay its contents to the protected endpoint in a separate connection. It's easy to see that this is much more robust against both new features and unknown bugs.

It works at a higher level, too. If you are concerned about HTTP-level attacks, make the connection terminate at an HTTP gateway, parse the request as an HTTP server would, create a new request from scratch as an HTTP client would, and send it to the real server. That's not transparent, since it will discard any details the gateway doesn't understand, but if you are of a "gateway" model mentality, transparency is precisely what you do not want.

Middleboxes (was: Chromium to start marking HTTP as insecure)

Posted Dec 18, 2014 16:26 UTC (Thu) by raven667 (subscriber, #5198) [Link]

That's the smartest thing I've read all day, and the most cogent description of what is wrong with middleboxes on the network and why they don't work well on the Internet. The first firewalls were proxy-based, and the first DMZs were just shared computers that you could get a remote desktop on; stateful packet inspection came later as a way to reduce both equipment and administrative cost, but the tradeoff is that the reduction in admin cost comes at the expense of understanding what your network model is and how your applications work. If you understand what your apps and model are, then you can build full validating proxies; if you don't understand what is supposed to be happening, then just sprinkling packet filtering around, no matter how "deep" the inspection, is not going to robustly cover all your security problems. You might block some ancient worms that your systems wouldn't be affected by anyway, but it's not going to catch the next spearphishing attempt or zero-day worm or whatever state-sponsored APT is causing the most trouble these days.

Middleboxes (was: Chromium to start marking HTTP as insecure)

Posted Dec 18, 2014 20:50 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

> Actually, zeroing the bits was the best response. It just made ECN behave as non-ECN.

This is what most firewalls (that did anything with ECN) did, and they were blasted for "breaking" ECN.

In other cases (like window sizing IIRC) zeroing the bits actually breaks connections for users under some conditions.

By the way, this sort of thing is why I strongly prefer proxy firewalls (even transparent ones): the connections are to the firewall, and the normal operation of the TCP stack sanitizes things without having to try to figure out what's good or bad about a packet (or fragment).

Cisco and Checkpoint did the world a great deal of harm when they convinced people that a firewall should be a packet filter and nothing else.

Middleboxes (was: Chromium to start marking HTTP as insecure)

Posted Dec 19, 2014 9:56 UTC (Fri) by nim-nim (subscriber, #34454) [Link]

The Internet does not mostly use an end-to-end model. In fact the Internet was originally hop-by-hop only (because resources were few and you were *happy* to delegate processing to another middlebox). See how SMTP and POP behave.

This was progressively broken by people who didn't want to think about the complexities of hop-by-hop, and who defined lots of "end to end" stuff with "it should work in hop-by-hop mode, but we'll think about this later".

Also, it's easy to write "middleboxes are broken, imagine when the two ends are newer than the middlebox". But in real life, especially for security needs, it's "the middlebox is more up to date than the endpoints", because it's way easier to deploy new badware checks on a single centralized middlebox than across a whole park of endpoints. In fact the whole point of the current cloud craze is that it is way too expensive to try to keep endpoints up to date.

When LWN runs articles about how Thunderbird is killing Gmail, then it will be reasonable to write that centralized systems are the technical holdouts.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 9:42 UTC (Tue) by cesarb (subscriber, #6266) [Link]

> An HTTPS proxy will reveal GET and HEAD query parameters without decrypting the body.

That makes no sense.

First, GET and HEAD do not have a body. It's other methods like POST and PUT which have a request body.

Second, if a proxy can decrypt the request (where the query parameters are) for a GET and HEAD, it can decrypt the request for a POST (including everything posted by the form); in fact, it can't even distinguish between a GET or a POST before decrypting the request, since the HTTP method is also encrypted!

You seem to be confused as to how HTTPS works. HTTPS does *not* work like this:

GET /blah/blah/blah?password=password HTTP/1.0
Host: www.example.com
[negotiates and switches to encrypted data]
...

Instead, HTTPS, even through a proxy, works like this:

[hostname: www.example.com]
[negotiates and switches to encrypted data]
GET /blah/blah/blah?password=password HTTP/1.0
Host: www.example.com
...

As you can see, the only thing visible without decrypting is the hostname.

Your confusion might come from the common recommendation of "don't put secret data in the query string". But that's not because of HTTPS; it's because of browser history and logging. Both the browser history and common HTTP server logs write down the requested URL, including query parameters; with POST, the information is not in the URL, so it's not saved. That happens before encryption (in the browser) or after decryption (in the destination server), so HTTPS is not involved; in fact, the same recommendation also applies to plain HTTP.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 20:22 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

The only way you're getting *no* information is to point your browser at /dev/null. Which is pretty boring, IMO.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 6:53 UTC (Sun) by iabervon (subscriber, #722) [Link]

It depends a lot on what they do for insecure sites. If they do what they used to do (connect, but have some red markings in the location bar), it would be good, because that would make you constantly aware that what you're doing isn't secure, and you could make sure to only type a password when that marking is absent. If they make you click through something to access sites that aren't secure, it would be awful, both because it would train people to ignore security warnings, and because having ignored a security warning tends to reduce people's vigilance.

Having a cue for "someone unexpected might be watching you" that's generally present lets you behave appropriately for that context: you can feel uncomfortable doing things that require privacy while the cue is there. People do, after all, go out in public all the time, and behave appropriately despite the lack of warnings that they could be seen.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 4:47 UTC (Mon) by b7j0c (guest, #27559) [Link]

all of the established players of the web want it this way...they can afford https. this will just make the smaller sites seem less desirable

it's about creating a class system on the web

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 8:14 UTC (Mon) by dgm (subscriber, #49227) [Link]

<tinfoilhat>
Maybe, maybe not. It can also be a means of preventing ISPs from gaining so much intelligence about their users. They have grown used to having it, so if they lose it they will need to buy it somewhere else. Who owns (at least one of) the endpoints on the very first thing you do when you open a browser?
</tinfoilhat>

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 9:37 UTC (Mon) by cesarb (subscriber, #6266) [Link]

Yahoo?

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 16:44 UTC (Mon) by raven667 (subscriber, #5198) [Link]

That's not tinfoil-hat speculation at all; that is exactly the business calculation that an advertising company at the heart of the Internet like Google is making. Monitoring done on the Internet by ISPs and rogue National Security Agencies is _competition_ for their consumer ad-profile business, so using encryption to protect their data sources is now a requirement to protect their revenue stream. If you want this behavior-profile data then you have to come and pay the source big bucks; you can't leech off their tracking and surveillance infrastructure and then sell cut-rate profiles in competition with them.

If you think that Google is doing this out of the goodness of their heart, then you are going to be very confused and disappointed when they don't behave as you predict. If you see this as a cynical protection of their revenue source that happens to have some good come out of it, then your predictions are going to be a lot closer to observed behavior and you'll be happier for it.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 17:24 UTC (Mon) by dlang (✭ supporter ✭, #313) [Link]

The other way of looking at it (which matches their statements) is that their business requires a robust, well used Internet. Anything that threatens that threatens them.

Threats they have reacted to in the past include a lack of good browsers (sponsoring Mozilla and writing Chrome) and mobile-device lockdown (the existence of Android, the Nexus line, etc.).

Having people afraid of using the Internet because governments are spying on them (and remember, it's not just the NSA) is just another threat that could reduce what people are willing to do on the Internet.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:14 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> The other way of looking at it (which matches their statements) is that their business requires a robust, well used Internet. Anything that threatens that threatens them.

But it doesn't require a free and open internet, and it is opposed to individual privacy: their protection of privacy only goes as far as preventing others from making money on users that they control, and they will fight anything that protects private individuals from Google mining the user data it is capable of collecting.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:20 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> Having people being afraid of using the Internet because of Governments spying on them (and remember, it's not just the NSA), is just another threat that could reduce what people are willing to do on the Internet.

I don't think they are opposed to any government spying; they are opposed to governments gathering this information on their own and not paying for the privilege through a Google toll-booth. They are interested in controlling public perception to keep the music going: people have to believe their data is safe, and it doesn't actually have to be safe if the public has no means to audit what happens to their data, because all the analysis that happens is a well-protected corporate secret. Google has much higher operational security standards than, say, the NSA does.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:25 UTC (Mon) by dlang (✭ supporter ✭, #313) [Link]

why do you assume malice? Google is staffed by a lot of techies; what makes you think that they favour giving the government any access to things?

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 20:58 UTC (Mon) by raven667 (subscriber, #5198) [Link]

I didn't assume malice, I'm just pointing out that their interests might not be fully aligned with those of privacy advocates, so that one isn't surprised when those interests diverge. They want to keep their "big data" under lock and key, selling cooked reports to whoever, government, big business, it doesn't matter, whoever can pay. Targeting based on analysis of individual people is their differentiating value and both advertisers and governments want targeting data for propaganda or enforcement purposes. "Malice" and "evil" are simple moral abstractions when the more complex truth is differing alignment.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 10:23 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

First, Google has massive conflicts of interests on user privacy. Their core business is selling information about you to ad buyers. They will do nothing that will compromise this data collection, and everything to expand it. Every Google employee understands this.

Second, Google is effectively in a close alliance with the US government. They provide it with the info it wants, and a nice unofficial soft-power front for foreign policy. In return the US government helps Google with foreign states, turns a blind eye to its progressive encroachment on users, and protects it from the US media lobby. This cooperation happens at the highest Google levels and does not need participation by most Googlers (even though they would have to be blind to ignore it: the opening and closing of foreign Google offices closely follow US foreign policy, Google people are more and more often involved in foreign troubles, etc.).

If Google management was furious about the NSA, that was because they viewed it as encroachment by a partner, stealing the cookies instead of bartering for them. They're making sure this stealing won't happen again.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:22 UTC (Mon) by lsl (subscriber, #86508) [Link]

> The other way of looking at it (which matches their statements) is that their business requires a robust, well used Internet. Anything that threatens that threatens them.

Exactly. More than anything else, Google's biggest competitor is the offline ad business. They're a nobody there but for every truckload of money spent on online ads, Google is likely to get a big share from it. Their business (at least partially) depends on the Internet being a safe and mellow place where people are comfortable taking out their money. Because otherwise, Google's customers are simply going to spend their money on offline ads.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 6:37 UTC (Mon) by salimma (subscriber, #34460) [Link]

A nice compromise would be to warn on HTTP only when a form is detected, especially if that form asks for login details.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 16:28 UTC (Mon) by k8to (subscriber, #15413) [Link]

That would actually be useful.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 22:06 UTC (Mon) by richmoore (subscriber, #53133) [Link]

You guys do realise that this has been there by default even in IE for over a decade? You've just turned it off - like everyone else.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 23:42 UTC (Mon) by k8to (subscriber, #15413) [Link]

When was the last time I used IE to submit login details, again?

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 15:32 UTC (Sat) by philipstorry (subscriber, #45926) [Link]

I'm in two minds. On the one hand, this is good for security. On the other, the timescale may be too quick - we'll need to wait for projects like "Let's Encrypt" to get real traction.

I run my own small vanity^Wpersonal website. It's not the effort to put https on the server that stops me from doing it - that would be five bucks a year and a couple of hours. It's the fact that I have embedded content from way back. Pictures embedded from my Flickr account, buttons in sidebars - it'll take a day or two to go through all of that and change it, at least. And until I do, if I enable https all it will do is generate seemingly random security warnings for the end user, depending on which page they're on.

Most of my recent embeds have been https, so I know it's feasible today. But whilst "sometime next year" is achievable for me, that's only because I've been thinking about going https already. I've looked at tools that can do the search and replace in my content, and I'm technical enough (I know regular expressions well enough to know that they can be dangerous) that I'm in a decent position to do this and do it well on my small corner of the internet.

But I fear that almost any timescale for this change will be too aggressive for many small non-profits and small companies, who lack the technical know-how to enable https AND not have their older content generate warnings. This is bigger than just server configuration.

Spurious warnings would make the whole project useless, as it will just train users to ignore the warnings. Marking as dubious by default, as they've proposed, is therefore not the right way to do this.

I wonder if we shouldn't look at some kind of flag in DNS, whereby the domain is removed from this kind of checking temporarily to stop spurious warnings. That's an awful technical solution and an awful security solution - but I'm not proposing it as a technical or security solution to fix the issue. I'm proposing it because it's a simple initial action for website owners that prevents spurious warnings, and it focuses website owners' minds on the issue.

Instead of "this browser change happens, all your content must work or your site gets flagged, get working on it" the process becomes "in the future, your site must be secure - set this flag now to exempt it, but know that at some point nothing will check that flag, so you'd better get to work". That gives people more time, and makes this an easier change to handle.

If a redirect to https happens and a certificate is present it should still be handled normally. So if your web server has a decent config, a DNS hijack to falsely set the flag has no effect anyway...
(And if your DNS is hijacked, the attacker will likely redirect to another webserver, surely?)

Eventually (say 2017 or later?), we pull the check for the DNS flag, and all http content is marked insecure. But at least everyone was aware because they had to do something easy at first, so could start planning.
By making this change in the browser require no work by the website owners, we risk it looking like a very autocratic and thoughtless change - even if it is well intentioned.
Big hosting companies will likely make it easy to use tools like "Let's Encrypt" and make setting the DNS flag a simple check box, so at least the starting point is easier for people to deal with. And website owners are now thinking about the whole change more positively, because the start was easy to manage.

Basically, I don't think a unilateral change in how browsers handle http/https security signals will go down well, unless it's made easy to manage and the timescales are suitably long. As put here, this seems doomed to fail by generating ill will on the content producer/website owner side, and the worst case is that it will make things less secure by training users to ignore these warnings.

Long term, this is the right thing to do, and we should aim for it. We just need to be careful in how we do it.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 7:11 UTC (Sun) by jeremy (subscriber, #95247) [Link]

> It's not the effort to put https on the server that stops me from doing it - that would be five bucks a year and a couple of hours. It's the fact that I have embedded content from way back. Pictures embedded from my Flickr account, buttons in sidebars - it'll take a day or two to go through all of that and change it, at least.

Have you ever heard of a tool called sed?

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 10:21 UTC (Sun) by josh (subscriber, #17465) [Link]

Sadly, not every service trivially supports https with the same URLs as http; sometimes you need "https://secure.example.com/...", or some other domain or path. And these change. That's part of what makes the job of extensions like HTTPS Everywhere difficult.

So no, sed wouldn't work unless you go find all the http URLs, find what service they point to, find how that service handles https, and sed URLs for that particular service's domain only.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 12:44 UTC (Sun) by philipstorry (subscriber, #45926) [Link]

Yep, this is one of the bigger risks when trying to move to https everywhere - the actual rewrite should be the simple bit. The investigation to determine what you need to rewrite - and how - is the hard part.

For a preliminary analysis, I ended up grabbing my entire site via wget, then using grep to pull in all matches for "http:\/\/.*\/" and piping that through a sort|uniq -c pipeline to get an overview of the scope of the problem.
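Roughly, in shell terms (with example.org standing in for the real site, and the pattern only a first approximation):

wget -q -r -P mirror http://example.org/                             # mirror the rendered pages
grep -rhoE 'http://[^"<> ]+' mirror/ | sort | uniq -c | sort -rn     # count distinct http:// references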

Even that's potentially no guarantee - if you have javascript blocks for ads or analytics, then the URLs might be built dynamically, and wget isn't going to process that javascript and reveal them. But for content that you've produced, it should be OK.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 12:35 UTC (Sun) by philipstorry (subscriber, #45926) [Link]

Yep, I'm well aware of sed.

But as I use a CMS (like many websites), most of the content is in a database. That makes things much trickier - I'd have to dump the DB, use sed, and then reload the DB. Depending on how the dumped DB is formatted, all kinds of issues might crop up - what if sed hits an EOF character and stops, but the file has more content?

If all websites were just a bunch of files, sed would be fine. But that's not the case. :-(

As it stands, I'm using Drupal and there's a module which will do search-and-replace. I need to do some testing before using it though. Realistically, to do a proper job on any website, you'd be looking at a test run of your https config and the content re-writing tool you use on a VM.
(If the module didn't exist, I'd be dumping my MySQL DB and using something like sed, and then sacrificing lots of chickens before reloading the DB. I'm not a DBA, and doing that kind of thing makes me nervous.)

If your website is something like the old BBC News website - which has thousands if not millions of pages of content - then that test run alone could take days. But for most smaller organisations, I think a couple of hours to a day for the operation is probably about right.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 19:28 UTC (Sun) by niner (subscriber, #26151) [Link]

With a database you do not need sed. Replacing strings in columns is usually quite simple. For example, as you seem to be using MySQL:
http://dev.mysql.com/doc/refman/5.0/en/string-functions.h...
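A minimal sketch of what that could look like; the user, database, table, column and host names below are all placeholders for whatever your schema actually uses, and you should take a dump first:

mysql -u someuser -p some_cms_db <<'SQL'
-- placeholder table/column names; back up the database before running this
UPDATE content_body
   SET body = REPLACE(body, 'http://embeds.example.net/', 'https://embeds.example.net/')
 WHERE body LIKE '%http://embeds.example.net/%';
SQL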

That said, if your CMS supports search/replace (and if it doesn't, why are you still using it?), that's much safer. Especially if it supports interactive search/replace, so you can inspect each change.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 10:26 UTC (Mon) by WillC (guest, #96862) [Link]

Or, of course, you could write local URLs as "/path/to/asset.gif" and "//other.host/path/to/whatever..." and they work in either protocol, just like that.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 19:29 UTC (Sat) by flussence (subscriber, #85566) [Link]

I would prefer it if everything was marked insecure until I manually intervene, as OpenSSH does.

I sure as heck don't want to blindly trust all the default CAs the browser dumps on my system; every single one of those is a potential SPOF and several *have* been.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 19:34 UTC (Sat) by josh (subscriber, #17465) [Link]

We're getting there. Widespread use of certificate pinning is a good first step.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 11:29 UTC (Sun) by richmoore (subscriber, #53133) [Link]

Minimising the number of CAs is a good step. It's been shown however that users don't notice/care when an ssh host key changes, see https://www.usenix.org/system/files/login/articles/105484... - this is for ssh which is mainly used by techies, I imagine that the effectiveness for browsers would be even worse.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 13:59 UTC (Sun) by RobSeace (subscriber, #4435) [Link]

That paper was talking about a MITM attack whereby the protocol version was altered, thus presenting what looked like a new key for the host (assuming the client has never before connected to it via SSHv1, for instance)... That's much different than what happens when a changed key of the same type as already recorded for the host is presented... In that case, I don't even think it gives you the option of ignoring it; it just refuses to connect! If you want to "ignore" it, you have to go through the work of manually editing your known key file, and removing the old entry... Hopefully, not something anyone would casually do without verifying that the key has really changed for a legit reason...

But, even in the case of a host you regularly connect to suddenly presenting what appears to be a new key, I'd hope most people would be very suspicious, and not just blindly type "yes"... I would hope an attack of that nature would only succeed very often on people's first connections to the host in question; ie. they were expecting a new/unknown key warning... In that case, yes, most people don't bother verifying that the new key is really the right one...

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 14:06 UTC (Sun) by richmoore (subscriber, #53133) [Link]

The behaviour depends on the client you're using. For example, if you use PuTTY then a changed host key is just a single button push away from being ignored.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 16:19 UTC (Sun) by richmoore (subscriber, #53133) [Link]

You missed the part of the paper where he found that the planned test could not be performed since there was no baseline data. No one had ever asked about host keys that changed due to reinstalls etc., indicating that users do in fact just blindly ignore this.

Is it really zero?

Posted Dec 14, 2014 17:23 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

Although of course it isn't exactly an exhaustive test.

I wonder how many of the systems administrators I've had over the years would say the same (nobody has ever asked) even though I think almost all of them have presided over a sufficiently poorly managed hardware change to trigger this error for me, and get a question in their INBOX as a result. People aren't very good at remembering such trivial events. It's possible some people reading this are among my past systems administrators. Do you remember getting an email about this from me?

Also, where a key change might be triggered by a known hardware refresh, it's possible people were expecting it. I'd still ask (out of an abundance of caution and because I want to encourage a more thoughtful approach to key management) but many people, knowing machine A has been replaced, would be less surprised to see the key mismatch flagged for machine A than if it were to happen (as it might in an attack) without notice one Monday morning.

I don't doubt that the actual rate of at which people would query an unexpected key change is very low, embarrassingly low, but it may not actually be zero. Asking admins to keep count over time might produce more reliable results.

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 19:45 UTC (Tue) by jhhaller (subscriber, #56103) [Link]

Much easier to just remove the known_hosts file than to edit it and remove the offending line. In an environment where the hosts are frequently reinstalled, not much of importance is lost. I would argue that allowing the offending line to be replaced through the ssh user interface, as PuTTY does, would be lower risk than the current behavior, with the default being not to replace it, so one would at least have to type 'y'.
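If you'd rather not throw away every host you've ever verified, OpenSSH can also do the surgical removal for you (the host name here is a placeholder):

ssh-keygen -R somehost.example.org   # drops just that host's entries from ~/.ssh/known_hosts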

Since I really have no idea whether the host was reinstalled or someone inside my network is running a MITM attack, I assume the former. If I used ssh outside my network, I would be more concerned. While certificate authentication has its issues, the SSH option of "trust on first connect" is significantly worse, to the point that it is of little value. The certificate-based approach at least reduces the risk to the insecure distribution of trusted public certificates.

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 19:38 UTC (Sat) by toyotabedzrock (guest, #88005) [Link]

Maybe they should stop hiding the http part of URLs?

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 21:06 UTC (Sat) by gutschke (subscriber, #27910) [Link]

The proposal actually sounds pretty reasonable to me.

Universal deployment of encryption for all connections on the internet addresses a lot of problems. And the linked blog post does go into detail which problems a) exist right now, and b) can be mitigated by encryption.

Also, this is just an initial proposal to come up with a time line. It does not say that Chrome/Chromium is going to make the switch in 2015. In the past, similar efforts have taken multiple years to complete. This all makes a lot of sense to me, and I don't understand why anybody would complain that it is too early to even start talking about making the switch. In fact, the web community as a whole probably should have started the conversation years ago.

Finally, it is in fact pretty easy these days to switch an entire site to all-https-all-the-time. For many users it is as easy as:

1) make main web server(s) inaccessible from the web,
2) install nginx on a publicly accessible IP address,
3) obtain free SSL certificate from StartSSL,
4) configure nginx to forward all HTTP requests to HTTPS,
5) configure nginx to forward all HTTPS requests to internal web servers,
6) check on https://www.ssllabs.com/ssltest/ that the site gets at least a rating of "A" if not "A+".

All of this shouldn't take more than at most half a day of work. It really is that easy these days. As an added bonus, NGINX can transparently enable SPDY support; so, after all is said and done, the site is probably going to be a lot more responsive for most users.

Of course, there are lots of loose ends. But the above covers 90+% of the work. Mixed-content warnings are probably the biggest remaining issue, but nginx can help with that. And maybe, transitioning the entire infrastructure to more modern tools, instead of relying on a reverse proxy, is a good long-term goal. But that work can happen gradually.

Remembering to regularly update the SSL certificate is the next problem; the Let's-Encrypt effort should help with that, but in the meantime, paying for a certificate from a provider that offers better tools than StartSSL is an option.

Setting up DNSSEC and DANE would also be a good idea at this time. But the benefits are still limited -- and honestly, it's only another day or two of work to get that all straightened away.
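For the DANE half, at least generating the record is mechanical. A sketch using openssl against the certificate file from the example configuration below ("3 1 1" meaning DANE-EE, SubjectPublicKeyInfo, SHA-256):

# hash the certificate's public key for a TLSA 3 1 1 record
openssl x509 -in certs/example-org.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256
# publish the resulting digest as:
#   _443._tcp.example.org. IN TLSA 3 1 1 <digest>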

In other words, there really is no good excuse to still have plain-HTTP web servers on the public internet. Spend the half-day of work and fix it already!

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 22:24 UTC (Sat) by gutschke (subscriber, #27910) [Link]

To put my money where my mouth is, here is an example configuration for nginx that will do what I described. It probably needs to be customized for the particular use case, but it should be good enough to get everybody started:
# Forward requests for https://example.org, and use the appropriate certificate
server {
  listen 443;
  listen [::]:443;
  server_name *.example.org example.org;
  ssl on;
  ssl_certificate certs/example-org.crt;
  ssl_certificate_key certs/example-org.key;
  ssl_trusted_certificate certs/certificatechain.pem;
  ssl_session_timeout 5m;
  ssl_session_cache shared:SSL:50m;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:HIGH:!aNULL:!MD5:!kEDH";
  ssl_buffer_size 8k;
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  resolver 8.8.8.8;
  autoindex off;

  location / {
    proxy_pass http://192.168.1.1;  # Substitute IP address of internal web server
    proxy_set_header Host $host;
    proxy_intercept_errors on;
    proxy_redirect http:// https://;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

# Redirect http://example.org to HTTPS
server {
  listen 80;
  listen [::]:80;
  server_name *.example.org example.org;
  rewrite ^ https://example.org$request_uri? permanent;
}

Chromium to start marking HTTP as insecure

Posted Dec 13, 2014 23:52 UTC (Sat) by murukesh (guest, #97031) [Link]

Regarding that rewrite to https://, see http://wiki.nginx.org/Pitfalls#Taxing_Rewrites

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 0:36 UTC (Sun) by gutschke (subscriber, #27910) [Link]

Thanks for catching that. Yes, that looks a little nicer.

In practice, it doesn't matter all that much, as this rule will trigger relatively rarely ... only the very first time a user types the URL into the browser and forgets to include the "https://". Since we enabled HSTS, the browser should remember, and in the future it should always automatically switch to TLS, even if the user forgets to tell it.

But of course you are right, and there is no excuse for needlessly invoking a regexp match, when it is not actually required.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 17:10 UTC (Sun) by nix (subscriber, #2304) [Link]

I think you just disproved your own argument. It is *not* simple: even you, who thought it was simple, messed up your first attempt at it! (And I'm still not sure what the right thing to do is, nor what to do if I don't have a spare machine, only a spare network port on some existing machine that's already running an internal webserver -- does the redirection need changing? What about all the relative URIs on pages served by that site? Do they all get rewritten too?)

This is *not* simple, sorry.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 18:22 UTC (Sun) by gutschke (subscriber, #27910) [Link]

"murukesh" asked me to change
rewrite ^ https://example.org$request_uri? permanent;
into
return 301 https://example.org$request_uri;

Both lines do exactly the same thing ("301" is HTTP's way of saying "permanent"), but the latter avoids an unnecessary regexp comparison. "murukesh" is entirely correct that his version is the more idiomatic one. But even if you didn't make that change and used my example configuration verbatim, I bet you would not notice any difference. I doubt even most benchmarks could tell the difference. "^" is the simplest regexp possible; it's not going to cost any appreciable amount of performance.

If you have spare IP addresses or spare machines, things are pretty easy. But even if you don't have any spare resources, things are not really that much more difficult.

Move your existing server somewhere it isn't accessible from the internet. This could be a private IP address (e.g. 192.168.1.1 or even 127.0.0.1) or it could be just a non-standard port that your firewall blocks (e.g. port 81 instead of port 80) -- or both.

Then use the example configuration file that I showed earlier. Replace each instance of "example.org" with your domain name. And replace the one instance of "http://192.168.1.1" with a URL that points to your original server (e.g. http://127.0.0.1:81).

You still need to get keys and certificates for your domain, and then put them into the files "example-org.key", "example-org.crt", and "certificatechain.pem". In my example configuration file, I assume that you put them into "/etc/nginx/certs/", but you are welcome to specify a different absolute path.

There are plenty of resources online describing how to generate a private key and how to obtain (free) certificates. But if you need help, don't hesitate to ask. I'll be happy to answer any of your questions. It's not really difficult, but it does admittedly require a little bit of reading if you have never done any of this before.
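The key-and-request step itself is short; a sketch with openssl, reusing the file names from the example configuration (the CSR is what you hand to StartSSL or whichever CA you pick):

openssl req -new -newkey rsa:2048 -nodes \
  -keyout certs/example-org.key -out certs/example-org.csr \
  -subj "/CN=example.org"
# keep the .key private; submit the .csr to the CA, then save the issued
# certificate as certs/example-org.crt and the CA chain as certs/certificatechain.pem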

Relative URLs all work automatically and you don't need to do anything special. Absolute URLs are more problematic, if they include the scheme (i.e. if they say "http://..."). You can use nginx to find these and rewrite them for you. But that makes things more complicated, so I didn't want to include that in my basic example configuration file. It probably requires branching out to the embedded LUA interpreter in nginx.

For most people, it is probably a better solution to simply fix their website to never have absolute URLs. But if you want me to post an example of how to use Lua for rewrites, I can look it up for you later in the day.
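For simple cases there is also a middle ground that avoids Lua entirely: nginx's stock sub module (a compile-time option, but included in most distribution packages) can rewrite absolute links in proxied pages. A sketch of what would go inside the "location /" block of the earlier example:

  proxy_set_header Accept-Encoding "";                     # sub_filter cannot rewrite compressed upstream responses
  sub_filter_once off;                                      # replace every occurrence, not just the first
  sub_filter 'http://example.org' 'https://example.org';    # rewrite absolute links to the HTTPS origin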

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 18:55 UTC (Mon) by nix (subscriber, #2304) [Link]

> Relative URLs all work automatically and you don't need to do anything special. Absolute URLs are more problematic, if they include the scheme (i.e. if they say "http://..."). You can use nginx to find these and rewrite them for you. But that makes things more complicated, so I didn't want to include that in my basic example configuration file. It probably requires branching out to the embedded LUA interpreter in nginx.

This is looking less and less simple by the minute! (At least a nonstandard port works -- that's what I was hoping.)

I note that I have in the past been told 'never use relative URLs, only absolute URLs' for various reasons -- and now it turns out this causes problems here... groan.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 19:47 UTC (Mon) by gutschke (subscriber, #27910) [Link]

When you say that you always use absolute URLs, do you mean something like "http://example.org/x/y/z.html" or do you mean "/x/y/z.html"? The former is problematic, as it includes the scheme and the host name. Usually, it makes things complicated if you include information about the server in the URL, as it ends up being very difficult to make any changes to your infrastructure.

The latter of course is fine; it's still an absolute path, but it doesn't needless encode redundant information. And you might in fact want to do that, whenever you otherwise would have had to do something like "../y/z.html". Relative URLs with ".." in the path are always awkward; not every server supports this syntax, and it can also make it difficult to reason about path-based security. Maybe, that's what you were remembering, when you said you avoid relative URLs.

Another very clean solution is to put a "<base>" tag at the top of your HTML file. That way, if you ever need to make changes to URLs, they are contained in a single place.

A reason why some people think they might need to encode scheme and host name is that they always want to redirect their users to the secure site, and to the canonical host name (e.g. "example.org" instead of "www.example.org"). While the goal is laudable, the approach of encoding this data in the content of the page is bad. It is much better to tell the web server that it should generate an HTTP redirect whenever a request arrives for the wrong destination. That way, you avoid the layering violation.

There are a very small number of remaining cases where encoding scheme and host name is needed. Sometimes it can be worked around with Javascript, sometimes it can't. Those are the ones you need to look for and either edit or teach your web server to detect and rewrite on the fly. This is similar to what people used to do with server-side includes back in the dark ages.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 22:03 UTC (Mon) by gracinet (subscriber, #89400) [Link]

IIRC, http://example.org/x/y/z.html is an absolute URI, whereas /x/y/z.html is a relative URI with absolute path.

Most of the time, when you write a web app, making assumptions about the URL that users see in the browser is a bad idea in my experience. It can change due to various policy or business decisions, including the name of the organization the application is running for! In a proper CMS, all of that should stay dynamic.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 22:57 UTC (Mon) by gutschke (subscriber, #27910) [Link]

Sweet. I didn't know the different names "absolute URL" vs. "relative URL w/ absolute path". That makes the conversation so much easier :-)

And yes, I agree with you, keeping things dynamic makes life a lot easier. I have had to embed various web apps and components within other web sites, and it is so much easier whenever the app was written with embeddability and portability in mind.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 11:29 UTC (Sun) by mpr22 (guest, #60784) [Link]

Please don't use <pre>...</pre> with long lines in LWN comments; the CSS is not written in a way that allows web browsers to cope gracefully with it.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 17:54 UTC (Sun) by gutschke (subscriber, #27910) [Link]

Have you tried the new UI that LWN is currently testing? I switched to it recently, and I no longer see any problems with long lines. Of course you are right, and in the transition phase from the old to the new UI, some people will still occasionally suffer from that problem.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 18:10 UTC (Sun) by osma (subscriber, #6912) [Link]

Yes, it's rather simple to put an HTTPS-to-HTTP proxy in front of your web server. But there are also pitfalls:

Your HTTP web server will now see connections from localhost, not the original IP. This might cause issues with logging, access control, sessions and the like. Luckily there are solutions like mod_rpaf, mod_extract_forwarded and mod_remoteip (but the fact there are three alternatives, plus patched versions of mod_rpaf, already shows it's not that simple). You also need to configure them appropriately.
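For mod_remoteip, at least, the configuration side is small. A sketch for an Apache 2.4 backend sitting behind the nginx example from elsewhere in this thread, assuming a Debian-style layout and the proxy on 127.0.0.1:

cat > /etc/apache2/conf-available/remoteip.conf <<'EOF'
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 127.0.0.1
EOF
a2enmod remoteip && a2enconf remoteip && service apache2 reload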

Also I've seen web applications try to detect whether the connection uses HTTPS, by checking for environment variables normally set by mod_ssl. But if you use a proxy, those won't get set - unless you have the right patched version of mod_rpaf.

In one particular "worst case" scenario I've had to resort to this kind of stack:

* Pound to proxy HTTPS to HTTP
* Varnish as a caching reverse proxy
* Pound again, to proxy HTTP to HTTPS
* Apache with mod_ssl running the original web application

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 18:30 UTC (Sun) by gutschke (subscriber, #27910) [Link]

I would have thought that anybody who really cares about IP addresses can be told to look at the "X-Real-IP" header these days. That's been working for many years now. But I guess your example shows that there are still some services out there that haven't been updated. Sad, but probably true.

Maybe, that means now is a good time to upgrade the infrastructure. If things are this outdated, who knows what other issues and security problems lurk in that old code base. But yeah, it sucks to be stuck maintaining those systems.

As for detecting HTTPS, I don't think I have had any particular problems with that so far. Would it not have sufficed to change the proxy URL to "https://..."? (I had to do this once, to re-write an old SSL connection to support TLS). Or do I misunderstand your example and it was something more complicated than that?

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 4:21 UTC (Mon) by plugwash (subscriber, #29694) [Link]

The problem is that you must not just accept headers like X-Forwarded-For or X-Real-IP unconditionally. You must be very careful to ensure you only use headers set by trusted reverse proxies and not headers set by attackers.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 4:30 UTC (Mon) by gutschke (subscriber, #27910) [Link]

That is certainly true, and that's why I keep saying that the original web server has to be made inaccessible from the public internet.

The nginx reverse proxy then unconditionally sets the X-Real-IP header and overwrites any existing such header, if provided by untrusted sources. So, this already works as intended.

Of course, if you do rely on IP addresses for anything other than logging purposes (and I am a little uneasy about trusting IP addresses for anything), you should have unittests that regularly test that this assertion holds true and you didn't accidentally poke a hole into your security somewhere.
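A crude version of those checks can even be run from outside the network; the addresses below are placeholders, while the port and header are the ones from the earlier example:

# the backend must not be reachable directly from the internet
curl -s --connect-timeout 5 http://203.0.113.10:81/ && echo "FAIL: backend directly reachable"
# and a client-supplied X-Real-IP must be overwritten by the proxy; check that the
# backend's access log attributes this request to your real address, not 198.51.100.99
curl -s -H 'X-Real-IP: 198.51.100.99' https://example.org/ > /dev/null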

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 8:32 UTC (Mon) by osma (subscriber, #6912) [Link]

> As for detecting HTTPS, I don't think I have had any particular problems with that so far. Would it not have sufficed to change the proxy URL to "https://..."? (I had to do this once, to re-write an old SSL connection to support TLS). Or do I misunderstand your example and it was something more complicated than that?

Web applications have many reasons, some better than others, for detecting whether the connection uses HTTPS or not. For example, it may affect URLs of generated links, session cookie handling, showing the user a link to the HTTPS version, or maybe issuing a redirect to HTTPS when a HTTP connection is detected.

Custom web applications are not a problem, as they can be customized to the environment, but packages like WordPress, Drupal, phpMyAdmin, phpBB, Horde etc. can be a problem (I'm mentioning only PHP applications, but I believe the issue is more general).

For example, based on a cursory inspection of phpMyAdmin, I see dozens of code lines about HTTPS. Some code seems to be about cookies, others about URL generation, yet others about forced redirects to HTTPS. I wouldn't trust this code to work correctly without having the proper HTTPS environment variables set. One could maybe set them manually in Apache configuration directives and hope for the best...
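The "set them manually and hope for the best" route is at least short. A sketch for an Apache backend that only ever receives traffic the proxy has already terminated TLS for (if any plain-HTTP requests can still reach it, this will lie to the application; Debian-style paths again):

cat > /etc/apache2/conf-available/behind-tls-proxy.conf <<'EOF'
# tell PHP applications (via $_SERVER) that the original connection was HTTPS
SetEnv HTTPS on
EOF
a2enconf behind-tls-proxy && service apache2 reload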

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 10:19 UTC (Sun) by kleptog (subscriber, #1183) [Link]

Unfortunately, if you have a domain something.foo.com then you can't do squat; you have to wait until foo.com organises something.

In my case, I haven't yet found out how to get an SSL certificate for something.debian.net.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 12:00 UTC (Sun) by jcristau (subscriber, #41237) [Link]

FWIW gandi let me buy one for https://france.debian.net/

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 8:39 UTC (Sun) by yoe (subscriber, #25743) [Link]

Looks like I'll have to switch away from chromium again--this is a dumb idea.

There is a major difference between 'this site is insecure because they messed up' and 'this site is insecure because it doesn't matter'. Making them both show security warnings will train users to ignore them when it really matters, and that will be a net reduction in security.

Chromium to start marking HTTP as insecure

Posted Dec 14, 2014 10:53 UTC (Sun) by chirlu (subscriber, #89906) [Link]

Sounds much like that comment, so see the discussion there.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 15:56 UTC (Mon) by mstone_ (subscriber, #66309) [Link]

Or the secure mode becomes normal and there's a net reduction in warnings. Fundamentally, it's a hard problem for the browser to distinguish "intended to be insecure" from "MITM forcing insecure mode" even beyond the other arguments about privacy & stupid ISP tricks. This isn't the 1980s internet anymore (as much as that sucks) so HTTPS-only (or an improved replacement) is probably an inevitable move eventually.

Chromium to start marking HTTP as insecure

Posted Dec 15, 2014 18:52 UTC (Mon) by iabervon (subscriber, #722) [Link]

Of course, "this site is insecure because an attacker tricked you into using http" or "this site is insecure because the major CA screwed up again" don't make your list. In my experience, the common situations are "this site doesn't care about security, and maybe I shouldn't either", "this site paid someone a bunch of money to make me happy, and is either secure or malicious" and "this site offers real security to people with a personal relationship to it, but not to me, and I don't care".

Really, there are two dimensions that matter: "can the browser determine that this connection is secure" and "do I need a secure connection". Allowing the connection info to answer the second question and not giving much information to the user about the first is just terrible practice.

Reasonable step to stop ISP messing with contents

Posted Dec 15, 2014 6:51 UTC (Mon) by proski (guest, #104) [Link]

Apparently Google wants to punish ISPs for (mis)using their infrastructure to inject ads instead of buying ads from Google. It's understandable and logical, yet it will be a sad day when the first web page ever created is marked as insecure.

Reasonable step to stop ISP messing with contents

Posted Dec 15, 2014 16:29 UTC (Mon) by idupree (subscriber, #71169) [Link]

I bet the first web page ever created could upgrade its server; it already claims to be Apache and supports HTTP/1.1 and byte-range requests.

$ curl --head http://info.cern.ch/hypertext/WWW/TheProject.html
HTTP/1.1 200 OK
Date: Mon, 15 Dec 2014 16:23:17 GMT
Server: Apache
Last-Modified: Thu, 03 Dec 1992 08:37:20 GMT
ETag: "40521e06-8a9-291e721905000"
Accept-Ranges: bytes
Content-Length: 2217
Connection: close
Content-Type: text/html

(That's a wonderful Last-Modified header, I have to say.)

Reasonable step to stop ISP messing with contents

Posted Dec 15, 2014 18:39 UTC (Mon) by nix (subscriber, #2304) [Link]

Judging by the fact that it is accurate down to the second I'll bet it comes ultimately from the original file timestamp :)

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 5:45 UTC (Tue) by gutschke (subscriber, #27910) [Link]

Found a fun web site that demonstrates how these days HTTPS turns out to often be faster than HTTP (thanks to things like SPDY, and thanks to being able to disable broken transparent proxies):
https://www.httpvshttps.com/

For some details on why this happens, take a look at https://thethemefoundry.com/blog/why-we-dont-use-a-cdn-sp...

Chromium to start marking HTTP as insecure

Posted Dec 16, 2014 10:56 UTC (Tue) by arekm (subscriber, #4846) [Link]

That's not HTTPS being faster than HTTP; it's SPDY being faster than HTTP.

If SPDY could work over plain HTTP, then it would be faster than SPDY over HTTPS.

Chromium to start marking HTTP as insecure

Posted Dec 18, 2014 2:42 UTC (Thu) by gutschke (subscriber, #27910) [Link]

It's unlikely that SPDY would ever reliably work over an unencrypted channel. There are too many middle boxes out there that are going to break things really horribly.

Websockets ran into the same issue, and websockets look a lot more like plain HTTP than SPDY does. So, these days, everybody uses websockets over TLS only.

The noteworthy thing here is that even with the added overhead of TLS, the end-to-end user experience is actually better than with plain old HTTP. That is certainly an impressive technical achievement.

How to address small site costs?

Posted Dec 16, 2014 14:57 UTC (Tue) by david.a.wheeler (subscriber, #72896) [Link]

Many people run small sites and are not made of money. Currently certs add extra costs. The cert itself, of course, costs money, and some "free" ones charge a lot if you want to revoke it. There's the price for a unique IP address (shared sites often require you to pay extra for a unique IP address to be allowed to add the cert, even if you only have users with modern browsers). Never mind the time it takes to add the cert, which for many is non-trivial. TLS also rules out most free CDN services - so if you want to use them to counter DoS attacks, you have to drop the CDNs or pay more. CDNs are actually all the more important because TLS also eliminates most caching systems.

For sites like google.com these costs are a trivial part of the rent. For small sites, this is non-trivial. Yes, the actual encryption CPU time has become trivial, but that is not all. If you want everyone to be forced to use one of a few sites that doesn't matter, but it does matter to others.

Suggestions?

How to address small site costs?

Posted Dec 16, 2014 21:27 UTC (Tue) by rodgerd (guest, #58896) [Link]

> Many people run small sites and are not made of money. Currently certs add extra costs.

It also adds control points: the CAs.

How to address small site costs?

Posted Dec 16, 2014 23:19 UTC (Tue) by foom (subscriber, #14868) [Link]

1) Certs *are* free today, and they're going to be getting freer and easier to deploy soon (via EFF/Mozilla's "Let's Encrypt" project). Also, if you don't care about using HTTPS, why would you ever bother to revoke a cert?
2) Ask your hosting provider to not require a dedicated IP, or switch to a different hosting provider -- times have changed; dedicated IPs aren't needed for most sites anymore. The provider just needs to update their policies. (presumably customer demand would help with that...)
3) There's at least one CDN (Cloudflare) which offers free TLS CDN service to everyone, cert included. I don't know of any reason why other free CDNs couldn't also provide free TLS service, if they don't also, already.

Today, certs definitely add admin overhead -- using the CA websites to install new certs every year is quite annoying -- especially if you have multiple hostnames. But cash? No, not really. And, by the time this change has gotten to Chromium, I'm sure EFF/Mozilla's solution will be in place, solving the admin-overhead issue as well.

How to address small site costs?

Posted Dec 23, 2014 11:33 UTC (Tue) by robbe (subscriber, #16131) [Link]

> Certs *are* free today
Citation needed. The only free certs I know of have a non-commercial policy. Looks like letsencrypt will change that...

How to address small site costs?

Posted Dec 17, 2014 17:40 UTC (Wed) by nturner (subscriber, #55735) [Link]

For what it's worth, StartSSL currently charges $25 to revoke a certificate.[1] We can debate whether or not that's "a lot", but it's less than I expected.

[1] https://www.startssl.com/?app=25#72

How to address small site costs?

Posted Dec 17, 2014 20:01 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

The problem is that if there is an issue, Mozilla guidelines state that the cert must be revoked (regardless of cash). But they're "too big to fail" at this point :( .


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds