Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 14:52 UTC (Sat) by gvy (guest, #11981) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 17:14 UTC (Sat) by clopez (subscriber, #66009) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 2:54 UTC (Sun) by okusi (guest, #96501) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 4:38 UTC (Mon) by b7j0c (guest, #27559) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 15:25 UTC (Sat) by nix (subscriber, #2304) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 18:14 UTC (Sat) by mathstuf (subscriber, #69389) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 19:26 UTC (Sat) by josh (subscriber, #17465) [Link]
And that's precisely the reason to do this.
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 20:17 UTC (Sat) by rodgerd (guest, #58896) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 20:27 UTC (Sat) by josh (subscriber, #17465) [Link]
It's not perfect; there's a reason why Chromium is outlining a long transition plan here. But I'd like to see us work towards a world where there simply aren't any plaintext protocols left.
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 22:59 UTC (Sat) by jospoortvliet (subscriber, #33164) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 23:35 UTC (Sat) by gvy (guest, #11981) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 0:49 UTC (Sun) by gutschke (subscriber, #27910) [Link]
This doesn't mean that it is impossible to steal the private key, but it requires a deliberate targeted attack on the web server. Attacking the CA is not sufficient to obtain the private key.
Being able to issue fake signed certificates is a real problem, but there are ways to mitigate this issue. Public audit records would be great, but that's a very new technology and I am not sure which CAs, if any, support them. In any case, Chrome has started informing users about the lack of audit records, and I suspect at some point this will become an actual security warning.
Another good option is DANE, but unfortunately there is currently no support for it in any browser that I am aware of. It also requires DNSSEC, which takes a bit of effort to set up the first time round.
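For reference, a DANE association is published as a TLSA record in DNS. A sketch in zone-file syntax (the name is an example and the digest a placeholder):

_443._tcp.www.example.com. IN TLSA 3 1 1 <hex SHA-256 digest>

where "3 1 1" means usage DANE-EE (match the end-entity certificate), selector SubjectPublicKeyInfo, matched by SHA-256 digest.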
Pinning would also work to detect compromised certificates, but I am unclear on how well that works at this time. Feel free to comment, if you know more.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:43 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 12:46 UTC (Wed) by Tjebbe (subscriber, #34055) [Link]
So they go for certificate pinning, which imho is a half-baked DANE.
But let's hope they come around at some point in the near future.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 22:33 UTC (Wed) by flussence (subscriber, #85566) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 14:40 UTC (Sun) by edt (guest, #842) [Link]
The yellow idea makes a little sense. It would be worth adding a link to the icon that explains what HTTP implies and why you might want to be aware that you are not protected by HTTPS (e.g. HTTP lets anyone observing the data stream see what you are looking at).
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 16:09 UTC (Sun) by tialaramex (subscriber, #21167) [Link]
So please give half a dozen different examples of such sites and why everybody should be fine with those two constraints for those sites and not ask for better.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 17:06 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 17:32 UTC (Sun) by vapier (subscriber, #15768) [Link]
While some of this you might consider hyperbole, there are free wifi providers (e.g. coffee shops) today, as well as ISPs, that are actively injecting JavaScript into just about every HTTP connection.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 17:39 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 18:26 UTC (Sun) by tialaramex (subscriber, #21167) [Link]
And yet, a _legislative_ solution in the form of a ban and presumably new law enforcement powers in order to implement the ban, with all the associated infringement of people's normal rights that implies, is apparently very welcome even though it would almost certainly be ineffective.
I can't tell whether you just haven't thought about this very hard or whether you're actively in favour of leaving things as vulnerable as possible for some reason.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 21:03 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]
And I also totally hate TLS/SSL - it's an ugly protocol that should never have been born at all.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 21:54 UTC (Sun) by cesarb (subscriber, #6266) [Link]
Law is never simple. What's the definition of "provider"? If I open my home's guest wifi to a visitor, am I a "provider"?
Law is not always obeyed. The bigger the provider is, the easier it is to find out they're not following the law; but if every coffee shop is considered a "provider", how would they all be monitored for compliance?
Also, which law? I live in a country different from yours. A law forbidding providers from meddling with the content in my country would have no effect on your provider, and vice versa.
Finally, criminals do not obey the law. A criminal who takes over my ISP's router could make it rewrite the content; TLS protects against that.
And why not both? A law forbidding content manipulation and technical means to detect content manipulation complement each other. With TLS, if a provider tries to change the content, the browser complains loudly, which would be perfect for the enforcement of your hypothetical law.
> And I also totally hate TLS/SSL - it's an ugly protocol that should never have been born at all.
Yes, TLS has its warts. It could have been made better; for instance, it could always use Diffie-Hellman to negotiate a shared key, like IKEv2 does. But not having been born at all? It, or something like it, was inevitable; it came from the desire to protect HTTP requests from eavesdroppers. If it hadn't been Netscape, it would have been Microsoft; if not Microsoft, somebody else.
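As a rough illustration of that kind of always-ephemeral key agreement, a sketch using the pyca/cryptography package (the HKDF label is arbitrary):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh key pair per session, so a later key
# compromise cannot decrypt recorded traffic (forward secrecy).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Only the public halves cross the wire; the shared secret never does.
client_secret = client_priv.exchange(server_priv.public_key())
server_secret = server_priv.exchange(client_priv.public_key())
assert client_secret == server_secret

# Derive the actual session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"example handshake").derive(client_secret)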
Since we have to live with it, it's more constructive to fix TLS's warts; developers are working on things like the Encrypt-Then-MAC extension, the TLS 1.3 work, and so on.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 23:26 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]
> Law is not always obeyed. The bigger the provider is, the easier it is to find out they're not following the law; but if every coffee shop is considered a "provider", how would them all be monitored for compliance?
You report the misbehavior to the FCC and they gladly fine the violator. Additionally, if the FCC shuts down companies that enable ad injection, that would be enough to stop it.
In reality, I haven't seen these problems with injected ads. However, I did see problems with HTTPS negotiations taking far longer than simple HTTP requests, especially on slow cellular connections.
> Yes, TLS has its warts. It could have been made better; for instance, it could always use Diffie-Hellman to negotiate a shared key, like IKEv2 does.
TLS can use ECDH, that's not an issue. The issue is that the whole certificate system is braindead in the extreme, in all of its facets, starting from the certificate format itself.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 8:32 UTC (Mon) by spaetz (subscriber, #32870) [Link]
My country disagrees and makes me responsible for all illegal stuff flowing through, in contrast to what they consider a provider...
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:42 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 10:10 UTC (Mon) by tialaramex (subscriber, #21167) [Link]
They will gladly send you a response saying that, alas, they have limited resources available to investigate this sort of thing, but they've made a note of it.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 6:29 UTC (Mon) by filipjoelsson (guest, #2622) [Link]
To use copyright law to remove the ads, maybe the best way forward would be to contact someone who is very protective WRT their visual presentation (eg Disney or a high profile artist), and show them the ads. Maybe even play stupid, and ask why they have the ads on their page?
The results could be rather interesting.
PS Am I the only one getting flashbacks to Geocities?
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 7:28 UTC (Mon) by osma (subscriber, #6912) [Link]
> It's quite simple - providers should not change the content.
In your thinking, does this apply to things like redirects to splash pages / login screens in public WiFi access points? They're changing the content too - taking you to a whole different page than the one you may have expected.
Just curious... I hate splash pages, but I understand the need for them in some scenarios.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 8:41 UTC (Mon) by josh (subscriber, #17465) [Link]
There are two much better ways to handle that. First, there's a standard for a DHCP server to inform the client that it expects a visit to a specific web page. And second, for the large number of clients that don't yet understand that DHCP extension, just tell the client to visit a specified URL first; either print it in the instructions for accessing the Internet, or make the URL the ESSID.
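For illustration, advertising such a page via DHCP could look roughly like this in dnsmasq configuration, assuming the option code from the IETF captive-portal draft (the code and URL are illustrative, not a tested setup):

# dnsmasq.conf: tell DHCP clients where the portal page lives
dhcp-option=160,"https://portal.example.com/start"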
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 10:50 UTC (Mon) by osma (subscriber, #6912) [Link]
> There are two much better ways to handle that. First, there's a standard for a DHCP server to inform the client that it expects a visit to a specific web page. And second, for the large number of clients that don't yet understand that DHCP extension, just tell the client to visit a specified URL first; either print it in the instructions for accessing the Internet, or make the URL the ESSID.
I agree the current approach has its flaws, and I hope the DHCP extension starts getting implemented. But until about 99% of (mobile) devices support the new DHCP way, I don't think any access point owner will be in a hurry to implement it instead of the current "solution", especially if it means that users who don't receive, or don't understand, the instruction to visit a specific web page will be left with no working connection at all. At least the current approach mostly works and gives users the information they need without much effort.
My Nokia N9 does an HTTP request in the background to a known address as soon as it connects to a new network, and if it receives an HTTP redirect (presumably from a captive portal), it will open the browser showing the page that was redirected to.
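That probe is easy to reproduce. A rough sketch in Python (the probe URL is illustrative; real implementations also check the response body):

import urllib.error
import urllib.request

PROBE = "http://connectivity-check.example.com/"  # known, stable address

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface redirects as errors instead of following them

opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open(PROBE, timeout=5)
    print("direct connectivity")
except urllib.error.HTTPError as e:
    if e.code in (301, 302, 303, 307):
        print("captive portal detected, page at:", e.headers.get("Location"))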
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 17:58 UTC (Mon) by mathstuf (subscriber, #69389) [Link]
Automatically? I prefer the Android way (as of at least 4.2; probably 4.0) of a notification rather than interrupting what I was doing.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 13:59 UTC (Tue) by Wol (guest, #4433) [Link]
Except it doesn't. I regularly forget to visit a web page when using access points like that, and spend ten minutes puzzling over why Thunderbird et al. don't work before I have a "doh" moment and fix it. Or I've fixed it and everything suddenly stops working, because the access point has forgotten about me and I need to re-authenticate.
And with the move more and more towards mobile apps, this problem is going to get worse.
Cheers,
Wol
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 14:04 UTC (Mon) by nim-nim (subscriber, #34454) [Link]
In fact there is no standard way to do secure proxying cleanly. Browser authors know it, but they've resisted attempts to fix the situation for years:
– Google fears better proxying support will help users deploy ad blockers. They've been working hard for years to capture user Internet access (first with Chrome, then with Android). Now that they have a huge deployed base, they've started removing "play nice with others" features.
– Mozilla people think the web must be "free" (meaning they can decide whatever they want with site authors, without interference from users or network operators, and ignoring constraints that do not apply to Californian startups).
– Microsoft does not do standards. Custom insecure hacks like the DHCP one work for them (as long as it's in AD).
– Who knows what Apple thinks. If it does something, it will be Apple-specific.
All of them hope that if they keep proxy support annoying enough, proxies and gateways will go away. They don't think twice about adding custom hacks to their web clients to support their own image-recompression proxies. It's no longer a technical decision, it's a browser power play.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 21:34 UTC (Mon) by lsl (subscriber, #86508) [Link]
Set up the clients to use your proxy and import the appropriate root CA cert? Doesn't this work anymore? What are the issues? Cert pinning?
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:38 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
1. + 2. may be sort of acceptable for fixed systems, but it's a disaster for computers that roam on other networks, even more so if they are guest machines, which are then vulnerable to any problem in the handling of the original organization's secrets.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:58 UTC (Tue) by cesarb (subscriber, #6266) [Link]
I have seen such a recommendation in the wake of one of the CA hacks, and it makes sense: if I always get my certificate from Comodo, why should my client allow certificates from DigiNotar? If I decide to change to another CA, it takes only a few hours to upload a new version of the client to Google Play.
And if my client is the only thing which accesses the server, why bother with the traditional CA model? Just hardcode the server certificate in the client, or, to be more sophisticated, hardcode a self-signed "CA certificate" which will only be used to sign the server certificate.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 19:23 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 22:15 UTC (Tue) by cesarb (subscriber, #6266) [Link]
My point is, client software which hardcodes the application server's certificate or CA will simply reject the proxy's certificates or CA, because it's different from what's hardcoded.
An example: suppose I create a service which shows an alert on your smartphone when your bus is late. This service has two components: a server I control, and a client I put for download on Google Play. To protect the user's login, I use TLS between the client (on the user's smartphone) and the server.
Since I control both the client and the server, I don't need the traditional CA model. Instead, I hardcode the server's certificate hash in the client application source code. If it ever changes, I'll hardcode a new server certificate and put a new version of the client on Google Play.
That model is safer than the traditional CA model, since no third-party CA can ever produce a valid certificate for my server. The only way to produce a valid certificate is to either extract the private key from the server, or produce a modified version of the client application which accepts the new certificate.
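A rough sketch of that check in Python (the host and digest are placeholders; a real client would also handle the certificate-update path):

import hashlib
import socket
import ssl

HOST = "bus-alerts.example.com"   # hypothetical service
PINNED_SHA256 = "0000...0000"     # fingerprint shipped inside the client

# Disable CA-chain verification: the pin below is the only trust anchor.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            raise ssl.SSLError("server certificate does not match the pin")
        # ... continue with the application protocol ...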
That model also completely breaks transparent MITM proxies; even if the MITM root CA is imported into the phone, the application will ignore it and reject the connection with an "invalid certificate" error. The application is correct, since it knows that the server certificate does not come from that new CA (there is no distinction between a MITM CA and a normal CA that is new enough to not be in the default root CA store).
And notice that, if the proxy verifies the server certificate, it will reject it as invalid, since it's not signed by any CA on its list! But it's in fact the correct certificate for this application (and only for this application).
----
The MITM proxy model is broken. It exploits one of the worst problems of the current CA model: that any of the hundreds of CAs in the root CA list is trusted to create a valid certificate for any server. When that problem is fixed or worked around (as in my example), MITM proxies stop working.
We already see problems with certificate pinning; browsers work around this by treating any root CA which is on the root CA list but is *not* on the default root CA list (that is, any CA added by the user or by malware) as trusted even for pinned domains (set security.cert_pinning.enforcement_level to 2 to disable that misfeature).
TLS enforces end-to-end security. MITM proxies, not being on either end, go against the TLS model. The correct solution would be to have a protocol where an explicitly configured client offloads TLS initiation to a trusted proxy, since it makes said proxy one of the ends; I don't think such a protocol exists at the moment, and even if it did exist, it wouldn't be implemented by most clients. If a client did implement it, it would be vulnerable to malware silently enabling the TLS offload, which would be a good reason for security-conscious developers to not implement such a protocol.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 22:40 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
How would your theoretical security protocol differ from the clients trusting the CA of the proxy and the proxy validating the certs of remote systems? It looks to me like the two are pretty close to functionally identical. And one requires developing a new protocol and changing software to use it, the other works today with existing software.
As far as which is more secure, that's a good question and the answer will vary from situation to situation. It's also a question of what you mean by "more secure"
end-to-end encryption that can't have any MITM is better for the user who always makes the right security choices and keeps their system up to date.
The MITM proxy approach is better for a company that wants to protect their network and their users as it lets them prevent the users from doing some things that they try to do.
some people believe that the only way the Internet should work is with true end-to-end connectivity, and that anything that tries to prevent that is EVIL. Others see that not all endpoints are equal: some are owned and controlled by their users, while others are owned and controlled by someone other than the person sitting at the keyboard.
proxies are valuable for the second case.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 0:48 UTC (Wed) by cesarb (subscriber, #6266) [Link]
The same applies to just allowing a specific CA. It would make for a more complicated example, so I used the simpler "specific cert" example.
Just allowing a specific CA or a specific cert only "breaks" TLS if the server cert is changed in a way which doesn't match. As long as the certificate is signed by that specific CA or is that specific cert, nothing breaks.
After all, there's nothing in the TLS protocol which says that every program has to trust the same global list of CAs. It's perfectly fine, from a TLS point of view, to have on the same computer program A, which only trusts ICP-Brasil (because it's a tax form uploader from the government), and program B, which doesn't trust ICP-Brasil (because it follows Mozilla's list and ICP-Brasil still isn't in it).
> How would your theoretical security protocol differ from the clients trusting the CA of the proxy and the proxy validating the certs of remote systems?
* It would be explicit, so the client would be expecting the interception; a transparent MITM is indistinguishable from an attack.
* The client would know the specific key or CA the proxy uses, and so be able to reject fake proxies, and also be able to reject bypassing the proxy (an attacker would not be able to pretend to be the proxy but connect directly to the server).
* The client could tell the proxy more information, like "this key is supposed to be pinned" or "I know for certain that this server uses this specific CA, accept no others".
* The proxy could tell the client more information, like "my connection to this server used the following certificate signed by the following CA" or "I believe this connection is one of these green-bar-EV things".
* It would even be possible to do a "decrypt but do not modify" mode, where not only both the proxy and the client can see the contents, but also both the proxy and the client can validate the certificate and the data. That is, the client would not need to trust the proxy for integrity, only for confidentiality.
> As far as which is more secure, that's a good question and the answer will vary from situation to situation. It's also a question of what you mean by "more secure"
By "more secure", I meant that an unrelated CA can't create a fake certificate for it. If my certificate is from GeoTrust, then DigiNotar shouldn't be able to create a certificate for it. Hardcoding or pinning the certificate or the CA achieves that. The current MITM proxy approach requires making the creation of bogus certificates possible, thus weakening the security of the whole Internet.
Unless I set the hidden parameter I mentioned in the previous comment, the fact that I added the ICP-Brasil root certificate (to securely access government sites) means that ICP-Brasil could in theory create a certificate for any site, even *.addons.mozilla.{org,net} which are pinned by the browser. This bogus result exists only because Mozilla doesn't want to break MITM proxies.
> end-to-end encryption that can't have any MITM is better for the user who always makes the right security choices and keeps their system up to date.
>
> The MITM proxy approach is better for a company that wants to protect their network and their users as it lets them prevent the users from doing some things that they try to do.
You are only considering two points of view: a security-conscious user and a competent company. There are others.
For a programmer, end-to-end encryption without MITM is safer, since there are only two points where the connection can be intercepted or modified: in the client (but if the attacker controls the client, I already lose), or in the server (but if the attacker controls the server, I have bigger problems). With MITM, there's a middlebox modifying the traffic in potentially unspecified ways, and perhaps saving the cleartext content in possibly unsafe places - and that is the case where the middlebox is *not* compromised.
And there are more than security considerations. For me, the best feature of HTTPS is that it completely bypasses broken transparent proxies. All my experiences with transparent proxies have been negative; the sooner they cease being viable, the better. (With a non-transparent proxy, when it breaks one can simply disable the proxy and go direct until it's fixed again.)
Transparent proxies
Posted Dec 17, 2014 11:53 UTC (Wed) by tialaramex (subscriber, #21167) [Link]
HTTP access to LWN from the office where I sometimes work is via a corporate transparent proxy. It can't even correctly understand how HTTP works: once in a while I'll hit "Preview" and get a 405 error reporting that the proxy tried to perform the operation as a GET instead of a POST. After a few moments I can retry and it'll work.
Imagine the convoluted mess that must be inside that proxy to get this wrong. This is a major "enterprise grade" product, and it can't get technical fundamentals right. What are the chances this thing doesn't have security vulnerabilities that put the company at more risk?
But of course the _real_ fundamental was to get big corporates to open their wallets unquestioningly, so from a practical point of view it was 100% successful.
Transparent proxies
Posted Dec 18, 2014 9:38 UTC (Thu) by nim-nim (subscriber, #34454) [Link]
Thus, very often proxies must do very strange stuff just to convince browsers to authenticate themselves. All proxy vendors would gladly dump this if browsers provided a simple auth system that didn't require messing with the user's traffic.
Transparent proxies
Posted Dec 18, 2014 10:11 UTC (Thu) by mchapman (subscriber, #66589) [Link]
Is authentication with a supposedly "transparent" proxy at all a common scenario? I wouldn't expect that to work. Browsers *should* treat HTTP 407 Proxy Authentication Required responses as hard errors if they don't think they're talking to a proxy.
Transparent proxies
Posted Dec 18, 2014 11:56 UTC (Thu) by tialaramex (subscriber, #21167) [Link]
They impersonate the remote host, redirect to the proxy by IP address and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is expected to type their secret credentials.
That's right, to "secure" your company the big boys will expect employees to send their credentials in plaintext over HTTP to a site identified only by an arbitrary string of digits‡. Are you laughing yet?
‡ 10.2.83.1 is an internal transparent proxy, 10.43.2.1 is also such a proxy, but 102.6.9.3 may be a bad guy stealing your credentials. Aren't these transparent proxies just great?
Transparent proxies
Posted Dec 18, 2014 13:37 UTC (Thu) by mchapman (subscriber, #66589) [Link]
They shouldn't. That was the gist of my previous comment: that status code should only be used by real, non-transparent proxies.
> They impersonate the remote host, redirect to the proxy by IP address
At which point they are no longer "transparent".
> ... and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is expected to type their secret credentials.
There's nothing stopping that from being on a secure site, with a proper (presumably internal) domain name.
> ‡ 10.2.83.1 is an internal transparent proxy, 10.43.2.1 is also such a proxy, but 102.6.9.3 may be a bad guy stealing your credentials. Aren't these transparent proxies just great?
What you've described doesn't seem to me to be a problem with transparent proxying itself. If it's really and truly "transparent", then it is no less secure than any other HTTP traffic over the wider Internet. (If it's a *caching* proxy, then there are of course more security implications. But most of these seem to be solved well enough by the caching proxy software I've used.)
Nor does it seem to be a problem with the HTTP protocol. The caching and proxying sections of the HTTP specification are indeed complex, but that doesn't mean they're fundamentally broken.
The problem you've described instead seems to be with the people that run transparent proxies. Transparent proxy authentication is, as far as I can tell, a contradiction in terms.
Transparent proxies
Posted Dec 18, 2014 20:38 UTC (Thu) by nim-nim (subscriber, #34454) [Link]
They don't particularly care about transparency; they're only using transparent proxying because the result does not need specific configuration in browsers (and trying to keep a park of browsers configured is hell: users install new browsers all the time, IE sucks on the Internet, Firefox is not really manageable centrally, etc.).
>> ... and send a normal (non-proxy) HTTP Basic Auth required. Into which the employee is
>> expected to type their secret credentials.
> There's nothing stopping that from being on a secure site, with a proper (presumably
> internal) domain name.
Actually, that's one reason to run a captive portal: this way proxy auth can be protected by HTTPS, and you can reuse the same credentials as in other company apps. Now, the redirect dance to convince browsers to display the auth prompt is disgraceful; you need another one to put users back on their desired site afterwards, and it doesn't work on HTTPS, since browsers refuse HTTPS redirects unless you hack user TLS sessions.
All this just because the spec "forgot" proxy<->browser signaling (a lone in-clear header in an unrelated request does not count), so the proxy needs to modify the user traffic to show anything to the user (auth, internal policy message, whatever). Transparent MITM with breakage of user TLS sessions is just the logical and most effective way to force browsers to display proxy messages.
Transparent proxies
Posted Dec 18, 2014 21:54 UTC (Thu) by zlynx (subscriber, #2285) [Link]
Well, the approved method to force use of a web proxy is to block all outgoing HTTP and HTTPS that isn't going through the proxy.
Then users can install any browser they like. It just won't work until they configure the proxy.
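A sketch of that egress policy in iptables terms (the proxy's address is assumed; not a complete ruleset):

# Allow web egress only from the proxy box; everyone else must go through it.
iptables -A FORWARD -s 10.0.0.42 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j REJECT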
Much better than non-transparent transparent proxies.
Transparent proxies
Posted Dec 19, 2014 9:43 UTC (Fri) by nim-nim (subscriber, #34454) [Link]
Transparent proxies
Posted Dec 19, 2014 7:53 UTC (Fri) by rodgerd (guest, #58896) [Link]
Quite what the liability looks like if someone in the IT team decides to misuse their access to your traffic is an interesting question.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 19:22 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
It can make it much harder to accept a certificate that doesn't pass the checks, but it's arguable that if you are deploying something like this, you really don't want the end users making that decision anyway.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 14:11 UTC (Wed) by nim-nim (subscriber, #34454) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 19:02 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]
having them make their own checks after you have made yours is one thing. Trusting the users to do all the checks against the raw Internet is something very different.
BYOD is appropriate for some things and not for others. It's a different topic.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 21:05 UTC (Sun) by drag (subscriber, #31333) [Link]
I am guessing that his view is that energy should be spent solving actual problems instead of being spent pushing technologies that only provide a false sense of security against theoretical issues.
Plus if you bombard users with missives about lack of http:// security they will just be trained to ignore them when it actually matters.
Hopefully the developers of Chromium take this into account and use good judgment about when it is important to warn users and when it is not.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 22:42 UTC (Sun) by tialaramex (subscriber, #21167) [Link]
It is conceivable that, at some long future date when http-only is a rare exception, browsers will come back and block it by default. I can't rule it out, but I'd suggest that it's further away from us now than, say, the phasing out of IPv4 or Russia joining the EU.
As to your claim that this is about "theoretical issues" let me quote the article that you apparently didn't read:
"We know that active tampering and surveillance attacks, as well as passive surveillance attacks, are not theoretical but are in fact commonplace on the web."
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:59 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
However, it is quite disingenuous of Google to "protect" us from HTTP given all the effort they've expended deploying badware under users' radar (generalised JS injection for ads, cookies and supercookies, very complex systems to multiply callbacks to third-party sites without the browser showing anything, etc.). The old 'mixed content' warning is a joke nowadays: sure, it's all in HTTPS, but your browser may not be speaking to the site shown in the address bar anyway.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 13:02 UTC (Tue) by mathstuf (subscriber, #69389) [Link]
Unfortunately, not something I can teach friends and family to use easily :( .
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 14:09 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 16:08 UTC (Tue) by mathstuf (subscriber, #69389) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 10:05 UTC (Wed) by jezuch (subscriber, #52988) [Link]
Yeah, it's a bummer and I'm reluctant to use it too. Hopefully EFF's Privacy Badger[1] is/will be able to supplant it.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 11:41 UTC (Wed) by tialaramex (subscriber, #21167) [Link]
Whether they aggregate some untrusted nonsense and paste it into their reply, or they stitch it in using dynamic techniques by running some Javascript provided by a third party hardly matters, they decide what happens and they're on the hook for it.
It's basically the newspaper argument. The publisher may very well not have written everything in the newspaper, but the law says (at least in my country) that the publisher is responsible for whatever is published anyway. Readers are entitled to blame the publisher, not just some anonymous contributor, advertiser or whatever, for anything they read in the publisher's newspaper.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 14:08 UTC (Wed) by nim-nim (subscriber, #34454) [Link]
So your argument is disproved by actual social behaviours.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 18:41 UTC (Mon) by nix (subscriber, #2304) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:15 UTC (Mon) by nybble41 (subscriber, #55106) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 22:13 UTC (Mon) by rodgerd (guest, #58896) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 22:26 UTC (Mon) by cesarb (subscriber, #6266) [Link]
SSL *does* protect parameters in GET or HEAD requests. SSL protects the whole HTTP request, including parameters and headers. The only thing SSL doesn't protect (yet) is the requested hostname.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 2:24 UTC (Tue) by rodgerd (guest, #58896) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 4:27 UTC (Tue) by mchapman (subscriber, #66589) [Link]
Only if your user-agent is utterly brain-dead.
User agents should use the CONNECT method (https://tools.ietf.org/html/rfc2817) when doing HTTPS through a proxy. The CONNECT method ensures that the traffic between the user-agent and the proxy is encrypted. The "real" HTTP request containing the GET or HEAD query parameters is only sent over the TLS tunnel set up by the CONNECT method.
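On the wire, the user-agent-to-proxy leg looks roughly like this (hostname illustrative):

CONNECT www.example.com:443 HTTP/1.1
Host: www.example.com:443

HTTP/1.1 200 Connection established

[TLS handshake, then the encrypted GET or HEAD request, flow through the tunnel]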
Of course, it's entirely possible for the tunnelled traffic to be decrypted by the proxy if the proxy is using a certificate trusted by the user-agent. But that is a completely different problem.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 4:35 UTC (Tue) by mchapman (subscriber, #66589) [Link]
I should probably clarify this. The intent here is for the traffic between the user agent and the *origin server* to remain encrypted.
Proxies ought to do minimal validation on the destination for the CONNECT request (e.g. check that it's requesting a connection to port 443), then simply pass TCP back and forth. The TLS connection is established between the user agent and the origin server.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:49 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 12:25 UTC (Tue) by mchapman (subscriber, #66589) [Link]
*If* you want a proxy to do that for you, then you can set up a trust relationship with that proxy. If your browser trusts the proxy's certificate, then the proxy can do whatever it likes with your "encrypted" traffic.
> Because botnets do use HTTPS nowadays and they do use public clouds too (so they can end up on a generic amazon domain for example).
I'm not sure what relevance that has.
OK, so there's one origin server under a generic Amazon domain, and a botnet also under that generic Amazon domain. If the botnet were to somehow intercept your traffic, what is it going to do with it? The only way it could decrypt that traffic, without you knowing, is if it had a signed and trusted certificate for the origin server's domain, or for the wildcard label under the generic domain. Both of these seem unlikely.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 14:15 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Not possible with current HTTP. You can only set up your web client to blindly trust a CA for everything.
> If your browser trusts the proxy's certificate, then the proxy can do
> whatever it likes with your "encrypted" traffic.
Change that to: people with access to the certificate can do whatever they like, even when you're not going through the proxy.
> I'm not sure what relevance that has.
CONNECT opens a tunnel to a specific host. In the cloud, lots of things (including badware) use the same generic hosts.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 16:17 UTC (Tue) by mathstuf (subscriber, #69389) [Link]
[1]The proxy being over AF_UNIX so that TCP ports aren't exhausted or open to others on the same machine.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 19:25 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
no, a good proxy should do more than blindly pass traffic after a CONNECT request.
A good proxy should watch to see if the traffic looks like a SSL/TLS handshake. The Sidewinder firewall will do this, but I don't know how many other proxies do this as well.
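The first bytes the client sends through the tunnel are enough for a rough version of that check (a sketch of the heuristic, not what Sidewinder actually implements):

def looks_like_tls_handshake(first_bytes):
    # TLS records start with a content-type byte (0x16 = handshake),
    # followed by the 0x03 major version shared by SSL3 and all TLS versions.
    return (len(first_bytes) >= 3
            and first_bytes[0] == 0x16
            and first_bytes[1] == 0x03)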
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 21:22 UTC (Tue) by zlynx (subscriber, #2285) [Link]
I suppose you could prevent malware from using it to send spam email, but that seems to be more of a job for IDS and/or perimeter firewalls.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 21:32 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
Yes, it's just raising the bar, but it's probably easier for something to actually do a TLS handshake than to fake the start of it (because they can just use an existing library)
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 22:08 UTC (Tue) by zlynx (subscriber, #2285) [Link]
With the firewall rules in place forcing machines to use the proxy, there doesn't seem to be much point in checking for TLS in the CONNECT, because it will either be a correct HTTPS request that the proxy can't read, or a connection request to a service that's blocked by the perimeter firewall systems.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 22:21 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 19:36 UTC (Wed) by lsl (subscriber, #86508) [Link]
I'm not sure about that. Even if the proxy developers get it right today, it will break horribly at some point in the future and, as people don't update their shit, serve as an obstacle for deployment of TLS-ng (whatever it'll be called) or even some totally unrelated other part of the networking stack. Firewalls breaking stuff is such a widespread problem nowadays that every suggestion for them to determine if something "looks like X" gives me the creeps.
But yeah, if all those middleboxes would at least adhere to your first suggestion and refrain from modifying things, that would be awesome. I can totally live with being denied access to some resource. It's clear and you can go and bug the person responsible for setting this up. Silently modifying traffic (and breaking it in subtle ways), though...this is the pestilence.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 20:30 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]
the firewall/proxy doesn't know if the unknown thing that is going through is some legit thing that's new enough that the firewall doesn't know about it, or a hacker trying to break things.
At that point it has the choice of just blocking things, or trying to 'clean' them up. Arguments can be made either way, both options will break users at different times in different ways. In any case, firewalls need to be maintained and updated frequently. If they aren't, not only will they break things that are too new for them to understand, they will fail to block things that need to be blocked.
Yes, firewalls break things that users try to do. But if you are trying to defend a company, you WANT to break some of the things that users try to do, because there are far too many users who don't have any idea what they are doing, and so they try to do things that you just don't want to allow.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 22:06 UTC (Wed) by zlynx (subscriber, #2285) [Link]
If your IT is so paranoid it won't allow ECN on TCP/IP then it certainly shouldn't allow Javascript of any sort.
Chromium to start marking HTTP as insecure
Posted Dec 17, 2014 22:38 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]
when ECN was introduced, it was using bits that were unused prior to that point. It was perfectly reasonable for firewalls to not allow those bits to pass through unmodified. How could the firewall know that it was safe for these bits to be set, when they were unused at the time the firewall was written? It could easily have been an attack to take advantage of some bug in some vendor's TCP stack.
You can argue that they should have blocked any traffic with the bit set rather than zeroing the bit, but it's hard to say if that would have caused more grief or not.
Middleboxes (was: Chromium to start marking HTTP as insecure)
Posted Dec 18, 2014 10:31 UTC (Thu) by cesarb (subscriber, #6266) [Link]
Actually, zeroing the bits was the best response. It just made ECN behave as non-ECN. IIRC, I even suggested it to someone at one point as an alternative (they were concerned about OS fingerprinting, which is a completely bogus concern IMHO).
Blocking ECN, on the other hand, completely breaks things. Their firewall was in front of a server; they have NO way of knowing whether some future, possibly experimental protocol (thus perhaps not even public yet) uses that bit as signaling, expecting it to be ignored if the server does not support the new protocol.
Note that you also have to zero out the negotiation. ECN seems to have grown a way to detect whether the server is cheating and pretending it didn't receive marked packets; if ECN is negotiated between two peers, erasing the marking will cause problems.
> How could the firewall know that it was safe for these bits to be set when they were unused at the time the firewall was written. It could easily have been an attack to take advantage of some bug in some vendor's TCP stack.
It could also easily be an attempt to take advantage of some feature in some vendor's TCP stack.
If a packet-validating firewall (as opposed to one which merely triggers on well-known fields like the TCP port) is in front of a server or client, it *must* know *all* the features which can possibly be used by that server or client's network stack, and the correct way to make it ignore unknown features at each part of the protocol. Otherwise, either a new feature introduced at both ends of the connection will make it get out of sync, or a new feature introduced at the uncontrolled (outside) end will cause connections to be incorrectly blocked.
New features to network protocols are almost always introduced in a backwards-compatible manner. Either or both endpoints set a flag or add an option to tell the other endpoint that they support the new feature. This allows new features to be introduced in an uncoordinated fashion, which is necessary for a decentralized architecture like the Internet. If the other endpoint doesn't support the new feature, it will just ignore it.
When you add a middlebox, this end-to-end model breaks. The middlebox is in front of one of the ends; if the other end is upgraded and now tries to negotiate a new feature, and the middlebox drops the connection in response, the communication between the endpoints breaks. If the middlebox doesn't drop the connection, things keep working, until the endpoint it's "protecting" is also upgraded; now both sides are speaking a new version of the protocol, and the middlebox gets out of sync. The only hope for the middlebox, therefore, is to scribble over the negotiation flags or options in both directions, so either end thinks the other end doesn't support the new feature and they keep speaking the old version of the protocol. But for the middlebox to do that, it has to know *every* place a feature negotiation flag or option can be found, and how to safely overwrite it with a NOP.
And that works only for non-security-sensitive protocols. Security-sensitive protocols tend to validate that an attacker didn't change any of the negotiation flags or options, because changing them could be used to force the connection to use a more vulnerable version of the protocol (a "downgrade attack"). On TLS, it's the Finished message, which checksums the whole protocol negotiation up to that point.
The problem with middleboxes is that they are from a different networking model. In what I could call a "hop-by-hop" or "gateway" model, for a computer at organization A to talk to a computer at organization B, it talks to organization A's gateway, which talks to organization B's gateway, which talks to the destination computer. Any feature negotiation is hop-by-hop; there's no chance of getting out of sync. The gateways are natural places to do all the protocol validation the organization desires.
The Internet, however, uses mostly an "end-to-end" model, where the computer at organization A talks directly to the computer at organization B. There is no place for middleboxes in that model. Any attempt is a kludge, which will sooner or later break.
The only valid solution, for those who insist in a "gateway" model, is to use an application-level gateway, that is, to take the middlebox out of the middle and make it be one of the ends. Instead of scrubbing packets to try to take out anything which could confuse the protected endpoint, make the connection terminate at the gateway and relay its contents to the protected endpoint in a separate connection. It's easy to see that this is much more robust against both new features and unknown bugs.
It works at a higher level, too. If you are concerned about HTTP-level attacks, make the connection terminate at an HTTP gateway, parse the request as an HTTP server would, create a new request from scratch as an HTTP client would, and send it to the real server. That's not transparent, since it will discard any details the gateway doesn't understand, but if you are of a "gateway" model mentality, transparency is precisely what you do not want.
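A toy sketch of such an HTTP-level gateway in Python (the upstream name is hypothetical; a real gateway would handle more methods, headers, and streaming):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "http://internal-app.example.com"  # assumed protected server

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse the client's request, then build a brand-new request from
        # scratch; anything the gateway does not understand is simply dropped.
        req = Request(UPSTREAM + self.path, headers={"Accept": "*/*"})
        try:
            with urlopen(req, timeout=10) as resp:
                status, body = resp.status, resp.read()
        except OSError:
            self.send_error(502)  # upstream failure, including HTTP errors
            return
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), GatewayHandler).serve_forever()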
Middleboxes (was: Chromium to start marking HTTP as insecure)
Posted Dec 18, 2014 16:26 UTC (Thu) by raven667 (subscriber, #5198) [Link]
Middleboxes (was: Chromium to start marking HTTP as insecure)
Posted Dec 18, 2014 20:50 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]
This is what most firewalls (that did anything with ECN) did, and they were blasted for "breaking" ECN
In other cases (like window scaling, IIRC) zeroing the bits actually breaks connections for users under some conditions.
By the way, this sort of thing is why I strongly prefer proxy firewalls (even transparent ones), the connections are to the firewall, and the normal operation of the TCP stack sanitizes things without having to try and figure out what's good or bad about a packet (or fragment)
Cisco and Checkpoint did the world a great deal of harm when they convinced people that a firewall should be a packet filter and nothing else.
Middleboxes (was: Chromium to start marking HTTP as insecure)
Posted Dec 19, 2014 9:56 UTC (Fri) by nim-nim (subscriber, #34454) [Link]
This was progressively broken by people who didn't want to think about the complexities of hop-by-hop and defined lots of "end-to-end" stuff with "it should work in hop-by-hop mode, but we'll think about this later".
Also, it's easy to write "middleboxes are broken, imagine when the two ends are newer than the middlebox". But in real life, especially for security needs, it's "the middlebox is more up to date than the endpoints", because it's way easier to deploy new badware checks on a single centralized middlebox than on a park of endpoints. In fact, the whole point of the current cloud craze is that it is way too expensive to try to keep endpoints up to date.
When LWN runs articles about how Thunderbird is killing Gmail, then it will be reasonable to write that centralized systems are the technical holdouts.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 9:42 UTC (Tue) by cesarb (subscriber, #6266) [Link]
That makes no sense.
First, GET and HEAD do not have a body. It's other methods like POST and PUT which have a request body.
Second, if a proxy can decrypt the request (where the query parameters are) for a GET and HEAD, it can decrypt the request for a POST (including everything posted by the form); in fact, it can't even distinguish between a GET or a POST before decrypting the request, since the HTTP method is also encrypted!
You seem to be confused as to how HTTPS works. HTTPS does *not* work like this:
GET /blah/blah/blah?password=password HTTP/1.0
Host: www.example.com
[negotiates and switches to encrypted data]
...
Instead, HTTPS, even through a proxy, works like this:
[hostname: www.example.com]
[negotiates and switches to encrypted data]
GET /blah/blah/blah?password=password HTTP/1.0
Host: www.example.com
...
As you can see, the only thing visible without decrypting is the hostname.
Your confusion might come from the common recommendation of "don't put secret data in the query string". But that's not because of HTTPS; it's because of browser history and logging. Both the browser history and common HTTP server logs write down the requested URL, including query parameters; with POST, the information is not in the URL, so it's not saved. That happens before encryption (in the browser) or after decryption (in the destination server), so HTTPS is not involved; in fact, the same recommendation also applies to plain HTTP.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 20:22 UTC (Mon) by mathstuf (subscriber, #69389) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 6:53 UTC (Sun) by iabervon (subscriber, #722) [Link]
Having a cue for "someone unexpected might be watching you" that's generally there lets you behave appropriately for that context; you can feel uncomfortable doing things that require privacy when the cue is present. People do, after all, go out in public all the time, and behave appropriately despite the lack of warnings that they could be seen.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 4:47 UTC (Mon) by b7j0c (guest, #27559) [Link]
it's about creating a class system on the web
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 8:14 UTC (Mon) by dgm (subscriber, #49227) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 16:44 UTC (Mon) by raven667 (subscriber, #5198) [Link]
If you think that Google is doing this out of the goodness of their heart, then you are going to be very confused and disappointed when they don't behave as you predict. If you see this as a cynical protection of their revenue source that happens to have some good come out of it, then your predictions are going to be a lot closer to observed behavior and you'll be happier for it.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 17:24 UTC (Mon) by dlang (✭ supporter ✭, #313) [Link]
Threats they have reacted to in the past include the lack of good browsers (sponsoring Mozilla and writing Chrome) and mobile device lockdown (the existence of Android, the Nexus line, etc.).
Having people afraid of using the Internet because of governments spying on them (and remember, it's not just the NSA) is just another threat that could reduce what people are willing to do on the Internet.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:14 UTC (Mon) by raven667 (subscriber, #5198) [Link]
But it doesn't require a free and open Internet, and it is opposed to individual privacy. Their protection of privacy only goes as far as preventing others from making money on users that they control; they will fight anything that protects private individuals from Google mining the user data they are capable of collecting.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:20 UTC (Mon) by raven667 (subscriber, #5198) [Link]
I don't think they are opposed to any government spying, they are opposed to the governments gathering this information on their own and not paying for the privilege through a Google toll-booth. They are interested in controlling public perception to keep the music going, people have to believe their data is safe, it doesn't actually have to be safe if the public has no means to audit what happens to their data because all the analysis that happens is a well protected corporate secret. Google has much higher operational security standards than say the NSA does.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:25 UTC (Mon) by dlang (✭ supporter ✭, #313) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 20:58 UTC (Mon) by raven667 (subscriber, #5198) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 10:23 UTC (Tue) by nim-nim (subscriber, #34454) [Link]
Second, Google is effectively in a close alliance with the US government. They provide it with the info it wants, and a nice unofficial soft-power front for foreign policy. In return the US government helps Google with foreign states, turns a blind eye to its progressive user encroachment, and protects it from the US media lobby. This cooperation happens at the highest Google levels and does not need participation by most Googlers (even though they would have to be blind to ignore it: the openings and closings of foreign Google offices closely follow US foreign policy, Google people are more and more often involved in foreign troubles, etc.).
If Google management was furious about the NSA, that was because they viewed it as encroachment by a partner: stealing the cookies instead of bartering them for something else. They're making sure this stealing won't happen again.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:22 UTC (Mon) by lsl (subscriber, #86508) [Link]
Exactly. More than anything else, Google's biggest competitor is the offline ad business. They're a nobody there, but for every truckload of money spent on online ads, Google is likely to get a big share of it. Their business (at least partially) depends on the Internet being a safe and mellow place where people are comfortable taking out their money. Because otherwise, Google's customers are simply going to spend their money on offline ads.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 6:37 UTC (Mon) by salimma (subscriber, #34460) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 16:28 UTC (Mon) by k8to (subscriber, #15413) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 22:06 UTC (Mon) by richmoore (subscriber, #53133) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 23:42 UTC (Mon) by k8to (subscriber, #15413) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 15:32 UTC (Sat) by philipstorry (subscriber, #45926) [Link]
I run my own small vanity^Wpersonal website. It's not the effort to put https on the server that stops me from doing it - that would be five bucks a year and a couple of hours. It's the fact that I have embedded content from way back. Pictures embedded from my Flickr account, buttons in sidebars - it'll take a day or two to go through all of that and change it, at least. And until I do, if I enable https, all it will do is generate seemingly random security warnings for the end user, depending on which page they're on.
Most of my recent embeds have been https, so I know it's feasible today. But whilst "sometime next year" is achievable for me, that's only because I've been thinking about going https already. I've looked at tools that can do the search and replace in my content, and I'm technical enough (I know regular expressions well enough to know that they can be dangerous) that I'm in a decent position to do this, and do it well, on my small corner of the internet.
But I fear that almost any timescale for this change will be too aggressive for many small non-profits and small companies, who lack the technical know-how to enable https AND not have their older content generate warnings. This is bigger than just server configuration.
Spurious warnings would make the whole project useless, as it will just train users to ignore the warnings. Marking as dubious by default, as they've proposed, is therefore not the right way to do this.
I wonder if we shouldn't look at some kind of flag in DNS, whereby the domain is removed from this kind of checking temporarily to stop spurious warnings. That's an awful technical solution and an awful security solution - but I'm not proposing it as a technical or security solution to fix the issue. I'm proposing it because it's a simple initial action for website owners that prevents spurious warnings, and it focuses website owners' minds on the issue.
Instead of "this browser change happens, all your content must work or your site gets flagged, get working on it" the process becomes "in the future, your site must be secure - set this flag now to exempt it, but know that at some point nothing will check that flag, so you'd better get to work". That gives people more time, and makes this an easier change to handle.
If a redirect to https happens and a certificate is present, it should still be handled normally. So if your web server has a decent config, a DNS hijack that falsely sets the flag has no effect anyway...
(And if your DNS is hijacked, the attacker will likely redirect to another webserver, surely?)
Eventually (say 2017 or later?), we pull the check for the DNS flag, and all http content is marked insecure. But at least everyone was aware, because they had to do something easy at first, so they could start planning.
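To make the shape of that concrete, the flag could be as simple as a TXT record. The record name and syntax below are pure invention for illustration - no such standard exists:

; hypothetical opt-out record - not a real standard, just a sketch of the idea
example.org.  IN  TXT  "http-legacy-ok=until-2017-01-01"

Browsers would check it, suppress the warning while it is present, and ignore it entirely after the flag day.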
By making this change in the browser require no work by the website owners, we risk it looking like a very autocratic and thoughtless change - even if it is well intentioned.
Big hosting companies will likely make it easy to use tools like "Let's Encrypt" and make setting the DNS flag a simple check box, so at least the starting point is easier for people to deal with. And website owners are now thinking about the whole change more positively, because the start was easy to manage.
Basically, I don't think a unilateral change in how browsers handle http/https security signals will go down well, unless it's made easy to manage and the timescales are suitably long. As put here, this seems doomed to fail by generating ill will on the content producer/website owner side, and the worst case is that it will make things less secure by training users to ignore these warnings.
Long term, this is the right thing to do, and we should aim for it. We just need to be careful in how we do it.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 7:11 UTC (Sun) by jeremy (subscriber, #95247) [Link]
"It's not the effort to put https on the server that stops me from doing it - that would be five bucks a year and a couple of hours. It's the fact that I have embedded content from way back. Pictures embedded from my Flickr account, buttons in sidebars - it'll take a day or two to go through all of that and change it, at least."

Have you ever heard of a tool called sed?
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 10:21 UTC (Sun) by josh (subscriber, #17465) [Link]
So no, sed wouldn't work unless you go find all the http URLs, find what service they point to, find how that service handles https, and sed URLs for that particular service's domain only.
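As a sketch of what that looks like once the homework is done for one service (example.com standing in for a host you've verified serves the same content over https):

# only rewrite URLs pointing at a host verified to support https
sed -i 's|http://images\.example\.com/|https://images.example.com/|g' *.html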
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 12:44 UTC (Sun) by philipstorry (subscriber, #45926) [Link]
For a preliminary analysis, I ended up grabbing my entire site via wget, then using grep to pull in all matches for "http:\/\/.*\/" and piping that through a sort|uniq -c pipeline to get an overview of the scope of the problem.
Even that's potentially no guarantee - if you have javascript blocks for ads or analytics, then the URLs might be built dynamically, and wget isn't going to process that javascript and reveal them. But for content that you've produced, it should be OK.
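Roughly, that pipeline looked like this, with example.org standing in for my own domain (the exact grep pattern is a matter of taste; this one stops at quotes and spaces):

# mirror the site, then inventory every plain-http URL by frequency
wget --mirror http://example.org/
grep -rho 'http://[^" ]*' example.org/ | sort | uniq -c | sort -rn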
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 12:35 UTC (Sun) by philipstorry (subscriber, #45926) [Link]
But as I use a CMS (like many websites), most of the content is in a database. That makes things much trickier - I'd have to dump the DB, use sed, and then reload the DB. Depending on how the dumped DB is formatted, all kinds of issues might crop up - what if sed hits an EOF character and stops, but the file has more content?
If all websites were just a bunch of files, sed would be fine. But that's not the case. :-(
As it stands, I'm using Drupal and there's a module which will do search-and-replace. I need to do some testing before using it though. Realistically, to do a proper job on any website, you'd be looking at a test run of your https config and the content re-writing tool you use on a VM.
(If the module didn't exist, I'd be dumping my MySQL DB and using something like sed, and then sacrificing lots of chickens before reloading the DB. I'm not a DBA, and doing that kind of thing makes me nervous.)
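For the record, the dump-and-sed route is short - but my nervousness is justified for a different reason than EOF characters: CMSes like Drupal and WordPress store some settings as serialized PHP strings with embedded byte counts, and a blind substitution that makes a URL one character longer ("http" to "https") can corrupt those. That's exactly why a CMS-aware search-and-replace module is the safer tool. A sketch, assuming a database named "drupal":

# back up first; blind substitution can corrupt serialized strings (s:NN:"...")
mysqldump drupal > backup.sql
sed 's|http://example\.org|https://example.org|g' backup.sql > migrated.sql
mysql drupal < migrated.sql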
If your website is something like the old BBC News website - which has thousands if not millions of pages of content - then that test run alone could take days. But for most smaller organisations, I think a couple of hours to a day for the operation is probably about right.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 19:28 UTC (Sun) by niner (subscriber, #26151) [Link]
That said, if your CMS supports search/replace (and if it doesn't, why are you still using it?), that's much safer. Especially if it supports interactive search/replace, so you can inspect each change.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 10:26 UTC (Mon) by WillC (guest, #96862) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 19:29 UTC (Sat) by flussence (subscriber, #85566) [Link]
I sure as heck don't want to blindly trust all the default CAs the browser dumps on my system; every single one of those is a potential SPOF and several *have* been.
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 19:34 UTC (Sat) by josh (subscriber, #17465) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 11:29 UTC (Sun) by richmoore (subscriber, #53133) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 13:59 UTC (Sun) by RobSeace (subscriber, #4435) [Link]
But, even in the case of a host you regularly connect to suddenly presenting what appears to be a new key, I'd hope most people would be very suspicious, and not just blindly type "yes"... I would hope an attack of that nature would succeed with any frequency only on people's first connections to the host in question, i.e. when they were expecting a new/unknown key warning... In that case, yes, most people don't bother verifying that the new key is really the right one...
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 14:06 UTC (Sun) by richmoore (subscriber, #53133) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 16:19 UTC (Sun) by richmoore (subscriber, #53133) [Link]
Is it really zero?
Posted Dec 14, 2014 17:23 UTC (Sun) by tialaramex (subscriber, #21167) [Link]
I wonder how many of the systems administrators I've had over the years would say the same (nobody has ever asked) even though I think almost all of them have presided over a sufficiently poorly managed hardware change to trigger this error for me, and get a question in their INBOX as a result. People aren't very good at remembering such trivial events. It's possible some people reading this are among my past systems administrators. Do you remember getting an email about this from me?
Also, where a key change might be triggered by a known hardware refresh, it's possible people were expecting it. I'd still ask (out of an abundance of caution, and because I want to encourage a more thoughtful approach to key management), but many people, knowing machine A has been replaced, would be less surprised to see the key mismatch flagged for machine A than if it were to happen (as it might in an attack) without notice one Monday morning.
I don't doubt that the actual rate at which people would query an unexpected key change is very low, embarrassingly low, but it may not actually be zero. Asking admins to keep count over time might produce more reliable results.
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 19:45 UTC (Tue) by jhhaller (subscriber, #56103) [Link]
Since I really have no idea whether the host was reinstalled or someone inside my network is running a MITM attack, I assume the former. If I used ssh outside my network, I would be more concerned. While certificate authentication has its issues, the SSH option of "trust on first connect" is significantly worse, to the point that it is of little value. The certificate-based approach at least reduces the risk to the insecure distribution of trusted public certificates.
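For what it's worth, trust-on-first-connect can be shored up by checking fingerprints out of band: anyone with console access to the machine can print the host key fingerprint for comparison against what the client shows on first connect (using the usual RSA host key path here):

# run on the server (or over a trusted channel); compare against the client's prompt
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub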
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 19:38 UTC (Sat) by toyotabedzrock (guest, #88005) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 21:06 UTC (Sat) by gutschke (subscriber, #27910) [Link]
Universal deployment of encryption for all connections on the internet addresses a lot of problems. And the linked blog post does go into detail about which problems a) exist right now, and b) can be mitigated by encryption.
Also, this is just an initial proposal to come up with a time line. It does not say that Chrome/Chromium is going to make the switch in 2015. In the past, similar efforts have taken multiple years to complete. This all makes a lot of sense to me, and I don't understand why anybody would complain that it is too early to even start talking about making the switch. In fact, the web community as a whole probably should have started the conversation years ago.
Finally, it is in fact pretty easy these days to switch an entire site to all-https-all-the-time. For many users it is as easy as:
1) make main web server(s) inaccessible from the web,
2) install nginx on a publicly accessible IP address,
3) obtain a free SSL certificate from StartSSL,
4) configure nginx to forward all HTTP requests to HTTPS,
5) configure nginx to forward all HTTPS requests to internal web servers,
6) check on https://www.ssllabs.com/ssltest/ that the site gets at least a rating of "A" if not "A+".
All of this shouldn't take more than half a day of work. It really is that easy these days. As an added bonus, NGINX can transparently enable SPDY support, so, after all is said and done, the site is probably going to be a lot more responsive for most users.
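Assuming nginx was built with the SPDY module, enabling it is a one-word change to the listen directives in the HTTPS server block:

server {
    listen 443 ssl spdy;       # requires nginx built with --with-http_spdy_module
    listen [::]:443 ssl spdy;
    ...
}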
Of course, there are lots of loose ends. But the above covers 90+% of the work. Mixed-content warnings are probably the biggest remaining issue, but nginx can help with that. And maybe transitioning the entire infrastructure to more modern tools, instead of relying on a reverse proxy, is a good long-term goal. But that work can happen gradually.
Remembering to regularly update the SSL certificate is the next problem; the Let's-Encrypt effort should help with that, but in the meantime, paying for a certificate from a provider that offers better tools than StartSSL is an option.
Setting up DNSSEC and DANE would also be a good idea at this time. But the benefits are still limited -- and honestly, it's only another day or two of work to get that all straightened away.
In other words, there really is no good excuse to still have plain-HTTP web servers on the public internet. Spend the half-day of work and fix it already!
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 22:24 UTC (Sat) by gutschke (subscriber, #27910) [Link]
To put my money where my mouth is, here is an example configuration for nginx that will do just that:
# Forward requests for https://example.org, and use the appropriate certificate
server {
    listen 443;
    listen [::]:443;
    server_name *.example.org example.org;

    ssl on;
    ssl_certificate certs/example-org.crt;
    ssl_certificate_key certs/example-org.key;
    ssl_trusted_certificate certs/certificatechain.pem;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:HIGH:!aNULL:!MD5:!kEDH";
    ssl_buffer_size 8k;

    # OCSP stapling; the resolver is needed to fetch OCSP responses
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    # Tell browsers to always use HTTPS for this domain (HSTS)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    autoindex off;

    location / {
        proxy_pass http://192.168.1.1; # Substitute IP address of internal web server
        proxy_set_header Host $host;
        proxy_intercept_errors on;
        proxy_redirect http:// https://;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Redirect http://example.org to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name *.example.org example.org;
    rewrite ^ https://example.org$request_uri? permanent;
}
Chromium to start marking HTTP as insecure
Posted Dec 13, 2014 23:52 UTC (Sat) by murukesh (guest, #97031) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 0:36 UTC (Sun) by gutschke (subscriber, #27910) [Link]
In practice, it doesn't matter all that much, as this rule will trigger relatively rarely ... only the very first time a user types the URL into the browser and forgets to include the "https://". Since we enabled HSTS, the browser should remember, and in the future it will automatically switch to TLS even if the user forgets to say so.
But of course you are right, and there is no excuse for needlessly invoking a regexp match, when it is not actually required.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 17:10 UTC (Sun) by nix (subscriber, #2304) [Link]
This is *not* simple, sorry.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 18:22 UTC (Sun) by gutschke (subscriber, #27910) [Link]
Both lines do exactly the same thing ("301" is HTTP's way of saying "permanent"). But the latter avoids an unnecessary comparison by regexp. "murukesh" is entirely correct that his version is the more idiomatic one. But even if you didn't make that change and used my example configuration verbatim, I bet you would not notice any difference. I doubt even most benchmarks could tell the difference. "^" is the simplest regexp possible. It's not going to cost any appreciable amount of performance.
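Concretely, the redirect block from my earlier example, rewritten the way murukesh suggested, would read:

# Redirect http://example.org to HTTPS, without the regexp
server {
    listen 80;
    listen [::]:80;
    server_name *.example.org example.org;
    return 301 https://example.org$request_uri;
}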
If you have spare IP addresses or spare machines, things are pretty easy. But even if you don't have any spare resources, things are not really that much more difficult.
Move your existing server somewhere it isn't accessible from the internet. This could be a private IP address (e.g. 192.168.1.1 or even 127.0.0.1), or it could be just a non-standard port that your firewall blocks (e.g. port 81 instead of port 80) -- or both.
Then use the example configuration file that I showed earlier. Replace each instance of "example.org" with your domain name. And replace the one instance of "http://192.168.1.1" with a URL that points to your original server (e.g. http://127.0.0.1:81).
You still need to get keys and certificates for your domain, and then put them into the files "example-org.key", "example-org.crt", and "certificatechain.pem". In my example configuration file, I assume that you put them into "/etc/nginx/certs/", but you are welcome to specify a different absolute path.
There are plenty of resources online describing how to generate a private key and how to obtain (free) certificates. But if you need help, don't hesitate to ask. I'll be happy to answer any of your questions. It's not really difficult, but it does admittedly require a little bit of reading if you have never done any of this before.
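As a starting point for that reading: generating the private key and a certificate signing request (which is what you hand to StartSSL or any other CA) takes two openssl commands. The file names here match my earlier example configuration:

# generate a 2048-bit RSA key and a certificate signing request for the CA
openssl genrsa -out example-org.key 2048
openssl req -new -key example-org.key -out example-org.csr -subj "/CN=example.org"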
Relative URLs all work automatically and you don't need to do anything special. Absolute URLs are more problematic, if they include the scheme (i.e. if they say "http://..."). You can use nginx to find these and rewrite them for you. But that makes things more complicated, so I didn't want to include that in my basic example configuration file. It probably requires branching out to the embedded LUA interpreter in nginx.
For most people, it is probably a better solution to simply fix their website to never have absolute URLs. But if you want me to post an example of how to use LUA for rewrites, I can look it up for you later in the day.
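Short of LUA, nginx's stock sub_filter module can cover the simple case of one scheme-plus-host prefix baked into the pages. A sketch, assuming nginx was built with the sub module; note that (at least in nginx versions of this era) only one search pattern per location is supported, and compressed upstream responses can't be rewritten, hence the Accept-Encoding line:

location / {
    proxy_pass http://192.168.1.1;
    proxy_set_header Accept-Encoding "";   # sub_filter cannot rewrite compressed responses
    sub_filter "http://example.org/" "https://example.org/";
    sub_filter_once off;                   # replace every occurrence, not just the first
}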
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 18:55 UTC (Mon) by nix (subscriber, #2304) [Link]
"Relative URLs all work automatically and you don't need to do anything special. Absolute URLs are more problematic, if they include the scheme (i.e. if they say "http://..."). You can use nginx to find these and rewrite them for you. But that makes things more complicated, so I didn't want to include that in my basic example configuration file. It probably requires branching out to the embedded LUA interpreter in nginx."

This is looking less and less simple by the minute! (At least a nonstandard port works -- that's what I was hoping.)
I note that I have in the past been told 'never use relative URLs, only absolute URLs' for various reasons -- and now it turns out this causes problems here... groan.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 19:47 UTC (Mon) by gutschke (subscriber, #27910) [Link]
The latter of course is fine; it's still an absolute path, but it doesn't needlessly encode redundant information. And you might in fact want to do that whenever you otherwise would have had to write something like "../y/z.html". Relative URLs with ".." in the path are always awkward; not every server supports this syntax, and it can also make it difficult to reason about path-based security. Maybe that's what you were remembering when you said to avoid relative URLs.
Another very clean solution is to put a "<base>" tag at the top of your HTML file. That way, if you ever need to make changes to URLs, they are contained in a single place.
A reason why some people think they might need to encode scheme and host name is that they always want to redirect their users to the secure site, and to the canonical host name (e.g. "example.org" instead of "www.example.org"). While the goal is laudable, the approach of encoding this data in the content of the page is bad. It is much better to tell the web server to generate an HTTP redirect whenever a request arrives at the wrong destination. That way, you avoid the layering violation.
There are a very small number of remaining cases where encoding scheme and host name is needed. Sometimes it can be worked around with Javascript, sometimes it can't. Those are the ones you need to look for and either edit or teach your web server to detect and rewrite on the fly. This is similar to what people used to do with server-side includes back in the dark ages.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 22:03 UTC (Mon) by gracinet (subscriber, #89400) [Link]
Most of the time, when you write a web app, making assumptions about the URL that users see in the browser is a bad idea in my experience. It can change due to various policy or business decisions, including the name of the organization the application is running for! In a proper CMS, all of that should stay dynamic.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 22:57 UTC (Mon) by gutschke (subscriber, #27910) [Link]
And yes, I agree with you, keeping things dynamic makes life a lot easier. I have had to embed various web apps and components within other web sites, and it is so much easier whenever the app was written with embeddability and portability in mind.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 11:29 UTC (Sun) by mpr22 (guest, #60784) [Link]
Please don't use <pre>...</pre> with long lines in LWN comments; the CSS is not written in a way that allows web browsers to cope gracefully with it.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 17:54 UTC (Sun) by gutschke (subscriber, #27910) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 18:10 UTC (Sun) by osma (subscriber, #6912) [Link]
Your HTTP web server will now see connections from localhost, not the original IP. This might cause issues with logging, access control, sessions and the like. Luckily there are solutions like mod_rpaf, mod_extract_forwarded and mod_remoteip (but the fact there are three alternatives, plus patched versions of mod_rpaf, already shows it's not that simple). You also need to configure them appropriately.
Also I've seen web applications try to detect whether the connection uses HTTPS, by checking for environment variables normally set by mod_ssl. But if you use a proxy, those won't get set - unless you have the right patched version of mod_rpaf.
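For example, with Apache 2.4's mod_remoteip, restoring the client address from the X-Real-IP header that the earlier nginx example sets takes two directives - assuming the reverse proxy really is the only machine that can reach the backend:

# trust X-Real-IP only when the request arrives from the local reverse proxy
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 127.0.0.1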
In one particular "worst case" scenario I've had to resort to this kind of stack:
* Pound to proxy HTTPS to HTTP
* Varnish as a caching reverse proxy
* Pound again, to proxy HTTP to HTTPS
* Apache with mod_ssl running the original web application
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 18:30 UTC (Sun) by gutschke (subscriber, #27910) [Link]
Maybe, that means now is a good time to upgrade the infrastructure. If things are this outdated, who knows what other issues and security problems lurk in that old code base. But yeah, it sucks to be stuck maintaining those systems.
As for detecting HTTPS, I don't think I have had any particular problems with that so far. Would it not have sufficed to change the proxy URL to "https://..."? (I had to do this once, to re-write an old SSL connection to support TLS). Or do I misunderstand your example and it was something more complicated than that?
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 4:21 UTC (Mon) by plugwash (subscriber, #29694) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 4:30 UTC (Mon) by gutschke (subscriber, #27910) [Link]
The nginx reverse proxy then unconditionally sets the X-Real-IP header, overwriting any such header provided by untrusted sources. So this already works as intended.
Of course, if you do rely on IP addresses for anything other than logging purposes (and I am a little uneasy about trusting IP addresses for anything), you should have unit tests that regularly verify that this assertion holds true and that you didn't accidentally poke a hole into your security somewhere.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 8:32 UTC (Mon) by osma (subscriber, #6912) [Link]
As for detecting HTTPS, I don't think I have had any particular problems with that so far. Would it not have sufficed to change the proxy URL to "https://..."? (I had to do this once, to re-write an old SSL connection to support TLS). Or do I misunderstand your example and it was something more complicated than that?
Web applications have many reasons, some better than others, for detecting whether the connection uses HTTPS or not. For example, it may affect URLs of generated links, session cookie handling, showing the user a link to the HTTPS version, or maybe issuing a redirect to HTTPS when a HTTP connection is detected.
Custom web applications are not a problem, as they can be customized to the environment, but packages like WordPress, Drupal, phpMyAdmin, phpBB, Horde etc. can be a problem (I'm mentioning only PHP applications, but I believe the issue is more general).
For example, based on a cursory inspection of phpMyAdmin, I see dozens of code lines about HTTPS. Some code seems to be about cookies, others about URL generation, yet others about forced redirects to HTTPS. I wouldn't trust this code to work correctly without having the proper HTTPS environment variables set. One could maybe set them manually in Apache configuration directives and hope for the best...
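Concretely, that manual approach is a one-liner - with the caveat that it lies to the application unconditionally, so it only belongs on a vhost that really is reachable only through the TLS proxy:

# make mod_ssl-aware applications believe the connection is encrypted
SetEnv HTTPS on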
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 10:19 UTC (Sun) by kleptog (subscriber, #1183) [Link]
In my case, I haven't yet found out how to get an SSL certificate for something.debian.net.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 12:00 UTC (Sun) by jcristau (subscriber, #41237) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 8:39 UTC (Sun) by yoe (subscriber, #25743) [Link]
There is a major difference between 'this site is insecure because they messed up' and 'this site is insecure because it doesn't matter'. Making them both show security warnings will train users to ignore them when it really matters, and that will be a net reduction in security.
Chromium to start marking HTTP as insecure
Posted Dec 14, 2014 10:53 UTC (Sun) by chirlu (subscriber, #89906) [Link]
Sounds much like that comment, so see the discussion there.
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 15:56 UTC (Mon) by mstone_ (subscriber, #66309) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 15, 2014 18:52 UTC (Mon) by iabervon (subscriber, #722) [Link]
Really, there are two dimensions that matter: "can the browser determine that this connection is secure" and "do I need a secure connection". Allowing the connection info to answer the second question and not giving much information to the user about the first is just terrible practice.
Reasonable step to stop ISP messing with contents
Posted Dec 15, 2014 6:51 UTC (Mon) by proski (guest, #104) [Link]
Apparently Google wants to punish ISPs for (mis)using their infrastructure to inject ads instead of buying ads from Google. It's understandable and logical, yet it will be a sad day when the first web page ever created is marked as insecure.
Reasonable step to stop ISP messing with contents
Posted Dec 15, 2014 16:29 UTC (Mon) by idupree (subscriber, #71169) [Link]
$ curl --head http://info.cern.ch/hypertext/WWW/TheProject.html
HTTP/1.1 200 OK
Date: Mon, 15 Dec 2014 16:23:17 GMT
Server: Apache
Last-Modified: Thu, 03 Dec 1992 08:37:20 GMT
ETag: "40521e06-8a9-291e721905000"
Accept-Ranges: bytes
Content-Length: 2217
Connection: close
Content-Type: text/html
(That's a wonderful Last-Modified header, I have to say.)
Reasonable step to stop ISP messing with contents
Posted Dec 15, 2014 18:39 UTC (Mon) by nix (subscriber, #2304) [Link]
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 5:45 UTC (Tue) by gutschke (subscriber, #27910) [Link]
For some details on why this happens, take a look at https://thethemefoundry.com/blog/why-we-dont-use-a-cdn-sp...
Chromium to start marking HTTP as insecure
Posted Dec 16, 2014 10:56 UTC (Tue) by arekm (subscriber, #4846) [Link]
If SPDY could work over HTTP then it would be faster than SPDY over HTTPS.
Chromium to start marking HTTP as insecure
Posted Dec 18, 2014 2:42 UTC (Thu) by gutschke (subscriber, #27910) [Link]
Websockets ran into the same issue, and websockets look a lot more like plain HTTP than SPDY does. So, these days, everybody uses websockets over TLS only.
The noteworthy thing here is that even with the added overhead of TLS, the end-to-end user experience is actually better than with plain old HTTP. That is certainly an impressive technical achievement.
How to address small site costs?
Posted Dec 16, 2014 14:57 UTC (Tue) by david.a.wheeler (subscriber, #72896) [Link]
Many people run small sites and are not made of money. Currently, certs add extra costs. The cert itself, of course, costs money, and some "free" ones charge a lot if you want to revoke it. There's the price of a unique IP address (shared hosts often require you to pay extra for a unique IP address to be allowed to add the cert, even if all your users have modern browsers). Never mind the time it takes to add the cert, which for many is non-trivial. TLS also rules out most free CDN services - so if you want to use them to absorb DoS attacks, you have to drop the CDNs or pay more. The CDN point actually matters more, because TLS also defeats most caching systems.
For sites like google.com these costs are a trivial part of the rent. For small sites, they are not. Yes, the actual encryption CPU time has become trivial, but that is not all. If you want everyone to be forced onto one of a few big sites, that doesn't matter; but it does matter to everyone else.
Suggestions?
How to address small site costs?
Posted Dec 16, 2014 21:27 UTC (Tue) by rodgerd (guest, #58896) [Link]
It also adds control points: the CAs.
How to address small site costs?
Posted Dec 16, 2014 23:19 UTC (Tue) by foom (subscriber, #14868) [Link]
Today, certs definitely add admin overhead -- using the CA websites to install new certs every year is quite annoying -- especially if you have multiple hostnames. But cash? No, not really. And, by the time this change has gotten to Chromium, I'm sure EFF/Mozilla's solution will be in place, solving the admin-overhead issue as well.
How to address small site costs?
Posted Dec 23, 2014 11:33 UTC (Tue) by robbe (subscriber, #16131) [Link]
How to address small site costs?
Posted Dec 17, 2014 17:40 UTC (Wed) by nturner (subscriber, #55735) [Link]
How to address small site costs?
Posted Dec 17, 2014 20:01 UTC (Wed) by mathstuf (subscriber, #69389) [Link]