
Encouraging a wider view

By Jake Edge
September 25, 2013
Linux Security Summit

For his keynote at the 2013 Linux Security Summit, Ted Ts'o wove current events and longstanding security problem areas together. He encouraged the assembled kernel security developers to look beyond the kernel and keep the "bigger picture" in mind. His talk kicked off the summit, which was co-located with the Linux Plumbers Conference (and others) in New Orleans, Louisiana.

Adversaries

Ts'o began by looking at the adversaries we face today, starting with the secret services of various governments—our own and foreign governments, no matter where we live. Beyond that, though, there are organized cyber-criminals who maintain botnets and other services for hire. He noted that there is a web service available for solving CAPTCHAs, where a rural farmer with no knowledge of English (or even Roman characters, perhaps) will solve one in real time. "Isn't capitalism wonderful?", he asked.

The historic assumptions made about the budgets of our adversaries may not be accurate, he said. Many in the room will know about the problems he is describing, but the general public does not. How do we get the rest of the world to understand these issues, he asked.

Beyond criminals, we have also seen the rise of cyber-anarchists recently. These folks are causing trouble "for the fun of it". They have different motivations than other kinds of attackers. No matter what you might think of their politics, he said, they can cause a lot of problems for "systems we care about".

Ts'o related several quotes from Robert Morris, who was the chief scientist at the US National Security Agency (NSA)—and father of Robert T. Morris of Morris worm "fame". Morris was also an early Multics and Unix developer, who was responsible for the crypt() function used for passwords. The upshot of Morris's statements was that there is more than one way to attack security and that underestimating the "time, expense, and effort" an adversary will expend is foolhardy. Morris's words were targeted at cryptography, but are just as applicable to security. In addition, it is fallible humans who have to use security software, so Morris's admonition to "exploit your opponent's weaknesses" can be turned on its head: our opponents may have vast resources, but developers need to "beware the stupid stuff", Ts'o said.

The CA problem

In May, Ts'o and his Google team were at a hotel in Yosemite for a team-building event where he encountered some kind of man-in-the-middle attack that highlighted the problems in the current SSL certificate system. While trying to update his local IMAP mail cache, which uses a static set of certificates rather than trusting the certificate authority (CA) root certificates, his fetch failed because the po14.mit.edu certificate had, seemingly, changed—to a certificate self-signed by Fortinet. That company makes man-in-the-middle proxy hosts to enable internet surveillance by companies and governments.

He dug further, trying other sites such as Gmail and Harvard University, but those were not being intercepted. In addition, requesting a certificate for the MIT host from elsewhere on the internet showed that the certificate had not actually changed. Something was targeting traffic from the hotel (and, perhaps, other places as well) to MIT email hosts for reasons unknown. The bogus certificate was self-signed, which would hopefully raise red flags in most email clients, but the problem persisted for the weekend he was there—at least.

As people in the room are aware, but, again, the rest of the world isn't, the CA system is broken, Ts'o said. He referred to a Defcon 19 presentation [YouTube] by Moxie Marlinspike about the problems inherent in SSL and the CA system. While Marlinspike's solution may not be workable, his description of the problem is quite good, Ts'o said.

It comes down to the problem that some certificate issuers are "too big to jail", so that punishing them by banning their root certificates is unworkable. Marlinspike estimated that banning Comodo (which famously allowed fraudulent certificates to be issued) would have caused 20-25% of HTTPS servers on the internet to go dark. Comodo got to that level of popularity by being one of the cheapest available providers, of course. There are some 650 root authorities that are currently blindly trusted to run a tight ship, with no way to punish them if they don't, Ts'o said.

There are some solutions like certificate pinning, which Google started and various browser vendors have adopted, but that solution doesn't scale. Many have claimed that DNSSEC is a solution, but Marlinspike has argued otherwise—the actors are different, but the economic incentives are the same. Instead of trusting a bunch of CAs, Verisign will have to be trusted instead.

Ts'o doesn't know how to solve the CA problem, but did have a selfish request: he would like to see certificates be cached and warnings be issued when those certificates change. Unfortunately, it won't work for the average non-technical person, nor would it be all that easy because OpenSSL and libraries that call it are typically unconnected from the user interface, but it would make him happier.
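Ts'o's request amounts to trust-on-first-use certificate pinning. The comparison logic itself is simple; a minimal sketch (function names, the fingerprint format, and the caching scheme are illustrative, not from the talk):

```python
import hashlib
from typing import Optional

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def cert_changed(cert_der: bytes, cached_fp: Optional[str]) -> bool:
    """True only when a cached pin exists and no longer matches.
    A first sighting is meant to be cached, not warned about
    (trust on first use)."""
    return cached_fp is not None and fingerprint(cert_der) != cached_fp
```

In practice the DER bytes would come from something like `ssl.get_server_certificate()` plus `ssl.PEM_cert_to_DER_cert()`; as Ts'o notes, the hard part is not the comparison but wiring the warning into real clients, since OpenSSL and the libraries that call it are typically far from the user interface.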

Linux security solutions

A short program that just did setuid(0) and spawned a shell led to Ts'o's question of "when is a setuid program not a setuid program?". He showed that the program wasn't owned by root with the setuid bit set, yet it gave a root shell. It worked because the file had CAP_SETUID set in its file capabilities—something that all of the security scanning tools he looked at completely ignored. File capabilities have been around since 2.6.30, but no one is paying attention, which is "kind of embarrassing". Worse yet, there is no way to disable file capabilities in the kernel, he said.
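The gap Ts'o describes can be illustrated with a scanner that checks both the classic mode bits and the `security.capability` extended attribute, where file capabilities are stored. This is a hypothetical sketch (Linux-only; the function name is invented), not any of the tools he surveyed:

```python
import os
import stat

def privilege_markers(path):
    """Reasons a security scanner should flag this file: the traditional
    setuid/setgid mode bits, plus file capabilities, which live in the
    security.capability xattr and are what the scanners Ts'o looked at
    failed to check."""
    reasons = []
    mode = os.stat(path).st_mode
    if mode & (stat.S_ISUID | stat.S_ISGID):
        reasons.append("setuid/setgid bit")
    getx = getattr(os, "getxattr", None)  # xattr API is Linux-only
    if getx is not None:
        try:
            getx(path, "security.capability")
            reasons.append("file capabilities")
        except OSError:
            pass  # ENODATA: no capabilities set on this file
    return reasons
```

Setting the attribute (e.g. `setcap cap_setuid+ep ./demo`) requires privilege, but reading it does not, so a scanner has no excuse to skip it.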

Linux capabilities are meant to split up root's powers into discrete chunks, but their adoption has been slow. The idea is that capabilities are by default not inherited by children, so parents need the right to pass on their capabilities, and the child executable has to have the right to accept them. But there is a "compatibility mode" that has been created where root-spawned processes inherit all of the parent's capabilities. This is done so that running shell scripts as root continues to work, but that mode leads to another problem.
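The pass-on rule described above can be written down directly. This is a sketch of the classic (pre-ambient) transformation from capabilities(7), with bitmask arguments chosen for illustration:

```python
def permitted_after_exec(p_inheritable: int, p_bounding: int,
                         f_permitted: int, f_inheritable: int) -> int:
    """New permitted set across execve(), per the classic capabilities(7)
    rule: P'(permitted) = (P(inheritable) & F(inheritable))
                          | (F(permitted) & bounding).
    A capability crosses exec only if the file grants it outright, or
    both the parent and the file agree to pass it on."""
    return (p_inheritable & f_inheritable) | (f_permitted & p_bounding)
```

With CAP_SETUID being bit 7, a file whose permitted set contains it yields a privileged child even from an unprivileged parent—which is exactly Ts'o's demo—while a parent's inheritable bit alone does nothing unless the file's inheritable set agrees.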

Of the 30 or so powers granted by capabilities, over half can (sometimes) be used to gain full root privileges. You must be able to use those capabilities in an "unrestricted way", which may or may not be true depending on how the system is set up. But many would not be a privilege-escalation problem at all if it weren't for the compatibility mode.

So, why not use SELinux instead, he asked. It can do all of the things that capabilities were intended to do, although the policy has to be set up correctly. Unfortunately, the policy is several megabytes of source that is difficult to understand, change, or use.

As it turns out, though, things have "gotten a lot better" in the SELinux world, according to Ts'o. Every few years, he turns on SELinux to see how well it is working. "Usually, it screws me to the wall" and he has to immediately disable it. In one case, he even had to reinstall his system because of it. But when he tried it just prior to the summit, it mostly worked for him.

The audit2allow program, which looks at the SELinux denials and generates new policy, is "a huge win". On his system, it generated 400 lines of local policy to make things work. Overall, it is much better and he will probably leave it running on his system. There is still a ways to go, particularly in the area of documentation. There is plenty of beginner documentation and expert documentation (i.e. the source code), but information for intermediate users is lacking. That leads to those users just turning off SELinux. The problems he ran into (which were fewer than his earlier tries, but still present) may have been partly due to the SELinux policy packages for Debian testing; perhaps Fedora users would have had a better time, he said.

His experiment with SELinux showed another problem, though. He now gets email every two hours from logcheck with a vast number of complaints. It is clear that his logcheck configuration files are out of sync with the SELinux installation. How to handle security policy and configuration with respect to various kinds of distribution packages is a difficult problem. Right now, the SELinux policy package maintainers and logcheck package maintainers would need to coordinate, but that doesn't scale well. Does logcheck need to coordinate with AppArmor as well, or should the policy packages be handling the configuration needed for logcheck? There is no obvious solution to that problem, but perhaps automated tools a la audit2allow might help, he said.

Wrapping up

Turning to the summit itself, Ts'o noted all of the different example topics listed in the call for participation, which included ideas like system hardening, virtualization, cryptography, and so on. The program committee did a good job on that list, he said, but what ended up on the schedule? An update to Linux Security Module (LSM) A, a change to LSM B, a new feature for LSM C, and composing (i.e. stacking) LSMs. That's not completely fair, Ts'o said, as there are other topics on the list like kernel address space layout randomization (ASLR) and embedded Linux security, but his point was clear.

He encouraged Linux security developers to think more widely. The program committee can only choose from the topics that are submitted and people submit what they can get funding to work on. The executives of the companies they work for only fund those things that users really care about, so how can we get users to care about security, he asked.

It turns out that perhaps "NSA" is part of the answer, he said—to widespread laughs. But the best outcome from the Snowden revelations is that people are talking about security again. According to Ts'o, US President Obama has been quoted as saying "never let a good crisis go to waste". Security researchers and developers should follow that advice, he said.

A business case needs to be made for better Linux security, Ts'o said. After the kernel.org compromise, some companies were interested in funding Linux security work, but after two months or so, that interest all dried up. It may be that the NSA surveillance story also dies away, but Glenn Greenwald is something of an expert at dribbling out the details from Snowden. That may give this particular crisis longer legs.

Security folks need to find a way for security countermeasures to take advantage of the power of scale, he said. Both Google and the NSA have figured out that if you can invest a large amount into fixed costs and bring the incremental costs way down, you can service a lot of users. Cyber-criminals have also figured this out; the security community needs to do so as well.

In the kernel developers' panel that had been held at LinuxCon the day before, Linus Torvalds suggested that he would be willing to lose some of the best kernel developers if they would export kernel culture to various user-space projects. The same applies to security, Ts'o said. The security of the libraries needs to improve, hardware support for random number generation needs to be more widely available, and so on. Though there have been concerns about the RDRAND instruction in Intel processors because it is not auditable, Ts'o said he would much rather have it available than not.

Similarly, the trusted platform module (TPM) available in most systems is generally not used. Some TPM implementations are suspect, but there is no incentive for manufacturers to improve them since they aren't really used. It is hard enough to get a manufacturer to add $0.25 to the bill of materials (BOM) for a device; without a business case (i.e. users), it is likely impossible.

Security technology is not useful unless it gets used. In fact, as the file capabilities example showed, it can be actively harmful if it isn't used.

Ts'o concluded by suggesting that the assembled developers think about a "slightly bigger picture" than LSMs and the composition of LSMs. Those topics are important, but there is far more out there that needs fixing. As he noted, though, it will take a push from users to get the needed funding to address many of these issues.

[ I would like to thank LWN subscribers for travel assistance to New Orleans for the Linux Security Summit. ]



Encouraging a wider view

Posted Sep 26, 2013 10:43 UTC (Thu) by spender (subscriber, #23067) [Link]

I've said many of the same things in public (https://twitter.com/grsecurity/status/380828847790227456) and at the 2010 LSS (http://grsecurity.net/spender_summit.pdf, security != access control). Saying it to the LSS audience though I think means the message will fall on deaf ears. The kernel community has a problem of only being capable of learning lessons the hard way.

-Brad

Encouraging a wider view

Posted Sep 26, 2013 16:30 UTC (Thu) by freemars (subscriber, #4235) [Link]

re: the CA Problem

One problem is our current https:// scheme confuses authenticated and encrypted, which aren't the same thing at all. I'm trying to push the need for a couple new web standards.

In the familiar https: some certificate authority -- possibly squirming under the thumb of a Three Letter Agency -- swears blind the web site is the one it claims to be.

In the new httpe: the website owner swears blind she generated the public and private keys and nobody else knows them (both). The website owner may be squirming as well; the hope is the various TLAs will soon run out of thumbs. Also, it's a lot harder to keep several million national security letters secret than 50. The httpe: webserver should do a couple other tricks to help security -- it should automatically generate new keys at irregular intervals (say daily-to-monthly plus once per reboot) to make brute force attacks less fruitful. It negotiates a session key with the browser client before a specific page is requested, reducing the amount of metadata that can be collected. It should be willing to carry a bit of water for the browser -- doing some Tor-style session forwarding to try to reduce man in the middle attacks.

My final new web version would be httpes: where the web server and browser software establish an encrypted session first, then verify the web server has been blessed by some CA, and finally moves on to send the page. This is the standard I would want my bank to use.

Encouraging a wider view

Posted Sep 26, 2013 17:02 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

> In the new httpe: the **MITM attacker** swears blind she generated the public and private keys and nobody else knows them (both).

FTFY. Encryption is pointless if you don't know who the decryption key belongs to.

> It should be willing to carry a bit of water for the browser -- doing some Tor-style session forwarding to try to reduce man in the middle attacks.

If you're using a Tor-style protocol in .onion mode (no outproxy) then you don't need any other encryption. The traffic is already secured end-to-end. You just need authentication, so that you know who you're talking to.

Encouraging a wider view

Posted Oct 1, 2013 9:39 UTC (Tue) by intgr (subscriber, #39733) [Link]

> FTFY. Encryption is pointless if you don't know who the decryption key belongs to.

No! This is the fallacy that got us into the whole wiretapping problem in the first place.

The reality is, eavesdropping unencrypted traffic is simply too easy and too cheap. MITMing connections such that it doesn't break any clients/servers, is complicated. And most importantly, MITM is very easy to detect by someone controlling the endpoints.

While it's true that corporate and public access networks could get away with large-scale MITM, it would be unacceptable to do it on every end-user who pays for their own network bandwidth -- on the scale at which NSA is currently eavesdropping. And it simply cannot be done on someone else's network that you have compromised, without raising red flags all over.

Further, if your protocol initiates encryption *before* opportunistic authentication, then any MITM risks breaking this authentication if they don't know in advance. While the attacker could probe the server to check whether it provides certificates, there are edge cases that would still break. In the case of SSL, URL-specific client certificate authentication: the server renegotiates in the middle of an SSL session, after receiving the HTTP request to an authenticated URL.

All of this was already anticipated in a tcpcrypt IETF draft in 2011, but sadly the project looks dead for now. (Though there was some discussion about resurrecting it recently: http://www.ietf.org/mail-archive/web/perpass/current/msg0...)

Encouraging a wider view

Posted Oct 1, 2013 12:48 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

> And most importantly, MITM is very easy to detect by someone controlling the endpoints.

this is only true if you can validate who the key belongs to, otherwise you have no way to detect the difference between a legitimate remote endpoint and a MITM endpoint

Yes, doing encryption even without validating the remote endpoint will raise the cost of doing a MITM, but is it enough?

I suggest that you take a look at the report from Ted Ts'o about how he discovered a MITM aimed at the MIT IMAP server.

Encouraging a wider view

Posted Oct 1, 2013 13:24 UTC (Tue) by intgr (subscriber, #39733) [Link]

> this is only true if you can validate who the key belongs to

I meant to say if you control *both* endpoints, then it's easy to detect altered traffic by simply comparing the data sent from one end and data received on the other end.

And it doesn't matter that there are very few people actually checking it. Even if it's just some developers debugging SSL issues while developing an SSL library. If NSA (or anyone) was MITMing a significant portion of Internet traffic, someone, somewhere will notice that things are amiss. Once detected and reported, ISPs and middlemen can be forced to cease or explain what's going on. Organizations like EFF may get involved, like they did to detect BitTorrent throttling: https://www.eff.org/pages/switzerland-network-testing-tool

It's simply impossible to remain hidden if you do traffic manipulation on a large scale.

Compare it to our current situation with un-encrypted Internet: nobody has any idea who is eavesdropping on what; there's no way to detect passive eavesdropping. Ignorance is bliss?

> report from Ted Ts'o about how he discovered a MITM aimed at the MIT IMAP server

I can't find this report on Google, but it seems you're actually proving my point -- MITM will be detected if it's done on a sufficiently large scale. If the traffic was in the clear, there's no way at all that anyone could detect it.

The large-scale MITM attack in Iran against Gmail was also detected because encryption was used.

Encouraging a wider view

Posted Oct 1, 2013 13:38 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

they don't need to change any data as part of their MITM (the NSA wants to read the data, not change it), so just comparing what was sent with what was received would not help any.

Ted discovered this because he had the particular cert pinned (i.e. he knew who the cert belonged to), so when he saw a different cert, he knew something was wrong.

and the new cert that was being presented was 'valid', just not the correct one.

so this seems like exactly the case that having encryption without knowing who the cert belongs to would have done no good.

Encouraging a wider view

Posted Oct 1, 2013 13:51 UTC (Tue) by intgr (subscriber, #39733) [Link]

> they don't need to change any data as part of their MITM

The what? You can't eavesdrop on any modern crypto protocol (including SSL, tcpcrypt) without modifying traffic. Even if the connection is un-authenticated.

> and the new cert that was being presented was 'valid'

They had to swap the certificate. They modified the traffic. The modification was detected. Which was my point.

Encouraging a wider view

Posted Oct 1, 2013 14:55 UTC (Tue) by freemars (subscriber, #4235) [Link]

Exactly. An attacker (let's call her 'Nsa') needs to decrypt packets and re-encrypt them with her faked key. And this needs to be done in real time, or Nsa's cover is blown. This is a DoS attack against Nsa.

Encouraging a wider view

Posted Oct 4, 2013 18:01 UTC (Fri) by elanthis (guest, #6227) [Link]

> And this needs to be done in real time, or Nsa's cover is blown. This is a DoS attack against Nsa.

No. The NSA just needs to store the encrypted packets and then decrypt them later at their leisure. They've already admitted to doing this in many cases.

Let's not also forget that even with a MITM attack, they aren't routing all packets to their buildings for real-time decryption. They're still injecting the code to read the unencrypted traffic into the existing infrastructure (either at the end-points or at a common existing intermediary) and then streaming that data efficiently to their data stores.

There's nothing to DoS.

Encouraging a wider view

Posted Oct 4, 2013 20:06 UTC (Fri) by khim (subscriber, #9252) [Link]

No. The NSA just needs to store the encrypted packets and then decrypt them later at their leisure.

That's a totally different kind of attack. Almost undetectable, yes, but also millions of billions of times more computationally expensive.

They've already admitted to doing this in many cases.

They've admitted that they keep encrypted sessions but nobody knows how many of them they can actually decrypt.

And if they can “decrypt them later at their leisure” it's still “DoS attack against NSA” - just somewhat less effective.

Let's not also forget that even with a MITM attack, they aren't routing all packets to their buildings for real-time decryption.

It's the only way to perform a MITM attack, sorry.

They're still injecting the code to read the unencrypted traffic into the existing infrastructure (either at the end-points or at a common existing intermediary) and then streaming that data efficiently to their data stores.

There is no "common existing intermediary" if you, e.g., connect to Google from your home, and they need a court order to actually hack your computer. Yes, I know, they can hack Google itself and/or your computer, but now we are at the stage of “an asteroid can kill you any time, thus it's pointless to watch traffic lights”.

Encouraging a wider view

Posted Oct 1, 2013 16:33 UTC (Tue) by nybble41 (subscriber, #55106) [Link]

> I meant to say if you control *both* endpoints, then it's easy to detect altered traffic by simply comparing the data sent from one end and data received on the other end.

This presumes that you have a secure way of knowing what data was actually sent, on a channel not controlled by the MITM attacker. If you have such a channel, why not just use that instead?

Anyway, "useless" was perhaps a bit strong. Unauthenticated encryption would make it more difficult to implement large-scale passive eavesdropping operations like the NSA's, but it doesn't offer any actual privacy guarantees. It's more on the level of security through obscurity, since it won't protect you against a determined, active attack. Given the choice, you should always authenticate. Certainly you should never transmit any truly private data without authenticating the receiver--you could be handing your secrets directly to your worst enemy and you'd never know it until it was too late.

If the information _isn't_ truly private, though, why bother encrypting it in the first place? Just raising the noise level to hide the real secrets?

Encouraging a wider view

Posted Oct 2, 2013 10:05 UTC (Wed) by intgr (subscriber, #39733) [Link]

> Unauthenticated encryption would make it more difficult to implement large-scale passive eavesdropping operations like the NSA's

You're missing my point. It's not just "more difficult"; it's extremely unlikely that they could get away with it on a large scale.

Even with unauthenticated encryption, they would have to perform active MITM attacks to eavesdrop on encrypted connections. There is no such thing as passive MITM, they have to decrypt and re-encrypt all data passing through. If it's done on a large scale, it will eventually be detected by tech-savvy users, even if by accident. Since you can collect evidence about the traffic manipulation, you can demand explanations from your ISP -- unlike passive eavesdropping, which is undetectable. You can tell them to stop altering data sent via a connection you paid for or lose business.

It would very likely force NSA to limit eavesdropping only to people being targeted, instead of the ubiquitous surveillance we have now. Yes, it won't "guarantee privacy", but it would make the majority of us more secure.

> This presumes that you have a secure way of knowing what data was actually sent, on a channel not controlled by the MITM attacker

Only in theory. In practice it's impossible to fool everyone all of the time about what traffic is being captured. Imagine tcpdump running over a MITMed SSH session; in order to fool the user, they would have to detect what packets are being printed out onto the console and substitute all that with the ones the person is "supposed" to see. Or what about Wireshark running over VNC? The attacker would have to re-render whole images passing over the network to cover up what traffic is being captured.

Not to mention that SSH does authentication (key pinning), so I don't think that's even on the table.

> Given the choice, you should always authenticate.

Yes, nobody is arguing against that.

Encouraging a wider view

Posted Sep 26, 2013 17:57 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

Convergence[1] might be worth looking into as well.

[1]http://convergence.io/

Appraisal of DNSSEC-based certificate verification

Posted Sep 27, 2013 8:43 UTC (Fri) by shane (subscriber, #3335) [Link]

From the article:

> Many have claimed that DNSSEC is a solution, but Marlinspike has argued otherwise—the actors are different, but the economic incentives are the same. Instead of trusting a bunch of CAs, Verisign will have to be trusted instead.

I don't think this is completely fair, because the current DNS-based solution need not be a replacement for the CA system, but can be used in addition to the CA system. So in order for an attacker to spoof a host, both the CA and someone in the DNS hierarchy above your domain would have to be compromised. It adds a little special sauce to your TLS security.

Appraisal of DNSSEC-based certificate verification

Posted Sep 27, 2013 9:56 UTC (Fri) by error27 (subscriber, #8346) [Link]

People are saying that the NSA has already prepared a workaround for that?

http://ppjg.me/2012/05/11/one-company-to-rule-them-all/

Appraisal of DNSSEC-based certificate verification

Posted Sep 27, 2013 10:27 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

An interesting article, which I lack the effort to actively verify.

It's a shame the author has chosen to post it to a site whose maintainers, at a first glance, appear to have never met a conspiracy theory they didn't like.

Appraisal of DNSSEC-based certificate verification

Posted Sep 27, 2013 10:35 UTC (Fri) by jschrod (subscriber, #1646) [Link]

I don't understand the hyperbole of that article.

Yes, MarkMonitor is a registrar. Google and others need a registrar for their domain names. They chose one. The registrar could set up false name server delegation records. Well, news at 11. That's why we want DNSSEC.

And while they might potentially be a CA at the same time, they're not an approved Firefox CA, AFAICS. I don't have Chrome or a current IE fired up now; is MarkMonitor really an approved CA there?

MarkMonitor

Posted Sep 27, 2013 10:42 UTC (Fri) by shane (subscriber, #3335) [Link]

The article author seems to have correctly identified MarkMonitor's business (it's on their web site, so that's not a tricky bit of journalism). However, note that *every* record in .COM is *already* in the hands of one company - the .COM registry operator, Verisign.

The US government has *already* issued DNS domain takedowns via this mechanism, so I'm not sure what additional problems companies using MarkMonitor could add:

http://blog.easydns.org/2012/03/05/the-ramifications-of-u...

As for why these varied companies are using this service, my guess is that they do a good job and have kick-ass sales people. We don't expect companies to do *everything* themselves... Google and Microsoft both use light bulbs, but we wouldn't worry about the NSA "turning the lights out" if they both bought them from Philips...

Appraisal of DNSSEC-based certificate verification

Posted Sep 27, 2013 20:01 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

It's not really viable. While NSA can push Verisign it has absolutely no control over the other top-level domains.

For example, one can host a site in .ua (Ukraine). In this case to surreptitiously intercept your traffic NSA will have to redirect the whole .ua top-level domain and use faked certificates - it's possible if they have full control over your pipe.

But it can be beaten fairly easily - just use 'sticky' DNSSEC keys. Since there are just over 300 top-level domains and DNSSEC key rotation happens rarely, it's not that burdensome. Also, the mechanics of redirection itself are quite complicated.

This is way better than having 500 CAs each of which can be used to create a certificate for ANY site.

Warning about certificate changes doesn't work in today's world

Posted Sep 28, 2013 8:00 UTC (Sat) by ras (subscriber, #33059) [Link]

> Ts'o doesn't know how to solve the CA problem, but did have a selfish request: he would like to see certificates be cached and warnings be issued when those certificates change.

I had the same lament as Ted a year or two ago, which I mentioned to a Mozilla developer. He pointed me to a Firefox plugin called Certificate Patrol that does just what Ted asks. I installed it immediately.

After a few months I turned it off. You get flooded with warnings about certificates changing, particularly from Google sites.

The problem is Content Delivery Networks. The page owners (quite rightly) issue a different certificate to each machine that serves the same page. In Google's case, the number of machines is so large that there is no way you will meet them all in a reasonable time frame. Not only does Google own a large number of machines; they also serve their pages from a large number of domains, so just whitelisting domains doesn't work either.

In the end, I concluded checking for certificate changes is one of those seemingly straightforward ideas that doesn't work.

Warning about certificate changes doesn't work in today's world

Posted Sep 28, 2013 8:54 UTC (Sat) by peter-b (subscriber, #66996) [Link]

In that context, the obvious solution seems to be to trust the next certificate up in the hierarchy of trust, surely? If Google are using that many certificates, surely they are deriving them from a "Google root certificate" rather than purchasing them all individually from the CA...?

Warning about certificate changes doesn't work in today's world

Posted Sep 28, 2013 22:04 UTC (Sat) by dark (subscriber, #8483) [Link]

I think it mainly requires a new feature: the ability to say "remember this certificate, but remember the older ones too". That'll help with sites that keep switching between certificates.
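That feature can be sketched in a few lines (the data layout and names are invented for illustration): remember every fingerprint a host has ever presented, and warn only on a genuinely new one, so CDN rotation among known certificates stays quiet.

```python
seen = {}  # host -> set of all certificate fingerprints seen so far

def first_sighting(host, fp):
    """Record the fingerprint; return True only if this host has never
    presented it before -- the one case worth warning about."""
    known = seen.setdefault(host, set())
    new = fp not in known
    known.add(fp)
    return new
```

A real implementation would also want expiry and some bound on the set size, but the point is that the cache grows to cover the CDN's certificate pool instead of alarming on every rotation.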

The problem with Google only really started when they started using two different roots, which is something they're doing on purpose to prevent API users from hardcoding their original root cert.

Does anyone know if certpatrol is still being developed?

Warning about certificate changes doesn't work in today's world

Posted Sep 29, 2013 6:33 UTC (Sun) by ras (subscriber, #33059) [Link]

> In that context, the obvious solution seems to be to trust the next certificate up in the hierarchy of trust, surely?

I had not thought about it that deeply - but now that you mention it - yes. When I installed it that is what I thought it would do. It is after all the signing cert that you are trusting. The rest are irrelevant.

That highlights another design flaw in the X509 PKI system I guess. A single signing bit was a mistake. Something that allowed the owner of foo.com to sign anything under foo.com (but nothing else) would be far more useful.

> I think it mainly requires a new feature: the ability to say "remember this certificate, but remember the older ones too". That'll help with sites that keep switching between certificates.

Funny. I thought it did that. Clearly I didn't think about it deeply enough, as it obviously doesn't.

> Does anyone know if certpatrol is still being developed?

It was updated regularly until October 14, 2011. Then nothing. So no. But it is open source, so the world would be a better place if someone picked up the ball. :D

"A Good Crisis"

Posted Sep 30, 2013 18:48 UTC (Mon) by Baylink (subscriber, #755) [Link]

Google attributes that quote, with relatively good confidence, to ex-Obama CoS Rahm Emanuel, which I am more inclined to believe.

And I personally have no problem with the underlying view, either; you uses what you can gets. :-)

Jake: you're staff, are you not? LWN is *supposed* to pay for you to attend such things; I shouldn't think you need to 'disclose' it...

"A Good Crisis"

Posted Sep 30, 2013 19:01 UTC (Mon) by jake (editor, #205) [Link]

> Jake: you're staff, are you not? LWN is *supposed* to pay for you
> to attend such things; I shouldn't think you need to 'disclose' it...

I am indeed on the staff. When we started doing the disclosures, we figured we would go ahead and thank subscribers as well when they are footing the bill. iirc, there was an article without a disclosure at one point that someone asked about, so it heads off that question too ...

jake


"A Good Crisis"

Posted Sep 30, 2013 19:48 UTC (Mon) by raven667 (subscriber, #5198) [Link]

As a matter of history, I believe that quote is a bit older and should be attributed to Winston Churchill, although it was referenced more recently by Rahm Emanuel, causing a minor scandal, which is why Google associates it so strongly.

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds