A new Dual EC DRBG flaw
The dual elliptic curve deterministic random bit generator (Dual EC DRBG) cryptographic algorithm has a dubious history—it is believed to have been backdoored by the US National Security Agency (NSA)—but is mandated by the FIPS 140-2 US government cryptographic standard. That means that any cryptographic library project that is interested in getting FIPS 140-2 certified needs to implement the discredited random number algorithm. But, since certified libraries cannot change a single line—even to fix major, fatal bugs—having a non-working version of Dual EC DRBG may actually be the best defense against the backdoor. Interestingly, that is exactly where the OpenSSL project finds itself.
OpenSSL project manager Steve Marquess posted the tale to the openssl-announce mailing list on December 19. It is, he said, "an unusual bug report for an unusual situation". It turns out that the Dual EC DRBG implementation in OpenSSL is fatally flawed, to the point where using it at all will either crash or stall the program. Given that the FIPS-certified code cannot be changed without invalidating the certification, and that the bug has existed since the introduction of Dual EC DRBG into OpenSSL, it is clear that no one has actually used that algorithm from OpenSSL. It did, however, somehow pass the testing required for the certification.
It is also interesting to note that the financial sponsor of the feature adding support for Dual EC DRBG, who is not named, did so after the algorithm was already known to be questionable. It was part of a request to implement all of SP 800-90A, which is a suite of four DRBGs that Marquess called "more or less mandatory" for FIPS certification. At the time, the project recognized the "dubious reputation" of Dual EC DRBG, but it also considers OpenSSL to be a comprehensive library and toolkit: "As such it implements many algorithms of varying strength and utility, from worthless to robust." Dual EC DRBG was not even enabled by default, but it was put into the library.
The bug was discovered by Stephen Checkoway and Matt Green of the Johns Hopkins University Information Security Institute, Marquess said. Though there is a one-line patch to fix the problem included with the bug report, there are no plans to apply it. Instead, OpenSSL will be removing the Dual EC DRBG code from its next FIPS-targeted version. The US National Institute of Standards and Technology (NIST), which oversees FIPS and other government cryptography standards, has recently recommended not using Dual EC DRBG [PDF]. Since that recommendation, Dual EC DRBG has been disabled in OpenSSL anyway. Because there is essentially the same amount of testing required for fixing or removing the algorithm (for FIPS recertification), removal seems like the right course.
The problem stems from a requirement in FIPS that each block of output random numbers not match the previous block. It is, effectively, a crude test that the algorithm is actually producing random-looking data (and not repeating blocks of zeroes, for example). When there is no previous block to compare against, OpenSSL generates one that should be discarded after the comparison. But the Dual EC DRBG implementation botched the discard operation by not updating the state correctly.
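The effect is easiest to see in miniature. Below is a hedged sketch in C (the struct drbg, generate_block(), and generate_checked() names are invented for illustration; this is not OpenSSL's code) of a FIPS-style continuous test around a block generator. If the state update marked in generate_block() is skipped when producing the discarded first block, the next block repeats it and the test can never pass:

```c
/* A minimal sketch (invented for illustration, not OpenSSL's code) of a
 * FIPS-style continuous test wrapped around a DRBG.  The bug described
 * above is modeled by the state update in generate_block(): if that
 * step is skipped for the discarded first block, every subsequent
 * comparison sees a repeated block and the test fails forever. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_LEN 16

struct drbg { uint64_t s; };   /* toy internal state */

static void generate_block(struct drbg *d, unsigned char out[BLOCK_LEN])
{
    for (int i = 0; i < BLOCK_LEN; i++)
        out[i] = (unsigned char)(d->s >> ((i % 8) * 8));
    /* Advance the state; omitting this models the OpenSSL flaw. */
    d->s = d->s * 6364136223846793005ULL + 1442695040888963407ULL;
}

/* FIPS continuous test: each block must differ from the previous one. */
static int generate_checked(struct drbg *d, unsigned char last[BLOCK_LEN],
                            unsigned char out[BLOCK_LEN])
{
    generate_block(d, out);
    if (memcmp(out, last, BLOCK_LEN) == 0)
        return -1;                     /* repeated block: test fails */
    memcpy(last, out, BLOCK_LEN);
    return 0;
}

int main(void)
{
    struct drbg d = { 0x123456789abcdef0ULL };
    unsigned char last[BLOCK_LEN], block[BLOCK_LEN];

    generate_block(&d, last);          /* first block: compare and discard */

    if (generate_checked(&d, last, block) < 0) {
        fprintf(stderr, "continuous RNG test failed\n");
        return 1;
    }
    puts("ok");
    return 0;
}
```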
Dual EC DRBG was under suspicion for other reasons even before it was adopted by NIST in 2006. In 2007, Bruce Schneier raised the alarm about an NSA backdoor in the algorithm. For one thing, Dual EC DRBG is different from the other three algorithms specified in SP 800-90A in that it is three orders of magnitude slower and was only added at the behest of the NSA. It was found that the elliptic curve constants chosen by NIST (with unspecified provenance) could be combined with another set of numbers—not generally known, except possibly by the NSA—to predict the output of the random number generator after observing 32 bytes of its output. Those secret numbers could have been generated at the same time the EC constants were, but it is unknown if they actually were.
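For the curious, the shape of the attack (as described by Dan Shumow and Niels Ferguson in 2007; the notation below is theirs, not the article's) is simple to state. With curve points $P$ and $Q$, internal state $s_i$, and $x(\cdot)$ taking a point's x-coordinate, each step produces:

$$ r_i = x(s_i Q)\ \text{(output, truncated by 16 bits)}, \qquad s_{i+1} = x(s_i P). $$

If an attacker knows a scalar $e$ with $P = eQ$, then recovering the full point $R_i = s_i Q$ from an observed output (a small brute force over the 16 dropped bits) yields

$$ x(e R_i) = x(e\, s_i Q) = x(s_i (eQ)) = x(s_i P) = s_{i+1}, $$

that is, the generator's entire future state.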
The NIST standards were a bit unclear about whether those particular EC constants were required, but Marquess noted that the testing lab required the use of the standard constants (aka "points").
So, what we have here is a likely backdoored algorithm that almost no one used (evidently unless they were paid $10 million), added to an open-source cryptography library funded by an unnamed third party. After "rigorous" testing, that code was certified as conforming to a US government cryptographic standard, but it never actually worked at all. According to Marquess: "Frankly the FIPS 140-2 validation testing isn't very useful for catching 'real world' problems."
It is almost comical (except to RSA's BSafe customers, anyway), but it does highlight some fundamental problems in the US (and probably other) government certification process. Not finding this bug is one thing, but not being able to fix it (or, more importantly, being unable to fix a problem in an actually useful cryptographic algorithm) without spending lots of time and money on recertification seems entirely broken. The ham-fisted way that the NSA went about putting the backdoor into the standard is also nearly amusing. If all its attempts were similarly obvious and noisy, we wouldn't have much to worry about—unfortunately that seems unlikely to be the case.
One other thing to possibly consider: did someone on the OpenSSL project "backdoor" the Dual EC DRBG implementation such that it could never work, but would pass the certification tests? Given what was known about the algorithm and how unlikely it was that it would ever be used by anyone with any cryptographic savvy, it may have seemed like a nice safeguard to effectively disable the backdoor. Perhaps that is far-fetched, but one can certainly imagine a developer being irritated by having to implement the NSA's broken random number generator—and doing something about it. Either way, we will probably never really know for sure.
Index entries for this article
Security: OpenSSL
Security: Random number generation
Posted Jan 1, 2014 16:54 UTC (Wed) by tseaver (guest, #1544)
Perhaps we should call this "slapsticking" the NSA's backdoor?
Posted Jan 1, 2014 17:38 UTC (Wed) by brugolsky (guest, #28)
> "backdoor" the Dual EC DRBG implementation such that it could never work, but would pass the certification tests?
I'm visualizing a bucket of whitewash / wallpaper paste, propped atop the slightly open door, ready for the hapless spook.
Posted Jan 1, 2014 17:44 UTC (Wed) by freemars (subscriber, #4235)
Posted Jan 2, 2014 4:15 UTC (Thu) by eternaleye (guest, #67051)
DRBGs/PRNGs are used when: a) you don't have a hardware source of true randomness (generally thermal, quantum, or radiological); b) that hardware source generates randomness slower than you need (so you want to expand it using something relatively fast); or c) you want to pool randomness from multiple sources in a way that reduces the risk if one or more is compromised.
Linux's /dev/random and /dev/urandom are PRNGs that are seeded by hardware entropy sources; the BSDs generally use one based on the Yarrow construction from Schneier - both instances of c).
Stream ciphers are (essentially) PRNGs that you then XOR with your plaintext, a use case that _requires_ the same seed generate the same output (deterministic).
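As a toy illustration of that equivalence (the generator below is a trivial LCG, invented for the example and in no way secure; real stream ciphers use a keyed, unpredictable generator):

```c
/* Toy only: XORing a PRNG keystream with plaintext is, in essence, a
 * stream cipher.  Determinism is a requirement here, unlike for key
 * generation: the same seed must regenerate the same keystream. */
#include <stdint.h>
#include <stdio.h>

static void xor_stream(uint64_t seed, unsigned char *buf, size_t len)
{
    /* Same seed -> same keystream, so applying this twice decrypts. */
    for (size_t i = 0; i < len; i++) {
        seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
        buf[i] ^= (unsigned char)(seed >> 56);
    }
}

int main(void)
{
    unsigned char msg[] = "attack at dawn";

    xor_stream(0xdecafbad, msg, sizeof(msg) - 1);   /* "encrypt" */
    xor_stream(0xdecafbad, msg, sizeof(msg) - 1);   /* decrypt */
    printf("%s\n", msg);                            /* attack at dawn */
    return 0;
}
```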
Posted Jan 1, 2014 17:47 UTC (Wed) by jeff_marshall (subscriber, #49255)
"That means that any cryptographic library project that is interested in getting FIPS 140-2 certified needs to implement the discredited random number algorithm"
FIPS 140-2 requires the use of a validated algorithm for deterministic random bit generation (FIPS-speak for PRNG), but alternatives exist and always have. See Annex C [1] for the complete list, which includes DRBGs based on both cryptographic hash functions and block ciphers.
Having been part of a FIPS validation in the past, I know that neither the FIPS lab nor the NIST people ever brought up our choice to use a different DRBG. So, I'm curious to know where the idea the Dual EC DRBG is/was mandatory comes from.
[1] http://csrc.nist.gov/publications/fips/fips140-2/fips1402...
Posted Jan 1, 2014 18:15 UTC (Wed) by jake (editor, #205)
> So, I'm curious to know where the idea the Dual EC DRBG is/was mandatory comes from.
It comes from Marquess's statement (part of which I quote in the article): "SP800-90A is a more or less mandatory part of FIPS 140-2, for any module of non-trivial complexity."
I don't have any first-hand knowledge of what is required, sorry if I got it wrong.
jake
Posted Jan 1, 2014 19:06 UTC (Wed) by jeff_marshall (subscriber, #49255)
SP800-90A actually includes several DRBG algorithms, and any given product is likely to use only one of them.
So Marquess's statement that SP800-90A is basically mandatory (most non-trivial uses of cryptography require a DRBG at some point, and 140-2 Annex C allows the use of the SP800-90A DRBGs) shouldn't be taken to imply that Dual EC DRBG is also mandatory, as one could choose to implement another algorithm from that SP.
Generally I think either CTR DRBG or HMAC DRBG from SP800-90A is more likely to be chosen than Dual EC DRBG, depending upon whether the application also needs a hash or block cipher whose implementation can be used as a building block for the DRBG implementation.
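For a sense of how little machinery the alternatives need, here is a hedged sketch of the SP 800-90A HMAC_DRBG generate path (instantiation, reseeding, and additional input omitted), written against OpenSSL's one-shot HMAC() function. It is an illustration of the construction, not a validated implementation:

```c
/* Simplified HMAC_DRBG (SP 800-90A 10.1.2) generate path using
 * OpenSSL's one-shot HMAC() with SHA-256.  Instantiation, reseeding,
 * and the additional-input path are omitted for brevity. */
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define OUTLEN 32                       /* SHA-256 output size */

struct hmac_drbg {
    unsigned char K[OUTLEN], V[OUTLEN]; /* DRBG working state */
};

/* HMAC_DRBG_Update with no provided data:
 * K = HMAC(K, V || 0x00), then V = HMAC(K, V). */
static void drbg_update(struct hmac_drbg *d)
{
    unsigned char buf[OUTLEN + 1];

    memcpy(buf, d->V, OUTLEN);
    buf[OUTLEN] = 0x00;
    HMAC(EVP_sha256(), d->K, OUTLEN, buf, sizeof(buf), d->K, NULL);
    HMAC(EVP_sha256(), d->K, OUTLEN, d->V, OUTLEN, d->V, NULL);
}

/* Generate n bytes: V = HMAC(K, V) repeatedly, emitting each V, then
 * update (K, V) so earlier output can't be reconstructed later. */
static void drbg_generate(struct hmac_drbg *d, unsigned char *out, size_t n)
{
    while (n > 0) {
        size_t take = n < OUTLEN ? n : OUTLEN;

        HMAC(EVP_sha256(), d->K, OUTLEN, d->V, OUTLEN, d->V, NULL);
        memcpy(out, d->V, take);
        out += take;
        n -= take;
    }
    drbg_update(d);
}
```

Nothing beyond an off-the-shelf hash function is needed, which is part of why the slow, curve-based Dual EC design stood out.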
Posted Jan 1, 2014 21:09 UTC (Wed) by wahern (subscriber, #37304)
As I commented earlier this year, before the Snowden RSA disclosure, all the NSA needed to do was lean on commercial vendors to use Dual_EC_DRBG as the default, as it apparently did with BSafe and perhaps others. That better alternatives existed in the standard was a ruse.
My Slashdot comment:
If the NSA was only concerned with open source cryptographic products and protocols, you would have a point. But aside from government procurement, NIST standards are in practice used to specify deliverables for corporate security products. Getting Dual_EC_DRBG into a NIST standard is the equivalent of putting a backdoor into an ISO standard for door locks.
Once in the standard, the NSA can then lean on vendors to use the broken algorithm, and the vast majority of users of that product would be none the wiser. Most corporate security products are opaque and proprietary, and the purchasing agents are unlikely to have a clue about the problem. All they want to see is "NIST-approved".
All we can do is conjecture, but I don't think the scenario is that outlandish. To my mind it seems more like standard operating procedure than unlikely conspiracy. The fact that the backdoor is clumsy reflects less on the carelessness of the NSA, and more on the exceptional skills of the civilian community. We're smarter now. The NSA has fewer tricks up its sleeve, but it's not like they can just quit and go home.
-- http://it.slashdot.org/comments.pl?sid=4090525&cid=44570807
Posted Jan 1, 2014 22:55 UTC (Wed) by Arach (guest, #58847)
An example of how to backdoor dual EC
Posted Jan 1, 2014 22:57 UTC (Wed) by richmoore (guest, #53133)
I've not had a chance to look at the code in detail, but this certainly seems reasonable.
OpenSSL could have implemented a non-backdoored version
Posted Jan 2, 2014 1:23 UTC (Thu) by Thue (guest, #14277)
The standard does say on page 77
> One of the following NIST approved curves with associated points shall be used in applications requiring certification under [FIPS 140].
and it seems likely that this was what mandated the backdoored points.
But the standard actually also allows you to output fewer bits from the output function, on page 65. Using about half of the x coordinate as output (instead of all but 16 bits) should actually also stop the backdoor attack. At least according to Certicom's Daniel Brown, who patented the backdoor for Dual EC DRBG in 2005, as well as ways to mitigate it. OpenSSL could actually have used an "outlen" parameter of about half the key length, which would probably have been safe.
2005 patent: https://www.google.com/patents/CA2594670A1
Standard: http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-...
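A rough way to quantify that suggestion: the attack has to recover the full x-coordinate of the output point from the revealed bits, so on a 256-bit curve an attacker faces about

$$ 2^{\,256 - \text{outlen}} $$

candidate points per output block. The specified truncation (outlen = 240) leaves a trivial $2^{16}$ search; an outlen of roughly half the coordinate (about 128 bits) pushes it to an infeasible $2^{128}$.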
Posted Jan 6, 2014 23:10 UTC (Mon) by marcH (subscriber, #57642)
No version control?
Posted Jan 7, 2014 8:07 UTC (Tue) by khim (subscriber, #9252)
Version control shows addition of the whole algorithm as one commit, and the mistake is already there. What now? Was it, indeed, a mistake or was it done intentionally? We'll never know, really: any answer will be suspicious.
Sponsorship
Posted Jan 9, 2014 18:22 UTC (Thu) by BrucePerens (guest, #2510)
A very long time ago, I had a minor involvement in a project with John Whethersby's "Open Source Software Institute" to sponsor Ben Laurie to take OpenSSL through FIPS 140 certification. The project took years longer than expected (I think because NIST was reluctant to certify Open Source) and sponsorship did not cover all of the expenses; Ben took the project to completion with a mostly-unpaid investment of personal time. John Whethersby might know who the sponsor is, but due to his continuing involvement in business with the U.S. Government regarding Open Source, there's probably no point in asking him. I try to stay on the corporate side these days, it's a lot easier to deal with.

NSA has had a lot of time to get up to speed with the issue of encryption, and I think it's safe to say that they are 10 years ahead of the state of the art now and that they have a lot of custom silicon in house. People who believe that any form of algorithmic encryption protects them from NSA are self-deceived. Only the most carefully handled one-time pad has a chance of defeating them.
Sponsorship
Posted Jan 9, 2014 18:45 UTC (Thu) by BrucePerens (guest, #2510)
Oh right. Defense Medical Logistics (part of the US Government) and HP are the sponsors (and I guess I was there part of the time this was being worked upon, so HP may mean me). I remember the person from Defense Medical Logistics telling me he was frustrated with his vendors and the incredible budget outlay they required. This is all public knowledge.
Sponsorship
Posted Jan 9, 2014 19:17 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
So the NSA has enough silicon to boil oceans and explode stars into supernovae? That's what is required to brute-force 128-bit and 256-bit keys.
As for algorithmic breakthroughs - even in the case of DES and differential cryptanalysis (the most well-known NSA breakthrough) the threats were mostly theoretical.
It's highly unlikely that they have a practical attack against widely-used ciphers that allows them to recover keys without using 2^80 bytes of chosen plaintext or anything similar.
Of course, the NSA might certainly use careful side-channel attacks, or they might be able to brute-force keys with insufficient entropy.
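[The ocean-boiling quip tracks a standard back-of-the-envelope calculation: even a thermodynamically ideal computer at room temperature pays at least $kT\ln 2$ per bit operation (the Landauer limit), so merely counting through a 128-bit keyspace costs roughly

$$ 2^{128} \cdot kT\ln 2 \;\approx\; 3.4\times10^{38} \times 2.9\times10^{-21}\,\mathrm{J} \;\approx\; 10^{18}\,\mathrm{J}, $$

about thirty years of output from a gigawatt power plant before doing any actual cryptography, and a 256-bit keyspace multiplies that by another factor of $2^{128}$. -Ed.]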
Sponsorship
Posted Jan 10, 2014 1:59 UTC (Fri) by BrucePerens (guest, #2510)
You are assuming that the PRNG has near-perfect entropy and that the search space is thus as large as you think it is. This might be a reasonably safe assumption, but it's less than provable. Also note that the search space for quantum computers would be the square root of the search space size for von Neumann ones.

We can only use NSA's actions to forecast their capabilities. Right now there is an effort to make encryption mandatory for every HTTP connection in the next version of the HTTP standard. The U.S. government does not seem to have the slightest interest in this activity, and it is very likely to go forward. My assumption is that they are not bothered, for some reason; we can hope to deter corporate eavesdropping, but the NSA knows something we don't.
Sponsorship
Posted Jan 10, 2014 2:06 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
I think that for most practical PRNGs it's not feasible to find their bias. And it's also ultimately futile, because you'd have to re-do the analysis for each new version of the PRNG. Now, if we're talking about keys derived from users' input, then it's a whole different story. It's very much feasible to brute-force most passwords that are easily memorable.
> This might be a reasonably safe assumption, but it's less than provable. Also note that the search space for quantum computers would be the square root of the search space size for von Neumann ones.
So just use 256-bit keys. Building a quantum computer capable of iterating through the 128-bit keyspace is very faaaar in the future, if even practically possible, and it'll have the same issue with the boiling oceans.
> We can only use NSA's actions to forecast their capabilities. Right now there is an effort to make encryption mandatory for every HTTP connection in the next version of the HTTP standard.
The NSA is not omnipotent. Also, it's far easier for them to force most cloud providers to provide direct taps into their internal networks.
NSA capability
Posted Jan 23, 2014 22:16 UTC (Thu) by ballombe (subscriber, #9523)
If the NSA had the capability to break ECDSA, then they would have generated the points in a "nothing up my sleeve" way to avoid the perception that they planted a backdoor. So it is reasonable to assume that the NSA could not break ECDSA in 2005.
Posted Jan 24, 2014 18:58 UTC (Fri) by DavidJohnston (guest, #85852)
It is mandated by FIPS 140-2 that a DRBG (PRNG) within the FIPS boundary be compliant with SP800-90 (SP800-90A more recently).
SP800-90A gives 4 options (hash, hmac, ctr and dual_ec) and any one will do.
No honest implementer uses the dual_ec DRBG, even before the recent flap, because it is obviously stupid (slow, complex to implement without exposing it to side-channel attacks) and it had published attacks in 2006.
Posted Feb 2, 2014 10:25 UTC (Sun) by swisspgg (guest, #95325)
I (and many others) would prefer to see the people in charge explain how they plan to prevent this from happening in the future.
Confidence should be built on premises that users can trust - and this can only start with accountability (who did what, when, for which alleged reason - and who else endorsed the move, after which checks, done when, and for which alleged reasons).
Failure to do so will inevitably lead users to seek alternative solutions, which is not the goal pursued here, I presume.