Verified cryptography for Firefox 57
The first result of this collaboration, an implementation of the Curve25519 key establishment algorithm (RFC7748), has just landed in Firefox Nightly. Curve25519 is widely used for key-exchange in TLS, and was recently standardized by the IETF. As an additional bonus, besides being formally verified, the HACL* Curve25519 implementation is also almost 20% faster on 64-bit platforms than the existing NSS implementation (19500 scalar multiplications per second instead of 15100) which represents an improvement in both security and performance to our users.
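For illustration, here is what Curve25519 (X25519) key agreement looks like from an application's point of view. This is a minimal sketch using the third-party Python "cryptography" package, not the NSS/HACL* code discussed above.

    # Minimal X25519 key agreement sketch. Uses the third-party
    # 'cryptography' package (pip install cryptography), not NSS or HACL*.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each side generates an ephemeral key pair and shares only the public half.
    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Each side combines its own private key with the peer's public key.
    alice_shared = alice_private.exchange(bob_private.public_key())
    bob_shared = bob_private.exchange(alice_private.public_key())

    # Both arrive at the same 32-byte shared secret, which would normally be
    # fed through a KDF before being used as a session key.
    assert alice_shared == bob_shared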
Posted Sep 14, 2017 15:30 UTC (Thu)
by ms (subscriber, #41272)
[Link] (34 responses)
Posted Sep 14, 2017 15:55 UTC (Thu)
by rvfh (guest, #31018)
[Link] (1 responses)
Posted Sep 28, 2017 5:04 UTC (Thu)
by Garak (guest, #99377)
[Link]
Verified cryptography for Firefox 57 (mozilla security blog)
then presumably ms and I would have had different reactions. Something akin to, e.g.
Firefox takes a Quantum leap forward with new developer edition (ars technica)
I.e. if it weren't for the "(ars technica)" there, I would be disturbed by the verbiage as an LWN headline choice. Sigh.
Posted Sep 14, 2017 15:56 UTC (Thu)
by ledow (guest, #11753)
[Link] (15 responses)
You can't ever verify that it's unbreakable. That's like asking "can we guarantee that nobody will invent new maths in the next decade that we currently know nothing of?"
Code verification, however, just verifies that the code does what it says it does and doesn't fall into the usual traps.
To be honest, it doesn't even mean that the code doesn't leak memory, or pass out encrypted data to the world, or have a backdoor for the NSA, unless the specification it was "verified" against says that too.
The algorithm is mathematically verified to have been converted to code correctly. That's about the best way you can put it. And you can't stick that in a headline.
The algorithm could still be wrong. Or processors could still mess up certain assumptions (e.g. that people can't sniff the key mid-operation), etc.
To be honest, given the expense of verifying code like this, I'd be happier if they just put that effort/money into verifying ECC itself, rather than a particular implementation of it.
Posted Sep 14, 2017 16:11 UTC (Thu)
by ms (subscriber, #41272)
[Link] (1 responses)
You totally could - especially on LWN :)
Posted Sep 16, 2017 17:05 UTC (Sat)
by dahveed311 (subscriber, #103741)
[Link]
Posted Sep 14, 2017 20:41 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (12 responses)
An encryption algorithm can be considered unbreakable by conventional (that is, non-quantum) computers if there is no algorithm that can decrypt an encrypted text in less than exponential time on average without knowing the key. I can't see why proving that would be any less possible than proving that there is no sorting algorithm faster than O(n*log(n)), and this has in fact been proven. Is it difficult? Sure, it might require completely new approaches to complexity theory. But saying it's impossible is tantamount to saying that nobody can ever invent this new math that we currently know nothing of. Oh, the irony…
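For reference, the sorting lower bound cited here is the classical comparison-model argument (which is exactly the assumption the "sorting" subthread below pokes at): a comparison sort must distinguish all $n!$ input orderings, so its decision tree has at least $n!$ leaves and height at least
\[
  \log_2(n!) \;=\; \sum_{k=1}^{n} \log_2 k \;\ge\; \frac{n}{2}\log_2\frac{n}{2} \;=\; \Omega(n \log n).
\]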
Posted Sep 15, 2017 19:08 UTC (Fri)
by simcop2387 (subscriber, #101710)
[Link] (3 responses)
Posted Sep 16, 2017 16:42 UTC (Sat)
by HelloWorld (guest, #56129)
[Link] (2 responses)
Those boil down to the same thing. Do you actually know what exponential complexity means in this context?
For an N-bit key, a cryptanalytical attack that takes K * 2^(ceil(N/64)) operations where K << 2^64 is still, in the strict mathematical sense, exponential in N... but N has to be pretty damn big before you're safe from that attack.
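As a worked example of that caveat: with $N = 256$ and, say, $K = 2^{32}$, such an attack costs
\[
  K \cdot 2^{\lceil N/64 \rceil} \;=\; 2^{32} \cdot 2^{4} \;=\; 2^{36}
\]
operations, which is trivially feasible even though the cost is formally exponential in $N$, while a brute-force key search would cost on the order of $2^{256}$.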
Posted Sep 16, 2017 18:22 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Posted Sep 16, 2017 19:48 UTC (Sat)
by HelloWorld (guest, #56129)
[Link]
Posted Sep 16, 2017 6:03 UTC (Sat)
by ncm (guest, #165)
[Link] (7 responses)
It is sometimes possible to make proofs about existence of algorithms, but all too often the proof relies on hidden assumptions that your opponent is happy to violate. Despite that the Halting Problem is proven insoluble, we routinely prove that particular programs reliably terminate.
All there is to say whenever somebody does what you have just proven impossible is "huh" -- an expression universal to all languages, even those not used to express proofs.
A proof that a library function really satisfies a spec despite all the peculiar properties of what programming languages call integers is a notable achievement. A proof that the spec has all the properties claimed for it would be another.
Posted Sep 16, 2017 8:55 UTC (Sat)
by HelloWorld (guest, #56129)
[Link] (5 responses)
Yes, the O(n * log(n)) result applies to comparison-based algorithms, which radix sort is not. This is well-known and not a real counter-argument to my point, because you can avoid the problem by simply not making that sort of assumption.
> Despite that the Halting Problem is proven insoluble, we routinely prove that particular programs reliably terminate.
It is proven that there is no program that can reliably determine for all programs whether or not they're going to halt. This is very different from saying that there is no program that can reliably determine for any particular program whether or not it's going to halt. The fact that we can prove termination for *some* programs doesn't say anything about the halting problem, and that you seem to think it does tells more about your math education (or lack thereof) than anything else.
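For illustration, a toy least-significant-digit radix sort; it is not a comparison sort, so the comparison-model lower bound simply does not constrain it. This sketch handles non-negative integers only.

    # Toy LSD radix sort for non-negative integers, base 256.
    # Runs in O(k * (n + 256)) for k-byte keys; no element comparisons at all.
    def radix_sort(values, base=256):
        values = list(values)
        if not values:
            return values
        max_value = max(values)
        shift = 0
        while (max_value >> shift) > 0:
            # Stable bucket pass on the current base-256 digit.
            buckets = [[] for _ in range(base)]
            for v in values:
                buckets[(v >> shift) & (base - 1)].append(v)
            values = [v for bucket in buckets for v in bucket]
            shift += 8
        return values

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]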
Posted Sep 17, 2017 4:36 UTC (Sun)
by ncm (guest, #165)
[Link] (4 responses)
Posted Sep 17, 2017 13:11 UTC (Sun)
by HelloWorld (guest, #56129)
[Link] (3 responses)
Posted Sep 17, 2017 22:21 UTC (Sun)
by matthias (subscriber, #94967)
[Link] (1 responses)
It is very unlikely that a proof that some encryption is secure can be made without any assumptions. It works for the one-time pad, where information theory itself says that the encryption cannot be broken. But this is a very special case. The BB84 quantum cryptography protocol was proved secure, but the proof assumes that our understanding of quantum mechanics is correct. I do not see how such a proof could be done without that assumption.
Somewhere below, you give the example that there is no rational solution for x^2=2. Of course mathematicians can disprove the existence of some well-defined mathematical object. The point is that breaking a cryptographic scheme is not that well defined. Once you establish a precise mathematical definition, that definition will imply certain assumptions, e.g., a model of computation.
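A minimal sketch of the one-time pad mentioned above: with a truly random key as long as the message and never reused, the ciphertext is statistically independent of the plaintext, which is what makes the information-theoretic argument possible.

    # One-time pad sketch: XOR with a random, single-use key of equal length.
    import secrets

    def otp_xor(data: bytes, key: bytes) -> bytes:
        assert len(key) == len(data), "key must be exactly as long as the message"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))     # use once, then destroy
    ciphertext = otp_xor(message, key)
    assert otp_xor(ciphertext, key) == message  # decryption is the same XOR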
I want to mention two more very prominent assumptions that no one has proven, but that are fundamental to all of complexity theory:
1. The Church-Turing Thesis, stating that the functions computable in our universe are exactly the Turing-computable functions.
2. The consistency of ZFC (fundamental to maths in general, and not provable within ZFC itself).
You should not misunderstand me. I do not say that proofs are worthless. I did my PhD in theoretical computer science, so I know the value of proofs. A formal proof saying ECC is unbreakable (by Turing machines) would be worth a lot (especially as it would imply P!=NP). However, if one of the two assumptions above turned out to be false, someone could come up with different maths (this could be what ledow meant). I believe in both assumptions, but I do not believe that we can ever prove them to be true.
Posted Sep 19, 2017 2:32 UTC (Tue)
by HelloWorld (guest, #56129)
[Link]
Posted Sep 18, 2017 16:11 UTC (Mon)
by ncm (guest, #165)
[Link]
Posted Sep 16, 2017 9:30 UTC (Sat)
by farnz (subscriber, #17727)
[Link]
Note that with the Halting Problem, the get-out is that you can divide programs into three piles, but not two:
1. Programs you can prove will halt.
2. Programs you can prove will never halt.
3. Programs you cannot prove either way.
The trick to proving termination in real-world situations is twofold: first, minimise the number of programs you put into category 3; then declare that all programs in category 3 are unknowable, and that it's an error case to deal with one of them - for many practical uses of the halting problem, this amounts to declaring that programs in category 3 do not halt.
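For illustration, a deliberately tiny and conservative classifier in that spirit: it proves termination only for straight-line code, proves non-termination only for a bare `while True` with no way out, and files everything else under "unknown" (the pile a practical tool would reject or treat as non-halting). The function names here are hypothetical, not from any real verifier.

    # A toy "three piles" termination classifier using Python's ast module.
    import ast
    import inspect

    HALTS, NEVER_HALTS, UNKNOWN = "halts", "does not halt", "unknown"

    def classify(func):
        tree = ast.parse(inspect.getsource(func))
        nodes = list(ast.walk(tree))

        # Pile 2: a `while True:` whose body contains no break, return,
        # raise or call can never terminate.
        for node in nodes:
            if (isinstance(node, ast.While)
                    and isinstance(node.test, ast.Constant)
                    and node.test.value is True):
                inner = [n for stmt in node.body for n in ast.walk(stmt)]
                if not any(isinstance(n, (ast.Break, ast.Return, ast.Raise, ast.Call))
                           for n in inner):
                    return NEVER_HALTS

        # Pile 1: straight-line code (no loops, no calls) always halts.
        if not any(isinstance(n, (ast.While, ast.For, ast.Call)) for n in nodes):
            return HALTS

        # Pile 3: everything the checker is not smart enough to decide.
        return UNKNOWN

    def f():
        return 1 + 2            # pile 1

    def g():
        while True:             # pile 2
            pass

    def h(n):
        while n > 1:            # pile 3: it halts, but this checker cannot tell
            n = n // 2
        return n

    print(classify(f), classify(g), classify(h))  # halts does not halt unknown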
Posted Sep 14, 2017 16:19 UTC (Thu)
by xtifr (guest, #143)
[Link] (3 responses)
Posted Sep 14, 2017 18:34 UTC (Thu)
by riddochc (guest, #43)
[Link]
Posted Sep 14, 2017 20:55 UTC (Thu)
by HelloWorld (guest, #56129)
[Link]
I don't see why it wouldn't be possible. An encryption algorithm can be considered unbreakable if there is no algorithm that can decrypt an encrypted text in less than exponential time on average without knowing the key. And I wouldn't know any reason in principle why such a property would be impossible to prove. Mathematicians prove the non-existence of things all the time. Euclid did it more than 2000 years ago when he proved that there are no natural numbers p and q such that (p/q)^2 = 2.
Anyway, I don't think you should be calling people ignorant in cases like this.
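For reference, the classical argument behind that example, in sketch form:

\textbf{Claim.} There are no natural numbers $p, q$ with $(p/q)^2 = 2$.

\textbf{Proof sketch.} Suppose $p/q$ is in lowest terms and $p^2 = 2q^2$. Then $p^2$ is even, so $p$ is even; write $p = 2r$. Substituting gives $4r^2 = 2q^2$, hence $q^2 = 2r^2$, so $q$ is even too, contradicting the assumption that $p/q$ was in lowest terms. $\square$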
Posted Sep 21, 2017 17:09 UTC (Thu)
by smitty_one_each (subscriber, #28989)
[Link]
Posted Sep 14, 2017 18:01 UTC (Thu)
by flussence (guest, #85566)
[Link] (4 responses)
Posted Sep 14, 2017 18:46 UTC (Thu)
by derobert (subscriber, #89569)
[Link] (3 responses)
Posted Sep 15, 2017 11:34 UTC (Fri)
by hmh (subscriber, #3838)
[Link] (2 responses)
And code full of side channels would still be perfectly correct: you need some rather extreme constraints to get noise, power, time, cache, and even execution-unit/performance-counter side-channel protections... for example, not only do two multiplications need to always take the same time, they must always do the same number of muls/adds/etc (time, power) *in the same sequence* (power, RF noise), with the same memory access pattern (cache)...
Do note that gnupg (and openssl, etc.) have all sorts of defenses to mitigate such known side channels that have been shown to be exploitable, but they're not perfect. This is one reason why I mostly assume that only half of a secret key's bits are not being leaked somehow (and thus use at least twice the currently recommended key size).
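A minimal sketch of the narrowest of those defenses, timing-safe comparison of secrets in Python; it addresses only the timing channel, not power, RF or cache, which is the point about how many channels there are.

    # Comparing a secret (e.g. a MAC) with == can return at the first
    # mismatching byte, leaking how much of a guess was correct via timing.
    # hmac.compare_digest takes time independent of where the mismatch is.
    import hmac

    def naive_check(expected_mac: bytes, provided_mac: bytes) -> bool:
        return expected_mac == provided_mac           # data-dependent timing

    def constant_time_check(expected_mac: bytes, provided_mac: bytes) -> bool:
        return hmac.compare_digest(expected_mac, provided_mac)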
Posted Sep 17, 2017 4:38 UTC (Sun)
by biergaizi (guest, #92498)
[Link] (1 responses)
In contrast, it's nearly impossible to implement AES without leaking timings, unless the algorithm is implemented in hardware. The design of CBC mode in SSL/TLS is the worst: first there was the Lucky13 attack exploiting timings; the fix was constant-time code, i.e. no "if" statements allowed, logic implemented with XOR/AND/NOT, etc., programming like Brainf**k; unfortunately the fix itself caused another timing leak, the padding oracle of CVE-2016-2107, only discovered years later.
It's 2017, and one can argue that if someone still designs algorithms and cryptosystems that are secure on paper but practically impossible to implement on physical computers without leaking timings, then the design itself is not good in the first place.
Immature fields can be an exception, like post-quantum cryptography. Unfortunately, we waited 15 years for algorithm designs that solve the practical issues, and now a new wave of "secure on paper, extremely difficult to implement on physical computers" algorithms is coming, inevitably...
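For illustration, the "no if statements" style described above, reduced to its simplest form: selecting between two values with masks so that the code path does not depend on the secret bit. (Python is used here only to show the idea; real constant-time code is written in C or assembly, where the absence of branches can actually be audited.)

    # Branchless constant-time selection of 32-bit values.
    def ct_select(condition_bit: int, a: int, b: int) -> int:
        """Return a if condition_bit == 1, else b, without branching on the bit."""
        mask = -condition_bit & 0xFFFFFFFF        # all ones if bit is 1, else 0
        return (a & mask) | (b & ~mask & 0xFFFFFFFF)

    assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
    assert ct_select(0, 0xAAAA, 0x5555) == 0x5555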
Posted Sep 24, 2017 3:09 UTC (Sun)
by tterribe (guest, #66972)
[Link]
An excellent property for side-channel-resistant implementations to have is that all memory accesses are completely deterministic, i.e., the locations that are read from or written to do not depend on the secret key or the plaintext.
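A sketch of what that property looks like for a table lookup: rather than indexing a table with a secret value (which reveals the index through the cache), every entry is read and the wanted one is kept with a mask, so the access pattern is identical for every secret. (Again, Python only illustrates the shape of the technique.)

    # Constant-access-pattern table lookup: touch every entry, mask-select one.
    def ct_table_lookup(table, secret_index):
        result = 0
        for i, entry in enumerate(table):
            mask = -int(i == secret_index) & 0xFF   # 0xFF only at the wanted index
            result |= entry & mask
        return result

    sbox = [0x63, 0x7C, 0x77, 0x7B]                 # first few AES S-box entries
    assert ct_table_lookup(sbox, 2) == 0x77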
Posted Sep 14, 2017 22:42 UTC (Thu)
by roc (subscriber, #30627)
[Link] (5 responses)
People who are interested should read about the overarching project: https://project-everest.github.io/
"Verified as unbreakable" is not yet present in this case, but it's a realistic goal for the future, for the right definition of the term. We are able to verify that various cryptographic protocols are "unbreakable", in the sense that if the endpoints implement those protocols correctly and certain algorithmic hardness results hold, then an adversary who can only observe and manipulate what's on the wire learns nothing about certain data (e.g. the contents of packets). Work like Everest can prove the correctness of the endpoint implementations. The hardness assumptions can only be defeated by an adversary who makes mathematical breakthroughs --- we're talking Fields Medal-worthy work --- or if you use too-small key sizes. Against verified crypto, plausible attacks will be either direct attacks against endpoints, or attacks that learn something other than what the protocol was protecting (e.g. packet timing instead of packet contents).
Posted Sep 14, 2017 23:20 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (1 responses)
finally someone who knows what he's talking about
Posted Sep 15, 2017 8:35 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
The whole point of a proof in maths is that it shows your assumptions are self-consistent.
The whole point of a proof in science is that it shows your assumptions are in line with reality.
And a lot of the comments here show that people have difficulty grasping the difference (yes, that seems to be the norm :-(
That's why, when you're presenting a proof like this, it's important to state all your assumptions (like the grandparent post does), but it's also why people's eyes glaze over, because it's all maths and so many people seem to be scared of maths :-(
Cheers,
Wol
Posted Sep 15, 2017 11:07 UTC (Fri)
by ballombe (subscriber, #9523)
[Link] (2 responses)
If you prove that ECDL cannot be solved in polynomial time in general, you will probably get the Fields Medal if you are young enough. However, if you prove that ECDL can be solved in subexponential time, chances are you will not be awarded the Fields Medal. (This is an instance of the P=NP problem.)
Why? Because there is a precedent for breaking similar algorithms, while there is no precedent for such negative proofs.
The truth is that a large number of mathematicians expect ECDL to be solvable in subexponential time, so they are unlikely to work toward proving the opposite.
Posted Sep 15, 2017 13:34 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
I think it was Fermat's, but I'm sure there have been several, where some mathematician presented a proof of something completely unrelated and then added, almost in a footnote, "by the way, Fermat's is a trivial subset of this proof".
I think everyone knew a big announcement was coming, but really didn't expect a Fermat proof to be announced and dismissed in such a trivial manner!
Cheers,
Wol
Posted Sep 19, 2017 14:09 UTC (Tue)
by gowen (guest, #23914)
[Link]
> I think it was Fermat's, but I'm sure there have been several, where some mathematician presented a proof of something completely unrelated and then added, almost in a footnote, "by the way, Fermat's is a trivial subset of this proof".
This was Wiles and Fermat, but it's worth noting that reducing FLT to a footnote was a jokey way for Wiles to bury the lede. He knew what the big story was, and everyone in the room (who were basically the world's foremost minds on modular forms) knew the best-known implication of the Taniyama-Shimura Conjecture - and hence where Wiles was likely to end up - well before that point.
FWIW, I believe the lectures ended "Corollary: Fermat's Last Theorem".
Posted Sep 15, 2017 10:51 UTC (Fri)
by BillyBob (guest, #118071)
[Link]