Government agency dragging its heels on OpenSSL validation (NewsForge)
"According to CMVP director Randy Easter, a typical testing cycle runs from several weeks to a few months, and the goal for NIST is to process reports generated by the labs after testing within six to nine weeks. Once processed, NIST either sends additional questions back to the testing lab or moves forward with granting validation. The process typically takes less than a year. Because testing on OpenSSL has now taken more than twice that long, some have begun questioning the review process and whether the open source toolkit is getting a fair shake by the agency."
Posted Jan 22, 2006 16:16 UTC (Sun)
by sveinrn (guest, #2827)
[Link] (2 responses)
From the NewsForge article it seems that validating source code is more difficult than validating binary modules. I found two nice quotes on that subject on Wikipedia:
"Program testing can be used to show the presence of bugs, but never to show their absence!" Dijkstra
"Beware of bugs in the above code; I have only proved it correct, not tried it." Knuth
Posted Jan 23, 2006 20:56 UTC (Mon)
by iabervon (subscriber, #722)
[Link] (1 responses)
It is more difficult to validate source code than to validate a binary to which you have the source. Source code can have bugs involving incorrect annotations, such that a compiler would be permitted to make optional optimizations which would change the result. Given a binary, you can make sure that the compiler generated the right code. If other people are going to compile it and trust your validation, you need to make sure that the compiler was required by the language standards to generate the right code, not just that it happened to do so in the case you tested.
It would be very easy for them to make a binary package of OpenSSL which they've validated, slightly less easy to validate someone else's binary package, less easy to validate a binary package without source, and much more difficult to validate whatever binary somebody might someday get by compiling the source.
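As an aside, a minimal sketch of the kind of annotation bug described above, assuming a C99 toolchain; the restrict example is hypothetical and not taken from OpenSSL:

    #include <stdio.h>

    /* restrict promises the compiler that a and b never alias, so it may
     * load *b once and keep it in a register.  If a caller breaks that
     * promise, this legal optimization changes the observable result. */
    static void add_twice(int *restrict a, int *restrict b)
    {
        *a += *b;
        *a += *b;
    }

    int main(void)
    {
        int x = 1;
        add_twice(&x, &x);  /* undefined behaviour: the pointers alias */
        /* An unoptimized build typically prints 4; an optimizing build
         * is permitted to print 3.  Source-level validation has to catch
         * this, because testing one binary only ever sees one answer. */
        printf("%d\n", x);
        return 0;
    }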
Posted Jan 24, 2006 2:29 UTC (Tue)
by sveinrn (guest, #2827)
[Link]
Of course it is easier to validate when you have both the source and the binary. But when you have the OpenSSL source it should be very easy to create a binary.
But as far as I can see, it should be possible to validate the code based on what the ISO C99 standard (or some other standard, if it is not written in C...) specifies that the code should do. And if the standard is ambiguous, one has to either demonstrate that all interpretations of the code allowed by the standard lead to the same result, or replace the code.
It should not be necessary to compile and test the code with all supported compilers and every possible compiler option, under all supported operating systems, running on all supported hardware platforms. If the code survives validation at the source code level, any bugs left would have to be the result of a buggy compiler, library, OS or CPU.
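A small illustration of the kind of ambiguity meant here, assuming C99; the snippet is hypothetical, not OpenSSL code:

    #include <stdio.h>

    static int counter = 0;

    static int next(void)  { return ++counter; }
    static int twice(void) { return counter * 2; }

    int main(void)
    {
        /* C99 leaves the evaluation order of the two operands
         * unspecified, so a conforming compiler may call next() first
         * (giving 1 + 2 = 3) or twice() first (giving 0 + 1 = 1).
         * A source-level validation must either show that both results
         * are acceptable, or require the expression to be rewritten
         * with an explicit order. */
        int r = next() + twice();
        printf("%d\n", r);
        return 0;
    }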
Posted Jan 23, 2006 9:34 UTC (Mon)
by rganesan (guest, #1182)
[Link]
I don't know if the NewsForge article has anything to do with it, but OpenSSL passed validation on Sunday, Jan 22, 2006.
Posted Jan 23, 2006 17:40 UTC (Mon)
by carcassonne (guest, #31569)
[Link]
The article mentions, about making certain that the OpenSSL code is compiled correctly by the people doing the verification:
"Additionally, Sargent said that hash codes and digital signatures were inserted into the code to ensure things were in their proper order and compiling correctly."
That's interesting. Does anyone have some URLs to share about how to implement such checks?
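One plausible shape for such a check, sketched with OpenSSL's own HMAC API: compute a keyed digest over the module image at build time, embed the expected value, and recompute and compare it at load time. The key, the placeholder digest and the 1 MiB size cap below are all illustrative assumptions, not OpenSSL's actual FIPS code; the real scheme is described in the module's FIPS 140-2 security policy document.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* Hypothetical fixed, non-secret key; the FIPS module embeds one too. */
    static const char fixed_key[] = "integrity-check-key";

    /* Placeholder: a real build step would compute the HMAC of the
     * finished module and patch the result in here. */
    static const unsigned char expected_mac[20] = { 0 };

    static int module_is_intact(const char *path)
    {
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;
        unsigned char *buf;
        size_t n;
        FILE *f = fopen(path, "rb");

        if (f == NULL)
            return 0;
        buf = malloc(1 << 20);        /* 1 MiB cap keeps the sketch short */
        if (buf == NULL) {
            fclose(f);
            return 0;
        }
        n = fread(buf, 1, 1 << 20, f);
        fclose(f);

        /* HMAC-SHA-1 over the module image, compared with the embedded
         * value; any change to the file makes the comparison fail. */
        HMAC(EVP_sha1(), fixed_key, (int)strlen(fixed_key), buf, n,
             mac, &mac_len);
        free(buf);

        return mac_len == sizeof(expected_mac) &&
               memcmp(mac, expected_mac, sizeof(expected_mac)) == 0;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2)
            return 2;
        return module_is_intact(argv[1]) ? 0 : 1;
    }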