
Give Debian maintainers the deserved blame

Posted May 15, 2008 16:44 UTC (Thu) by rfunk (subscriber, #4054)
Parent article: Debian vulnerability has widespread effects

While I agree that the OpenSSL code and procedures should have been 
documented better, I don't think enough attention is being given to the 
statement that Ben Laurie emphasizes:

** Never fix a bug you don’t understand. **

I would add that this especially applies to crypto code, and even more 
especially to crypto code in a widely-used crypto library -- a library 
that is widely used because people trust that library to get crypto right.

As a longtime Debian user I'm embarrassed and saddened that Debian screwed 
this up so badly.



Give Debian maintainers the deserved blame

Posted May 15, 2008 23:00 UTC (Thu) by dvdeug (subscriber, #10998) [Link]

And how do you know if you understand the bug or not? The way Ben Laurie puts it, it's
basically "trust us; we're smarter than you." The Debian maintainer asked openssl-dev if it
was okay, and they said it was. There was obviously a failure to communicate, but I'd like a
better answer than "treat OpenSSL like it's proprietary software".

People make mistakes--all people. I've seen Debian take responsibility and try and fix things.
I've seen the OpenSSL people blame Debian for having the gall to change free software, and for
not communicating with a secret mailing list, with a large bit of whining about their poor
resources. I haven't seen any statements from OpenSSL people saying "we will do this in the
future to help distributions communicate with us and effectively fix bugs". People who take
responsibility are hard to vilify; those who use a screwup they were involved in as an excuse
to vilify others tend to get more blame. Probably a good thing.

Give Debian maintainers the deserved blame

Posted May 15, 2008 23:26 UTC (Thu) by rfunk (subscriber, #4054) [Link]

For starters, if you're just trying to silence warnings, then you probably don't understand 
the bug.
Also, if you know you have no idea what effect your change would have on the very 
facility that the line is intended to affect, then you probably don't understand the bug.

The Debian maintainer asked openssl-dev if it was ok to comment out two lines of 
code, "when debugging applications".  The response was "if it helps with debugging, go 
ahead."  There was nothing saying "I'm checking this into Debian".

It's enlightening to read the original openssl-dev post:
http://marc.info/?l=openssl-dev&m=114651085826293&...

"When debbuging applications that make use of openssl using valgrind, it can show alot 
of warnings about doing a conditional jump based on an unitialised value."
...
"But I have no idea what effect this really has on the RNG."

Give Debian maintainers the deserved blame

Posted May 16, 2008 1:26 UTC (Fri) by dvdeug (subscriber, #10998) [Link]

At least one part of the bug was that it was making Valgrind spew out warnings when trying to
debug user programs. The warnings themselves were the visible part of the bug.

He did not ask whether it was okay to comment out two lines of code "when debugging 
applications". He asked whether it was okay to comment out two lines of code, with nothing
limiting when. And even if it did help with debugging, surely the optimal
response would point out that the result would be a dangerously crippled library? A lot of
debugging systems make it out into the real world so problems can be discovered without
installing a special "debuggable" version of the program.

Kurt should have made it more clear what he was going to do with the patch, but the people
replying should have taken a better look at the patch even without that. A bad patch is not
just the fault of its creator; everyone who signs off on it also has to take some part of the
blame.

Give Debian maintainers the deserved blame

Posted May 16, 2008 12:55 UTC (Fri) by rfunk (subscriber, #4054) [Link]

In his first line he gave the context of debugging applications.

He never gave a patch.  He pointed to a couple lines.

Nor did he give any context to the two lines he was talking about, other than the 
#ifndef PURIFY around the second line.  The fact that he gave no context is huge for me, 
because it makes no sense to comment out lines that generate warnings without looking 
at the context of those lines.

Finally, nobody "signed off" on anything.  One guy said, "If it helps with debugging, I'm in 
favor of removing them."  That's not the same as, "sure, they're useless lines, delete 
them for production.  And then give that version to countless people to run in production 
too."

Give Debian maintainers the deserved blame

Posted May 16, 2008 15:29 UTC (Fri) by ranmachan (subscriber, #21283) [Link]

In this case, the warning _is_ the bug.
The code was fine and had no bug; it intentionally fed uninitialised memory to the entropy pool.
The 'bug' was in the interaction with valgrind, which rightly operates under the assumption
that using uninitialised memory is 'bad' and thus generates a warning.  However, here it hit the
(probably only) corner case where using uninitialised memory is 'good', and thus the warning was
bogus.
Someone asked the Debian Maintainer to fix this warning.  
Instead of using a valgrind-specific workaround (add information to the executable which tells
valgrind 'treat this as initialised memory') he chose to remove the code feeding uninitialised
memory to the entropy pool.
If he hadn't botched up the patch and crippled the entropy pool, this would be a reasonable
solution to the problem, since Linux has a good internal entropy generator (/dev/random) which
is also used to feed the pool.
AFAICS adding uninitialised memory is more of a fallback "in case there is no /dev/random or
/dev/random is broken, we still have _some_ more entropy than just the pid and time" (or
whatever else gets mixed in).

But: IANASE (I am not a security expert)

Give Debian maintainers the deserved blame

Posted May 16, 2008 15:44 UTC (Fri) by rfunk (subscriber, #4054) [Link]

The problem here isn't about whether or not to add uninitialized memory to the entropy 
pool.  Removing that line was fine.  The problem was removing the other line, which is 
where all entropy (except the PID) was added.

The Debian maintainer didn't look at the wider context to see that one of those lines was 
absolutely necessary, and that the routine it was in just may have been wrongly called 
with potentially-uninitialized memory once or twice.

This reminds me of Linus's argument about debuggers making programmers stupid, 
making them focus on narrow scope rather than understanding the way the code is 
supposed to work in context.  In this case it was valgrind that made the programmer 
stupid.

Give Debian maintainers the deserved blame

Posted May 18, 2008 13:08 UTC (Sun) by liljencrantz (guest, #28458) [Link]

No. 

Accessing uninitialized data leads to undefined behaviour. It can be argued that this
undefined behaviour should logically/morally/statistically be limited to filling that memory
region with arbitrary data, while letting the system be otherwise unaffected, but that is not
required by the standard. The question of what undefined means w.r.t. the C standard has come
up many, many times in the past where users have repeatedly expected that code triggering
undefined behaviour should still result in what they feel is «reasonable behaviour», e.g. a
limit to the definition of undefined. This point of view has never been accepted, as it has
time and time again been found that doing so will decrease the performance or the reliability
of software. It would be perfectly standards compliant for the compiler to emit code causing
the system to crash, or to eject a spear from the monitor into the users head. The former
behaviour would probably be ideal.


As has been said many, many times earlier in this debate, relying on 'clever tricks' that
happen to work on most modern systems is a very bad idea. Causing programs like Valgrind to
spew out errors making debugging harder is the least of the problems with this code; it will
very likely cause programs to crash under some future environment and it makes the code
significantly harder to understand. Because uninitialized memory is also something that an
attacker may be able to guess or even modify, this is also a rather significant information
leak. All said and done, this part of the OpenSSL code is a very large and scary bug. The fact
that the OpenSSL developers seem to be unwilling to admit to just how bad the quality of this
code was really scares me w.r.t. the overall quality of OpenSSL. If this type of gung ho,
«works for me» attitude is common in OpenSSL, there are likely many more of these issues
lurking around in that code base.

Note 1: Obviously, the bug created by the DD while trying to fix the original bug was much
bigger than the original bug.

Note 2: The existence of extremely shoddy code in OpenSSL is not what scares me - we are all
fallible. What scares me is the response of the OpenSSL team.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds