By Jonathan Corbet
April 16, 2008
On April 4, CERT put out a scary advisory about the GNU Compiler
Collection (GCC). This advisory
raises some interesting issues on when such advisories are appropriate,
what programmers must do to write secure code, and whether compilers should
perform optimizations which could open up security holes in poorly-written
code.
In summary, the advisory states:
Some versions of gcc may silently discard certain checks for
overflow. Applications compiled with these versions of gcc may be
vulnerable to buffer overflows. [...]
Application developers and vendors of large codebases that cannot
be audited for use of the defective length checks are urged to
avoiding [sic] the use of gcc versions 4.2 and later.
This advisory has disappointed a number of GCC developers, who feel that
their project has been singled out in an unfair way. But the core issue is
one that C programmers should be aware of, so a closer look is called for.
To understand this issue, consider the following code fragment:
    char buffer[BUFLEN];
    char *buffer_end = buffer + BUFLEN;
    /* ... */
    unsigned int len;

    if (buffer + len >= buffer_end)
        die_a_gory_death("len is out of range\n");
Here, the programmer is trying to ensure that len (which might
come from an untrusted source) fits within the range of buffer.
There is a problem, though, in that if len is very large, the
addition could cause an overflow, yielding a pointer value which is less
than buffer. So a more diligent programmer might check for that case
by changing the code to read:
    if (buffer + len >= buffer_end || buffer + len < buffer)
        loud_screaming_panic("len is out of range\n");
This code should catch all cases, ensuring that len is within
range. There is only one little problem: recent versions of GCC will
optimize out the second test (returning the if statement to the
first form shown above), making overflows possible again. So any code
which relies upon this kind of test may, in fact, become vulnerable to a
buffer overflow attack.
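To see the problem in isolation, here is a small self-contained sketch (the
buffer size, function names, and test value are invented for illustration;
they are not from the advisory). On a platform where the pointer addition
can wrap (a 32-bit address space, say), a compiler that applies this
optimization may quietly drop the second comparison:

    #include <stdio.h>

    #define BUFLEN 100
    static char buffer[BUFLEN];

    /* Returns nonzero if len appears to fit within buffer. */
    static int len_in_range(unsigned int len)
    {
        char *buffer_end = buffer + BUFLEN;

        /* The second comparison is the overflow check; a compiler which
           assumes standard-compliant pointer arithmetic may remove it. */
        if (buffer + len >= buffer_end || buffer + len < buffer)
            return 0;
        return 1;
    }

    int main(void)
    {
        /* A huge len wraps the 32-bit pointer addition; with the overflow
           check optimized away, it may be reported as in range. */
        printf("len_in_range(0xf0000000) = %d\n", len_in_range(0xf0000000u));
        return 0;
    }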
This behavior is allowed by the C standard, which states that, in a correct
program, pointer addition will not yield a pointer value outside of the
same object. So the compiler can assume that the test for
overflow is always false and can thus eliminate it from the expression. It
turns out that GCC is not alone in taking advantage of this fact: some
research by GCC developers turned up other compilers (including PathScale,
xlC, LLVM, TI Code Composer Studio, and Microsoft Visual C++ 2005) which
perform the same optimization. So it seems that the GCC developers have a
legitimate reason to be upset: CERT would appear to be telling people to
avoid their compiler in favor of others - which do exactly the same thing.
The right solution to the problem, of course, is to write code which
complies with the C standard. In this case, rather than doing pointer
comparisons, the programmer should simply write something like:
    if (len >= BUFLEN)
        launch_photon_torpedoes("buffer overflow attempt thwarted\n");
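The same principle extends to cases where both an offset and a length are
involved: keep the arithmetic on the side that cannot overflow. A minimal
sketch along those lines (the names here are invented for illustration, not
taken from the article's example):

    #include <stddef.h>

    #define BUFLEN 100

    /* Returns nonzero if the range [offset, offset + len) fits within a
       BUFLEN-byte buffer.  Subtracting from the known-good bound keeps the
       arithmetic within well-defined unsigned range, so there is nothing
       for the optimizer to (legally) remove. */
    static int range_ok(size_t offset, size_t len)
    {
        return offset <= BUFLEN && len <= BUFLEN - offset;
    }

Because every operation here is defined by the standard, the compiler cannot
assume the test away, and the check survives any legitimate optimization.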
There can be no doubt, though, that incorrectly-written code exists. So
the addition of this optimization to GCC 4.2 may cause that bad code to
open up a vulnerability which was not there before. Given that, one might
question whether the optimization is worth it. In response to a statement
(from CERT) that, in the interest of security, overflow tests should not be
optimized away, Florian Weimer said:
I don't think this is reasonable. If you use GCC and its C
frontend, you want performance, not security. After all, the real
issue is not the missing comparison instruction, but the fact that
this might lead to subsequent unwanted code execution. There are C
implementations that run more or less unmodified C code in an
environment which can detect such misuse, but they come at a
performance cost few are willing to pay.
Joe Buck added:
Furthermore, there are a number of competitors to GCC. These
competitors do not advertise better security than GCC. Instead
they claim better performance (though such claims should be taken
with a grain of salt). To achieve high performance, it is
necessary to take advantage of all of the opportunities for
optimization that the C language standard permits.
It is clear that the GCC developers see their incentives as strongly
pushing toward more aggressive optimization. That kind of optimization
often must assume that programs are written correctly; otherwise the
compiler is unable to remove code which, in a correctly-written
(standard-compliant) program, is unnecessary. So the removal of pointer
overflow checks seems unlikely to go away, though it appears that some new warnings will be added to alert
programmers to potentially buggy code. The compiler may not stop
programmers from shooting themselves in the foot, but it can often warn
them that it is about to happen.