
# Sure, but then you deserve what you get...

Posted May 24, 2011 16:48 UTC (Tue) by farnz (guest, #17727)
In reply to: Sure, but then you deserve what you get... by marcH
Parent article: What Every C Programmer Should Know About Undefined Behavior #3/3

Reading the bug report, GCC specifically takes advantage of the undefinedness of signed overflow. We start with the following code (all code in a C-like pseudocode):

```c
int *value;
int i;
int val = 0x03020100;

for (i = 0; i < 256/4; i++) {
    value[i] = val;
    val += 0x04040404;
}
```

Step 1 of the failure determines that the only time val equals 0x04030200 is when the loop exit condition is true. It thus rewrites the program to look as if the programmer had written:

```c
int *value;
int i;
int val = 0x03020100;

for (i = 0; val != 0x04030200; i++) {
    value[i] = val;
    val += 0x04040404;
}
```

Next, GCC detects that in the absence of overflow, `(val != 0x04030200)` is always true. It thus rewrites the program to look as if the programmer had written:

```c
int *value;
int i;
int val = 0x03020100;

for (i = 0; true; i++) {
    value[i] = val;
    val += 0x04040404;
}
```

This code then translates, via the naïve interpretation, into the assembler output in the bug report. Note that a finite loop has become infinite; eliminating always-true conditionals is nonetheless a useful optimization, because it's not uncommon for real-world code to have conditionals that depend solely on constants, such that for a given build the conditional is always true or always false.

## Sure, but then you deserve what you get...

Posted May 24, 2011 22:18 UTC (Tue) by marcH (subscriber, #57642) [Link]

I understand that there is a bad-looking inconsistency between:
- optimization step 1 computes the result of a signed overflow
- optimization step 2 assumes there is never any signed overflow

I did not like the "loophole" sentence because I (mis?)understood it as: "gcc is punishing everyone who has not read the standard, on purpose".
I mean: the inconsistency between step 1 and step 2 is not evil! It is just an accident that happens to be allowed by the standard, and that has benefits in the other cases where step 1 and step 2 do not collide so badly.

## Sure, but then you deserve what you get...

Posted May 24, 2011 23:14 UTC (Tue) by iabervon (subscriber, #722) [Link]

Actually, I suspect that gcc transforms the condition to "val < 0x104030200". How would gcc, a C program, compute the result of a signed overflow without risking crashing? It's more comprehensible as: "step 1 assumes that, because there is no signed overflow, the arbitrary-precision value of the expression 0x03020100 + (i*0x04040404) is the value of val; step 2 notices that the condition is always true due to the limited range of the variable."

This also avoids the issue that it would be really hard to determine that "val != 0x04030200" is always true without determining that the code actually hits a signed overflow.

Actually, the first transformed code is probably:

```c
int *value;
int val;

for (val = 0x03020100; val < 0x104030200; val += 0x04040404, value++)
    *value = val;
```

Eliminating "i" is reasonably likely to be huge on x86 come register allocation, so it's a good optimization if it works. And gcc can assume it works because the programmer can't let signed arithmetic overflow. Of course, at this point, it already doesn't work as expected; the second optimization just makes the assembly confusing. The second optimization is really for "if (size >= 0x100000000) return -EINVAL;" where the programmer cares about a 32-bit limit in code that could be built with either 32-bit or 64-bit ints; in some builds it's important, and in all builds it's correct, but the compiler can eliminate it in cases where it doesn't matter.

Posted May 25, 2011 5:12 UTC (Wed) by khim (subscriber, #9252) [Link]

> How would gcc, a C program, compute the result of a signed overflow without risking crashing?

Have you seen this page? Specifically the libraries part? Do you know why GMP, MPFR and MPC are requirements, not options? Actually, I think this particular optimization does not use multiple-precision arithmetic, but where it is needed in some passes, it is available to GCC despite the fact that it's a C program.

P.S. Note that if you really want "overflow", not "undefined behavior", you can do that, and the fact that GCC is a C program does not stop you.

Posted May 25, 2011 5:44 UTC (Wed) by iabervon (subscriber, #722) [Link]

Those libraries compute the well-defined results of calculations with large numbers; they don't give the result of signed overflow. They aren't going to help for figuring out the results of running (unoptimized, on the target machine):

```c
if (MAXINT + 1 == -MAXINT)
    printf("Wow, 1's complement!\n");
```

because that's a matter of processor architecture, not mathematics. And certainly gcc isn't going to try eliciting the undefined behavior itself and replicating it, because the undefined behavior might be "your compiler crashes processing unreachable code", which the C specification doesn't allow.

Posted May 26, 2011 8:27 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Those libraries compute the well-defined results of calculations with large numbers; they don't give the result of signed overflow.

Of course they can do this; the latter is just one modulus operation away from the former.

> because that's a matter of processor architecture, not mathematics.

Surprise: processor architectures are rooted in arithmetic. All of them, even though they differ from each other.

Posted May 27, 2011 9:45 UTC (Fri) by dgm (subscriber, #49227) [Link]

> Surprise: processor architectures are rooted in arithmetic. All of them, even though they differ from each other.

Still not a matter of mathematics: both 1's and 2's complement are equally correct. It is a matter of the choice made by the processor designers, and thus of processor architecture.

Posted May 27, 2011 14:03 UTC (Fri) by marcH (subscriber, #57642) [Link]

It's a matter of both: 1. Choose the architecture, 2. Apply the maths specific to this architecture.

GMP or the like can help with the latter (of course not with the former).

## Sure, but then you deserve what you get...

Posted May 25, 2011 1:27 UTC (Wed) by foom (subscriber, #14868) [Link]

Yes, it has optimization benefits. That doesn't mean it's actually a good feature.

Overloading "signedness" with "cannot overflow" makes no sense; it's simply an accident of history.
Back in history, though, C compilers weren't smart enough to take advantage of the leeway given: they in fact did exhibit implementation-defined behavior, not undefined behavior, in the face of a signed overflow. They acted like the hardware acts upon signed overflow. It's only fairly recently that optimizers have taken advantage of this loophole in the standard, which allows them to blow up your program if you have any signed int overflows.

Of course, assuming that an unsigned int cannot overflow would also have optimization benefits! Does it really make sense that a loop gets slower just because you declare the loop counter as an "unsigned int" instead of an "int"?

## Sure, but then you deserve what you get...

Posted May 25, 2011 2:48 UTC (Wed) by iabervon (subscriber, #722) [Link]

Actually, if the compiler were forced to consider overflow, it would just have to think a little harder before making the same optimizations. The compiler could determine that 1<<32 / GCD(1<<32, 0x04040404) > 256/4 and thus that the first time i >= 256/4, val == 0x04030200 with unsigned math, and val != 0x04030200 before that. Your loop would have to get slower (or, more likely, take an additional register) if it was going to use the same value in "val" multiple times, simply because it becomes necessary to track another piece of information.

(Also note that it doesn't matter if you declare the loop counter as an "unsigned int"; the nominal loop counter actually gets discarded entirely, in favor of a proxy loop counter, which is what could overflow.)

## Sure, but then you deserve what you get...

Posted May 25, 2011 3:31 UTC (Wed) by iabervon (subscriber, #722) [Link]

Actually, if the compiler is forced to consider overflow, it just has to think a little harder before making the same optimizations. The compiler can determine that 1<<32 / GCD(1<<32, 0x04040404) > 256/4 and thus that the first time i >= 256/4, val == 0x04030200 with unsigned math, and val != 0x04030200 before that. Your loop would have to get slower (or, more likely, take an additional register) if it was going to use the same value in "val" multiple times, simply because it becomes necessary to track another piece of information.

(Also note that it doesn't matter if you declare the loop counter as an "unsigned int"; the nominal loop counter actually gets discarded entirely, in favor of a proxy loop counter, which is what could overflow.)

With gcc 4.4.5, replacing "int val" with "unsigned int val" makes it actually generate what you would expect of:

```c
for (val = 0x03020100; val != 0x04030200; val += 0x04040404, value++)
    *value = val;
```

which avoids the "i = 0", uses "*value = val" (a simpler instruction than "value[i] = val"), and needs one fewer register; and it still actually works. If the constant you're adding is 0x04000000, the optimization doesn't work, and gcc produces the slower code. The code it produces for "unsigned int" is the fastest working code possible, so it's not getting slower in any meaningful way by using "unsigned int" (I mean, the int version loops faster without testing the end condition, but...).

## Sure, but then you deserve what you get...

Posted May 26, 2011 17:21 UTC (Thu) by anton (guest, #25547) [Link]

> Does it really make sense that a loop gets slower just because you declare the loop counter as an "unsigned int" instead of an "int"?

It gets slower? Doesn't the code with int loop infinitely when compiled with gcc? Great optimization!

## Sure, but then you deserve what you get...

Posted May 28, 2011 20:48 UTC (Sat) by BenHutchings (subscriber, #37955) [Link]

> C compilers weren't smart enough to take advantage of the leeway given: they in fact did exhibit implementation-defined behavior, not undefined behavior, in the face of a signed overflow. They acted like the hardware acts upon signed overflow.

Which was to crash, in many cases. Signed overflow caused a processor exception, just like division by zero, because the result could not be represented.