
Signed overflow optimization hazards in the kernel

Posted Aug 17, 2012 6:45 UTC (Fri) by wahern (subscriber, #37304)
In reply to: Signed overflow optimization hazards in the kernel by baldrick
Parent article: Signed overflow optimization hazards in the kernel

You have that in reverse. Conversion to unsigned is always well defined. Conversion to signed where the value cannot be represented is implementation-defined:

C99 6.3.1.3 Signed and unsigned integers
  1. When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
  2. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.49)
  3. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
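For concreteness, a small illustration of that asymmetry (the output shown for the signed case assumes GCC's choice; the standard only guarantees the unsigned one):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* Conversion to an unsigned type is always well defined:
	 * -1 is brought into range by adding UINT_MAX + 1, giving UINT_MAX. */
	unsigned int u = (unsigned int)-1;

	/* Conversion to a signed type that cannot represent the value is
	 * implementation-defined: GCC yields -1 here, but the standard also
	 * permits other results or an implementation-defined signal. */
	int s = (int)UINT_MAX;

	printf("%u %d\n", u, s);	/* 4294967295 -1 with GCC on x86 */
	return 0;
}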



Signed overflow optimization hazards in the kernel

Posted Aug 17, 2012 21:57 UTC (Fri) by pdewacht (subscriber, #47633) [Link] (2 responses)

But given that Linux is only intended to be compiled by gcc, we can rely on its implementation-defined behavior:
The result of, or the signal raised by, converting an integer to a signed integer type when the value cannot be represented in an object of that type (C90 6.2.1.2, C99 6.3.1.3).
For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type; no signal is raised.

(and I don't see how any compiler for a two's complement computer could define different behavior.)
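A quick check of that documented behavior (assuming GCC on the usual two's complement targets, with an 8-bit signed char and a 32-bit int):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* GCC reduces the value modulo 2^N for a signed type of width N and
	 * picks the representative that fits the type: 200 mod 256 -> -56. */
	printf("%d\n", (int)(signed char)200);	/* prints -56 */

	/* Same rule for int: UINT_MAX mod 2^32 -> -1. */
	printf("%d\n", (int)UINT_MAX);		/* prints -1 */
	return 0;
}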

Signed overflow optimization hazards in the kernel

Posted Aug 18, 2012 19:48 UTC (Sat) by PaulMcKenney (✭ supporter ✭, #9624) [Link]

True enough!

However, the Linux kernel's code can be legitimately used in any GPLv2 project, including those that might run on systems with non-two's-complement signed integer arithmetic. This sharing of code among compatibly licensed projects is a very good thing, in my view.

Which means that in this case, where there is a solution that meets the C standard and loses nothing by doing so (at least on x86 and Power), it only makes sense to use that C-standard solution.
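One portable pattern along those lines (a sketch only, not necessarily the fix the article settled on; add_checked is a hypothetical name) is to test for overflow before it can happen, which stays within defined behavior on any conforming implementation, two's complement or not:

#include <limits.h>
#include <stdio.h>

/* Hypothetical helper: add two ints, reporting overflow instead of relying
 * on wraparound.  The checks run before the addition, so no signed overflow
 * occurs and no implementation-defined conversion is needed. */
int add_checked(int a, int b, int *sum)
{
	if (b > 0 && a > INT_MAX - b)
		return -1;	/* would overflow */
	if (b < 0 && a < INT_MIN - b)
		return -1;	/* would underflow */
	*sum = a + b;
	return 0;
}

int main(void)
{
	int sum;
	printf("%d\n", add_checked(INT_MAX, 1, &sum));	/* -1: rejected */
	printf("%d\n", add_checked(40, 2, &sum));	/* 0: sum is 42 */
	return 0;
}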

Signed overflow optimization hazards in the kernel

Posted Aug 18, 2012 23:58 UTC (Sat) by giraffedata (guest, #1954) [Link]

The result of ... converting an integer to a signed integer type when the value cannot be represented in an object of that type ...

For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type;

I must be reading that wrongly, because that's not at all what GCC does. With -m32, int is a signed integer type of width 32. UINT_MAX reduced modulo 2^32 is UINT_MAX, which is not within the range of int. So this does not describe what (int)UINT_MAX does.

Rather, what GCC does appears to be the opposite of what the standard requires for conversion from a negative number to an unsigned integer (add UINT_MAX+1 until it fits): it subtracts UINT_MAX+1 until the value is within the range of int (in this case -1).
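A quick experiment (assuming GCC with a 32-bit int) matches that reading, with the subtraction done in a wider type so the check itself stays well defined:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned int x = UINT_MAX;

	/* What GCC produces for the conversion... */
	printf("%d\n", (int)x);						/* -1 */

	/* ...equals UINT_MAX with UINT_MAX + 1 subtracted once. */
	printf("%lld\n", (long long)x - ((long long)UINT_MAX + 1));	/* -1 */
	return 0;
}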

