
Is this a useful optimization?


Posted Apr 21, 2008 16:00 UTC (Mon) by mikov (guest, #33179)
In reply to: Is this a useful optimization? by wahern
Parent article: GCC and pointer overflows

> Instructions that trap are not going away, if only because they're useful in virtual machines, or the like, to which C can be targeted.

I disagree. A predictable conditional jump is better, simpler, and more efficient than a completely unpredictable trap. Even if the trap doesn't have to go through the OS signal handler (which it probably does on most OSes), it still has to save context, and so on.

One could argue for adding more convenient instructions for detecting overflow and making a conditional jump on it. I strongly suspect that trapping instructions are currently a dead end.
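
To illustrate the branch-based approach: modern GCC and Clang provide the __builtin_add_overflow intrinsic (which did not yet exist when this comment was written), and on x86 it typically compiles to an add followed by a conditional jump on the overflow flag, with no trap involved. A minimal sketch:

    #include <stdio.h>
    #include <limits.h>

    /* Checked addition: __builtin_add_overflow (GCC 5+, Clang) returns
       nonzero if the mathematical result does not fit in *sum.  This is
       the predictable-branch approach: an ordinary conditional jump,
       not a trap. */
    static int checked_add(int a, int b, int *sum)
    {
        return __builtin_add_overflow(a, b, sum);
    }

    int main(void)
    {
        int sum;
        if (checked_add(INT_MAX, 1, &sum))
            puts("overflow caught by an ordinary conditional branch");
        else
            printf("sum = %d\n", sum);
        return 0;
    }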

Anyway, we know x86 doesn't have instructions that trap on overflow. (Well, except "into", but no C compiler will generate that.) Do PPC and ARM have them, and are they used often?

> That Java and C# stipulate a fixed size is useless in practice; it doesn't help in the slightest with the task of constraining ranges, which are almost always defined by the data and similar external context. Any code which silently relies on a Java primitive type wrapping is poor code.

That is not my experience at all. C99 defines "int_least32_t" and the like exactly to address that problem. Many C programmers like to believe that their code doesn't depend on the size of "int", but they are usually wrong. Most pre-C99 code would break horribly if compiled in a 16-bit environment, or one where integer widths are not powers of two, or where arithmetic is not two's complement.
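
A minimal sketch of the C99 approach, using the least-width types from <stdint.h> so the required range is spelled out rather than assumed:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int_least32_t is the smallest type with at least 32 bits, so
           this count stays correct even in a 16-bit environment where a
           plain int would overflow at 32767. */
        int_least32_t total = 0;
        for (int_least32_t i = 0; i < 100000; i++)
            total += 1;
        printf("total = %" PRIdLEAST32 "\n", total);  /* 100000 */
        return 0;
    }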

Honestly, I find the notion that one can write code without knowing how wide the integer types are, in a language that doesn't implicitly handle overflow (unlike Python, Lisp, etc.), absurd.

I am 100% with you on the unsigned types, though.

I also agree that in practice Java is as susceptible to arithmetic bugs as C. However, it is for a different reason than the one you are implying: in practice, Java and C have _exactly_ the same integer semantics.

Java simply specifies behavior that 90% of C programmers mistakenly take for granted: wrap-around on overflow, truncating integer division, and so on.
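
To make the shared semantics concrete, a small C sketch: truncating division is mandated by C99 (and by Java); wrap-around, on the other hand, is defined in C only for unsigned types, while Java guarantees it for signed types as well:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Truncating division: C99 (like Java) requires rounding toward
           zero, so -7 / 2 is -3, not -4, and -7 % 2 is -1. */
        printf("-7 / 2 = %d, -7 %% 2 = %d\n", -7 / 2, -7 % 2);

        /* Wrap-around: defined in C only for unsigned types.  Java also
           guarantees it for signed types; in C, INT_MAX + 1 on a signed
           int is undefined behavior, which is exactly the latitude the
           parent article is about. */
        unsigned int u = UINT_MAX;
        printf("UINT_MAX + 1 = %u\n", u + 1);  /* 0 */
        return 0;
    }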



