As you noted yourself, there is a subtle difference between undefined and
implementation-defined behavior. This particular case is clearly an example of
*implementation-defined* behavior. It behaves in the same way on 99% of all CPUs today, and
there is nothing undefined about it. Not only that, but it is not an uncommon idiom, and it
can be useful in a number of situations. How can you call it undefined???
You speak about architectures with different pointer representations, etc. I have heard that
argument before and I do not buy it. Such implementations
mostly do not exist, or if they do, are not an important target for Standard C today, let
alone GCC. (Emphasis on the word "standard").
Considering the direction hardware is going, such architectures are even less likely to
appear in the future. Instructions that trap for any reason are terribly inconvenient for
superscalar or out-of-order (OoO) execution.
Anyway, I think I may not have been completely clear about my point. I am *not* advocating
that the standard should require some particular behavior on integer and pointer overflows. I
am, however, not convinced that it is a good idea to explicitly *prohibit* the behavior that
is natural to most computers in existence.
As it is now, Standard C is (ahem) somewhat less than useful for writing portable programs (to
put it extremely mildly). It is somewhat surprising that the Standard and (apparently) most
compiler implementors are also making it less useful for *non-portable* applications :-)
(I have to admit that years ago I was fascinated with the aesthetic "beauty" of portable C
code, or maybe I liked the challenge. Then I came to realize what a horrible waste of time it
was. Take Java, for example - for all its horrible suckiness, it does a few things right: it
defines *precisely* the size of the integer types, the behavior on overflow, and the integer
remainder of negative numbers.
There is a lot of good in x86 being the universal CPU winner - we can finally forget the
idiosyncrasies of the seventies ...)