Instructions that trap are not going away, if only because they're useful in virtual
machines--or the like--to which C can be targeted.
Relying too heavily on pointer arithmetic in algorithms is not the smartest thing to do. The
largest integral type supported on the computer I'm typing on is 64-bit (GCC, long long), but
pointers are only 32 bits. Parsing a 5 GB ISO Base Media file (a.k.a. QuickTime .mov), I can
keep track of various information using the unsigned 64-bit integral type; if I had written all
my algorithms to rely on pointer arithmetic to store or compute offsets, I'd be screwed.
C precisely defines the behavior of overflow for unsigned types. Java's primitives suck
because they're all signed; the fact that it wraps (because Java effectively stipulates a
two's-complement implementation) is useless. In fact, I can't even remember the last time I
used (or at least wanted) a signed type, in any language. Having to deal with that extra
dimension is a gigantic headache, and it's worth noting that Java is just as susceptible to
arithmetic bugs as C. I'd argue more so, because unwarranted reliance on such behavior invites
error, and such reliance is easier to justify or excuse in Java precisely because Java
stipulates the representation so narrowly.
C's integral types are in some ways superior to many other languages' specifically because
they're so loosely defined by the spec. Short of transparently supporting big integers, that
looseness forces you to focus more on values than on representations. That Java and C#
stipulate a fixed size is useless in practice; it doesn't help in the slightest with the task
of constraining range, which is almost always defined by the data and similar external
context. Any code which silently relies on a Java primitive type wrapping is poor code.
Comments are always second best to masks and other techniques that let the code speak for
itself more clearly than a comment ever could.
A better system, of course, would utilize type ranges a la Ada.
Anyhow, I know the parent's point had more to do with pointers, but this just all goes to show
that good code doesn't rely on underlying representations, but only the explicit logic of the
programmer, and the semantics of the language.