I think that this exploit is a cascade failure initiated by the ill-conceived gcc patch. If you look at the original gcc patch submitted by Cygnus, you'll read the following:
"If all paths to a null pointer test use the same pointer in a memory load/store, then the pointer must have a non-null value at the test. Otherwise, one of the load/stores leading to the test would have faulted."
The assumption that "load/stores [that use the null pointer] [...] would have faulted" is generally wrong.
On many common architectures, virtual address 0 can be mapped and used. Most "small" embedded platforms allow its use, and even on x86 a userspace process can ask for a mapping at address 0 (on Linux, subject to the vm.mmap_min_addr restriction).
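To make the x86 claim concrete, here is a hedged sketch (the function name `try_map_page_zero` is mine, not from any real code) of asking Linux for a mapping at virtual address 0. On most modern systems the default vm.mmap_min_addr setting makes this fail, but where it is permitted, "NULL" becomes a perfectly readable address:

```c
#define _GNU_SOURCE
#include <sys/mman.h>

/* Attempt to map one page at virtual address 0.
 * Returns 0 if the kernel granted a mapping at address 0,
 * -1 if it refused (the usual outcome on a modern Linux box,
 * where vm.mmap_min_addr > 0 blocks low mappings). */
int try_map_page_zero(void)
{
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED)
        return -1;          /* refused: page zero stays unmapped */
    munmap(p, 4096);        /* clean up the demonstration mapping */
    return 0;               /* page zero was mappable here */
}
```

Whether this succeeds is a policy decision of the running kernel, not a property of the architecture, which is exactly the point: the hardware itself has no problem with loads from address 0.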
I think that the optimization was ill-conceived. A memory load/store should not be assumed to fault unless the underlying architecture, or at least the target, universally guarantees that. In this case the target guarantees nothing of the sort, and the optimization is simply broken; I don't think there's much more to it. As a workaround, all Linux kernel builds should disable this particular optimization (gcc's -fdelete-null-pointer-checks can be switched off with -fno-delete-null-pointer-checks).
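To show the failure mode, here is a minimal sketch of the vulnerable pattern (the struct and function names are illustrative, not taken from the kernel source). The pointer is loaded through before it is tested for NULL, which is exactly the situation the patch's reasoning applies to:

```c
#include <stddef.h>

struct device { int flags; };   /* illustrative stand-in type */

/* With gcc's -fdelete-null-pointer-checks (enabled at -O2),
 * the NULL test below may be deleted: the compiler reasons that
 * if dev were NULL, the load on the first line would already have
 * faulted. On a target where address 0 is mappable, that reasoning
 * is wrong, and the safety check silently disappears. */
int get_flags(struct device *dev)
{
    int f = dev->flags;     /* dereference happens first */
    if (dev == NULL)        /* gcc may remove this test entirely */
        return -1;
    return f;
}
```

Compiling with `gcc -O2 -S` and inspecting the assembly shows the comparison gone; adding `-fno-delete-null-pointer-checks` keeps it in place.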
I deal daily with code where the compiled C code dereferences NULL pointers simply because the underlying architecture supports it, and disallowing it wastes one byte/word of RAM; on a chip with only 256 bytes of RAM you may not want that. On x86 in real mode, the interrupt vector table starts at address 0, and some odd tasks like copying that table have to start reading from 0.
Methinks the NULL pointer concept should be purged from the C standard: it is really up to the developer to ensure that a pointer is valid, and using magic values to indicate invalid pointers is very much environment-dependent. It has no place in an otherwise platform-agnostic language standard, IMHO.