You're probably right--it will never go away. The easiest way to fix signed overflow is to not use signed integers. Yet people continue to use (int) and (long) reflexively. Java doesn't even have unsigned integers, just for purity's sake. The C++ crowd still debates whether size_t is better than a signed integer. Never mind that in real life, signed arithmetic is rare. SLoC-for-SLoC, the vast majority of arithmetic is mundane management of data for which negative numbers are unnecessary and awkward, and where unsigned overflow is usually entirely and reliably benign. Modulo arithmetic can even act as negative feedback--someone trying to overflow your buffers often produces the opposite of the intended result. Finally, corruption isn't much of an issue because garbage in is garbage out; no software can fix that.
Throwing exceptions on signed overflow will probably increase vulnerabilities. I don't think preventing a small number of privilege-escalation attacks is worth the cost of dramatically increasing DoS attacks.
The symlink issue is a little disconcerting. It'd probably take less time to grep through the entire Debian source archive for "/tmp" and "TMPDIR", blacklist the stupid apps, and replace the bad code with mkstemp(3) or tmpfile(3) than to debate how to hack the kernel to paper over idiocy.
Come to think of it, there is an operating system which takes exactly this approach--fixing classes of vulnerabilities by fixing the code. But the name escapes me at the moment ;)