"what you are missing is that linux has for years not actually allocated all that extra ram for a fork, instead it has marked the ram as being shared, but copy-on-write (COW)"
I don't believe I've said anything to contradict this.
On systems with an MMU, fork() copies the page tables rather than the pages themselves, so the two processes share physical RAM until a page is written to.
" so that if the memory is not written to, it is never duplicated."
Whether you've realized it or not, the problem of over-committed memory remains. When the kernel receives a fork() syscall from a large process (imagine a 1.5 GB working set) that uses more RAM than is available for a full copy, it has to choose between two bad options:
1. Deny the request up front due to low-memory constraints, or
2. Over-commit memory, gambling that neither the parent nor the child will write to too many pages.
Both answers are seriously flawed. I gave two examples of applications that demonstrate either the inefficiency of fork() or its risky over-commit behavior.
Most administrators will agree that the OOM killer has no place in a stable production environment. The only way to guarantee that well-behaved processes are not killed is for the kernel to guarantee resources by not over-committing them. That spells trouble for interfaces like fork(), which depend on over-committed memory to work efficiently.
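For reference, Linux exposes this policy choice directly as a sysctl (a configuration sketch, not a recommendation for any particular system):

```shell
# Linux commit-accounting modes:
#   0 = heuristic overcommit (default), 1 = always overcommit,
#   2 = strict accounting (no overcommit; fork() from a large
#       process may fail instead of gambling)
sysctl vm.overcommit_memory
sudo sysctl -w vm.overcommit_memory=2
# Under mode 2, the commit limit is swap + overcommit_ratio% of RAM:
sudo sysctl -w vm.overcommit_ratio=80
```

Mode 2 is exactly the "no over-commit" regime described above, and it is where fork() from a large parent becomes a resource hog or fails outright.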
Without over-committed memory, a large process can find itself unable to issue fork/exec calls to spawn even a small process.
If the parent is a tiny daemon whose only purpose is to spawn children, this isn't such a big deal. It is a disappointment, however, that fork() is either very risky or a resource hog when called from a large parent.
Even if fork had no other problems, this is an excellent reason to seek alternatives.