Yep, I used to set it in my own code to protect the rest of the system from my mistakes, and I still do when I'm forced to use a system without memory cgroups. But many of the servers I help maintain host sites for people developing in PHP, who 'coincidentally' (seeing as this is respectable LWN, not Slashdot!) have no clue about the underlying system or scalability, and they do all kinds of crazy things!
Also, I guess I don't really understand the internals of how rlimits work: what they apply to, how the costs of shared memory are accounted, and so on; whereas I followed the development of cgroups, so I have a better understanding there. Do processes inherit the parent's limits but get their own counters? So if you set, say, a 10M data limit, can you end up with three processes using 5M each just fine? I think that's what you meant by a new set of limits, or did you mean forked processes get reset to the hard limit, or to no limit? I guess the user-wide process limit is an exception to this?
It's hard to tell the answers to these just from the man pages, which is all the looking I've done so far, though with a few minutes of experimenting I could probably figure it out. Still, any quick details you wouldn't mind sharing off the top of your head would be gratefully received: there are still a couple of older webservers without cgroup support that I'm trying to help family with, so rlimits would no doubt be helpful!