Memory cgroups will solve that problem for you; they are the number 1 thing I have found that improves system stability in *years*. They are very simple to implement. Assume cgroups is mounted under /cgroup with the memory controller enabled (or, for separate control, mount the memory controller on its own under /cgroup/memory, so you can put tasks under memory control groups without also putting them under the other controllers).
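For reference, a sketch of that separate mount (cgroup v1 syntax; the mount point and the `memcg` device name are just conventions, adjust to taste):

```shell
# Mount only the memory controller at its own mount point,
# so groups here affect memory limits and nothing else.
mkdir -p /cgroup/memory
mount -t cgroup -o memory memcg /cgroup/memory
```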
Create a shell script wrapper for whatever you want to run:
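Something along these lines (a minimal sketch, assuming the cgroup v1 memory controller is mounted at /cgroup/memory; the group name `myapp` and the binary path are placeholders):

```shell
#!/bin/sh
# Create the group (idempotent) and cap it at 1200 MB.
CG=/cgroup/memory/myapp
mkdir -p "$CG"
echo $((1200 * 1024 * 1024)) > "$CG/memory.limit_in_bytes"

# Move this shell into the group; every child it forks inherits it.
echo $$ > "$CG/tasks"

# Replace the shell with the real program, already inside the group.
exec /usr/local/bin/myapp "$@"
```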
That puts it into a 1200 MB group. No matter how many processes it forks, the entire lot cannot go over that 1200 MB, and if it does, the OOM killer kicks in within that group only. You can also put similar lines at the top of scripts in /etc/init.d, for example (obviously without the 'exec' line if you're adding to an existing startup script).
As long as you don't give any group 100% of memory (I tend to put everything in 80% groups by default), no single runaway process or set of processes can ever bring the entire system down, because there is always that remaining 20% it cannot touch.
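The 80% rule can be computed from /proc/meminfo rather than hard-coded; a sketch of an init-script prologue under the same assumptions as above (mount point and group name `myservice` are hypothetical):

```shell
#!/bin/sh
# Cap this service's group at 80% of total RAM.
TOTAL_KB=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
LIMIT_BYTES=$((TOTAL_KB * 1024 / 100 * 80))

CG=/cgroup/memory/myservice
mkdir -p "$CG"
echo "$LIMIT_BYTES" > "$CG/memory.limit_in_bytes"
echo $$ > "$CG/tasks"
# ...the rest of the existing init script continues inside the group,
# so no 'exec' is needed here.
```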