Once more, it's "back to the future" time in computer science (;-))
What you describe here is a superset of a problem we suffered from in the days of the mainframe: optimizing resource usage against a "success" criterion. In those days, one wanted to adjust dispatch priority and disk storage to benefit a program that was overloaded, to get it out of trouble.
Resource management initially allowed one to set guaranteed minimums and to share resources one wasn't using, but only statically. IBM then introduced "goal mode", a scheme that could often diagnose a slowdown dynamically and add more resources in response. It still exists on mainframes.
Modern resource management schemes don't go quite that far. We guarantee minimums, provide maximums so we can't shoot ourselves in the foot, make sharing of unused resources easy, and selectively penalize memory hogs by making them page against themselves.
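On Linux, each of those knobs has a rough counterpart in the cgroup v2 interface. A minimal sketch, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and run as root (the group name "batch" and the sizes are just illustrative):

```shell
# Create a control group for a batch workload.
mkdir /sys/fs/cgroup/batch

# Guaranteed minimum: memory below this is protected from reclaim.
echo 256M > /sys/fs/cgroup/batch/memory.min

# Soft ceiling: above this, the group is throttled and reclaims
# against itself -- the "page against themselves" penalty.
echo 2G > /sys/fs/cgroup/batch/memory.high

# Hard maximum: the foot-shooting limit; exceeding it invokes the OOM killer.
echo 4G > /sys/fs/cgroup/batch/memory.max

# Proportional CPU weight: unused cycles are shared out by this ratio.
echo 50 > /sys/fs/cgroup/batch/cpu.weight
```

Note that all of these are fixed limits set by an administrator up front; none of them express a goal ("keep this job's response time under a second") and let the kernel work backwards to the resources, which is the gap the next paragraph is about.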
We need a modern goal mode: as Linux is a hotbed of resource-management research, I wouldn't be unduly surprised to see it happen here.