I believe the RHEL support engineers were finding systems with mysterious fork/clone failures
that were caused by the kernel not being able to find 8K of contiguous memory. It's really
easy to allocate 4K since that's the i386 page size, but an order-1 allocation (two physically
adjacent pages) can fail once memory gets fragmented. Big Java programs using a lot of threads
would fail to create a new thread. Apache servers would fail to spawn a new child. Etc.
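
For what it's worth, here's a rough sketch of how those failures surface in userspace: when clone() can't get a contiguous kernel stack, pthread_create() hands back EAGAIN. Caveat: on a real box a loop like this usually dies on a ulimit or address-space exhaustion long before fragmentation bites, so treat it as an illustration of where the error shows up, not as a fragmentation reproducer:

```c
/* Build with: gcc -pthread spawn.c
 * Keep creating threads until the kernel refuses, then report why.
 * On an i386 kernel with 8K stacks, a failed order-1 allocation for
 * the kernel stack would show up here as EAGAIN from pthread_create()
 * (fork() reports the same condition as -1 with errno set to ENOMEM
 * or EAGAIN). */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void *idle_thread(void *arg)
{
    (void)arg;
    pause();   /* park the thread; we only care whether creation worked */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    unsigned long count = 0;
    int err;

    /* pthread_create() returns the error number directly (not via errno). */
    while ((err = pthread_create(&tid, NULL, idle_thread, NULL)) == 0)
        count++;

    printf("created %lu threads before failure: %s\n",
           count, strerror(err));
    return 0;
}
```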
However, since then (2.6.16?) the memory system has also been reworked a bunch, and I don't
know if getting an 8K allocation is still such a problem.
You *would* think those big programs would now be running on x86_64 systems with their 8K
stacks and hitting the same problems, if those problems still existed. Or maybe they get
around it by installing 16 GB of RAM instead.