Systems running PHP are naturally beset with more than the usual number of
challenges from the outset. In some cases, though, things can get even worse.
For example, a common configuration for PHP web servers includes
Apache's prefork MPM, mod_php, and a PHP opcode cache using
shared memory. In certain failure modes, every request serviced by
PHP results in a segfault. Enabling core dumps can then lead to 10-20
dumps per second, each attempting to write a 150-200MB core
file, leaving the whole system entirely unresponsive for many minutes.
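For context, core dumps are typically enabled by raising the RLIMIT_CORE
resource limit, whose soft limit defaults to zero on many systems; a minimal
sketch of doing so from within a process (the equivalent of the shell's
"ulimit -c") might look like this. The snippet is purely illustrative and is
not part of any patch discussed here:

    /* Minimal sketch: enable core dumps for the current process by
     * raising the soft RLIMIT_CORE limit up to the hard limit. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max;   /* soft limit up to the hard limit */
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* A fatal signal (e.g. SIGSEGV) will now produce a core file. */
        return 0;
    }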
Edward's response to this non-fun situation was a patch limiting the number
of core dumps that can
be underway simultaneously; any dump that would exceed the limit is
simply skipped.
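The patch itself is not reproduced here, but the idea can be sketched in
user-space C; the limit value and the names used below are hypothetical:

    /* Conceptual sketch (not the actual patch): skip a core dump when
     * too many are already in flight, tracked with an atomic counter. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static const int core_dump_limit = 4;   /* hypothetical cap */
    static atomic_int core_dump_count;      /* dumps currently underway */

    static bool try_begin_core_dump(void)
    {
        /* Optimistically claim a slot; back out if over the limit. */
        if (atomic_fetch_add(&core_dump_count, 1) >= core_dump_limit) {
            atomic_fetch_sub(&core_dump_count, 1);
            return false;                   /* skip this dump entirely */
        }
        return true;
    }

    static void end_core_dump(void)
    {
        atomic_fetch_sub(&core_dump_count, 1);
    }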
It was generally agreed that a better approach would be to limit the I/O
bandwidth of offending processes when contention gets too high. That
approach is not entirely straightforward to implement, though, especially
since core dumps are considered special and are not subject to normal
bandwidth control. So what is likely to happen instead is a variant of
Edward's patch where processes trying to dump core simply wait if too many
others are already doing the same.
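A sketch of that waiting variant, again hypothetical rather than the
eventual patch, using a counting semaphore so that excess dumpers block
until a slot frees up instead of skipping the dump:

    /* Conceptual sketch (not actual kernel code): processes wanting to
     * dump core wait on a counting semaphore rather than being skipped. */
    #include <semaphore.h>

    #define CORE_DUMP_LIMIT 4                /* hypothetical cap */

    static sem_t core_dump_sem;

    static void core_dump_init(void)
    {
        /* Allow up to CORE_DUMP_LIMIT dumps to proceed concurrently. */
        sem_init(&core_dump_sem, 0, CORE_DUMP_LIMIT);
    }

    static void begin_core_dump(void)
    {
        sem_wait(&core_dump_sem);            /* block while at the limit */
    }

    static void end_core_dump(void)
    {
        sem_post(&core_dump_sem);            /* release the slot */
    }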