Once the OOM Killer starts shooting down processes, I can't imagine that system will remain in a usable state much longer. You've run out of memory, and your processes (and work in progress) are gone.
Given that major distros default the OOM Killer to off, who is the target market for the OOM Killer?
Taming the OOM killer
Posted Feb 8, 2009 19:05 UTC (Sun) by anton (guest, #25547)
Once the OOM Killer starts shooting down processes, I can't imagine that system will remain in a usable state much longer.
Concerning memory overcommitment, I think that this is a good idea for most programs (which are not written to survive failing memory allocations). And relying on overcommitment can simplify programming: e.g., allocate a big chunk and put your growing structure there instead of reallocating all the time. And when you do it, do it right (i.e., echo 1 >/proc/sys/vm/overcommit_memory), not the half-hearted Linux default, which gives us the disadvantages of overcommitment (i.e., the OOM killer) combined with the disadvantages of no overcommitment (unpredictable allocation failures).
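As a minimal sketch of that allocate-big-and-grow pattern (my illustration, assuming full overcommit, i.e. vm.overcommit_memory=1): the program reserves a generous anonymous mapping up front, and the kernel backs each page with physical memory only on first write, so the structure grows without ever reallocating and copying.

    /* Sketch: grow a structure inside one large overcommitted mapping.
     * Assumes full overcommit (echo 1 >/proc/sys/vm/overcommit_memory),
     * so the huge reservation succeeds regardless of RAM+swap size. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define RESERVE (64UL << 30)   /* 64 GiB of address space, mostly untouched */

    int main(void)
    {
        /* Anonymous private mapping: address space only; physical pages
         * are allocated lazily, on the first write to each page. */
        char *buf = mmap(NULL, RESERVE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* The "growing structure": just append at the end; no realloc(),
         * no copying, and only the pages actually written consume memory. */
        size_t used = 0;
        for (int i = 0; i < 1000; i++) {
            memset(buf + used, i & 0xff, 4096);
            used += 4096;
        }
        printf("grew to %zu bytes without reallocating\n", used);

        munmap(buf, RESERVE);
        return EXIT_SUCCESS;
    }

The trade-off, of course, is that a write can still fail at fault time rather than at allocation time; that is exactly where the OOM killer enters the picture.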
Concerning critical system programs, those should be written to survive failing memory allocations, should get really-committed memory if the allocation succeeds, and consequently should not be OOM-killed. I have outlined this idea in more depth, and I think that AIX and/or Solaris implement something similar. Instead of my per-process idea, the MAP_NORESERVE flag allows switching between commitment and overcommitment on a per-allocation basis (I am not sure how useful that is, nor about the default of committing memory).
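To make that per-allocation switch concrete, here is a sketch along the lines of mmap(2) (my example, not part of the original comment): the second mapping passes MAP_NORESERVE, asking the kernel to skip commit accounting for it where the overcommit policy allows, so any failure is deferred from mmap() time to page-fault time.

    /* Sketch: per-allocation commitment vs. overcommitment via mmap().
     * Per mmap(2): without MAP_NORESERVE the mapping is accounted, so a
     * later write is guaranteed backing store; with MAP_NORESERVE no
     * reservation is made, and a write may raise SIGSEGV when memory
     * runs out. (Under strict accounting, vm.overcommit_memory=2, the
     * flag may simply be ignored.) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define SZ (1UL << 30)          /* 1 GiB */

    static void *map(int extra_flags)
    {
        void *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    int main(void)
    {
        void *committed = map(0);          /* charged against the commit limit */
        void *lazy = map(MAP_NORESERVE);   /* overcommitted: checked only at fault time */

        printf("committed: %p, noreserve: %p\n", committed, lazy);
        if (committed) munmap(committed, SZ);
        if (lazy)      munmap(lazy, SZ);
        return EXIT_SUCCESS;
    }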