
Taming the OOM killer


Posted Feb 16, 2009 9:04 UTC (Mon) by michaeljt (subscriber, #39183)
In reply to: Taming the OOM killer by dlang
Parent article: Taming the OOM killer

> how much swap did you allocate? any idea how much was used?

Definitely too much (1 GB of swap for 2 GB of RAM), as I realised after reading this: http://kerneltrap.org/node/3202. That page was also what prompted my last comment. It seems a bit strange to me that increasing the swap size should hurt system performance so badly in this situation, and I wondered whether this could be fixed with the right tweak, such as limiting the amount of virtual memory available to processes, say to a default of 80 percent of physical RAM. That would still allow large processes to fork, but might catch runaway processes a bit earlier. If I find some time, I will try to work out how to do that (assuming you don't answer in the meantime to tell me why that is a really bad idea, or that there already is such a setting).
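The "80 percent of physical RAM" idea can be tried from a shell today, without any kernel change. A minimal sketch (reading MemTotal from /proc/meminfo and the 80 percent figure are my own illustration, not something from the thread):

```shell
#!/bin/sh
# Sketch: cap a process's virtual memory at 80% of physical RAM
# before launching it. ulimit -v and MemTotal are both in kilobytes.

total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
limit_kb=$((total_kb * 80 / 100))

# The limit applies to this shell and everything it starts.
ulimit -v "$limit_kb"
echo "virtual memory limit: $(ulimit -v) kB"
```

A distribution could set something like this in a profile script, which is one answer to the "user or distribution problem" question below.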



Taming the OOM killer

Posted Feb 16, 2009 15:38 UTC (Mon) by dlang (subscriber, #313)

have you looked into setting the appropriate values in ulimit?

Taming the OOM killer

Posted Feb 17, 2009 8:23 UTC (Tue) by michaeljt (subscriber, #39183)

> have you looked into setting the appropriate values in ulimit?

Indeed. I set ulimit -v 1600000 (given that I have 2 GB of physical RAM) and launched a known bad process (gnash on a page I know it can't cope with). gnash crashed after a few minutes, without even slowing down my system. I just wonder why this is not done by default. One could argue that this is a user or distribution problem, but given that knowledgeable people can change the value anyway, why not set a sane default in the kernel? (Again, say 80% of physical RAM. I tried with 90%, and gnash still caused a noticeable performance degradation.) This is not a rhetorical question; I am genuinely curious.
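The effect described here can be demonstrated without waiting minutes for gnash: impose a tight limit in a subshell and watch an oversized allocation fail immediately instead of dragging the machine into swap. The 100 MB cap and the use of GNU dd as a memory hog are my own illustrative choices:

```shell
#!/bin/sh
# Sketch: a deliberately oversized allocation under ulimit -v fails
# fast with an error, rather than thrashing the whole system.

(
  ulimit -v 100000          # cap this subshell and its children at ~100 MB
  # dd must malloc() one 200 MB input buffer for bs=200M; under the
  # cap that allocation is refused and dd exits non-zero.
  if dd if=/dev/zero of=/dev/null bs=200M count=1 2>/dev/null; then
    echo "allocation succeeded"
  else
    echo "allocation failed under the limit"
  fi
)
```

The subshell matters: the lowered limit dies with it, so the rest of the session is unaffected.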

Taming the OOM killer

Posted Feb 17, 2009 8:29 UTC (Tue) by dlang (subscriber, #313)

simple, the kernel doesn't know what is right for you. how can it know that you don't actually want the program you just started to use all available ram (even at the expense of other programs)?

the distro is in the same boat. if they configured it to do what you want, they would have other people screaming at them that they would rather see the computer slow down than have programs die (you even see people here arguing that).

Taming the OOM killer

Posted Feb 17, 2009 14:27 UTC (Tue) by michaeljt (subscriber, #39183)

> simple, the kernel doesn't know what is right for you. how can it know that you really don't want this program that you start to use all available ram (even at the expense of other programs)

It does take a decision, though: allowing all programmes by default to allocate as much RAM as they wish, even if it is not present, is very definitely a policy decision. Interestingly, Wine fails to start if I set ulimit -v in this way (I can guess why). I wonder whether disabling overcommit would also prevent it from working?
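For the overcommit side of the question, the system-wide knob is the vm.overcommit_memory sysctl. A read-only sketch (the ratio value of 80 is illustrative); in strict mode the kernel refuses any allocation that would push total commitments beyond swap plus overcommit_ratio percent of RAM, so malloc() fails up front rather than the OOM killer firing later. A program like Wine that reserves a large address space without touching it would plausibly hit either that ceiling or a ulimit -v cap:

```shell
#!/bin/sh
# Inspect the current overcommit policy (no root needed to read).

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always, 2 = strict
cat /proc/sys/vm/overcommit_ratio    # percent of RAM counted in mode 2

# To switch to strict accounting (as root):
#   sysctl vm.overcommit_memory=2
#   sysctl vm.overcommit_ratio=80
# CommitLimit in /proc/meminfo then shows the resulting ceiling,
# and Committed_AS how much of it is currently promised.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```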


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds