More memory - less OOM situations

Posted Jun 14, 2010 10:58 UTC (Mon) by alsuren (guest, #62141)
In reply to: More memory - less OOM situations by giraffedata
Parent article: Another OOM killer rewrite

I've always blindly followed the old recommendation of having twice as much swap as RAM, so I don't think I've ever hit the OOM condition. Swappy-swappy-death is no fun though. Is there a way to make the OOM killer detect swappy-swappy-death and kick in then?

Maybe we should all just disable swap entirely, and then forkbomb our machines periodically to see how good the OOM killer really is.



More memory - less OOM situations

Posted Jun 17, 2010 22:47 UTC (Thu) by efexis (guest, #26355) [Link]

I do; I either disable or severely limit swap. The only time I tend to hit OOM is when something is misbehaving (infinite loop plus memory leak, fork bomb, etc.), so I get my system back once swap is exhausted. No matter how much swap there is, it will never be bigger than infinite, so making it big seems to have the single effect of delaying how quickly I can recover the system.

But the best thing I've found to do is just put everything in separate cgroups with memory limits set at around 80% by default, so no single thing can take out the whole system. The OOM killer works just great then, killing what needs to be killed. Example cgroup report:
$ cgroup_report
CGroup                           Mem: Used       %Full
apache                            244,396K      24.43%
compile                               524K       0.03%
mysql51                            81,340K       8.32%
mysql55                            85,756K       8.78%
netserv/courier/authdaemond        16,444K       3.28%
netserv/courier/courier            52,452K       5.24%
netserv/courier/esmtpd-msa            260K  (no limit)
netserv/courier/esmtpd              2,768K       0.55%
netserv/courier/imapd             785,568K      78.55%
netserv/courier/pop3d               8,492K       0.84%
netserv/courier                          0
netserv/courier/webmaild                 0
netserv                           176,320K      12.03%
network                               300K       0.03%
rsyncd                                264K       0.10%
sshd                              282,904K      28.29%
system/incron                            0
system                             15,588K       1.55%
system/watchers/sif                 1,144K       3.66%
system/watchers                          0
.                                   6,936K  (no limit)

Systems without cgroups though, yikes -- when they go wrong they make my head hurt. I'm sure cgroups are the answer to everything :-p
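
(In case it's useful, here's a rough sketch of the kind of setup I mean, using the v1 memory controller; the mount point, group name, PID variable, and numbers are just placeholders, not my real configuration:)

$ mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory    # if not already mounted
$ mkdir /sys/fs/cgroup/memory/apache
$ echo $((800 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes   # ~80% of 1 GiB
$ echo $APACHE_PID > /sys/fs/cgroup/memory/apache/tasks     # move the web server into the group

Once the group exceeds its limit and reclaim fails, the OOM killer is invoked for that cgroup alone, which is exactly the containment I'm describing.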

More memory - less OOM situations

Posted Jun 19, 2010 1:52 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

The old way to do this is with rlimits. I've always set rlimits on vsize of every process -- my default is half of real memory (there's a bunch of swap space in reserve too). Before Linux, rlimits (under a different name) were the norm, but on Linux the default is unlimited and I think I'm the only one who changes it.

Rlimits have a severe weakness in that a process just has to fork to get a whole fresh set of limits, but they do catch the common case of the single runaway process.
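
For example, to impose that kind of limit from a shell (a rough sketch -- the 2 GiB figure assumes a machine with 4 GiB of real memory, and some_daemon is just a stand-in for whatever you start):

# bash's ulimit -v takes KiB and sets the vsize (RLIMIT_AS) limit
# for this shell and everything it starts
ulimit -v $((2 * 1024 * 1024))
exec some_daemon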

More memory - less OOM situations

Posted Jun 20, 2010 8:53 UTC (Sun) by efexis (guest, #26355) [Link]

Yep, I used to set it in my own code to protect the rest of the system from my mistakes, and I still do when I'm forced to use a system without memory cgroups. But many of the servers I help to maintain host sites for people developing in PHP, who 'coincidentally' (seeing as this is respectable LWN, not Slashdot!) have no clue about the underlying system or scalability, and they do all kinds of crazy things!

Also, I guess I don't really understand the internals of how rlimits work -- what they apply to, how the cost of shared memory is accounted, and so on -- whereas I followed the development of cgroups, so I understand those better. Do processes inherit the parent's limits but get their own counters? So if you set, say, a 10M data limit, you can end up with three processes using 5M each just fine? I think that's what you meant by a new set of limits -- or do forked processes get reset to the hard limit, or to no limit at all? I guess this is with the exception of the user-wide process limit?

It's hard to tell the answers to these just from the man pages, which is all the looking I've done so far, though with a few minutes of experimenting I could probably figure it out. Still, I'd be grateful for any quick details you wouldn't mind sharing off the top of your head -- there are still a couple of older web servers without cgroup support that I'm trying to help family with, so rlimits would no doubt be helpful!

Cheers :-)

More memory - less OOM situations - rlimits

Posted Jun 20, 2010 17:16 UTC (Sun) by giraffedata (subscriber, #1954) [Link]

> I guess I don't really understand the internals of how rlimits work

But what you're asking about is externals. I don't know much about the internals myself.

Some of this is hard to document because the Linux virtual memory system keeps changing. But I think this particular area has been neglected enough over the years that the answer that I researched about 10 years ago is still valid.

The rlimit is on vmsize (aka vsize). Vmsize is the total amount of virtual address space in the process. That includes shared memory, memory mapped files, memory mapped devices, and pages that use no memory or swap space because they're implied zero. Procps 'ps' tells you this figure with --format=vsize (it's also "sz" in ps -l, but in pages, while "vsize" is in KiB).
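
A quick way to check it for a running process (the PID here is just a placeholder):

$ ps --format=pid,vsize,comm -p 1234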

A new process inherits all its parent's rlimits, so a parent with a 10M limit can use 5M and fork two children that do likewise and use a total of 15M just fine.
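
To illustrate with the shell (a sketch only; some_program stands in for anything memory-hungry):

( ulimit -v $((10 * 1024))   # 10 MiB of address space for this subshell
  some_program &             # child #1 inherits the same 10 MiB limit...
  some_program &             # ...as does child #2; each is counted separately
  wait )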

The user-wide process rlimit is an exception to the idea that a process can extend its resource allocation through the use of children, but it's not an exception to the basic idea that a child inherits the parent's full limit -- it's just a bizarre definition of limit intended to hack around the basic weakness of rlimit that we've been talking about. Apparently, fork bombs were a big enough problem at some time to deserve a special hack.
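
(For reference, that user-wide limit is RLIMIT_NPROC, which bash exposes as 'ulimit -u'; it counts all of the user's processes together rather than applying per process:)

ulimit -u 500    # this user may have at most about 500 processes in total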

