
More memory - less OOM situations

Posted Jun 20, 2010 8:53 UTC (Sun) by efexis (guest, #26355)
In reply to: More memory - less OOM situations by giraffedata
Parent article: Another OOM killer rewrite

Yep, I used to set it in my own code to protect the rest of the system from my mistakes, and I still do when I'm forced to use a system without memory cgroups. But many of the servers I help maintain host sites for people developing in PHP who 'coincidentally' (seeing as this is respectable LWN, not Slashdot!) have no clue about the underlying system or scalability, and they do all kinds of crazy things!

Also, I guess I don't really understand the internals of how rlimits work: what they apply to, how the cost of shared memory is accounted, and so on, whereas I followed the development of cgroups so I have a better understanding there. Do processes inherit the parent's limits but get their own counters? So if you set, say, a 10M data limit, you can end up with three processes using 5M each just fine? I think that's what you meant by a new set of limits -- or do you mean forked processes get reset to the hard limit, or to no limit? I guess the user-wide process limit is the exception?

It's hard to tell the answers to these just from the man pages, which is all the looking I've done so far. A few minutes of experimenting would probably let me figure it out, but I'd be grateful for any main/quick details you wouldn't mind sharing off the top of your head. There are still a couple of older web servers without cgroup support that I'm trying to help family with, so rlimits would no doubt be helpful!

Cheers :-)



More memory - less OOM situations - rlimits

Posted Jun 20, 2010 17:16 UTC (Sun) by giraffedata (subscriber, #1954) [Link]

> I guess I don't really understand the internals of how rlimits work

But what you're asking about is externals. I don't know much about the internals myself.

Some of this is hard to document because the Linux virtual memory system keeps changing. But I think this particular area has been neglected enough over the years that the answer that I researched about 10 years ago is still valid.

The rlimit is on vmsize (aka vsize): the total amount of virtual address space in the process. That includes shared memory, memory-mapped files, memory-mapped devices, and pages that use no memory or swap space because they're implicitly zero. Procps 'ps' reports this figure with --format=vsize (it's also "sz" in ps -l, but in pages, while "vsize" is in KiB).
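If you want to see that for yourself, here's a minimal sketch of my own (it assumes Linux and RLIMIT_AS, which is the vmsize limit): once the limit is set, a large mmap() fails with ENOMEM even though the pages would be untouched zero-fill and use no actual memory.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/resource.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* 64 MiB soft and hard limit on virtual address space. */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Try to map 128 MiB -- more than the limit. The pages would be
         * zero-filled and never touched, but they still count against
         * vmsize, so the mapping is refused. */
        void *p = mmap(NULL, 128 * 1024 * 1024, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            printf("mmap failed as expected: %s\n", strerror(errno));
        else
            printf("mmap unexpectedly succeeded\n");
        return 0;
    }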

A new process inherits all its parent's rlimits, so a parent with a 10M limit can use 5M and fork two children that do likewise and use a total of 15M just fine.
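A quick way to convince yourself, again a sketch of my own assuming RLIMIT_AS on Linux: give every process a 256 MiB address-space budget, then have the parent and two children each map 192 MiB. One nuance worth knowing: a child's counter starts at whatever the parent's vmsize was at fork time, since fork() duplicates the mappings, so fork before allocating if you want each process to have its full budget.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <sys/wait.h>

    #define MiB (1024UL * 1024UL)

    static void map_or_die(size_t bytes, const char *who)
    {
        void *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror(who);
            exit(1);
        }
        printf("%s: mapped %zu MiB within its own limit\n", who, bytes / MiB);
    }

    int main(void)
    {
        /* Every process gets a 256 MiB vmsize limit, inherited across fork. */
        struct rlimit rl = { 256 * MiB, 256 * MiB };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Fork first: a child's vmsize starts at the parent's vmsize at
         * fork time, because the mappings are duplicated. */
        for (int i = 0; i < 2; i++) {
            if (fork() == 0) {
                map_or_die(192 * MiB, "child");
                _exit(0);
            }
        }
        map_or_die(192 * MiB, "parent");

        while (wait(NULL) > 0)
            ;   /* reap the children */

        /* Three processes, ~576 MiB of new mappings in total, no failures:
         * the limit is per process, not per process tree. */
        return 0;
    }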

The user-wide process rlimit is an exception to the idea that a process can extend its resource allocation through the use of children, but it's not an exception to the basic idea that a child inherits the parent's full limit -- it's just a bizarre definition of limit intended to hack around the basic weakness of rlimit that we've been talking about. Apparently, fork bombs were a big enough problem at some time to deserve a special hack.
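For completeness, here's a sketch of that user-wide behavior (my own illustration, assuming an unprivileged user on Linux): RLIMIT_NPROC is checked against all the processes owned by your uid, so fork() can fail with EAGAIN on the very first iteration if your shell, editor, and so on already put you over the number you set.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/resource.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* Allow this uid at most 5 processes in total, system-wide. */
        struct rlimit rl = { 5, 5 };
        if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        for (int i = 0; i < 10; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                /* EAGAIN here reflects every process you own -- your
                 * shell included -- not just this program's children. */
                printf("fork %d failed: %s\n", i, strerror(errno));
                break;
            }
            if (pid == 0) {
                sleep(2);   /* stay alive long enough to be counted */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;   /* reap whatever did get forked */
        return 0;
    }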

