
Requesting 'real' memory

Posted Feb 1, 2008 20:14 UTC (Fri) by giraffedata (subscriber, #1954)
In reply to: Requesting 'real' memory by zooko
Parent article: Avoiding the OOM killer with mem_notify

You can set vm.overcommit_memory to policy #2. Unfortunately, it isn't entirely clear whether this will banish the OOM killer entirely or just make it very rare.

It's entirely clear to me that it banishes the OOM killer entirely. The only reason the OOM killer exists is that processes sometimes use more virtual memory than there is swap space to hold the contents. With policy 2, virtual memory isn't created in the first place unless there is a place to put its contents.
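
(For reference, policy #2 is selected with "sysctl vm.overcommit_memory=2", i.e. by writing 2 to /proc/sys/vm/overcommit_memory. A minimal sketch of the difference in failure mode, assuming a 64-bit machine whose commit limit is far below 16 TiB:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Assumes vm.overcommit_memory = 2 and that 16 TiB exceeds this
           machine's commit limit (CommitLimit in /proc/meminfo). */
        size_t huge = (size_t)1 << 44;   /* 16 TiB */
        char *p = malloc(huge);
        if (p == NULL) {
            /* Policy 2: the request is refused up front, gracefully. */
            puts("malloc failed immediately; the OOM killer never runs");
            return 0;
        }
        /* Under the default heuristic overcommit, malloc() may instead
           succeed, and the failure arrives here, as the OOM killer,
           when the pages are first touched. */
        memset(p, 0, huge);
        free(p);
        return 0;
    }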



Requesting 'real' memory

Posted Feb 1, 2008 20:47 UTC (Fri) by zooko (guest, #2589)

But doesn't the kernel itself dynamically allocate memory? And when it does so, can't it thereby use up memory so that some user process will be unable to use memory that it has already malloc()'ed? Or do I misunderstand?

Requesting 'real' memory

Posted Feb 1, 2008 21:26 UTC (Fri) by giraffedata (subscriber, #1954)

The kernel reserves at least one page frame for anonymous virtual memory (actually, it's a whole lot more than that, but in theory one frame is enough for all the processes to access all their virtual memory as long as there is adequate swap space).

So any kernel real-memory allocation can fail, and the code is painstakingly written to handle that failure gracefully (more gracefully than killing an arbitrary process). The kernel also allocates memory ahead of time so as to avoid deadlocks and failures at times when there is no graceful way to handle them.
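
(A schematic of that kernel-internal pattern; it is not taken from any real driver, and "struct frob" and frob_setup() are hypothetical names:)

    /* Schematic only: an in-kernel allocation may fail, and the caller
       turns that into an ordinary error code instead of killing anything. */
    static int frob_setup(struct frob *dev, size_t len)
    {
            void *buf = kmalloc(len, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;   /* graceful: the caller just gets an error */
            dev->buf = buf;
            return 0;
    }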

Requesting 'real' memory

Posted Feb 1, 2008 21:50 UTC (Fri) by zooko (guest, #2589)

Right, but I wasn't asking about the kernel's memory allocation failing -- I was asking about the kernel's memory allocation succeeding by using memory that had already been offered to a process as the result of malloc().

Oh -- perhaps I misunderstood and you were answering my question. Are you saying that the kernel will fail to dynamically allocate memory rather than allocate memory which has already been promised to a process (when overcommit_memory == 2)?

Thanks,

Zooko

Requesting 'real' memory

Posted Feb 1, 2008 23:05 UTC (Fri) by giraffedata (subscriber, #1954)

The kernel doesn't use virtual memory at all (well, to be precise let's just say it doesn't use paged memory at all). The kernel's memory is resident from the moment it is allocated, it can't ever be swapped out, and the kernel uses no swap space.

Requesting 'real' memory

Posted Feb 5, 2008 23:05 UTC (Tue) by dlang (subscriber, #313)

The problem you will have when you disable overallocation of memory is that when your 200M Firefox process tries to spawn a 2k program (to handle some MIME type), it first forks, and will need 400M of RAM, even though it will immediately exec the 2k program and never touch the other 399.99M of RAM.

With overallocation enabled this will work. With it disabled you have a strong probability of running out of memory instead.

Yes, it's more reproducible, but it's also a completely avoidable failure.
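
(A sketch of that failure mode under strict accounting; "/bin/true" stands in for the 2k MIME handler, and the error handling is just for illustration:)

    #include <errno.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A large process spawning a tiny helper: under strict accounting
       (policy 2), fork() itself can fail with ENOMEM, because the
       child's whole address space must be backed by RAM plus swap, even
       though exec() would immediately discard nearly all of it. */
    int spawn(const char *path)
    {
        pid_t pid = fork();              /* the commit charge doubles here */
        if (pid < 0) {
            if (errno == ENOMEM)
                fprintf(stderr, "fork: no commit left for a full copy\n");
            return -1;
        }
        if (pid == 0) {
            execl(path, path, (char *)NULL);   /* the copy is dropped on success */
            _exit(127);
        }
        return waitpid(pid, NULL, 0) < 0 ? -1 : 0;
    }

    int main(void) { return spawn("/bin/true"); }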

Requesting 'real' memory

Posted Feb 6, 2008 3:25 UTC (Wed) by giraffedata (subscriber, #1954)

I wonder why we still have fork. As innovative as it was, fork was immediately recognized, 30 years ago, as impractical. vfork took most of the pain away, but there is still this memory resource allocation problem, and some others, and fork gives us hardly any value. A fork-and-exec system call would fix all that.
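
(POSIX does define such a call: posix_spawn() combines the fork and the exec, so an implementation can provide it without ever duplicating the parent's address space. A minimal sketch:)

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "/bin/true", NULL };

        /* One call replaces the fork+exec pair, so there is no moment at
           which two full copies of the parent must be accounted for. */
        int err = posix_spawn(&pid, "/bin/true", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn failed: %d\n", err);
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }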

Meanwhile, if you have the kind of system that can't tolerate even an improbable crash, and it has processes with 200M of anonymous virtual memory, putting up an extra 200M of swap space which will probably never be used is a pretty low price for the reliability of guaranteed allocation.

Requesting 'real' memory

Posted Feb 6, 2008 5:26 UTC (Wed) by dlang (subscriber, #313)

Many people would disagree with your position that vfork is better than fork. (The issue came up on lkml within the last week and was dismissed with something along the lines of 'vfork would avoid this, but the last thing we want to do is push more people to use vfork'.)

I agree that a fexec (fork-exec) or similar call would be nice to have, but it wouldn't do much good for many years (until a significant amount of software actually used it).

As for your comment about just adding swap space to avoid problems with strict memory allocation:

Overcommit will work in every case where strict allocation works without giving out-of-memory errors, and it will also work in many (but not all) cases where strict allocation would result in out-of-memory errors.

If it's trivial to add swap space to avoid the OOM errors under strict allocation, that same swap space can be added along with overcommit, and the system will continue to work in even more cases.

The only time strict allocation will result in a more stable system is when your resources are fixed and your applications are fairly well behaved (and properly handle OOM conditions). Even then, the scenario of one app allocating 99% of your RAM, preventing you from running other apps, is still a very possible situation. The only difference is that the timing of the OOM error is more predictable (assuming you can predict what software will be run, and when, in the first place).

Requesting 'real' memory

Posted Feb 7, 2008 0:35 UTC (Thu) by giraffedata (subscriber, #1954)

Many people would disagree with your position that vfork is better than fork

No, they wouldn't, because I was talking about the early history of fork and comparing the original fork with the original vfork. The original fork physically copied memory. The original vfork didn't, making it an unquestionable improvement for most forks. A third kind of fork, with copy-on-write, came later and obsoleted both. I didn't know until I looked it up just now that a distinct vfork still exists on modern systems.

The only time strict allocation will result in a more stable system is when your resources are fixed and your applications are fairly well behaved (and properly handle OOM conditions)

The most important characteristic of a system that benefits from strict allocation is that there be some meaningful distinction between a small failure and a catastrophic one. If all your memory allocations must succeed for your system to meet requirements, then it's not better to have a fork fail than to have some process randomly killed, and overallocation is better because it reduces the probability of failure.

But there are plenty of applications that do make that distinction. When a fork fails, such an application can reject one piece of work with a "try again later", and a hundred of those rejections are more acceptable than one SIGKILL.
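
(A sketch of that small-failure pattern in a forking server; handle_request() and the 503 reply are hypothetical stand-ins:)

    #include <unistd.h>

    /* Hypothetical stand-in for real per-request work. */
    static void handle_request(int client_fd) { (void)client_fd; }

    /* A failed fork() becomes one rejected piece of work -- a small,
       per-request failure -- rather than a SIGKILL of a whole process. */
    static void serve_one(int client_fd)
    {
        pid_t pid = fork();
        if (pid < 0) {
            static const char msg[] = "503 try again later\n";
            write(client_fd, msg, sizeof msg - 1);
            return;
        }
        if (pid == 0) {
            handle_request(client_fd);
            _exit(0);
        }
        /* parent keeps serving; reaping the child is omitted here */
    }

    int main(void)
    {
        serve_one(STDOUT_FILENO);   /* demo: the "client" is just stdout */
        return 0;
    }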

