
Requesting 'real' memory

Posted Feb 1, 2008 1:21 UTC (Fri) by zlynx (subscriber, #2285)
In reply to: Requesting 'real' memory by zooko
Parent article: Avoiding the OOM killer with mem_notify

I ran my Linux laptop with strict overcommit enabled for a while.  Unfortunately, it did not
help.  Almost all desktop applications expect memory allocation to succeed.  Judging from some
of the application errors I saw, developers have become very lax about checking for NULL from
malloc.

C++ and Python applications did better: a failed allocation raises an exception, and they have
to do *something* with it.
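
(What makes those languages different is that C++'s operator new throws std::bad_alloc on
failure rather than returning NULL, and Python similarly raises MemoryError, so the failure
cannot be silently ignored.  A minimal sketch of the C++ case:)

    #include <iostream>
    #include <new>
    #include <vector>

    int main()
    {
        try {
            /* Deliberately huge; under strict overcommit this fails
             * cleanly here instead of being overcommitted. */
            std::vector<char> buf(1ULL << 40);
            std::cout << "allocated " << buf.size() << " bytes\n";
        } catch (const std::bad_alloc &e) {
            /* The application has to do *something* with it, even if
             * that is only a clean exit. */
            std::cerr << "allocation failed: " << e.what() << '\n';
            return 1;
        }
        return 0;
    }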



Requesting 'real' memory

Posted Feb 1, 2008 3:28 UTC (Fri) by zooko (guest, #2589)

Even if what you say is true, I would think that this would make the effects of memory
exhaustion more deterministic/reproducible/predictable.

C++ and Python apps, and also C apps that use malloc sparingly, would be less likely to crash
than others, I guess.

Perhaps this degree of predictability isn't enough to be useful.

Requesting 'real' memory

Posted Feb 1, 2008 17:56 UTC (Fri) by zlynx (subscriber, #2285)

I did not notice any extra predictability.  The effect was that desktop programs crashed,
apparently at random, much like with the OOM killer.  And just like with the OOM killer, it
was generally the big stuff that blew up, like Evolution and OpenOffice.  I lost
gnome-terminal a few times.

The C++ and Python apps still crashed; they were simply more polite about it.

By the way, I don't read it that way, but your phrasing "Even if what you say is true" *could*
be offensive.  It seems to be saying that I wrote untruthfully.

Even if you don't see the same effect on your system, I did see it just the way I described it
on mine.

Requesting 'real' memory

Posted Feb 1, 2008 19:34 UTC (Fri) by giraffedata (subscriber, #1954)

Desktop applications aren't where I would expect to see deterministic memory allocation exploited. Allocation failures and crashes aren't such a big deal with these applications because if things fall apart, there's a user there to pick up the pieces. Overcommit and the OOM killer may well be the optimum memory management scheme for desktop systems.

Where it matters is business-critical automated servers. For those, application writers do spend time considering running out of memory -- at least they do in cases where an OOM killer doesn't make it all pointless anyway. They check the success of getting memory and do it at a time when there is some reasonable way to respond to not getting it.

And they shouldn't spend time worrying about freeing up swap space for other processes (i.e. mem_notify is no good). That resource management task belongs to the kernel and system administrator.
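
As an illustration of that pattern, a sketch of a server-side handler (hypothetical names)
that acquires its memory up front, at a point where failure has a reasonable response --
reject one request -- rather than an unreasonable one:

    #include <iostream>
    #include <new>
    #include <string>
    #include <vector>

    /* Hypothetical request handler: per-request memory is acquired
     * up front, so an allocation failure can be answered by shedding
     * this one request instead of crashing the server. */
    bool handle_request(const std::string &payload)
    {
        std::vector<char> workspace;
        try {
            workspace.reserve(payload.size() * 2);  /* worst-case estimate */
        } catch (const std::bad_alloc &) {
            /* This only works if the kernel reports failure here,
             * i.e. an OOM killer doesn't make it all pointless. */
            std::cerr << "rejecting request: out of memory\n";
            return false;
        }
        /* ... process payload into workspace ... */
        return true;
    }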

