
This is an argument for the controller, not against it...

Posted Feb 17, 2011 5:55 UTC (Thu) by khim (subscriber, #9252)
In reply to: Go's memory management, ulimit -v, and RSS control by alkbyby
Parent article: Go's memory management, ulimit -v, and RSS control

In practice, programs don't cope well with the case where malloc() returns 0. Worse: they often DO contain code which tries to do something about it, but in reality that code is poorly tested, buggy, and often destroys more data than it saves.

It's probably a good idea to provide some warning when an application is close to the limit (it's usually much easier to cope with a "low memory" problem than with a "no memory" problem), but that's a separate issue.
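
A minimal sketch of such a "low memory" warning on Linux, assuming an RLIMIT_AS limit is in place; the 90% threshold, the helper name, and the use of /proc/self/statm are illustrative choices, not anything from the comment:

    /* Warn when address-space usage approaches RLIMIT_AS.
     * Linux-specific: usage is read from /proc/self/statm. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    static void warn_if_low_memory(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) != 0 || rl.rlim_cur == RLIM_INFINITY)
            return;                     /* no address-space limit set */

        FILE *f = fopen("/proc/self/statm", "r");
        if (!f)
            return;
        unsigned long pages = 0;
        if (fscanf(f, "%lu", &pages) != 1)
            pages = 0;
        fclose(f);

        unsigned long used = pages * (unsigned long)sysconf(_SC_PAGESIZE);
        if (used > (unsigned long)(rl.rlim_cur / 10 * 9))   /* 90%: arbitrary */
            fprintf(stderr, "warning: %lu of %lu bytes of address space used\n",
                    used, (unsigned long)rl.rlim_cur);
    }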



This is an argument for the controller, not against it...

Posted Feb 18, 2011 1:08 UTC (Fri) by giraffedata (subscriber, #1954)

Yes, killing the process is almost always better than failing the allocation. ISTR seeing Linux do that sometimes for rlimit violations; maybe there's a switch for that.

Besides the fact that programmers just don't take the time to tediously check every memory allocation, there's not much they can do anyway if there is no memory available. It takes a pretty intelligent program to be able to function when the memory well is dry and adjust itself to the available memory. For programs that are that intelligent, there should be a way to do an explicit conditional memory allocation.
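
A sketch of what that split might look like; xmalloc() and try_malloc() are invented names, not an interface from the article:

    #include <stdio.h>
    #include <stdlib.h>

    /* Unconditional allocation: failure kills the process at once,
     * rather than returning NULL into untested recovery code. */
    static void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (!p) {
            fprintf(stderr, "out of memory allocating %zu bytes\n", size);
            abort();
        }
        return p;
    }

    /* Explicit conditional allocation: for the rare program that can
     * actually degrade gracefully, the caller promises to handle NULL,
     * e.g. by shrinking a cache and retrying. */
    static void *try_malloc(size_t size)
    {
        return malloc(size);
    }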

This is an argument for the controller, not against it...

Posted Feb 19, 2011 22:34 UTC (Sat) by nix (subscriber, #2304)

I check most allocations when doing so does not make the program too ugly (which means everywhere other than in tiny string allocations, pretty much). I don't care about functioning under OOM conditions, but being told *which* allocation failed can sometimes point to catastrophic memory leaks and that sort of thing. It's saved my bacon more than once.
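
A minimal sketch of that kind of checking: a hypothetical macro wrapper that records which allocation failed, so an OOM abort points at the offending call site:

    #include <stdio.h>
    #include <stdlib.h>

    static void *xmalloc_at(size_t size, const char *file, int line)
    {
        void *p = malloc(size);
        if (!p) {
            fprintf(stderr, "%s:%d: allocation of %zu bytes failed\n",
                    file, line, size);
            abort();
        }
        return p;
    }

    /* Use XMALLOC() instead of malloc(); __FILE__/__LINE__ identify
     * which allocation failed. */
    #define XMALLOC(size) xmalloc_at((size), __FILE__, __LINE__)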

If a crashing process were hit with SIGABRT, so that it dumped core when it ran the machine out of memory, that might be OK... but the problem is that you can't free up the memory until the core dump is finished, the machine is in dire straits until then, and the core dump is likely gigantic. If there were a way to automatically dump a backtrace... (unfortunately, core_pattern pipes don't help here, because you have to suck in the entire dump to get a backtrace, and if you're out of memory, that probably means writing it to disk...)
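
A sketch of the "automatically dump a backtrace" wish using glibc's backtrace facility; this is one possible approach, not something proposed in the thread:

    /* On SIGABRT, write raw return addresses to stderr and skip the
     * core dump.  backtrace_symbols_fd() writes straight to an fd and
     * so avoids malloc(), which an OOM-stricken process cannot afford;
     * note that backtrace() is not formally async-signal-safe, and
     * glibc may allocate on its first call, so we prime it at startup. */
    #include <execinfo.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void abrt_handler(int sig)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        _exit(128 + sig);               /* exit without dumping core */
    }

    int main(void)
    {
        void *prime[1];
        backtrace(prime, 1);            /* force glibc's lazy setup now */
        signal(SIGABRT, abrt_handler);
        abort();                        /* demo: backtrace, no core */
    }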

