LWN: Comments on "Expedited memory reclaim from killed processes" https://lwn.net/Articles/785709/ This is a special feed containing comments posted to the individual LWN article titled "Expedited memory reclaim from killed processes". en-us Sat, 18 Oct 2025 17:50:31 +0000 Sat, 18 Oct 2025 17:50:31 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Expedited memory reclaim from killed processes https://lwn.net/Articles/786914/ https://lwn.net/Articles/786914/ roblucid <div class="FormattedComment"> Why can't the Suricata rules be read from a read-only mapped file, with a separate means for updating &amp; changing rulesets, and then the equivalent of SIGHUP to get the ruleset re-read by the container processes, perhaps via a counter incremented in some ruleset RW metadata so that containers can pick up a newer version as they restart?<br> <p> The disk-backed data files can be shared amongst thousands of VMs, right?<br> <p> Then the VM system can be sure it's safe to fork without committing much memory and the apparent need for over-commit vanishes. 
I admit I haven't tried it, as I used VMs for isolation and jails for data sharing rather than this kind of efficiency hack, but conceptually I don't see why software developed in a stricter world couldn't handle the case reasonably.<br> <p> Sparse arrays are perhaps a better case for over-commit, but again I wonder whether memory-mapped files and/or smarter data structures wouldn't be feasible for the programs which actually deliberately require these features, rather than needing them by accident due to a permissive environment.<br> </div> Fri, 26 Apr 2019 18:26:30 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786748/ https://lwn.net/Articles/786748/ nix <blockquote> JFTR: On Linux, applications can actually handle SIGSEGV, </blockquote> I'd be surprised if there were any Unixes on which this was not true, given that SIGSEGV in particular was one of the original motivations for the existence of signal handling in the first place. Thu, 25 Apr 2019 14:30:35 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786308/ https://lwn.net/Articles/786308/ lkundrak <div class="FormattedComment"> This is, in fact, how the original Bourne shell infamously managed memory: <a href="https://www.in-ulm.de/~mascheck/bourne/segv.html">https://www.in-ulm.de/~mascheck/bourne/segv.html</a><br> </div> Fri, 19 Apr 2019 15:03:15 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786253/ https://lwn.net/Articles/786253/ rweikusat2 JFTR: On Linux, applications can actually handle SIGSEGV, <pre> #include &lt;signal.h&gt; #include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; #include &lt;unistd.h&gt; static void do_brk(int unused) { sbrk(128); } int main(int argc, char **argv) { unsigned *p; signal(SIGSEGV, do_brk); p = sbrk(0); *p = atoi(argv[1]); printf("%u\n", *p); return 0; } </pre> If the signal handler is disabled, this program segfaults. Otherwise, the handler extends the heap and the faulting instruction then succeeds when being restarted. 
SIGSEGV is a synchronous signal; hence, this would be entirely sufficient to implement some sort of OOM-handling strategy in an application, eg, free some memory and retry, or wait some time and retry. Thu, 18 Apr 2019 15:24:44 +0000 Man pages https://lwn.net/Articles/786211/ https://lwn.net/Articles/786211/ corbet The man pages <i>are</i> actively maintained. I am sure that Michael would appreciate a patch fixing the error. Thu, 18 Apr 2019 12:54:19 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786200/ https://lwn.net/Articles/786200/ farnz <p>Ah - on other systems (Solaris, at least, and IRIX had the same functionality under a different name), which do not normally permit any overcommit, it allows you to specifically flag a memory range as "can overcommit". If application-controlled overcommit ever becomes a requirement on Linux, supporting the Solaris (and documented) semantics would be a necessary part. Thu, 18 Apr 2019 07:26:57 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786197/ https://lwn.net/Articles/786197/ thestinger <div class="FormattedComment"> To clarify, I'm quoting the inaccurate description of MAP_NORESERVE. The actual functionality is omitting the memory from heuristic overcommit, which has no impact in the non-overcommit memory accounting mode.<br> <p> Mappings that aren't committed and cannot be committed without changing protections don't have an accounting cost (see the official documentation that I linked), so the way to reserve lots of address space is by mapping it as PROT_NONE.<br> <p> To make memory that has been used not be accounted any longer while keeping the address space, you clobber it with new PROT_NONE memory using mmap with MAP_FIXED. 
It may seem that you could achieve the same thing with madvise MADV_DONTNEED plus mprotect to PROT_NONE, but that doesn't work: the kernel doesn't go back through the mapping to check whether the accounted memory can be reduced (for good reason).<br> </div> Thu, 18 Apr 2019 06:40:54 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786195/ https://lwn.net/Articles/786195/ thestinger <div class="FormattedComment"> See <a href="https://www.kernel.org/doc/Documentation/vm/overcommit-accounting">https://www.kernel.org/doc/Documentation/vm/overcommit-ac...</a> and the sources.<br> <p> The linux-man-pages documentation is often inaccurate, as it is in this case. MAP_NORESERVE does not do what it describes at all:<br> <p> <font class="QuotedText">&gt; When swap space is not reserved one might get SIGSEGV upon a write if no physical memory is available.</font><br> </div> Thu, 18 Apr 2019 06:32:36 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786194/ https://lwn.net/Articles/786194/ thestinger <div class="FormattedComment"> MAP_NORESERVE is a no-op with overcommit disabled or full overcommit enabled. It only has an impact on heuristic overcommit, by bypassing the immediate failure heuristic.<br> </div> Thu, 18 Apr 2019 06:26:07 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786046/ https://lwn.net/Articles/786046/ rweikusat2 <div class="FormattedComment"> Considering that the amount of memory which will be needed isn't known in advance, one would need to use MAP_FIXED in a "known good" location anyway. There are some more complications with this approach as well. 
And the Linux default policy of "real COW", ie, do not copy anything unless it has to be done, and don't (try to) reserve anything unless it's demonstrably required, handles this case just fine.<br> <p> <p> <p> </div> Mon, 15 Apr 2019 19:14:07 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786043/ https://lwn.net/Articles/786043/ farnz <p>You could also, assuming it's backed by an <tt>mmap</tt>ed file, just use <tt>MAP_FIXED</tt> to ensure that all the pointers match in every Suricata process; this works out best on 64-bit systems, as you need a big block of VA space available that ASLR et al won't claim. Mon, 15 Apr 2019 17:30:12 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786042/ https://lwn.net/Articles/786042/ rweikusat2 <div class="FormattedComment"> For completeness, before someone pulls that out of his hat and triumphantly waves it through the air: changing to a custom allocator would have been sufficient, as the parent could have put all of its data into a shared memory segment/memory-mapped file and children could have inherited the mapping via fork.<br> <p> But that's still more complicated than just relying on the default behaviour, based on knowing how the application will use the inherited memory.<br> <p> </div> Mon, 15 Apr 2019 17:10:15 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786041/ https://lwn.net/Articles/786041/ rweikusat2 <div class="FormattedComment"> <font class="QuotedText">&gt; That's a bad design. You shouldn't design software that depends on implicit assumptions.</font><br> <p> This statement means nothing (as it stands).<br> <p> <font class="QuotedText">&gt; A better designed software would store rules in a file and map it explicitly into the target processes. 
This way there's no problem </font><br> <font class="QuotedText">&gt; with overcommit - the kernel would know that the data is meant to be immutable.</font><br> <p> A much more invasive change to suricata (this is an open source project I'm not in any way associated with) could have gotten rid of all the pointers in its internal data structures. Assuming this had been done and the code had also been changed to use a custom memory allocator instead of the libc one, one could have used a shared memory segment/memory-mapped file to implement the same kind of sharing. I'm perfectly aware of this. But this complication isn't really necessary on Linux, as sharing-via-fork works just as well and is a lot easier to implement.<br> </div> Mon, 15 Apr 2019 17:03:41 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786038/ https://lwn.net/Articles/786038/ Cyberax <div class="FormattedComment"> That's a bad design. You shouldn't design software that depends on implicit assumptions.<br> <p> A better designed software would store rules in a file and map it explicitly into the target processes. This way there's no problem with overcommit - the kernel would know that the data is meant to be immutable.<br> </div> Mon, 15 Apr 2019 15:54:48 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/786010/ https://lwn.net/Articles/786010/ farnz <p>I believe opting in to overcommit is already possible, with the <tt>MAP_NORESERVE</tt> flag - which essentially says that the mapped range can be overcommitted, and defines behaviour if you write to it when there is insufficient commit available. <p>There's a bit of a chicken-and-egg problem here, though - heuristic overcommit exists because it's easier for system administrators to tell the OS to lie to applications that demand too much memory than it is for those self-same administrators to have the applications retooled to handle overcommit sensibly. 
<p>And even if you are retooling applications, it's often easier to simply turn on features like <a href="http://static.lwn.net/kerneldoc/vm/ksm.html">Kernel Same-page Merging</a> to cope with duplication (e.g. in the Suricata ruleset in-memory form) than it is to handle all the fun that comes from opt-in overcommit. Mon, 15 Apr 2019 15:01:53 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785995/ https://lwn.net/Articles/785995/ epa <div class="FormattedComment"> That’s certainly a valid use case. Ideally, though, the overcommit would be opt-in, and the side effects would be paid only by the processes that had used it — so if a Suricata process went rogue and started writing to the shared data, causing lots of copy-on-write pages to be allocated, it could be whacked by an OOM killer. <br> <p> Perhaps programming languages could have better support for marking a data structure read-only, which would then notify the kernel to mark the corresponding pages read-only. Then you could allocate the necessary structure and mark it read-only before forking. <br> </div> Mon, 15 Apr 2019 13:56:52 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785991/ https://lwn.net/Articles/785991/ k3ninho <div class="FormattedComment"> oops. thanks.<br> <p> K3n.<br> </div> Mon, 15 Apr 2019 12:20:07 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785958/ https://lwn.net/Articles/785958/ rweikusat2 <div class="FormattedComment"> Something I implemented in the past: Imagine a system with numerous virtualized (container-based) VPN servers which uses suricata for network traffic inspection and alert generation in case of suspicious activity. Suricata processes are typically huge (&gt;3G isn't uncommon). Each virtual VPN server needs its own suricata process, but they all use the same ruleset. 
As the memory used to store the processed rule information isn't ever written to after the corresponding data structures were created, it's possible to use a per-host suricata server process which loads the rules once. A VPN server suricata process is then created by contacting the host suricata server, which forks a child and moves it into the container. All VPN server suricata processes thus share a single<br> copy of the memory used for rule data structures. <br> <p> Without overcommit, that is, lazy memory/swap allocation as the need arises, this wouldn't work (in this way).<br> <p> "Refuse to try because the system might otherwise run out of memory in future" is not a feature. The system might just as well not run out of memory; the kernel doesn't know this. Or it might run out of memory for an entirely different reason. The original UNIX fork worked by copying the forking core image to the swap space, to be swapped in for execution at some later time. This was a resource management strategy for seriously constrained hardware, not a heavenly revelation of the one true way of implementing fork.<br> </div> Sun, 14 Apr 2019 20:15:15 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785952/ https://lwn.net/Articles/785952/ nix <blockquote> which is only moved into dedicated address space for the child when it's changed </blockquote> The address space is always dedicated to the child after fork(), even before CoW. The *physical memory* is not. Sun, 14 Apr 2019 12:45:18 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785947/ https://lwn.net/Articles/785947/ k3ninho <div class="FormattedComment"> <font class="QuotedText">&gt;What about a system low memory watershed?</font><br> The Linux kernel exists in a perpetual low-available-memory state. Inherently, it's a caching strategy that is responsible for a lot of the speed in Linux. 
I don't know the statistics, but several multiples of your machine's physical memory are allocated in the virtual memory address space by a design choice called memory over-commit. <br> <p> But which bits are meaningful over-commit that you'd count towards a system low memory state, and which are merely 'maybe useful'? This question is a cache invalidation problem for those 'maybe useful' bits -- a problem already solved elsewhere in the kernel, which provides a pattern we can follow. That pattern is the least-recently-used list: while you can't predict what user or network interaction comes next, you keep track of access times and then have the bottom end of the list for old and unused items. Pick a level of granularity for parts of a process's memory space and track the least-recently-used bits, hoping to find a ratio of maybe-useful vs definitely-useful memory commit that you'd use to set the line at which the oom-killer gets invoked.<br> <p> This isn't the whole picture -- the fork()+exec() paradigm can leave child processes sharing over-committed memory with their parents, which is only moved into dedicated physical memory for the child when it's changed -- a pattern called copy-on-write. 
We'd need to do more work to be certain that this memory is definitely-useful; for example, it might be read-only state that each child needs from the parent and reads often.<br> <p> There are excellent write-ups in LWN's history:<br> Taming the OOM killer -- <a href="https://lwn.net/Articles/317814/">https://lwn.net/Articles/317814/</a><br> Improving the OOM killer -- <a href="https://lwn.net/Articles/684945/">https://lwn.net/Articles/684945/</a><br> <p> K3n.<br> </div> Sun, 14 Apr 2019 07:21:23 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785938/ https://lwn.net/Articles/785938/ mjthayer <div class="FormattedComment"> <font class="QuotedText">&gt; "It would be great if we can identify urgent cases without userspace hints, so I'm open to suggestions that do not involve additional flags"</font><br> <p> What about a system low memory watershed?<br> </div> Sat, 13 Apr 2019 18:51:49 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785919/ https://lwn.net/Articles/785919/ knan <div class="FormattedComment"> One man's design problem, another man's feature. Cleaning up after a 1.5TB rss process takes time, regardless of overcommit.<br> </div> Sat, 13 Apr 2019 12:18:41 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785901/ https://lwn.net/Articles/785901/ wahern <div class="FormattedComment"> I don't doubt the criticisms of the patch. But in my most recent foray digging into the OOM logic, my general sense is that some of these timing and priority issues are largely a consequence of the complexity added to forestall the OOM killer and minimize the harm. For example, right now I'm helping to deal with the OOM killer being invoked in situations where there's plenty of uncommitted memory, but because of heavy concurrent dirtying the reclaim code attempting/waiting on page flush gives up too soon. 
Without overcommit there would be no reason to give up--"gives up too soon" is a characterization only meaningful in a best-effort, no-guarantee environment. <br> <p> Overcommit seems to have resulted in kernel code that emphasizes mitigations rather than providing consistent, predictable, and tunable behavior. There is no correct or even best heuristic for mitigating OOM. It's a constantly moving target. Any interface to those heuristic mechanisms and policies that looks best today will look horrible tomorrow.<br> <p> One may want to opt into overcommit and wrestle with such lose-lose scenarios, but designing *that* interface (opt-in) is much easier because you're already in a position of being able to contain the system-wide fallout--architectural, performance, code maintenance, etc. Likewise, one may want the ability to cancel ("give up") some operation, but an interface describing when to cancel (a finite set of criteria) is much easier to devise and use than one describing when not to cancel (a possibly infinite set of criteria).<br> <p> So, yeah, I don't doubt that one sh*t sandwich can be objectively preferable to another sh*t sandwich, but the spectacle of sh*t sandwich review is tragic.<br> <p> </div> Sat, 13 Apr 2019 03:55:12 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785896/ https://lwn.net/Articles/785896/ quotemstr <div class="FormattedComment"> Believe me, I'm no fan of overcommit --- but in this specific instance, it's amazingly not overcommit's fault. The same delay in memory reclaim can occur in a strict commit system! It's about the timing of page reclaim, not about (which is the point of overcommit) how many pages get allocated in the first place and from what sources.<br> </div> Sat, 13 Apr 2019 02:05:51 +0000 Expedited memory reclaim from killed processes https://lwn.net/Articles/785892/ https://lwn.net/Articles/785892/ wahern <div class="FormattedComment"> The fundamental design problem is overcommit. 
Everything else is just the interminable nightmare. Blame overcommit on fork, broken 1980s database software, or w'ever. That's almost worse, as it's just an admission that we traded a few years of pain for a lifetime of debilitation.<br> <p> <p> </div> Sat, 13 Apr 2019 00:22:28 +0000