LWN: Comments on "Avoiding - and fixing - memory fragmentation"
https://lwn.net/Articles/211505/
This is a special feed containing comments posted to the individual LWN article titled "Avoiding - and fixing - memory fragmentation".

You LWN guys rock.
https://lwn.net/Articles/212465/
rvfh

I'll take the opportunity of your comment to (at last) say that I am a very happy new subscriber too. I want information, and that's exactly what I get with LWN. I'd wanted to subscribe for years! It feels good to finally give something back for the work of the LWN team.

Keep up the excellent work, guys!

Sun, 03 Dec 2006 18:54:19 +0000

Avoiding - and fixing - memory fragmentation
https://lwn.net/Articles/212434/
bluefoxicy

Combining both of the described methods would indeed be excellent. Page Clustering probably contributes much more than lumpy reclaim; Page Clustering keeps memory in a better state and creates a self-stabilizing system.

What interests me about Page Clustering is the movable pages. Even if memory gets fragmented under unusual stress (it's never impossible), movable pages can be moved back into their respective areas. In other words, when memory enters a bad state, the kernel can actually shift it back towards a good state as needed instead of just flailing.

We know from file systems that the Page Clustering technique works wonders when memory is not full; slow memory (disk) with more than 5% free space rarely experiences fragmentation, because good file system drivers cluster related data into the best-fit gap. When file systems fill up, they start fragmenting; similarly, if you actually fill more than 95% of your memory with non-reclaimable allocations, you'll cause this sort of fragmentation.

Movable memory in this analogy would be something like a file system that keeps files and directories contiguous in minimum-size chunks. My home directory has 185 inodes in it, and through bad management I've made it take between 15 and 30 seconds to parse when the disk isn't busy and it isn't cached; if those dentries were moved back together, at least into 64K chunks, it would take only five or six 8 ms seeks and perform as fast as the 1000+ file directories that open in under 2 seconds.

Movable memory DOES allow this in the Page Clustering model, and such an implementation would let memory recover from a sudden spike as needed: a spike of failures to find high-order memory would cause non-releasable memory to be shifted around and grouped back together, returning the system to a state where high-order blocks are again easy to come by.

No real point here, just felt like talking about self-stabilizing systems and tipping my hat to Mel for his excellent design.

Sun, 03 Dec 2006 04:54:23 +0000
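To make the grouping idea in the comment above concrete, here is a minimal toy sketch of dedicating blocks of page frames to either movable or unmovable allocations, in the spirit of Mel Gorman's page-grouping work but not his actual implementation; the sizes, names, and first-fit policy below are invented purely for illustration.

/*
 * Toy illustration of grouping pages by mobility (not the kernel's
 * real implementation).  A small physical "memory" is split into
 * fixed-size blocks; each block is dedicated to either MOVABLE or
 * UNMOVABLE allocations, so unmovable pages cannot end up scattered
 * across every block.  All names and sizes here are invented.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_PAGES        64          /* toy physical memory: 64 pages   */
#define PAGES_PER_BLOCK 8           /* "high-order" block = 8 pages    */
#define NR_BLOCKS       (NR_PAGES / PAGES_PER_BLOCK)

enum mobility { MOVABLE, UNMOVABLE };

static bool page_used[NR_PAGES];                 /* page allocated?        */
static enum mobility block_type[NR_BLOCKS];      /* what a block may hold  */
static bool block_claimed[NR_BLOCKS];            /* block assigned a type? */

/* Allocate one page of the given mobility, claiming a fresh block if needed. */
static int alloc_page(enum mobility type)
{
    /* First try blocks already dedicated to this mobility type. */
    for (int b = 0; b < NR_BLOCKS; b++) {
        if (!block_claimed[b] || block_type[b] != type)
            continue;
        for (int p = b * PAGES_PER_BLOCK; p < (b + 1) * PAGES_PER_BLOCK; p++)
            if (!page_used[p]) {
                page_used[p] = true;
                return p;
            }
    }
    /* Otherwise claim an entirely unused block for this type. */
    for (int b = 0; b < NR_BLOCKS; b++) {
        if (!block_claimed[b]) {
            block_claimed[b] = true;
            block_type[b] = type;
            page_used[b * PAGES_PER_BLOCK] = true;
            return b * PAGES_PER_BLOCK;
        }
    }
    return -1;                                   /* out of memory          */
}

int main(void)
{
    /* Interleave movable and unmovable allocations; because blocks are
     * dedicated by type, the unmovable pages stay confined to a few
     * blocks instead of ruining every high-order block. */
    for (int i = 0; i < 24; i++)
        alloc_page(i % 3 ? MOVABLE : UNMOVABLE);

    for (int b = 0; b < NR_BLOCKS; b++) {
        int used = 0;
        for (int p = b * PAGES_PER_BLOCK; p < (b + 1) * PAGES_PER_BLOCK; p++)
            used += page_used[p];
        printf("block %d: %-9s %d/%d pages used\n", b,
               !block_claimed[b] ? "free" :
               block_type[b] == MOVABLE ? "MOVABLE" : "UNMOVABLE",
               used, PAGES_PER_BLOCK);
    }
    return 0;
}

Run as-is it prints one line per block; with allocations segregated by mobility, the unmovable pages end up confined to a single block rather than being sprinkled across every high-order block, which is what keeps the remaining blocks reclaimable or movable.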
Lumpy reclaim
https://lwn.net/Articles/212416/
jzbiciak

Lumpy reclaim might distort the LRU a bit, but I think you'll see a nice pattern emerge. If a "hot" page gets reclaimed early because it shares a higher-order frame with a "cold" page, the hot page will refault soon and get a new page as an order-0 allocation. I believe this should tend to migrate hot pages to share higher-order frames (see the toy simulation at the bottom of this page). Thus, I'd imagine the performance issues might be temporary and, if combined with Mel's code, rare.

Or, I could just be full of it.

Sat, 02 Dec 2006 23:34:51 +0000

Avoiding - and fixing - memory fragmentation
https://lwn.net/Articles/212010/
jospoortvliet

You know, guys, I love articles like these. They are easy to read and understand for non-kernel hackers, while providing interesting insight into certain kernel-development issues.

Even though you're supporting Novell*, I'm really happy with my LWN subscription...

* Yeah, that's me trying to make a joke.

Thu, 30 Nov 2006 13:07:20 +0000
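As referenced in jzbiciak's comment above, here is a toy simulation of the refault-clustering hunch. It is not kernel code and not how lumpy reclaim is actually implemented: a 16-page toy memory is divided into 4-page blocks, "lumpy" reclaim evicts a whole block that contains a cold page, evicted hot pages refault immediately and are re-placed by a first-fit order-0 allocator, and cold pages stay evicted. Every name and size here is invented for the illustration.

/*
 * Toy simulation of "lumpy reclaim makes hot pages cluster".
 * Not kernel code: slots are FREE, HOT, or COLD; lumpy reclaim
 * evicts an entire block containing a cold page; evicted hot pages
 * refault right away and are re-placed first-fit, one page at a time.
 */
#include <stdio.h>

#define PAGES_PER_BLOCK 4
#define NR_BLOCKS       4
#define NR_PAGES        (PAGES_PER_BLOCK * NR_BLOCKS)

enum state { FREE, HOT, COLD };
static enum state mem[NR_PAGES];

static void show(const char *when)
{
    printf("%-6s ", when);
    for (int p = 0; p < NR_PAGES; p++) {
        if (p && p % PAGES_PER_BLOCK == 0)
            printf("| ");
        printf("%c ", mem[p] == FREE ? '.' : mem[p] == HOT ? 'H' : 'c');
    }
    printf("\n");
}

/* Order-0 allocation on refault: the first free page frame wins. */
static void refault_hot(void)
{
    for (int p = 0; p < NR_PAGES; p++)
        if (mem[p] == FREE) {
            mem[p] = HOT;
            return;
        }
}

int main(void)
{
    /* Start fragmented: every block mixes hot and cold pages. */
    for (int p = 0; p < NR_PAGES; p++)
        mem[p] = (p % 2) ? COLD : HOT;
    show("before");

    /* Repeatedly lumpy-reclaim a block that still holds a cold page. */
    for (int round = 0; round < NR_BLOCKS; round++) {
        int victim = -1;
        for (int b = 0; b < NR_BLOCKS && victim < 0; b++)
            for (int p = b * PAGES_PER_BLOCK; p < (b + 1) * PAGES_PER_BLOCK; p++)
                if (mem[p] == COLD) {
                    victim = b;
                    break;
                }
        if (victim < 0)
            break;

        int hot_evicted = 0;
        for (int p = victim * PAGES_PER_BLOCK;
             p < (victim + 1) * PAGES_PER_BLOCK; p++) {
            if (mem[p] == HOT)
                hot_evicted++;          /* will refault right away */
            mem[p] = FREE;              /* whole block reclaimed   */
        }
        while (hot_evicted--)
            refault_hot();              /* cold pages stay evicted */
    }
    show("after");
    return 0;
}

Starting from blocks that each mix hot and cold pages, the run ends with the hot pages packed into the first two blocks and the last two blocks entirely free, which is the self-sorting behaviour the comment speculates about; whether real workloads behave this nicely is exactly the open question.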