LWN: Comments on "Kswapd and high-order allocations"
https://lwn.net/Articles/101230/
This is a special feed containing comments posted to the individual LWN article titled "Kswapd and high-order allocations".

Why not copy around data in physical RAM?
https://lwn.net/Articles/150919/
Posted Fri, 09 Sep 2005 01:14:33 +0000 by mmarq

I'm not a kernel hacker either... just a curious person trying to understand, so...

"Most pointers contain the virtual address of some memory area, those don't matter. But some actually contain the physical or bus address... If you tried to move those pages around, you'd have a lot of fun handling the random application coredumps and occasional kernel panic."

Then just don't move them! There are plenty of pages in physical memory that are obvious candidates for moving, and they shouldn't get in the way of the ones that cannot be moved, or vice versa. Is it really so foolish to advocate the creation of separate memory pools addressable by the kernel?

My idea (foolish or not) is that pages marked as obvious candidates for swap should not be swapped out immediately, but instead moved, defragmented, into a *reserved* portion of physical memory. That would be useful because I increasingly suspect that what is swappable now may be needed again a second later, given the constant context switching of today's highly threaded applications and services; it would also let kswapd stay lazy and stop it from wasting CPU cycles that would be better spent by proper defragmentation code.

Another idea is that the disk cache should always be kept as two separate physical memory pools, program and data. Better yet, a *program cache pool* could be created, requiring that program pages enter the pool already in contiguous order, i.e. defragmented (which is possible because program bits only change when the program is upgraded, which is almost never in CPU time), rather than being thrown into the general pool where everything competes for whatever 4K page of physical memory happens to be available.

Such a program cache is certainly not a hot requirement for server systems, but it could be a killer feature for workstations and desktops. Unlike a RAM disk, it would be quicker and more versatile: its size could be adjusted dynamically, and it could hold defragmented program pages not only for any required runtime but also for other executables from /bin, /usr/bin or /usr/sbin, chosen by a simple algorithm based on parameters such as how often they are run and how useful they are.

I believe none of this would hurt server performance, and along the way you would get a much bigger pool of contiguously addressable memory pages.

Which cards can handle scatter DMA?
https://lwn.net/Articles/101649/
Posted Fri, 10 Sep 2004 23:19:57 +0000 by smoogen

Well, which cards are currently written to handle this, so as to avoid the high-order allocation problem? And are those cards any good?

Why not copy around data in physical RAM?
https://lwn.net/Articles/101452/
Posted Thu, 09 Sep 2004 14:56:31 +0000 by joern

The big problem with this approach is pointers. Most pointers contain the virtual address of some memory area; those don't matter. But some actually contain a physical or bus address. For example, Ethernet hardware usually has DMA engines and writes to certain pages in main memory. If you tried to move those pages around, you'd have a lot of fun handling the random application core dumps and occasional kernel panics.
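To make joern's point concrete, here is a small userspace toy, not kernel code: the frame array, page table and fake_device structure below are all invented for illustration. It shows that once a device has been handed a raw physical location, relocating the page for the CPU silently breaks the device's view of memory:

```c
/*
 * Toy illustration (not kernel code): why pages handed to a DMA
 * engine cannot simply be relocated.  "Physical memory" is an array
 * of frames, the "page table" maps virtual page numbers to frames,
 * and the "device" remembers a raw frame number, much as real
 * hardware remembers a bus address.
 */
#include <stdio.h>
#include <string.h>

#define NFRAMES   8
#define FRAME_SZ 16

static char phys[NFRAMES][FRAME_SZ];   /* simulated physical frames   */
static int  page_table[NFRAMES];       /* virtual page -> frame index */

struct fake_device {
    int dma_frame;                     /* raw "physical" address held by hardware */
};

/* Move the frame backing virtual page vpn to frame 'to' (CPU view only). */
static void move_page(int vpn, int to)
{
    int from = page_table[vpn];

    memcpy(phys[to], phys[from], FRAME_SZ);
    page_table[vpn] = to;              /* the CPU now looks at the new frame...  */
                                       /* ...but nobody told the device.         */
}

int main(void)
{
    struct fake_device nic = { .dma_frame = 3 };

    page_table[0] = 3;                          /* virtual page 0 -> frame 3 */
    strcpy(phys[3], "old packet");

    move_page(0, 6);                            /* kernel "compacts" memory  */
    strcpy(phys[nic.dma_frame], "new packet");  /* device DMAs into frame 3  */

    /* The CPU reads virtual page 0 and never sees the new data. */
    printf("CPU sees: %s\n", phys[page_table[0]]);
    printf("device wrote into frame %d: %s\n", nic.dma_frame, phys[nic.dma_frame]);
    return 0;
}
```

Real hardware is in the same position as fake_device here: until the device is reprogrammed or the transfer completes, such a page simply has to stay where it is.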
Why not copy around data in physical RAM?
https://lwn.net/Articles/101446/
Posted Thu, 09 Sep 2004 14:47:04 +0000 by iabervon

I believe someone mentioned this possibility in the thread, and Linus said that it is much more feasible now than it was before rmap, but that rmap isn't actually quite complete, so there are pages you couldn't move. He seemed open to an implementation, but the poster of the original patch says that making kswapd do a better job of making space is orthogonal to making kswapd keep trying until it actually makes space, and is not what he's working on at the moment.

Why not copy around data in physical RAM?
https://lwn.net/Articles/101358/
Posted Thu, 09 Sep 2004 10:56:56 +0000 by rwmj

I actually suggested this on the kernel list many years ago. Alan Cox's (correct) objection was that it would require scanning lists to find out where a page was used. As you point out, rmap means you don't need to do that scanning any more.

So the idea would be: if higher-order allocations are not available, pick the largest available allocation, then start evicting the physical pages in use above that allocation.

The only case where this wouldn't work is when trying to do an atomic allocation - but it's very hard to satisfy large, atomic allocations anyway.

Rich.

Why not copy around data in physical RAM?
https://lwn.net/Articles/101351/
Posted Thu, 09 Sep 2004 10:27:57 +0000 by scarabaeus

If a request for a contiguous memory area fails, why can't the kernel copy physical memory around while leaving the virtual addresses unchanged? That is, if physical page n is free, page n+1 is not, but page m is, and we want two consecutive pages, copy the contents of n+1 to m and update the page table accordingly. I'm not a kernel hacker, but my impression was that the reverse-mapping code enables you to do that.
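For comparison, here is a minimal sketch of the compaction idea scarabaeus and rwmj describe, again as a userspace toy rather than real kernel code; the page_table, rmap and migrate_frame names are invented for the example. With a reverse map from frames to virtual pages, the page blocking a contiguous run can be found without any list scanning and copied elsewhere, while its virtual address stays valid:

```c
/*
 * Toy compaction sketch (not kernel code): use a reverse map
 * (frame -> virtual page) to relocate the single page that blocks a
 * two-frame contiguous run, leaving virtual addresses untouched.
 */
#include <stdio.h>
#include <string.h>

#define NFRAMES   8
#define FRAME_SZ 16
#define FREE     -1

static char phys[NFRAMES][FRAME_SZ];
static int  page_table[NFRAMES];        /* virtual page -> frame, or FREE */
static int  rmap[NFRAMES];              /* frame -> virtual page, or FREE */

static void map(int vpn, int frame, const char *data)
{
    page_table[vpn] = frame;
    rmap[frame] = vpn;
    strcpy(phys[frame], data);
}

/* Relocate whatever lives in 'frame' to 'to', fixing both mappings. */
static void migrate_frame(int frame, int to)
{
    int vpn = rmap[frame];              /* the reverse map: no list scanning */

    memcpy(phys[to], phys[frame], FRAME_SZ);
    page_table[vpn] = to;
    rmap[to] = vpn;
    rmap[frame] = FREE;
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++) {
        page_table[i] = FREE;
        rmap[i] = FREE;
    }

    /* Frame 1 is busy, frames 0 and 5 are free: no two-frame run at 0. */
    map(4, 1, "in the way");

    migrate_frame(1, 5);                /* make frames 0..1 contiguous and free */

    printf("frames 0..1 free: %s\n",
           (rmap[0] == FREE && rmap[1] == FREE) ? "yes" : "no");
    printf("virtual page 4 still reads: %s\n", phys[page_table[4]]);
    return 0;
}
```

The reverse map is what answers Alan Cox's objection: finding who uses a given physical page becomes a single lookup instead of a scan.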