Toward a swap abstraction layer
Posted May 24, 2023 16:44 UTC (Wed) by farnz (subscriber, #17727)
In reply to: Toward a swap abstraction layer by SLi
Parent article: Toward a swap abstraction layer
10 GiB of swap is a huge amount, unless you're spreading it across multiple Optane SSDs; to avoid the unresponsiveness issue, you want the OS to be able to rewrite the entire swap area in random page-sized chunks in under half a second. On my laptop, with a fast SSD (1.4M IOPS), that limits swap to 6 GiB at most. I actually use 128 MiB (with 64 GiB RAM), which makes for a good balance: when things go wrong, it doesn't thrash, but I do see anonymous pages swapped out when my free memory gets low. (Note that swap is often unused here, since it's not unknown for me to fail to fill RAM with page cache and anonymous pages in a single session, and I shut the laptop down overnight.)
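A back-of-the-envelope sketch of that sizing rule (the page size, IOPS figure, and latency budget below are illustrative assumptions on my part; real drives rarely sustain their headline random-write IOPS and the acceptable budget is a judgment call, so this won't reproduce the comment's exact figures):

```python
# Rough upper bound on swap size if the OS must be able to rewrite
# every page of the swap area within a given latency budget.
# All three constants are illustrative assumptions, not measurements.

PAGE_SIZE = 4096       # bytes per page on x86-64
IOPS = 1_400_000       # headline random-write IOPS of the SSD
BUDGET_SECONDS = 0.5   # how long a full swap rewrite may take

max_swap_bytes = IOPS * BUDGET_SECONDS * PAGE_SIZE
print(f"{max_swap_bytes / 2**30:.2f} GiB")  # ~2.67 GiB under these assumptions
```

The point is the shape of the formula, not the exact number: the tolerable swap size scales linearly with sustained random-write IOPS, which is why spinning rust (hundreds of IOPS) made large swap areas so painful.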
The rules about swap being sized proportionally to RAM come from systems without overcommit. In that situation, you do need a lot of swap, but most of it is never used; Linux has overcommit, and thus you only need a very small amount of swap to ensure that unused anonymous pages can be swapped out in preference to paging out in-use code.
Posted May 24, 2023 16:49 UTC (Wed)
by SLi (subscriber, #53131)
[Link] (1 responses)
Posted May 24, 2023 16:52 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
Yes, it really is. It's the difference between the machine going sluggish when I run close to OOM, because it's paging code in and out all the time, including the code I'm actually using, and the machine using swap to page out anonymous data, giving systemd-oomd time to react and kill off the thing that's eating all my RAM.
Without it, I find my system starts thrashing a lot more easily than it does with a tiny amount of swap.
Posted May 28, 2023 23:43 UTC (Sun)
by pturmel (guest, #95781)
[Link] (1 responses)
Posted May 30, 2023 12:42 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
I should note that I suspect that a significant fraction of my gains from this are because I have systemd-oomd running, and it takes action when swap is significantly used. My working theory is that if I'm hitting 90% full swap, I'm at a point where I've exceeded my system's capabilities, and systemd-oomd should kick in and kill things before I start paging out the executable code I'm actively using.
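For reference, that 90% figure matches systemd-oomd's default swap threshold, which can be tuned in oomd.conf(5); a drop-in might look like the sketch below (the drop-in filename is hypothetical, and note that swap-based kills also require ManagedOOMSwap=kill on the relevant slice):

```ini
# /etc/systemd/oomd.conf.d/50-swap.conf (hypothetical drop-in name)
[OOM]
# Act once this much of total swap is in use; 90% is also the
# shipped default, shown here just to make the policy explicit.
SwapUsedLimit=90%
```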
Posted May 29, 2023 9:10 UTC (Mon)
by kleptog (subscriber, #1183)
[Link] (2 responses)
And the fact that if you want to support hibernation, your memory has to fit in swap. IIRC that was the argument some distributions used for defaulting to such a large swap size. I think most distributions don't do that any more, and hibernation has basically disappeared as an option in most setups.
I think it would be nice to be able to reserve some disk space for "hibernation-but-not-swap", but that's not possible AFAIK.
Posted May 29, 2023 12:20 UTC (Mon)
by mb (subscriber, #50428)
[Link] (1 responses)
You can do that by only enabling an additional swap partition just before hibernation and disabling it right after resume.
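A sketch of that idea as a systemd system-sleep hook (the script path and partition label are hypothetical; it assumes a partition already prepared with mkswap and referenced by the resume= kernel argument):

```shell
#!/bin/sh
# Hypothetical hook at /usr/lib/systemd/system-sleep/hibernate-swap.
# systemd runs these hooks with $1 = "pre" or "post" and $2 = the sleep
# action (e.g. "hibernate"), so we can bracket hibernation with
# swapon/swapoff and keep the partition out of ordinary swap use.

HIB_SWAP=/dev/disk/by-label/hibernate   # hypothetical device label

case "$1/$2" in
    pre/hibernate)  swapon "$HIB_SWAP" ;;   # make it available for the image
    post/hibernate) swapoff "$HIB_SWAP" ;;  # retire it right after resume
esac
exit 0
```

A real version would also want a case for suspend-then-hibernate, and swapoff can take a while if anything actually landed on the partition during the window.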
Posted May 30, 2023 13:38 UTC (Tue)
by gioele (subscriber, #61675)
[Link]
That's more or less the behavior of the `resume=` kernel argument, isn't it?
From https://man7.org/linux/man-pages/man7/kernel-command-line...
> resume=, resumeflags=
> Enables resume from hibernation using the specified device and mount options. All fstab(5)-like paths are supported. For details, see systemd-hibernate-resume-generator(8).