
Does it actually work?


Posted Apr 10, 2025 12:39 UTC (Thu) by khim (subscriber, #9252)
In reply to: Does it actually work? by jhe
Parent article: Three ways to rework the swap subsystem

> I think the underlying cause is that some applications (Firefox, Electron) are too memory hungry to run on 4 or 8 GB.

Yet, somehow, it works fine with macOS or Windows. That's what makes things really weird: Linux claims to have a very efficient and quick swap subsystem, yet, in practice, it works like crap.

I can open 100 windows of Chrome and VSCode (or Xcode) and a pile of other apps using 50 GB or 100 GB of swap on a Windows or macOS system with 8 GiB of RAM – and it would work. Yes, it wouldn't be flying, not with that kind of memory pressure… but the system would be usable.

Yet on Linux, with much smaller memory pressure, the system is totally unresponsive.

I always assumed that was because no one cared about or used swap on Linux, which made it useless… but the article claims that certain refactorings are not done because there would be regressions… regressions compared to what? To the unusable swap of today? Does it even matter? Who uses swap on Linux, why, and, most importantly, how?



Does it actually work?

Posted Apr 10, 2025 13:13 UTC (Thu) by Wol (subscriber, #4433)

AIUI, swapping and paging are two different beasts. Do macOS and Windows use paging? Certainly I thought Windows had a page file.

That could quite possibly be it ...

Cheers,
Wol

Does it actually work?

Posted Apr 10, 2025 13:32 UTC (Thu) by khim (subscriber, #9252)

> Do macOS and Windows use paging?

Windows and macOS have essentially the same API as Linux. macOS is POSIX, and Windows can run Linux binaries (WSL1; WSL2 runs a full Linux kernel).

So no, that's not it.

I'm pretty sure they both include tons of tweaks and hacks to ensure that even when the system is thrashing heavily it still stays responsive, but the end result is: if the system is heavily overloaded it definitely becomes “sluggish”, but nothing like Linux, where switching from the graphical to the text console may take hours (literally), and then you can't even log in on the text console because of timeouts (measured in minutes).

And if keeping the system usable while it's swapping is an explicit non-goal, then I wonder why anyone bothers to benchmark something that is not supposed to be used anyway.

As I have said: I always assumed that Linux simply keeps swap around in some vestigial form for nostalgia reasons (and no one cares to do anything with it) – and that this is done to keep it working great in the “normal” situation (when swap is not used).

That may even be a sane stance if you recall that most Linux systems don't really use swap (but Android and ChromeOS use the swap code to implement zram… which would explain the additions to that code that the article discusses).

But when I read about regressions and other such things… hey, that means that someone, somewhere, still uses swap on Linux, for something.

The whole system looked ever more mysterious the more of the article I read: it certainly reads as if it comes from some parallel universe where something other than “zero swap but some zram” is used…

But… how and why? Why do they care about speed… and what kind of speed do they care about? Because for me, swap on Linux has always had one speed and one speed only: unusable. Whether it was 10x past the “unusable” threshold or 100x past it… I don't know: once things that should happen in seconds start taking hours, the system is no longer usable, and measuring the “speed of swap” doesn't make much sense past that point.

Of course, for me “speed of swap” means “slowdown compared to the situation when there is enough memory and swap is not used”, and maybe there are other ways to measure the speed of swap, but… again: who measures that, how, and why?

That meta-mystery remained unexplored in the article…

Does it actually work?

Posted Apr 11, 2025 4:58 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)

> Yet, somehow, it works fine with macOS or Windows. That's what makes things really weird: Linux claims to have a very efficient and quick swap subsystem, yet, in practice, it works like crap.

Windows doesn't do overcommit (unless you REALLY try with MEM_RESERVE flag for VirtualAlloc). If a program allocates RAM, then there is a page in memory or in the swap file to back it. This naturally limits the amount of memory that has to be materialized out of thin air if there's a "bank run" on uncommitted RAM.
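[Ed: Linux exposes the same kind of commit accounting that Windows enforces, in /proc/meminfo (CommitLimit and Committed_AS), but it is only enforced when vm.overcommit_memory is set to 2. A minimal sketch that reads these counters, assuming a Linux /proc filesystem:]

```python
# Inspect Linux's commit accounting -- the counterpart of Windows'
# "commit charge". Values in /proc/meminfo are reported in kB.

def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # numeric value, in kB
    return info

def overcommit_mode():
    with open("/proc/sys/vm/overcommit_memory") as f:
        return int(f.read().strip())

info = read_meminfo()
mode = overcommit_mode()
print(f"overcommit mode: {mode} (0=heuristic, 1=always, 2=never)")
print(f"CommitLimit:  {info['CommitLimit']} kB")
print(f"Committed_AS: {info['Committed_AS']} kB")
# Only in mode 2 does the kernel refuse allocations that would push
# Committed_AS past CommitLimit -- the behavior Windows has by default.
```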

Does it actually work?

Posted Apr 11, 2025 8:21 UTC (Fri) by khim (subscriber, #9252)

That's an entirely different kettle of fish. You can disable overcommit on Linux, add 1 TiB of swap to a 4 GiB desktop… and it would still become absolutely unresponsive if you ran two rustc processes that each try to use 20 GiB.
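[Ed: for context, "disabling overcommit" on Linux means switching to strict commit accounting via sysctl. A sketch of the knobs involved; the ratio shown is just an example value:]

```shell
# Switch to strict commit accounting ("never overcommit").
# In mode 2, allocations fail once committed memory would exceed
# swap + overcommit_ratio% of RAM (or vm.overcommit_kbytes, if set).
sysctl vm.overcommit_memory=2
sysctl vm.overcommit_ratio=100   # example: allow swap + 100% of RAM

# Make it persistent across reboots:
cat >/etc/sysctl.d/90-no-overcommit.conf <<'EOF'
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
EOF
```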

And yes, miracles are impossible: if you try to run a process that needs 1 TiB of RAM while there is physically only 1 GiB, then even on macOS or Windows you would, most likely, have to wait until the heat death of the universe.

But using 2x, 3x, 10x more RAM than your machine physically has? With a dozen apps? That's OK: the system becomes more and more sluggish, but it stays usable. On Linux, using even 2x more is normally a prelude to hitting “reset”.

Does it actually work?

Posted Apr 14, 2025 8:37 UTC (Mon) by taladar (subscriber, #68407)

It is really less about how much you overcommit and more about how much of the overcommitted memory is in active use. Once that goes beyond your physical memory, you have a problem no matter what you do with swap.
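[Ed: that distinction — address space committed versus pages actively touched — is easy to observe: an anonymous mapping costs almost no physical memory until it is written to. A minimal Linux-only sketch, reading VmRSS from /proc/self/status:]

```python
import mmap

def rss_kb():
    # Resident set size of this process, in kB (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("no VmRSS line found")

SIZE = 256 * 1024 * 1024  # 256 MiB of anonymous memory

before = rss_kb()
region = mmap.mmap(-1, SIZE)       # committed, but no pages faulted in yet
mapped = rss_kb()

# Touch every page: only now does the kernel allocate physical frames.
for off in range(0, SIZE, 4096):
    region[off] = 1
touched = rss_kb()

print(f"after mmap:  +{mapped - before} kB resident")
print(f"after touch: +{touched - before} kB resident")
```

The first delta stays tiny while the second is close to the full 256 MiB, which is taladar's point: only the actively used portion of committed memory competes for physical RAM.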


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds