
Proactive compaction for the kernel


Posted Apr 22, 2020 10:47 UTC (Wed) by jtaylor (subscriber, #91739)
Parent article: Proactive compaction for the kernel

From the description and the current patch, it seems the backoff kicks in only if no progress at all is made.

That would mean compaction always runs at the same frequency even when it makes only very little progress, because some process constantly fragments some pages.

Wouldn't it be better to have the backoff time depend on how much progress was made last time (or on some exponentially decaying average of the progress of recent iterations)?
Then if there is much to compact under the current workload it runs more often, but if there is not much to do it runs less often.
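A minimal userspace sketch of what I mean (all names and constants here are made up for illustration, nothing is taken from the actual patch): keep an exponentially decaying average of recent compaction progress and scale the sleep interval inversely with it.

```c
#include <assert.h>

#define BASE_SLEEP_MS   500
#define MAX_SLEEP_MS  60000

/* 0..100, exponentially decaying average of recent progress */
static unsigned int progress_ema;

/* progress: percentage of requested pages actually compacted this pass */
static unsigned int next_sleep_ms(unsigned int progress)
{
	/* EMA with weight 1/4 on the newest sample */
	progress_ema = (3 * progress_ema + progress) / 4;

	if (progress_ema == 0)
		return MAX_SLEEP_MS;	/* no recent progress: back off hard */

	/* lots of recent progress -> short sleep, little -> long sleep */
	return BASE_SLEEP_MS * 100 / progress_ema;
}
```

So a pass that compacts everything it was asked to pulls the interval down toward the base, while a long run of fruitless passes decays the average to zero and parks the thread at the maximum interval.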

One does have to be careful that this does not negatively affect performance, so that distributions actually enable it.
Back when Red Hat backported anonymous transparent huge pages to RHEL 7(?) a few years ago, the compaction was terrible for performance; between that and the many blog posts which said to turn off THP (instead of only the compaction), it seems many distributions still have THP turned off by default today.



Proactive compaction for the kernel

Posted Apr 22, 2020 12:00 UTC (Wed) by scientes (guest, #83068)

My hunch would be that compaction should never be done on-demand (but the background process should be influenced by demand). Perhaps if the VM addresses were assigned contiguously it would be possible to convert the allocations later to a huge page in a background process.

Proactive compaction for the kernel

Posted Apr 22, 2020 19:00 UTC (Wed) by jccleaver (guest, #127418)

> From the description and the current patch, it seems the backoff kicks in only if no progress at all is made.
> That would mean compaction always runs at the same frequency even when it makes only very little progress, because some process constantly fragments some pages.
> Wouldn't it be better to have the backoff time depend on how much progress was made last time (or on some exponentially decaying average of the progress of recent iterations)?
> Then if there is much to compact under the current workload it runs more often, but if there is not much to do it runs less often.

Agreed. This definitely calls for some sort of efficiency barrier: if there was less than an (arbitrary number) 2% improvement, then increase the backoff in most cases.

There should be a visible setting (say, backoff values >95) that causes all but the tiniest improvements to be considered worthwhile, though, especially if the system truly is mostly idle.
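As a rough sketch of that knob (everything here is hypothetical, not from the patch): treat an "eagerness" tunable on a 0..100 scale, count improvements below (100 - eagerness) percent as no progress and grow the backoff exponentially, and reset it otherwise. An eagerness above 95 would then accept all but the tiniest improvements.

```c
#include <assert.h>

#define MIN_BACKOFF_MS   500
#define MAX_BACKOFF_MS 60000

/* Returns the next backoff interval given the current one, the measured
 * improvement from the last compaction pass, and the eagerness tunable. */
static unsigned int update_backoff(unsigned int backoff_ms,
				   unsigned int improvement_pct,
				   unsigned int eagerness)
{
	if (improvement_pct > 100 - eagerness)
		return MIN_BACKOFF_MS;	/* worthwhile: compact again soon */

	backoff_ms *= 2;		/* not worth it: exponential backoff */
	return backoff_ms > MAX_BACKOFF_MS ? MAX_BACKOFF_MS : backoff_ms;
}
```

On a mostly idle system an admin could crank eagerness up so that even marginal defragmentation keeps the thread active, while the default threshold keeps it from spinning on workloads that refragment memory as fast as it is compacted.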

Unrelated side-note: Does compaction happen at hibernation time, or before dropping to deep sleep states? Seems like a good opportunity.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds