A very large inactive list gives pages a long time to be referenced before being evicted; that can reduce the number of pages kicked out of memory only to be read back in shortly thereafter. But a large inactive list comes at the cost of a smaller active list; that can slow down the system as a whole by causing lots of soft page faults ...
That's not actually the tradeoff. The goal is primarily to reduce the number of page reads from disk; the cost of soft page faults is not really a factor.
The real tradeoff is how much weight the policy gives to frequency of access versus recency of access when forecasting when a page will next be referenced.
Let's say you have two pages. Alpha was accessed once a minute ago and never since. Beta was last accessed two minutes ago, but also 3, 4, and 5 minutes ago. Which page is least likely to be referenced again in the next 5 minutes?
A large-active/small-inactive apportionment is more likely to say Alpha; a small-active/large-inactive apportionment is more likely to say Beta.
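The Alpha/Beta example can be sketched as two toy scoring functions, one weighting recency and one weighting frequency. This is purely illustrative and is not the kernel's actual reclaim algorithm; the page names, access histories, and scoring rules are all invented for the example:

```python
# Toy illustration of recency- vs frequency-weighted eviction choices.
# Access history: minutes ago each page was touched (hypothetical).
history = {
    "Alpha": [1],           # touched once, one minute ago
    "Beta": [2, 3, 4, 5],   # touched four times, least recently 2 min ago
}

def recency_score(accesses):
    """Higher score = more likely to be reused (most recent touch wins)."""
    return -min(accesses)   # fewer "minutes ago" => higher score

def frequency_score(accesses):
    """Higher score = more likely to be reused (more touches win)."""
    return len(accesses)

def evict_candidate(score):
    """Return the page this policy considers least likely to be reused."""
    return min(history, key=lambda page: score(history[page]))

print(evict_candidate(frequency_score))  # frequency-weighted picks Alpha
print(evict_candidate(recency_score))    # recency-weighted picks Beta
```

A frequency-weighted policy sees Beta's four touches and keeps it, so Alpha is the eviction candidate; a recency-weighted policy sees Alpha's more recent touch and keeps it instead.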
To choose the list sizes properly, you have to employ a model of typical process page-usage patterns, which you probably just do implicitly by observing how changing the sizes affects paging rates. The proposed modification is supposed to be better because it effectively develops that model of access patterns automatically, on the fly, and at a finer grain.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds.