From: Con Kolivas <firstname.lastname@example.org>
To: linux kernel mailing list <email@example.com>
Subject: [PATCH] Autoregulate vm swappiness 2.6.0-test8
Date: Thu, 23 Oct 2003 23:37:50 +1000
Cc: Andrew Morton <firstname.lastname@example.org>
The vm_swappiness dial in 2.6 was never quite the right setting without me
constantly changing it depending on the workload. If I was copying large
files or encoding video it was best at 0. If I was using lots of applications
it was best much higher. Furthermore it depended on the amount of ram in the
machine I was using. This patch was done just for fun a while back but it
turned out to be quite effectual so I thought I'd make it available for the
wider community to play with. Do whatever you like with it.
This patch autoregulates the vm_swappiness dial in 2.6 by making it equal to
the percentage of physical ram consumed by application pages.
This has the effect of preventing applications from being swapped out if the
ram is filling up with cached data.
Conversely, if many applications are in ram the swappiness increases which
means the application currently in use gets to stay in physical ram while
other less used applications are swapped out.
For desktop enthusiasts this means if you are copying large files around like
ISO images or leave your machine unattended for a while it will not swap out
your applications. Conversely if the machine has a lot of applications
currently loaded it will give the currently running applications preference
and swap out the less used ones.
The performance effect on larger boxes seems to be either unchanged or a
slight improvement (1%) in database benchmarks.
The value in vm_swappiness is updated only when the vm is under pressure to
swap, and you can check the last vm_swappiness value under pressure by
reading /proc/sys/vm/swappiness. Manually setting the swappiness with this
patch in situ has no effect. This patch has been heavily tested without
noticeable harm. Note I am not sure of the best way to do this so it may
look rather crude.
Patch against 2.6.0-test8
--- linux-2.6.0-test8-base/mm/vmscan.c	2003-10-19 20:24:36.000000000 +1000
+++ linux-2.6.0-test8-am/mm/vmscan.c	2003-10-22 17:56:18.501329888 +1000
@@ -47,7 +47,7 @@
 /*
  * From 0 .. 100.  Higher means more swappy.
  */
-int vm_swappiness = 60;
+int vm_swappiness = 0;
 static long total_memory;
@@ -595,11 +595,13 @@ refill_inactive_zone(struct zone *zone,
 	int pgdeactivate = 0;
 	int nr_pages = nr_pages_in;
+	int pg_size;
 	LIST_HEAD(l_hold);	/* The pages which were snipped off */
 	LIST_HEAD(l_inactive);	/* Pages to go onto the inactive_list */
 	LIST_HEAD(l_active);	/* Pages to go onto the active_list */
 	struct page *page;
 	struct pagevec pvec;
+	struct sysinfo i;
 	int reclaim_mapped = 0;
@@ -642,6 +644,16 @@ refill_inactive_zone(struct zone *zone,
 	mapped_ratio = (ps->nr_mapped * 100) / total_memory;
 
+	si_meminfo(&i);
+
+	/*
+	 * Autoregulate vm_swappiness to be application pages % -ck.
+	 */
+	pg_size = get_page_cache_size() - i.bufferram;
+	vm_swappiness = 100 - (((i.freeram + i.bufferram +
+		(pg_size - swapper_space.nrpages)) * 100) /
+		(i.totalram ? i.totalram : 1));
+
 	/*
 	 * Now decide how much we really want to unmap some pages.  The mapped
 	 * ratio is downgraded - just because there's a lot of mapped memory