From: KAMEZAWA Hiroyuki <firstname.lastname@example.org>
To: "email@example.com" <firstname.lastname@example.org>
Subject: [RFC][PATCH] synchronous swap freeing at zapping vmas
Date: Thu, 21 May 2009 16:41:00 +0900
Cc: "email@example.com" <firstname.lastname@example.org>
Over the past 6-7 weeks, we tried to fix memcg's swap-leak race by checking
whether the swap entry is still valid after I/O. But Andrew Morton pointed out
that "trylock in free_swap_and_cache() is not good".
Oh, yes, it's not good.
So this patch series is an attempt to remove the trylock for swapcache as much
as possible. The patches are more complex and larger than I expected, but the
resulting behavior is much better than in my previous posts for memcg...
This series contains 2 patches.
1. change refcounting in swap_map.
This allows swap_map to indicate whether a swap entry is referenced and/or has a swap-cache page.
2. synchronous freeing of swap entries.
To avoid the race, free swap entries in the appropriate way under lock_page().
After this patch, the race between swapin-readahead and zap_page_range()
goes away.
Note: the new code in zap_page_range() is not exercised unless the system
or cgroup is very swappy, so there is no influence in the typical case.
There are more trylocks than this patch series handles, but IIUC they do not
race with memcg, so I don't touch them.
(And....I have no idea how to remove the trylock() in free_pages_and_swapcache(),
which is called via tlb_flush_mmu()....preemption is disabled and it uses per-CPU data.)
These patches plus Nishimura-san's writeback fix should do the complete job, I think.
But testing is not yet sufficient.
Any comments are welcome.
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to email@example.com. For more info on Linux MM,
see: http://www.linux-mm.org/ .