From: Glauber Costa <glommer-AT-parallels.com>
To: "linux-mm-AT-kvack.org" <linux-mm-AT-kvack.org>, Mel Gorman <mgorman-AT-suse.de>, Andi Kleen <andi-AT-firstfloor.org>, Peter Zijlstra <a.p.zijlstra-AT-chello.nl>, Cgroups <cgroups-AT-vger.kernel.org>, Ying Han <yinghan-AT-google.com>, Tejun Heo <tj-AT-kernel.org>, linux-kernel <linux-kernel-AT-vger.kernel.org>, "devel-AT-openvz.org" <devel-AT-openvz.org>, Konstantin Khorenko <khorenko-AT-parallels.com>, James Bottomley <JBottomley-AT-Parallels.com>
Date: Thu, 13 Sep 2012 15:32:27 +0400
Hello everybody,

I've just finished a round of benchmarks for the kmemcg code. All the results can be found at:

http://glommer.net/kmemcg-benchmarks-13092012/

The benchmarks were run on a 2-socket, 24-cpu machine. I haven't run all the configurations I had envisioned, because I wanted this posted sooner rather than later. I've also done unofficial runs on my 4-cpu i7 laptop and on a 6-way single-socket AMD box. Those would need to be re-run to be publishable, since they were quite raw and ad hoc (for instance, I was not always running perf stat in the same way, and did some things manually), but overall they point to consistent results.

You can find a guide to the data in the README file in that directory, and the actual data in the results* directories. The allocator chosen for these runs is SLAB.

A summary and discussion of the data follows (a rough sketch of the two workload shapes appears at the end of this message):

fork intensive workload, elapsed time:
===============================================
base-NotCompiled  : 16.76 +- 0.87% [ + 0.00 % ]
kmemcg-stack-Unset: 16.28 +- 1.10% [ - 2.86 % ]
kmemcg-stack-Set  : 16.96 +- 0.65% [ + 1.19 % ]
kmemcg-slab-Unset : 16.71 +- 1.16% [ + 0.28 % ]
kmemcg-slab-Set   : 17.11 +- 0.48% [ + 2.08 % ]

fork + user mem, elapsed time:
===============================================
base-NotCompiled  : 4.88 +- 0.35% [ + 0.00 % ]
kmemcg-stack-Unset: 4.87 +- 0.36% [ - 0.34 % ]
kmemcg-stack-Set  : 4.85 +- 0.37% [ - 0.76 % ]
kmemcg-slab-Unset : 4.84 +- 0.39% [ - 0.79 % ]
kmemcg-slab-Set   : 4.84 +- 0.35% [ - 0.78 % ]

So in general I don't see a big difference, with almost all measurements falling inside the 2-sigma range. Two things stand out in the fork-intensive workload. First, with the kmem patches applied but kmem accounting not in use, the kernel actually performs slightly better than with no patches at all. I don't know why this is, and it might even be a glitch, but it happened consistently on my laptop and on the 6-way AMD machine. Second, in that workload, which is slab intensive, kmemcg-slab-Set performs slightly worse. Being worse is in line with expectations, but I don't consider the hit to be too big.

Please let me know of any additional work you would like to see done here.
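For readers who want to try something similar, below is a minimal sketch of the two workload shapes described above. The actual test harness was not posted, so everything here is an assumption for illustration: the fork count, the 16 MB per-child allocation, and the "mem" command-line switch are all invented for this sketch, not taken from the benchmark.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN  10000	/* assumed: number of forks per run */
#define TOUCH_SIZE (16 << 20)	/* assumed: per-child anonymous allocation */

/*
 * "fork + user mem" variant: each child dirties some anonymous memory,
 * so user-page charging is exercised alongside the kernel-memory paths.
 */
static void touch_user_mem(void)
{
	char *buf = malloc(TOUCH_SIZE);

	if (buf)
		memset(buf, 0xaa, TOUCH_SIZE);
	free(buf);
}

int main(int argc, char **argv)
{
	/* pass "mem" as the first argument for the fork + user mem workload */
	int touch = argc > 1 && !strcmp(argv[1], "mem");
	int i;

	for (i = 0; i < NCHILDREN; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0) {
			if (touch)
				touch_user_mem();
			/* child exits immediately: slab-heavy churn of
			 * task_struct, kernel stack, and friends */
			_exit(0);
		}
		wait(NULL);
	}
	return 0;
}

If I read the column names correctly, timing a run of this under perf stat inside a memory cgroup, once with and once without a kernel-memory limit set on that group (the patch series exposes this as memory.kmem.limit_in_bytes), would correspond to the -Set and -Unset rows above, while a kernel built without the option corresponds to base-NotCompiled.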