4K, why not also 64K?
Posted Mar 12, 2009 17:49 UTC (Thu) by james
In reply to: 4K, why not also 64K?
Parent article: Linux and 4K disk sectors
4K page sizes are not necessarily brain damage. They're a tradeoff: with 64K pages you may fit sixteen times as much memory in the same-size TLB, but you waste a lot of memory if you're dealing with many small data structures that have to be page-aligned -- mmap'd regions, for example.
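To see the alignment cost concretely, here's a minimal sketch of the rounding-up effect (the allocation sizes are made up for illustration, not taken from any real workload):

```python
# Internal fragmentation when every allocation must occupy a whole
# number of pages (as with page-aligned mmap'd regions).

def pages_needed(size, page_size):
    """Number of whole pages required to hold `size` bytes."""
    return -(-size // page_size)  # ceiling division

# Hypothetical small data-structure sizes, in bytes.
sizes = [300, 1200, 5000, 9000]

for page in (4096, 65536):
    allocated = sum(pages_needed(s, page) * page for s in sizes)
    wanted = sum(sizes)
    print(f"{page // 1024:2d}K pages: {allocated} bytes allocated "
          f"for {wanted} bytes of data ({allocated / wanted:.1f}x)")
```

With these made-up sizes, the 4K case allocates under 2x the data actually stored, while the 64K case allocates roughly 17x -- each tiny structure still consumes a full 64K page.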
Linus Torvalds has a classic rant on the subject at realworldtech.com (the rant starts a page into his post):
[...] actually done the math. Even 64kB pages is totally
useless for a lot of file system access stuff: you need
to do memory management granularity on a smaller basic
size, because otherwise you just waste all your memory on
fragmentation.
So reasonable page sizes range from 4kB to 16kB (and
quite frankly, 16kB is pushing it - exactly because
it has fragmentation issues that blow up memory usage
by a huge amount on some loads). Anything bigger than
that is no longer useful for general-purpose file access
through mmap, for example.
For a particular project I care about, if I were to use
a cache granularity of 4kB, I get about 20% lossage due to
memory fragmentation as compared to using a 1kB allocation
size, but hey, that's still ok. For 8kB blocks, it's
another 21% memory fragmentation cost on top of the 4kB
case. For 16kB, about half the memory is wasted on
fragmentation. For 64kB block sizes, the project that takes
280MB to cache in 4kB blocks, now takes 1.4GB!