I find myself looking at huge pages and thinking they're a feature that will be useful only for special-purpose, single-use machines (basically just the two Mel mentions, simulation and databases, and perhaps here and there virtualization where the machine has *lots* of memory) until the damn things are swappable. Obviously we can't swap one out as a unit (please wait while we write a gigabyte out), so we need to break the things up and swap bits of them, then perhaps reaggregate them into a GB page once they're swapped back in; a rough sketch of the idea is below.

Yes, it's likely to be a complete sod to implement, but it would also give that nice TLB-hit speedup feeling without having to worry whether you're about to throw the rest of your system into thrash hell as soon as the load on it increases. (Obviously once you *do* start swapping, speed goes to hell anyway, so the overhead of taking TLB misses is lost in the noise. In the long run, defragmenting memory on swap-in and trying to make bigger pages out of it without app intervention seems like a good idea, especially on platforms like PPC with a range of page sizes more useful than 4KB/1GB.)
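To make the shape of it concrete, here's a toy userspace sketch, emphatically not real kernel code: the structs, the 2 MB huge-page size, and the tmpfile() standing in for the swap device are all invented for illustration. It just splits a huge page into base pages on the way out and re-collects them on the way in:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BASE_PAGE  4096
#define HUGE_ORDER 9                    /* 2 MB huge page = 512 base pages */
#define NR_BASE    (1 << HUGE_ORDER)

struct base_page { char data[BASE_PAGE]; };

struct huge_page {
    bool resident;                      /* any part still in "RAM"? */
    struct base_page *sub[NR_BASE];     /* NULL == swapped out */
};

/* Swap the huge page out piecewise: each base page is written and
 * freed on its own, so reclaim never has to treat the whole region
 * as one take-it-or-leave-it gigabyte (or here, 2 MB) unit. */
static void swap_out(struct huge_page *hp, FILE *swapdev)
{
    for (int i = 0; i < NR_BASE; i++) {
        if (!hp->sub[i])
            continue;
        fseek(swapdev, (long)i * BASE_PAGE, SEEK_SET);
        fwrite(hp->sub[i]->data, BASE_PAGE, 1, swapdev);
        free(hp->sub[i]);
        hp->sub[i] = NULL;
    }
    hp->resident = false;
}

/* Bring the pieces back one by one; once every base page is present
 * again, a real kernel could try to collapse ("reaggregate") the
 * region into a single huge mapping. */
static void swap_in(struct huge_page *hp, FILE *swapdev)
{
    for (int i = 0; i < NR_BASE; i++) {
        if (hp->sub[i])
            continue;
        hp->sub[i] = malloc(sizeof(*hp->sub[i]));
        fseek(swapdev, (long)i * BASE_PAGE, SEEK_SET);
        if (fread(hp->sub[i]->data, BASE_PAGE, 1, swapdev) != 1)
            exit(1);
    }
    hp->resident = true;                /* now eligible for collapse */
}

int main(void)
{
    FILE *swapdev = tmpfile();
    struct huge_page hp = { .resident = true };

    if (!swapdev)
        return 1;
    for (int i = 0; i < NR_BASE; i++) {
        hp.sub[i] = malloc(sizeof(*hp.sub[i]));
        memset(hp.sub[i]->data, i & 0xff, BASE_PAGE);
    }
    swap_out(&hp, swapdev);
    swap_in(&hp, swapdev);
    printf("page 3 survived the round trip: %d\n", hp.sub[3]->data[0] == 3);
    return 0;
}

The interesting (read: sod-to-implement) part in a real kernel is of course the last step: noticing that all the pieces are back and collapsing them into one TLB entry's worth of mapping again.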
IIRC something similar to this was discussed in the past, maybe as part of the defragmentation work? ... I also dimly remember Linus being violently against it, but I can't remember why.