From: Linus Torvalds <torvalds-AT-linux-foundation.org>
To: Mark Gross <mgross-AT-linux.intel.com>
Subject: Re: The performance and behaviour of the anti-fragmentation related patches
Date: Fri, 2 Mar 2007 09:16:17 -0800 (PST)
Cc: Andrew Morton <akpm-AT-linux-foundation.org>,
    Balbir Singh <balbir-AT-in.ibm.com>, Mel Gorman <mel-AT-skynet.ie>,
    npiggin-AT-suse.de, clameter-AT-engr.sgi.com, mingo-AT-elte.hu,
    jschopp-AT-austin.ibm.com, arjan-AT-infradead.org, mbligh-AT-mbligh.org,
On Fri, 2 Mar 2007, Mark Gross wrote:
> > Yes, the same issues exist for other DRAM forms too, but to a *much*
> > smaller degree.
> DDR3-1333 may be better than FBDIMM's but don't count on it being much
> better.
Hey, fair enough. But it's not a problem (and it doesn't have a solution)
today. I'm not sure it's going to have a solution tomorrow either.
> > Also, IN PRACTICE you're never ever going to see this anyway. Almost
> > everybody wants bank interleaving, because it's a huge performance win on
> > many loads. That, in turn, means that your memory will be spread out over
> > multiple DIMM's even for a single page, much less any bigger area.
> 4-way interleave across banks on systems may not be as common as you may
> think for future chip sets. 2-way interleave across DIMMs within a bank
> will stay.
.. and think about a realistic future.
EVERYBODY will do on-die memory controllers. Yes, Intel doesn't do it
today, but in the one- to two-year timeframe even Intel will.
What does that mean? It means that in bigger systems, you will no longer
even *have* 8 or 16 banks where turning off a few banks makes sense.
You'll quite often have just a few DIMM's per die, because that's what you
want for latency. Then you'll have CSI or HT or another interconnect.
And with a few DIMM's per die, you're back where even just 2-way
interleaving basically means that in order to turn off your DIMM, you
probably need to remove HALF the memory for that CPU.
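To make that concrete, here is a toy sketch of the interleaving math (my
illustration, not something from this thread; the 64-byte line size, 4kB
pages and two DIMM's are all assumptions):

	/* Toy model: 2-way cache-line interleave across two DIMM's. */
	#include <stdio.h>

	#define CACHE_LINE 64        /* bytes per cache line (assumed) */
	#define PAGE_SIZE  4096      /* bytes per page (assumed)       */
	#define N_DIMMS    2         /* 2-way interleave across DIMM's */

	/* Which DIMM does a physical address land on? */
	static int dimm_of(unsigned long phys)
	{
		return (phys / CACHE_LINE) % N_DIMMS;
	}

	int main(void)
	{
		unsigned long page = 0x100000;  /* some page-aligned address */
		int hits[N_DIMMS] = { 0 };
		unsigned long off;
		int d;

		for (off = 0; off < PAGE_SIZE; off += CACHE_LINE)
			hits[dimm_of(page + off)]++;

		for (d = 0; d < N_DIMMS; d++)
			printf("DIMM %d: %d of %d lines of this one page\n",
			       d, hits[d], PAGE_SIZE / CACHE_LINE);

		/* Every single page straddles both DIMM's, so you cannot
		 * power one of them off without vacating (and
		 * de-interleaving) half the memory behind that CPU. */
		return 0;
	}
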
In other words: TURNING OFF DIMM's IS A BEDTIME STORY FOR DIMWITTED
CHILDREN.
There are maybe a couple machines IN EXISTENCE TODAY that can do it. But
nobody actually does it in practice, and nobody even knows if it's going
to be viable (yes, DRAM takes energy, but trying to keep memory free will
likely waste power *too*, and I doubt anybody has any real idea of how
much any of this would actually help in practice).
And I don't think that will change. See above. The future is *not* moving
towards more and more DIMM's. Quite the reverse. On workstations, we are
currently in the "one or two DIMM's per die" situation. Do you really
think that will change? Hell no. And in big servers, pretty much everybody
agrees that we will move towards that, rather than away from it.
- forget about turning DIMM's off. There is *no* actual data supporting
the notion that it's a good idea today, and I seriously doubt you can
really argue that it will be a good idea in five or ten years. It's a
hardware hack for a hardware problem, and the problems are way too
complex for us to solve in time for the solution to be relevant.
- aim for NUMA memory allocation and turning off whole *nodes*. That's
much more likely to be productive in the longer timeframe. And yes, we
may well want to do memory compaction for that too, but I suspect that
the issues are going to be different (ie the way to do it is to simply
prefer certain nodes for certain allocations, and then try to keep the
jobs that you know can be idle on other nodes)
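
As a rough user-space sketch of what "prefer certain nodes for certain
allocations" could look like (again my illustration, not anything posted
in this thread; it assumes libnuma, and the node number and size are made
up):

	/* Keep this task's memory on one node so the others can stay idle.
	 * Build with -lnuma. */
	#include <numa.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		int node = 0;                    /* assumed "hot" node      */
		size_t sz = 64UL * 1024 * 1024;  /* 64 MB, arbitrary        */
		void *buf;

		if (numa_available() < 0) {
			fprintf(stderr, "no NUMA support on this kernel\n");
			return 1;
		}

		/* Allocate memory backed by a specific node... */
		buf = numa_alloc_onnode(sz, node);
		if (!buf) {
			fprintf(stderr, "numa_alloc_onnode failed\n");
			return 1;
		}
		memset(buf, 0, sz);   /* touch it so pages really get allocated */

		/* ...and bias all future allocations of this task too. */
		numa_set_preferred(node);

		numa_free(buf, sz);
		return 0;
	}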
Do you actually have real data supporting the notion that turning DIMM's
off will be reasonable and worthwhile?