GTK applications' current "best practice" of "ignore the RAM use, they can buy more" has already destroyed the usefulness of old hardware with a modern Linux software stack.
Posted Oct 6, 2010 0:16 UTC (Wed) by mpr22 (subscriber, #60784)
Posted Oct 6, 2010 1:23 UTC (Wed) by dlang (✭ supporter ✭, #313)
Yes, we are doing more with our systems, but nowhere near that much more.
Posted Oct 6, 2010 9:23 UTC (Wed) by marcH (subscriber, #57642)
(Here I am ignoring SSDs, still too new to be part of The History)
Posted Oct 6, 2010 11:04 UTC (Wed) by dlang (✭ supporter ✭, #313)
in terms of size, drives have grown at least 1000x
in terms of sequential I/O speeds they have improved drastically (I don't think quite 1000x, but probably well over 100x, so I think it's in the ballpark)
in terms of seek time, they've barely improved 10x or so
this is ignoring things like SSDs, high-end raid controllers (with battery backed NVRAM caches) and so on which distort performance numbers upwards.
But yes, the performance difference between the CPU registers and disk speeds is being stretched over time.
But the difference in speed between the registers and RAM is being stretched to the point where people are seriously suggesting that it may be a good idea to start thinking of RAM as a block device, accessed in blocks the size of a CPU cache line (typically 64-128 bytes). Right now the CPU hides this from you by 'transparently' moving these blocks in and out of the caches of the various processors, so that you can ignore it if you choose to.
But when you are really after performance, a high-end system starts looking very strange. You have several sets of processors that share a small amount of high-speed storage (L2/L3 cache) and a larger amount of lower-speed storage (the memory directly connected to that CPU), plus a network to access the lower-speed storage connected to other CPUs. Then you have a lower-speed network to talk to the southbridge chipset to interact with the outside world (things like your monitor/keyboard/disk drives, PCI-e cards, etc.).
This is a rough description of NUMA and the kinds of problems you can run into on large multi-socket systems, but the effect starts showing up on surprisingly small systems (which is why per-CPU variables and similar techniques are used so frequently).
Posted Oct 14, 2010 19:29 UTC (Thu) by Wol (guest, #4433)
Three slots, max capacity 256MB per slot, three 256MB chips in the machine.
"That's no problem, they can just buy a new machine ..."
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds