Filesystem block size
Posted Sep 20, 2007 8:57 UTC (Thu) by forthy
Parent article: LinuxConf.eu wrapup
One part of the filesystem cruft IMHO is the block size limit.
Remember, the rule of thumb for random access to blocked devices is that
access time and transfer time should be about the same. For current
disks, half a megabyte appears to be the sweet spot (maybe on some large
terabyte disks even one megabyte); 15 years ago, 4k was right. Linux has so
far limited the block size to 4k, and has only recently allowed filesystems
to use other block sizes, mostly to mount media formatted on
platforms with larger blocks (such as the 8k blocks on IA64).
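The rule of thumb above pins down a break-even block size directly: the block for which transfer time equals seek time is just seek time times transfer rate. A minimal sketch, using assumed figures typical of a 2007 desktop disk (~8 ms average seek, ~65 MB/s sustained transfer; both numbers are mine, not from the comment):

```python
# Rule of thumb: access (seek) time and transfer time should be equal.
# The break-even block size is then simply seek_time * transfer_rate.

def sweet_spot_block_size(seek_time_s, transfer_rate_bps):
    """Block size (bytes) at which transfer time equals seek time."""
    return seek_time_s * transfer_rate_bps

# Assumed 2007-era disk: ~8 ms average seek, ~65 MB/s transfer.
size = sweet_spot_block_size(0.008, 65e6)
print(f"{size / 1e6:.2f} MB")  # roughly half a megabyte
```

With those assumptions the result lands right around the half-megabyte sweet spot the comment mentions.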
A new filesystem should therefore be able to group data into
blocks that can keep growing in the future (transfer rate
increases with the square root of density or capacity, while seek time
can be assumed to be almost constant). It needs to take care that data
can be packed into these larger blocks: if you have half a megabyte as
the minimum transfer unit, you don't put a single directory into one block,
but a whole directory subtree, maybe with the associated inodes and
everything. You also pack a bunch of small files into one single block.
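If transfer rate grows with the square root of capacity while seek time stays constant, the break-even block size grows with the square root of capacity too. A small sketch of that projection, with assumed baseline figures (a roughly 100 MB early-90s disk with a 4k sweet spot; both are my assumptions for illustration):

```python
import math

# If transfer rate scales with sqrt(capacity) and seek time is
# constant, the sweet-spot block size also scales with sqrt(capacity).

def projected_block_size(base_block, base_capacity, new_capacity):
    """Scale a known sweet-spot block size by sqrt of the capacity ratio."""
    return base_block * math.sqrt(new_capacity / base_capacity)

# Assumed baseline: ~100 MB disk with 4 KB blocks; project to 1 TB.
size = projected_block_size(4096, 100e6, 1e12)
print(f"{size / 1024:.0f} KB")  # 400 KB, i.e. about half a megabyte
```

Under these assumptions a terabyte disk lands near the half-megabyte figure quoted above, which is why a fixed 4k limit looks increasingly wrong as capacities grow.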
Essentially, as processors get faster and disks get larger, the
behavior starts to shift. Main memory today behaves more like disks did in
the early days (cached block access instead of direct access), and disks tend
to behave more like tapes (long sequential accesses). Make sure you get
all the information you need with as few seeks as possible.