Posted Jan 31, 2013 11:34 UTC (Thu) by epa (subscriber, #39769)
Parent article: Quotes of the week
Disk size increases quadratically, but the number of files on disk surely does not. More likely there is linear growth in the number of files and linear growth in their average size (the product of the two accounts for the quadratic growth in capacity). So the complexity of the filesystem is only linear; the problem is that filesystem structures tend to be scattered across the disk rather than concentrated in one place.
If the size of an allocation unit increases as disk capacity increases then the number of bits to address the disk remains constant. (For example if the filesystem uses 32 bits to identify locations on disk, then a four terabyte disk will have an allocation unit of about one kilobyte, and bigger disks will have bigger chunks.) So the size of the filesystem metadata should grow only with the number of files, provided you're willing to accept wasting more space as the filesystem gets bigger. (If the average file size also grows in proportion with the disk size, then the proportion of disk space wasted on rounding up to a whole allocation unit remains constant.)
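The arithmetic above can be sketched in a few lines. This is a hypothetical calculation, not any real filesystem's policy: it assumes power-of-two allocation units with a 512-byte minimum, and decimal terabytes as drives are sold.

```python
# Sketch: with a fixed 32-bit block address, the allocation unit must
# grow with disk capacity so that 2^32 addresses still cover the disk.
ADDRESS_BITS = 32
MAX_UNITS = 2 ** ADDRESS_BITS   # number of addressable allocation units

def allocation_unit(disk_bytes):
    """Smallest power-of-two unit letting 2^32 addresses span the disk."""
    unit = 512                  # assumed minimum sector size
    while unit * MAX_UNITS < disk_bytes:
        unit *= 2
    return unit

for tb in (4, 8, 16):
    size = tb * 10 ** 12        # decimal TB
    print(tb, "TB ->", allocation_unit(size), "byte allocation unit")
```

For a 4 TB disk this lands on a 1024-byte unit, matching the "about one kilobyte" figure above; each doubling of capacity doubles the unit while the address width stays fixed at 32 bits.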
So if the filesystem were a bit simpler, keeping all its metadata in a fixed block at the start of the disk and perhaps using larger allocation units for larger filesystems, the time taken to fsck should increase only linearly with the number of files.