
Quotes of the week

Posted Jan 31, 2013 11:34 UTC (Thu) by epa (subscriber, #39769)
Parent article: Quotes of the week

Disk size increases quadratically but the number of files on disk surely does not. More likely there is a linear growth in the number of files and linear growth of their average size. So the complexity of the filesystem is only linear - but the problem is that filesystem structures tend to be scattered over the disk rather than concentrated in one place.

If the size of an allocation unit increases as disk capacity increases then the number of bits to address the disk remains constant. (For example if the filesystem uses 32 bits to identify locations on disk, then a four terabyte disk will have an allocation unit of about one kilobyte, and bigger disks will have bigger chunks.) So the size of the filesystem metadata should grow only with the number of files, provided you're willing to accept wasting more space as the filesystem gets bigger. (If the average file size also grows in proportion with the disk size, then the proportion of disk space wasted on rounding up to a whole allocation unit remains constant.)

So if the filesystem were a bit simpler, keeping all its metadata in a fixed block at the start of the disk and perhaps using larger allocation units for larger filesystems, the time taken to fsck should increase only linearly with the number of files.
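The arithmetic in the comment above can be sketched out directly. This is a minimal illustration of the idea, not any real filesystem's allocation policy; the helper name is hypothetical.

```python
# Sketch of the commenter's arithmetic: with a fixed number of address
# bits, the allocation unit must grow along with disk capacity.
# `allocation_unit` is a hypothetical helper, not a real filesystem API.

def allocation_unit(disk_bytes: int, address_bits: int = 32) -> int:
    """Smallest power-of-two allocation unit such that `address_bits`
    can address every unit on a disk of `disk_bytes` bytes."""
    unit = 1
    while disk_bytes // unit > 2 ** address_bits:
        unit *= 2
    return unit

# A four-terabyte disk with 32-bit block addresses needs ~1 KiB units,
# matching the example in the comment; doubling the disk doubles the unit.
four_tb = 4 * 2 ** 40
print(allocation_unit(four_tb))       # 1024
print(allocation_unit(2 * four_tb))   # 2048
```

The wasted space per file is at most one allocation unit, which is why the comment notes that larger units trade space for a fixed metadata address width.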



Quotes of the week

Posted Jan 31, 2013 18:57 UTC (Thu) by niner (subscriber, #26151) [Link]

Such filesystems would also be painfully slow, so you just traded fs runtime performance for fsck performance. Doesn't sound like a very good deal to me.

Quotes of the week

Posted Jan 31, 2013 20:16 UTC (Thu) by daniel (guest, #3181) [Link]

And apropos to that...

https://www.usenix.org/conference/fast13/birds-feather-se...

Everybody welcome. Incremental and online fsck on the agenda, both for Tux3 specifically and filesystems in general. Contact me for details on attendance if interested, or drop by oftc.net #tux3.

Quotes of the week

Posted Jan 31, 2013 19:54 UTC (Thu) by dlang (subscriber, #313) [Link]

Actually, lots of files are not getting bigger, while some files are getting huge.

I am in the process of copying data from my home file server: the first ~550K files were about 33G, while the next 80K files make up the remaining 2.4TB (and it's probably <10K files that account for almost all of that space)

so if you get inefficient at storing small files, it can have a surprisingly large effect.
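The skew in those numbers is easy to see with a quick back-of-envelope calculation (using the rough figures from the comment above):

```python
# Rough numbers from the comment: many small files vs. a few huge ones.
small_files, small_bytes = 550_000, 33 * 10**9    # ~550K files, ~33 GB
large_files, large_bytes = 80_000, 2.4 * 10**12   # ~80K files, ~2.4 TB

print(small_bytes // small_files)   # average tiny file: ~60 KB
print(large_bytes / large_files)    # average big file: ~30 MB
```

The two populations differ in average size by roughly three orders of magnitude, so per-file overhead dominates for one group while raw capacity dominates for the other.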

Quotes of the week

Posted Jan 31, 2013 21:58 UTC (Thu) by khim (subscriber, #9252) [Link]

so if you get inefficient at storing small files, it can have a surprisingly large effect.

Why? Sure, efficiency for these files will go down, but efficiency for all files will still be pretty good.

Quotes of the week

Posted Jan 31, 2013 22:21 UTC (Thu) by dlang (subscriber, #313) [Link]

It depends how you measure efficiency.

If your 500K tiny files take an extra meg of disk space, but you save a meg of disk space on your 2K giant files, your overall efficiency is not improved.

extents work pretty well at consolidating disk allocation units, so the win on modern filesystems would not be very large.
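In round numbers, and assuming "an extra meg" is per file (the comment is ambiguous on this), the trade-off looks like:

```python
# Assumption: the overhead and savings are per file, not totals.
# Wasting 1 MB on each of 500K tiny files dwarfs saving 1 MB on
# each of 2K giant files.
MB = 10**6
extra = 500_000 * 1 * MB    # ~500 GB of extra overhead on tiny files
saved = 2_000 * 1 * MB      # ~2 GB saved on giant files
print((extra - saved) / 10**9)   # net loss in GB: 498.0
```

Even under the weaker reading (a megabyte of total overhead vs. a megabyte of total savings), the conclusion stands: there is no net improvement.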


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds