The 2006 Linux Filesystems Workshop (Part III)
Posted Jul 11, 2006 23:13 UTC (Tue) by dlang
In reply to: The 2006 Linux Filesystems Workshop (Part III)
Parent article: The 2006 Linux Filesystems Workshop (Part III)
This is an area where changing the basic chunk size could have a huge effect.
The discussion was about splitting the disk into 1G chunks, but that can result in a LOT of chunks (3000+ on my home server :-). Changing the chunk size can drastically reduce the number of chunks needed, and therefore the number of potential places for the directories to get copied.
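A quick back-of-the-envelope illustrates the point. The 1G chunk size and the 3000+ count come from the comment above; the disk size (~3 TiB) and the alternative chunk sizes are just illustrative assumptions:

```python
# Rough chunk-count arithmetic for a hypothetical ~3 TiB disk
# (1 GiB chunks yielding ~3000 chunks matches the figure above;
# the larger chunk sizes are purely illustrative).
GIB = 1024**3
disk = 3 * 1024 * GIB  # assumed disk size, ~3 TiB

for chunk_gib in (1, 4, 16, 64):
    chunks = disk // (chunk_gib * GIB)
    print(f"{chunk_gib:>3} GiB chunks -> {chunks} chunks")
```

Going from 1 GiB to 64 GiB chunks cuts the count from ~3000 to a few dozen, which is the "fewer places for directories to get copied" effect.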
In addition, it would be possible to make a reasonably sized chunk that holds the beginning of every file (say the first few K, potentially with a block size less than 4K) and have all directories live on that chunk; only files that are larger would spill into the other chunks. This would also do wonders for things like updatedb that want to scan all files.
This master chunk would be absolutely critical, so it would need to be backed up or mirrored (but it's easy enough to put the first chunk on a RAID, even if that just means mirroring to another spot on the same drive).
This sounds like something that will spawn endless variations and tinkering once the basic capabilities are in place.