Runtime filesystem consistency checking
Posted Apr 3, 2012 18:58 UTC (Tue) by martinfick (subscriber, #4455)
Posted Apr 3, 2012 20:00 UTC (Tue) by drag (subscriber, #31333)
Also, while reliability and capacity have both increased, capacity has far outstripped reliability. So while today's drives are generally more reliable than older ones (as in bad/corrupt blocks lost per GB), the chances of losing part of your data are much higher simply because there is so much more of it.
This sort of thing is why online fsck and scrubs (reading in data and comparing it to checksums to detect and correct corruption) are so important on modern file systems. Previously, the only people who needed to care were the ones who could justify the expense of purchasing big SAN devices and whatnot.
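A scrub, as described here, boils down to re-reading every block and comparing it against a stored checksum. A minimal sketch in Python (the block/checksum layout is hypothetical, not any real filesystem's on-disk format):

```python
import hashlib

def scrub(blocks, stored_checksums):
    """Re-read every block and recompute its checksum against the stored one.

    Hypothetical layout: `blocks` and `stored_checksums` are parallel
    lists; real filesystems (ZFS, Btrfs) keep per-block checksums in
    their metadata trees. Returns indices of blocks that no longer match.
    """
    corrupted = []
    for i, (data, expected) in enumerate(zip(blocks, stored_checksums)):
        if hashlib.sha256(data).hexdigest() != expected:
            corrupted.append(i)  # candidate for repair from a redundant copy
    return corrupted

# Simulate a disk with one silently corrupted block.
good = [b"block-0", b"block-1", b"block-2"]
sums = [hashlib.sha256(b).hexdigest() for b in good]
good[1] = b"bl0ck-1"  # bit rot flips some bits
print(scrub(good, sums))  # → [1]
```

A real scrub would then rewrite the bad block from a mirror or parity copy rather than just reporting it.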
Posted Apr 3, 2012 20:01 UTC (Tue) by cmccabe (guest, #60281)
SSDs don't have these limitations, however.
Posted Apr 4, 2012 8:00 UTC (Wed) by dgm (subscriber, #49227)
Disk capacity may have increased, but disk platters are exactly the same size as before: 3.5 inches. So moving the read head around should cost mostly the same as before. The only factor I can think of is that the head has to be positioned more precisely, and that may (or may not) be more costly because of physical limitations (rebounds).
On the other hand, there are two factors that should make seek time decrease: improved machinery and higher density. Higher density means that more data passes under the read head per unit of time, so seeks can more often be satisfied without moving the head at all, just by waiting for the data to pass below it.
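The "waiting for the data to pass below" part is rotational latency, and some rough numbers show why higher density helps throughput but not latency (the RPM and sector counts here are illustrative, not from any particular drive):

```python
rpm = 7200
rev_time_ms = 60_000 / rpm        # one revolution: ~8.33 ms
avg_latency_ms = rev_time_ms / 2  # on average you wait half a turn: ~4.17 ms

# Rotational latency is fixed by RPM, but a denser track delivers more
# data per revolution, so sequential throughput still improves:
old_sectors, new_sectors = 500, 2000  # sectors per track (illustrative)
print(f"avg rotational latency: {avg_latency_ms:.2f} ms")
print(f"data per revolution: {old_sectors * 512 // 1024} KiB -> "
      f"{new_sectors * 512 // 1024} KiB")
```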
Posted Apr 4, 2012 9:22 UTC (Wed) by epa (subscriber, #39769)
Or maybe the point is that larger filesystems necessarily require more random accesses and hence more disk seeks when you fsck them. Larger RAM would mitigate this but I don't know whether increased RAM for caching has kept pace with filesystem sizes enough. An fsck expert would be able to give some numbers.
Posted Apr 4, 2012 10:27 UTC (Wed) by khim (subscriber, #9252)
Actually, the original poster was wrong: seeks are no more expensive. They have the same cost, but you need more of them. Even if you grow the filesystem's data structures to reduce fragmentation, the undeniable fact is that the number of tracks is growing while the time to read a single track stays constant.
This means that the time needed to read the whole disk from beginning to end is growing.
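That growth is easy to put numbers on: whole-disk read time is just capacity divided by sustained throughput, and capacity has grown much faster than throughput. The figures below are rounded ballpark values, not from specific datasheets:

```python
# (era, capacity in bytes, sustained throughput in bytes/s) - ballpark values
drives = [
    ("~1998", 10e9, 15e6),    # 10 GB at ~15 MB/s
    ("~2005", 250e9, 60e6),   # 250 GB at ~60 MB/s
    ("~2012", 3e12, 150e6),   # 3 TB at ~150 MB/s
]
for era, capacity, throughput in drives:
    hours = capacity / throughput / 3600
    print(f"{era}: {hours:5.2f} hours to read the whole disk")
```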
Posted Apr 4, 2012 12:17 UTC (Wed) by epa (subscriber, #39769)
Posted Apr 4, 2012 12:59 UTC (Wed) by khim (subscriber, #9252)
More or less. This means that when you go from Linux 0.1 (with a typical HDD size of 200-300MB) to Linux 3.0 (with a typical HDD size of 2-4TB), the filesystem slows down by a factor of 100, not by a factor of 10,000. But a 100x slowdown is still a lot.
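The factor-of-100 claim follows from a simple model: capacity grows with areal density, but sequential throughput only grows with linear (along-the-track) density, i.e. roughly its square root. A back-of-the-envelope check under that assumption:

```python
import math

capacity_growth = 4e12 / 300e6  # 300 MB -> 4 TB: roughly 13,000x
# Throughput tracks linear density, roughly sqrt(areal density),
# assuming RPM and platter geometry stay constant:
throughput_growth = math.sqrt(capacity_growth)
scan_time_growth = capacity_growth / throughput_growth
print(round(scan_time_growth))  # ~115, i.e. on the order of 100x
```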
Posted Apr 4, 2012 16:01 UTC (Wed) by wazoox (subscriber, #69624)
Posted Apr 4, 2012 19:41 UTC (Wed) by khim (subscriber, #9252)
Contemporary 4TB HDDs are especially slow because they use 5 platters (where your 1TB disks probably used 2 or 3). This means that not only do you see the slowdown from the growing number of tracks, you see an additional slowdown from the growing number of platters!
Thankfully, 5 is the limit in this direction: I doubt we'll see the return of 30-platter monsters like the infamous Winchester… all 3.5" HDDs to date have had 5 platters or fewer.
Posted Apr 5, 2012 9:18 UTC (Thu) by misiu_mp (guest, #41936)
Posted Apr 5, 2012 10:00 UTC (Thu) by khim (subscriber, #9252)
More platters mean more heads, with the possibility of concurrency - that should increase sequential transfer speed.
Good idea. Sadly, it's about ten years too late. Today's tracks are too narrow: when one head is on a track on one platter, the other heads are not on the same track on their platters. In fact, they are not on a track at all - they just drift randomly across 2-3 adjacent tracks. That's why only one head can be used actively (how can we use even one if everything is so unstable? Easy: an active servo scheme dynamically moves the head to keep it on track).
If data is written cylinder-wise, the latency should be similar to that of a one-platter disk.
Seek latency - yes; number of tracks - no. If you use the same platters, then a filesystem on a single-platter HDD will be roughly five times faster than a filesystem on a five-platter HDD.
That is the main reason we don't see that many of them.
The main reason we don't see many of them is cost. They are more expensive to produce, and since they are less reliable they incur more warranty overhead. They are also slower, but that is a secondary problem.
Posted Apr 13, 2012 8:47 UTC (Fri) by ekj (subscriber, #1524)
Posted Apr 5, 2012 19:01 UTC (Thu) by cmccabe (guest, #60281)
From a programmer's perspective, the growth in hard disk capacity has not been matched by a corresponding increase in either throughput or worst-case latency.
Because hard disk throughput has not kept pace, in a high-performance setup your only hope for reasonable throughput is to use RAID with striping. But RAID increases the minimum size that you can read efficiently-- before, that minimum was a sector; with RAID, it's a stripe. This makes hard disks even less of a random-access medium, since you never want to read just a few bytes-- you want to read a whole RAID stripe at a time in order to be efficient.
Most programmers don't know about these details because the database does all this for you.
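The read-amplification point can be made concrete with a toy cost model: without RAID the minimum transfer is one sector, while with striping it is one stripe unit. The `io_cost` helper and the 64 KB stripe unit below are hypothetical; real arrays and chunk sizes vary:

```python
def io_cost(request_bytes, unit=512):
    """Bytes actually transferred to satisfy a read of `request_bytes`.

    Toy model: every request is rounded up to a whole number of transfer
    units - a 512-byte sector on a plain disk, a stripe unit on RAID.
    """
    units = -(-request_bytes // unit)  # ceiling division
    return units * unit

print(io_cost(100))               # plain disk: 512 bytes moved
print(io_cost(100, unit=65536))   # RAID, 64 KB stripe unit: 65536 bytes moved
```

Reading 100 bytes thus costs 128x more raw I/O on the striped array in this model, which is why small random reads are served from cache or batched whenever possible.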
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds