Other error rates that need to be looked at.
Posted Jul 6, 2006 23:01 UTC (Thu) by vaurora
In reply to: Other error rates that need to be looked at.
Parent article: The 2006 Linux Filesystems Workshop (Part III)
The integrity of data can be checked as well, if we have checksums for it. There are a lot of options for doing this. I like the following two best:

- Checksum in the indirect block pointing to the data block
- Optional per-file checksum updated on close, invalidated by a write or writable mmap() - gets gross with large write-mostly files like logs
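The first option can be sketched in a few lines of C: each entry in an indirect block pairs the data block's address with a checksum of its contents, which is recorded at write time and recompared at read time. This is a minimal illustration, not any real filesystem's on-disk format; the structure names and the Fletcher-style checksum are assumptions (ZFS uses checksums from the same family, stored in its block pointers).

```c
#include <stdint.h>
#include <stddef.h>

#define PTRS_PER_INDIRECT 4

/* One entry in an indirect block: where the data block lives,
   plus a checksum of that block's contents.  Hypothetical layout. */
struct blk_ptr {
    uint64_t blkno;     /* disk address of the data block */
    uint32_t checksum;  /* checksum of the block's contents */
};

struct indirect_block {
    struct blk_ptr ptrs[PTRS_PER_INDIRECT];
};

/* Simple Fletcher-32-style checksum over a byte buffer. */
static uint32_t fletcher32(const uint8_t *data, size_t len)
{
    uint32_t a = 0, b = 0;
    for (size_t i = 0; i < len; i++) {
        a = (a + data[i]) % 65535;
        b = (b + a) % 65535;
    }
    return (b << 16) | a;
}

/* On write: record the block's address and checksum in the entry. */
void blk_write(struct blk_ptr *p, uint64_t blkno,
               const uint8_t *data, size_t len)
{
    p->blkno = blkno;
    p->checksum = fletcher32(data, len);
}

/* On read: recompute and compare; nonzero means the data read back
   does not match what was written (i.e., corruption detected). */
int blk_verify(const struct blk_ptr *p, const uint8_t *data, size_t len)
{
    return fletcher32(data, len) != p->checksum;
}
```

The point of putting the checksum in the indirect block rather than next to the data itself is that a misdirected or phantom write to the data block is caught too: the checksum travels with the pointer, not with the (possibly wrong) data.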
As for your ideas about layers, check out the architecture of ZFS.
At the workshop, we talked a little bit about creating a more useful volume manager interface (i.e., not one that looks exactly like a dumb disk). We also talked about working with manufacturers to improve the interfaces to dumb disks in ways that file systems find useful.