The return of SEEK_HOLE

Posted Apr 29, 2011 20:44 UTC (Fri) by dlang (subscriber, #313)
In reply to: The return of SEEK_HOLE by chad.netzer
Parent article: The return of SEEK_HOLE

from the article, quote:

The interface created at Sun used the lseek() system call, which is normally used to change the read/write position within a file. If the SEEK_HOLE option is provided to lseek(), the offset will be moved to the beginning of the first hole which starts after the specified position. The SEEK_DATA option, instead, moves to the beginning of the first non-hole region which starts after the given position. A "hole," in this case, is defined as a range of zeroes which need not correspond to blocks which have actually been omitted from the file, though in practice it almost certainly will. Filesystems are not required to know about or report holes; SEEK_HOLE is an optimization, not a means for producing a 100% accurate map of every range of zeroes in the file.

note specifically: A "hole," in this case, is defined as a range of zeroes which need not correspond to blocks which have actually been omitted from the file

so this seems to be implying that this isn't just reporting what holes currently exist, but holes that could potentially exist, even if they haven't been punched out yet. at that point the question of what should be reported arises.
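For reference, a minimal sketch of how a program would consume the interface the article describes, walking a file and printing its data regions. It assumes a kernel and filesystem that support SEEK_DATA/SEEK_HOLE; on anything else lseek() simply fails and the whole file has to be treated as data.

    /* Minimal sketch: print the data regions of a file using SEEK_DATA /
     * SEEK_HOLE.  Assumes the kernel and filesystem support the flags;
     * if they don't, lseek() fails and everything must be treated as data. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2)
            return 1;
        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        off_t off = 0;
        while (off < st.st_size) {
            off_t data = lseek(fd, off, SEEK_DATA);   /* next non-hole at or after off */
            if (data < 0)
                break;                                /* only a trailing hole remains */
            off_t hole = lseek(fd, data, SEEK_HOLE);  /* end of that data region */
            printf("data: %lld .. %lld\n", (long long)data, (long long)hole);
            off = hole;
        }
        close(fd);
        return 0;
    }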



The return of SEEK_HOLE

Posted Apr 29, 2011 21:55 UTC (Fri) by nybble41 (subscriber, #55106) [Link]

Note also: "Filesystems are not required to know about or report holes; SEEK_HOLE is an optimization, not a means for producing a 100% accurate map of every range of zeroes in the file."

Ergo, an implementation which only reported filesystem-level blocks of zeros actually omitted from the file would be perfectly valid. The interface is allowed, but not *required*, to report "holes that could potentially exist". In practice I would expect filesystems to only report omissions, as scanning arbitrarily large amounts of stored data for the first non-zero byte would be prohibitively expensive (and can be done just as easily from userspace).

SEEK_HOLE and SEEK_DATA are meant as optimizations. It makes little sense to save the application the trouble of scanning for ranges of zeros in stored data at the expense of moving the same task into the filesystem. On the other hand, if the filesystem already knows that there is a hole--for example, because it was omitted from the stored data--then SEEK_HOLE and SEEK_DATA allow it to save the application some unnecessary reads.
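As a sketch of the kind of saving involved (copy_sparse() is a hypothetical helper, assuming the same SEEK_DATA/SEEK_HOLE support and with only minimal error handling): read just the regions reported as data, and recreate the holes in the destination by seeking over them.

    /* Sketch of a sparse-aware copy: read only the reported data regions
     * and leave holes in the destination by seeking over them. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <unistd.h>

    static int copy_sparse(int in, int out, off_t size)
    {
        char buf[65536];
        off_t off = 0;

        while (off < size) {
            off_t data = lseek(in, off, SEEK_DATA);
            if (data < 0)
                break;                          /* nothing but a trailing hole left */
            off_t hole = lseek(in, data, SEEK_HOLE);

            if (lseek(out, data, SEEK_SET) < 0) /* skipping ahead creates the hole */
                return -1;
            for (off_t pos = data; pos < hole; ) {
                size_t want = sizeof(buf);
                if ((off_t)want > hole - pos)
                    want = hole - pos;
                ssize_t n = pread(in, buf, want, pos);
                if (n <= 0 || write(out, buf, n) != n)
                    return -1;
                pos += n;
            }
            off = hole;
        }
        /* ftruncate() preserves the logical size and any trailing hole. */
        return ftruncate(out, size);
    }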

The return of SEEK_HOLE

Posted Apr 29, 2011 22:19 UTC (Fri) by dlang (subscriber, #313) [Link]

I don't see anything saying that SEEK_HOLE must report every actual hole either.

so an implementation that reported every 0 in the file would be valid

and an implementation that didn't report any holes in the file would be valid (although useless)

I'm arguing that it would be better to allow the flexibility to define what a hole is if applications are going to be modified to make use of this feature.

I'm not sure if the application should define the hole, or if it should be something that's tunable at the system (or device) level. I can definitely see a reluctance to have the app try to figure out what size hole is relevant, but at the same time, the ability to find potential holes without having to push the data all the way to userspace just to find 0's in the file seems like a useful optimisation for a small amount of code.

The return of SEEK_HOLE

Posted Apr 29, 2011 23:29 UTC (Fri) by chad.netzer (subscriber, #4257) [Link]

Either reporting too many or too few holes is potentially less efficient, but should be allowed simply because it won't change the content of the file. So, while the interface allows it, it doesn't claim (imo) that the interface *must* report any and all actual or potential holes. Doing so would work, but would be a pessimization.

I assume the wording of the specification has to be "loose" like this to cover cases where the filesystem converts zero data blocks to holes (via block data scrubbing), or a file block of zeros gets rewritten as actual zeros (an optimization like zero-block data deduplication, for example), so that while the logical content of the file has not changed, the "hole" structure is different and the offset returned by a previous lseek(SEEK_HOLE) may no longer be a hole. This is a lesser constraint than if the content itself were altered, and should still work.

"so an implementation that reported every 0 in the file would be valid" - Yes, although it should at least adhere to the _PC_MIN_HOLE_SIZE as a lower bound. If that lower bound can be 1, clients should be prepared for that; in particular, backup software might need to detect and refuse to bother with bookkeeping such small holes, and just read and store the zeros verbatim.

"the ability to find potential holes without having to push the data all the way to userspace just to find 0's int he file seems like a useful optimisation for a small amount of code." - A filesystem could choose to "scrub" the data in the background and look for places to add holes, but whether its userspace or kernel, the act of looking for potential holes will involve processing a lot of data blocks, and could be tricky when done on active filesystems. The copying to userspace is trivial, compared to the block reads (even on non-rotating media). Whereas, creating the file with holes initially can often be done efficiently, since the writing application may know where the holes belong at the start. (ie. compare "time dd if=/dev/zero of=/var/tmp/non-sparse-file bs=1M count=1000" vs. "time dd if=/dev/zero of=/var/tmp/sparse-file bs=1M count=1 seek=999")

That said, I wonder if any of the compressing filesystems try to aggressively find ways to make files sparser (given that they have to process all the data anyway)? My guess is that sparseness is not much of a win on those filesystems, so they don't bother.

The return of SEEK_HOLE

Posted Apr 29, 2011 23:49 UTC (Fri) by dlang (subscriber, #313) [Link]

where is _PC_MIN_HOLE_SIZE defined? (is it just hard-coded in the source?)

I think what I'm saying is that _PC_MIN_HOLE_SIZE, and what alignment it needs to have, should be configurable at least at the device (including logical device) level

if the purpose of this is to allow backups and copies to deal with holes efficiently, it seems like it would be good to be able to tune how aggressively to look for holes (or possible holes; if things are layered, you may not know for sure whether the holes are real or not). remember that this is all happening long after the file was created (and after it may have been mangled by other tools that filled in holes because they didn't know any better)

as for compressed filesystems, since a string of 0's compresses _really_ well, I suspect that none of them look for the special case of a full block of 0's aligned on a block boundary, as it probably takes just about as much to record that special case as it does to record that they are zero anyway ;-)

if de-duplication logic forces holes to be replaced with a block of 0's (even a shared one), the authors of that code should be fired; they are moving in the wrong direction (the block of 0's now takes up space and I/O where it didn't before)

The return of SEEK_HOLE

Posted Apr 30, 2011 19:23 UTC (Sat) by jrn (subscriber, #64214) [Link]

See "man pathconf".

Linux doesn't currently support Solaris's _PC_MIN_HOLE_SIZE. It doesn't seem very useful: it just lets applications know that any hole will be at least such-and-such a size (e.g., 512 bytes).
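Where the constant does exist, the check would look something like this (a sketch guarded with #ifdef, since Linux does not define _PC_MIN_HOLE_SIZE; the helper name is illustrative):

    /* Sketch: query the minimum reported hole size via fpathconf().
     * _PC_MIN_HOLE_SIZE exists on Solaris; the #ifdef keeps this
     * buildable on systems (like Linux) that don't define it. */
    #include <stdio.h>
    #include <unistd.h>

    static void report_min_hole(int fd)
    {
    #ifdef _PC_MIN_HOLE_SIZE
        long min_hole = fpathconf(fd, _PC_MIN_HOLE_SIZE);
        if (min_hole > 0)
            printf("holes reported with at least %ld-byte granularity\n", min_hole);
        else
            printf("this filesystem does not report holes\n");
    #else
        (void)fd;
        printf("_PC_MIN_HOLE_SIZE is not defined on this system\n");
    #endif
    }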

The return of SEEK_HOLE

Posted May 4, 2011 18:27 UTC (Wed) by chad.netzer (subscriber, #4257) [Link]

> if de-duplication logic forces holes to be replaced with a block of 0's (even a shared one), the authors of that code should be fired

It was a pure hypothetical, but for example some systems can convert an online volume to de-duped mode and back, all while serving files from it. I could see (in such cases of intermediate online filesystem conversions, or other hypothetical situations) that a filesystem could choose not to honor, or to incorrectly report, the SEEK_HOLE values. In such cases, the API would allow backups to still work, just less efficiently. So, my point is that the SEEK_HOLE API is not bound by any particular filesystem constraint.

> if the purpose of this is to allow backups and copies to deal with holes efficiently, it seems like it would be good to be able to tune how aggressively to look for holes

You don't want the filesystem to "look" for holes; it just knows them outright, if it supports them, based on what data blocks are actually stored. The "looking" for all potential holes can already be (and is) done in userspace for any filesystem, at the cost of examining a lot of zeros. Anyway, that's my view.

The return of SEEK_HOLE

Posted May 4, 2011 19:01 UTC (Wed) by dlang (subscriber, #313) [Link]

it just seems conceptually wrong to me that finding holes (or potential holes) should be a two step process.

step 1: use SEEK_HOLE to find holes the filesystem knows about

step 2: read the remainder of the file through userspace to look for additional holes (or holes that SEEK_HOLE didn't report)

examining a range of memory to find out if it's exclusively zero seems like the type of thing that is amenable to optimisation based on the particular CPU in use. Since the kernel is already optimised this way, it would seem to be better to leverage that rather than require multiple userspace tools to all implement the checking (with the optimisations)
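For what it's worth, the baseline userspace check is tiny; a plain byte loop is sketched below, and the CPU-specific versions would do the same comparison a machine word or vector register at a time, which is the tuning being referred to.

    /* Sketch: decide whether a candidate block is entirely zero.  A plain
     * byte loop is shown; optimised versions compare a word (or a SIMD
     * register) at a time, which is where per-CPU tuning comes in. */
    #include <stddef.h>

    static int block_is_zero(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (buf[i] != 0)
                return 0;
        return 1;
    }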

exposing the full details of what extents are used for a file seems like it isn't the right answer, both because it's complex and because it presents a lot of information that isn't useful (i.e. you don't care whether a run of real data sits in one extent or is fragmented into many), but at the same time it seems a bit wasteful to find the holes by doing a separate system call for each hole boundary.

The return of SEEK_HOLE

Posted May 4, 2011 19:54 UTC (Wed) by chad.netzer (subscriber, #4257) [Link]

> examining a range of memory to find if it's exclusively zero seems like the type of thing that is amiable to optimisation based on the particular CPU in use.

Perhaps, but it's almost certainly I/O bound, not CPU.

If you *really* want to aggressively replace long runs of zeros with holes in existing files (i.e. make them sparser), a background userspace scrubber could be employed, although doing it in-place without forcing a copy (new inode) is tricky. At least some Linux filesystems have, or will have, the ability to "punch holes":

http://permalink.gmane.org/gmane.comp.file-systems.xfs.ge...
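For reference, on the Linux side hole punching is exposed through fallocate(); a minimal sketch of releasing a range that is already known to be zero (the helper name is illustrative, and it assumes a filesystem that implements FALLOC_FL_PUNCH_HOLE, which is what the linked XFS work is about):

    /* Sketch: punch a hole over a range already known to be zero, without
     * changing the file's logical size.  Requires a filesystem that
     * supports FALLOC_FL_PUNCH_HOLE. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/types.h>
    #include <linux/falloc.h>

    static int punch_zero_range(int fd, off_t offset, off_t len)
    {
        /* KEEP_SIZE must accompany PUNCH_HOLE: only the backing blocks for
         * [offset, offset + len) are released; the file length stays put. */
        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         offset, len);
    }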

The return of SEEK_HOLE

Posted Apr 30, 2011 3:16 UTC (Sat) by jrn (subscriber, #64214) [Link]

> so this seems to be implying that this isn't just reporting what holes currently exist, but holes that could potentially exist

I think you're misreading it.

This is about reporting holes, but nobody wanted to guarantee that such a thing as a hole exists. So the semantics are: if SEEK_HOLE reports a “hole”, the content there consists of NUL bytes. That's it (though naturally enough any sane kernel is only going to report large blocks of NUL bytes, for example by reporting the actual holes, and userspace programs are likely to rely on that assumption for reasonable performance).

The return of SEEK_HOLE

Posted Apr 30, 2011 3:29 UTC (Sat) by dlang (subscriber, #313) [Link]

the question is the definition of 'large blocks'

what may be a large block for a filesystem running on one device may not be a large block for another device.

I'm not saying that it makes sense to have it report down to every single null byte in the file, but I do think that there should be some ability to define what 'large block' means outside of editing the source.

The return of SEEK_HOLE

Posted Apr 30, 2011 19:10 UTC (Sat) by jrn (subscriber, #64214) [Link]

I think I'm missing something. I'll repeat what I already said and you can tell me where I go wrong. The "large blocks" are a consequence of an implementation that doesn't care about the size of blocks at all; it just reports holes.

Perhaps you're talking about the holes feature in general, and saying that users or applications should be able to configure when a seek while writing will create a hole? Then I would understand a little better.

