"This system call is meant to operate on ranges of data within a file. Of particular interest, perhaps, is the FALLOC_FL_PUNCH_HOLE operation, which removes a block of data from an arbitrary location within a file. Declaring a volatile range can be thought of as a form of hole punching, but with a kernel-determined delay. If memory is tight, the hole could be punched immediately; otherwise the operation could complete at some later time, or not at all."
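For context, here is a minimal sketch of what immediate hole punching looks like from userspace with fallocate(2). This is an illustration, not anyone's proposed volatile-range API: the temp-file path and sizes are arbitrary, and the helper name `punch_hole_demo` is made up for this example. Note that `FALLOC_FL_PUNCH_HOLE` must be combined with `FALLOC_FL_KEEP_SIZE`, so the file's length is unchanged while its blocks are freed.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>      /* fallocate(), FALLOC_FL_* (glibc, Linux) */
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 1 if the hole was punched and blocks were freed,
   0 if this filesystem does not support hole punching,
   -1 on a setup error. */
static int punch_hole_demo(void)
{
    char path[] = "/tmp/punchXXXXXX";   /* arbitrary temp location */
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;
    unlink(path);                       /* file vanishes when fd closes */

    /* Write 1 MiB of real data so the file has blocks to deallocate. */
    char buf[4096];
    memset(buf, 'x', sizeof buf);
    for (int i = 0; i < 256; i++)
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            close(fd);
            return -1;
        }

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return -1; }
    blkcnt_t before = st.st_blocks;

    /* Punch a 512 KiB hole in the middle. KEEP_SIZE is required
       with PUNCH_HOLE: the apparent size stays 1 MiB, but the
       underlying blocks in the punched range are released. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  256 * 1024, 512 * 1024) != 0) {
        close(fd);
        return 0;   /* e.g. EOPNOTSUPP on filesystems lacking support */
    }

    if (fstat(fd, &st) != 0) { close(fd); return -1; }
    int freed = (st.st_blocks < before) && (st.st_size == 1024 * 1024);
    close(fd);
    return freed ? 1 : 0;
}
```

A volatile range, as described above, would behave like this punch but deferred: the application marks the range, and the kernel decides later, under memory pressure, whether to carry it out.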
So why wouldn't volatile data be useful on disk filesystems, in directories like /var/cache? It seems to me a useful response to a filesystem-full condition: reclaim volatile data rather than return ENOSPC to some important application writing to /var/spool. Similarly, a desktop environment might like to implement a "Recycle Bin" for deleted files by marking them volatile. If filesystem atimes are respected when choosing what to reclaim, the kernel could even implement a rough LRU policy. But I guess that would be just TOO convenient and useful to application programmers, so we'll tell everyone to re-implement timestamping and cache management, over and over, rather than provide a simple, reusable feature.
Balkanised features for swap-backed versus block-device-backed filesystems introduce finicky requirements, infecting applications with implementation specifics. Remember the fsync() issues? "What do you mean, the filesystem can't efficiently sync the contents of this one tiny file?"