It seems to me that a forgetful file system is the perfect model for a volatile cache. You can use the last access time and file size as a metric for simple LRU removal, together with the classic open(), read(), write(), mmap(), munmap(), close(), unlink() interfaces for access.
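Just to make the metric concrete, here is an illustrative sketch only: the reclaim policy would of course live in the kernel, but the ordering itself is simple enough to show in user space with stat(2). The assumption here is that candidates are ranked by last access time (oldest first), with size as a tie-breaker so larger stale files go first.

    #include <sys/stat.h>

    /* Return <0 if a should be reclaimed before b, >0 otherwise. */
    static int reclaim_order(const struct stat *a, const struct stat *b)
    {
            if (a->st_atime != b->st_atime)
                    return (a->st_atime < b->st_atime) ? -1 : 1;
            if (a->st_size != b->st_size)
                    return (a->st_size > b->st_size) ? -1 : 1;
            return 0;
    }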
The open() syscall can trivially increment a reference count on some file and prevent the content from being reclaimed while the file is open. open() can decide within the kernel whether to succeed and return a file descriptor, or to fail because the file has already been reclaimed (e.g. due to memory pressure) and return a suitable error code such as ENOENT.
Once a valid descriptor has been obtained, read() and write() can trivially access the file contents, or mmap() could be used to further increase the reference count and create a memory mapping. Once the reference count drops back to zero, such volatile files within the filesystem would again be eligible to be reclaimed and removed at any time.
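A minimal sketch of how a consumer might use such a filesystem, assuming a hypothetical mount point /volatile and only the behaviour described above, namely that open() fails with ENOENT once the file has been reclaimed:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            ssize_t n;
            int fd = open("/volatile/cache/object", O_RDONLY);

            if (fd < 0) {
                    if (errno == ENOENT) {
                            /* Cache miss: the file was reclaimed (or never
                             * existed); regenerate and write it back. */
                            printf("cache miss, regenerating\n");
                            return 0;
                    }
                    perror("open");
                    return 1;
            }

            /* While the descriptor is held the content is pinned and can
             * be read (or mmap()ed) like any ordinary file. */
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    ; /* consume data */

            close(fd); /* drops the reference; the file is reclaimable again */
            return 0;
    }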
I think the major benefits of this would be that the user-space interface is traditional and easy to understand, and there is no need to handle signals or actually use mmap() or munmap() to benefit from such a system. As you suggest, there is also the notion of a useful unit - a file. Discarding individual pages is nonsense to userspace, whereas the idea that a file may at some point atomically be deleted is much easier to grasp and use without bugs. Files also allow applications to decide what is a useful unit to keep/lose, simply by deciding what to store within some file.
This obviously maps well to a browser's cache and similar applications, though it could be argued that open() and mmap() are too heavy to use when backing a malloc()-type allocator unless used at a very coarse level, in which case the benefits of reclaim could be reduced (it becomes all or nothing). That said, dropping the volatile files of a process may be a useful first stage that keeps a system running before the oom_killer steps in.