> Examples which have come up include checksums used for integrity checking or pages which have been compressed or encrypted.
Checksums are an obvious need, but I don't understand the reference to compressed or encrypted pages.
Surely if the kernel is compressing or encrypting a page, it has to do so into a bounce buffer (or else remove the page from the page cache entirely before transforming it). So I don't see what this has to do with stable pages...
Of course RAID5 has always used bounce buffers to get the stability needed for XOR calculations. If an incoming dirty page were known to be stable already, I could avoid the copy operation! So a bio flag saying "these pages are stable" would be good.
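To illustrate the saving, here is a minimal userspace sketch of the idea. The flag name `BIO_HYP_STABLE_PAGES` and the `fake_bio` struct are hypothetical stand-ins (no such flag exists today); the point is only that a stability guarantee from the submitter would let the parity path skip the per-page copy it currently does unconditionally.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical flag: set by the submitter when the pages in a bio are
 * guaranteed not to change until I/O completes (i.e. "stable pages"). */
#define BIO_HYP_STABLE_PAGES (1u << 0)

#define PAGE_SIZE 4096

/* Stand-in for struct bio, reduced to what the sketch needs. */
struct fake_bio {
    unsigned int flags;
    uint8_t     *data;   /* one page of payload */
};

/* XOR the bio's page into the running parity buffer.  If the page is
 * not known to be stable, copy it into a bounce buffer first so the
 * data cannot change underneath us while parity is being computed
 * and written out. */
static void xor_into_parity(struct fake_bio *bio, uint8_t *parity,
                            uint8_t *bounce)
{
    const uint8_t *src = bio->data;

    if (!(bio->flags & BIO_HYP_STABLE_PAGES)) {
        memcpy(bounce, bio->data, PAGE_SIZE); /* today's unconditional cost */
        src = bounce;
    }
    for (size_t i = 0; i < PAGE_SIZE; i++)
        parity[i] ^= src[i];
}
```

With stable pages guaranteed, the `memcpy` (one page copy per incoming write in the parity path) simply disappears, which is the saving being argued for above.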
And I think this is an amusing juxtaposition:
> The truth of the matter seems to be that most of the performance worries are overblown; in the absence of a deliberate attempt to show problems, the performance degradation is not measurable.
> Ted Ts'o stated that the "dirty secret" is that kernel developers do not normally stress filesystems very much, so they tend not to notice performance problems.
The performance issue caused by stable pages would be increased latency for a 'write' system call when there is plenty of free memory. With delayed allocation, a write should not normally need to wait for IO at all, so occasionally having to wait for IO could cause unwelcome latency spikes in some applications. However, I suspect most current filesystems already have write latency spikes for one reason or another, so it would be hard to notice a regression here.