>> it can significantly improve the performance of I/O bound applications by avoiding the need for those applications to wait for data
> Note that this effect isn't present if the application doesn't have any processing to do.
Nope. Two reasons:
1) Most people want to write simple single-threaded programs that do synchronous I/O, which means they issue only one request at a time. If such an app is I/O bound, there will always be some latency between the end of one request and the beginning of the next, and the disk will be 100% idle during that gap (barring other access). Readahead keeps the disk busy instead.
2) When *other* applications have the CPU, the disk could also go idle. Readahead can still be issuing requests on behalf of the app during that time.
In theory, a sophisticated app issuing asynchronous reads could get the benefit of #1 on its own, and a big enough queue of outstanding I/O requests could cover #2. But let's face it: synchronous code is much easier to write and debug.
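A minimal sketch of the kind of single-threaded synchronous read loop described in #1, in Python. The `os.posix_fadvise` hint (guarded, since it's Linux-only) tells the kernel the access is sequential so it can read ahead while the loop "processes" each chunk; the scratch file and chunk size are illustrative, not anything from the discussion above.

```python
import os
import tempfile

# Create a small scratch file to read back (stand-in for a large data file).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (1 << 20))  # 1 MiB
    path = f.name

CHUNK = 64 * 1024
total = 0
fd = os.open(path, os.O_RDONLY)
try:
    # Hint sequential access so the kernel can read ahead aggressively
    # while we work on each chunk (length 0 = whole file). Linux-only.
    if hasattr(os, "posix_fadvise"):
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    while True:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        # Per-chunk "processing" happens here; readahead overlaps the
        # next disk request with this work instead of leaving the disk idle.
        total += len(buf)
finally:
    os.close(fd)
    os.unlink(path)

print(total)  # 1048576
```

Each `os.read` here blocks until its data arrives; without readahead, the disk would sit idle during every gap between reads, exactly as #1 describes.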