Applications have other, even simpler, ways to receive preferential treatment for their file-backed pages: e.g. just keep referencing each page frequently. (Garbage collectors tend to do this already, admittedly usually for anonymous pages, but that isn't a requirement.)
And Alan's disks can do 50MB/s, but can they do that when the incoming workload is heavily seek-bound, as is often the case for major faults on binaries' text pages and on swap? I doubt they can manage more than 1--5MB/s in that situation. Even prefaulting neighbouring pages isn't going to help much there.