Posted Feb 6, 2010 3:03 UTC (Sat) by giraffedata
In reply to: Improving readahead
Parent article: Improving readahead
I guess I wasn't clear, or I totally misparsed your sentence:
When readahead works well, it can significantly improve the performance of I/O bound applications by avoiding the need for those applications to wait for data and by increasing I/O transfer size
You say readahead improves application performance by doing A and B. B is increasing transfer size, so I assumed A is something separate. Not having to wait for I/O isn't a reason for higher performance; it's essentially the definition (like saying a car arrives at its destination sooner because it goes faster). So I asked myself what wait-reducing effect you were referring to, and I figured you meant that when an application does processing while the I/O is going on, there's no waiting left to do by the time it gets around to requesting the I/O. A clearer way to explain that performance-improving effect is to say the application can overlap processing and I/O.
Note that this effect isn't present if the application doesn't have any processing to do. Readahead won't help that application -- won't reduce the wait for I/O -- except by one of the other effects. That's obvious when you refer to the overlap, but not when you just say it reduces wait time.
So much for the language analysis. The key thing for people to remember about readahead is that it improves application performance by reducing waiting time for I/O in these three ways:
- the application can overlap processing and I/O without explicit threading.
- the system can do larger I/Os.
- the system can order I/Os better.
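The first effect above can be sketched from user space. A minimal illustration, assuming Linux and Python 3.8+: `os.posix_fadvise` with `POSIX_FADV_SEQUENTIAL` hints the kernel that the file will be read in order, encouraging it to read ahead so disk I/O proceeds while the single-threaded application processes earlier chunks. The function name and sizes here are made up for the example.

```python
import os
import tempfile

def sequential_read(path, chunk_size=64 * 1024):
    """Read a file sequentially, hinting the kernel so its readahead
    can overlap disk I/O with our processing of earlier chunks."""
    with open(path, "rb") as f:
        # Tell the kernel we will read sequentially; it may enlarge the
        # readahead window and issue larger I/Os ahead of our reads.
        os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while chunk := f.read(chunk_size):
            total += len(chunk)  # stand-in for real per-chunk processing
        return total

# Usage: create a throwaway 256 KiB file and read it back.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 256 * 1024)
    path = tmp.name
print(sequential_read(path))  # 262144
os.unlink(path)
```

Note the application itself stays single-threaded: the overlap comes entirely from the kernel fetching pages before they are requested, which is exactly the "without explicit threading" point above.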