LWN: Comments on "The CFQ "low latency" mode" https://lwn.net/Articles/355904/ This is a special feed containing comments posted to the individual LWN article titled "The CFQ "low latency" mode". en-us Sun, 14 Sep 2025 13:15:12 +0000 Sun, 14 Sep 2025 13:15:12 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net The CFQ "low latency" mode https://lwn.net/Articles/357185/ https://lwn.net/Articles/357185/ efexis <i>"letting a disk stay idle when there's work to do is wrong!"</i> <br><br> Except that the disk is as good as idle while it's seeking... you can't read or write while that's happening.<br> <br> I already know this to be true; I come across it on a server I part-manage, which tries to schedule backups for several sites all at once. The disk thrashes, it grinds to a halt, and it takes forever to finish. So, I wrote a small bash script that, when the load gets high, sends a STOP signal to all the backup processes, and then sends just one of them a CONT signal, so only that one process is running. Every few seconds it will STOP that task, and CONT a different one. The backups complete in a much shorter time, and system responsiveness is much better while it's happening. Why? Because the heads don't move away from the current reader's position as often, even though it does that same issue-read, wait, process, issue-read, wait, process etc... read pattern. So, with the amount of time the drive spends seeking reduced, the SAME drive is able to complete the SAME amount of work in LESS time with LESS effect on the rest of the system.<br> <br> This is just a fact: it's real, it works; as counter-intuitive as it may sound, the numbers don't lie. Thu, 15 Oct 2009 17:57:09 +0000 The CFQ "low latency" mode https://lwn.net/Articles/357130/ https://lwn.net/Articles/357130/ guest <div class="FormattedComment"> That only works if both readers issue requests without waiting for results. 
That's not how programs usually work - if they issue a read request, they wait for it to succeed before sending the next read.<br> <p> Anyway, if you have such workloads and you do *not* pause, what happens? You perform the first seek to A, read A, seek to B, read B, and in the meantime, more requests for A have arrived. If it's only one, you still seem to be fast enough despite seeking - just seek back to A and go on. If the seek takes too long, multiple requests should have been queued already and you can coalesce them and handle them with one seek.<br> <p> IMHO, letting a disk stay idle when there's work to do is wrong!<br> </div> Thu, 15 Oct 2009 14:50:24 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356742/ https://lwn.net/Articles/356742/ dlang <div class="FormattedComment"> it's not necessarily as different as you are making it out to be.<br> <p> remember that seeks are _expensive_; you can transfer a LOT of data in the time taken by a single seek that you avoid.<br> <p> so throughput optimizations like this can be relevant to the total disk response capabilities.<br> </div> Tue, 13 Oct 2009 18:06:35 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356607/ https://lwn.net/Articles/356607/ giraffedata OK, I'll buy that. Letting the disk sit idle can improve the throughput capacity for a limited workload like that (limited not because there are times when there is no work available, but because there are only two streams and each apparently doesn't want to have more than one I/O at a time outstanding). <p> What I was thinking is that when people ask about disk throughput (capacity), it's usually on a system that drives the disk a lot harder than that -- i.e. the disk's basic capacity is in question. That means requesters throw large amounts of I/O at the disk, and the speed is then determined by how quickly the disk can move the I/Os through. 
In the A-B scenario you describe, I would ask about the disk's response time, not its throughput, because it's the waiting for a response that governs the speed of this system. Tue, 13 Oct 2009 00:22:28 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356512/ https://lwn.net/Articles/356512/ Yenya <div class="FormattedComment"> No, you can also increase the _throughput_ (= the number of sectors handled in a given -large- period of time) by adding pauses shorter than the seek time. To be more specific - let's have two readers, A and B, each reading from its own area of the disk, [A] and [B] respectively. For the sake of simplicity let's assume that two subsequent operations within the area [A] or within the area [B] do not require a seek and are fast, while a read from the area [A] followed by a read from the area [B] requires a seek, which is much slower. Then it is definitely better from the throughput point of view to issue the operations in the following order:<br> <p> [A]-pause-[A]-seek-[B]-pause-[B]-seek-[A]-pause-[A]-...<br> <p> than the "no-pause" variant of<br> <p> [A]-seek-[B]-seek-[A]-seek-[B]-seek-...<br> <p> It is not a bursty workload or a response-time-critical workload. It is an "unlimited supply of work" batch workload by my definition. And it has higher throughput with the pauses added than without them.<br> </div> Mon, 12 Oct 2009 07:56:10 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356293/ https://lwn.net/Articles/356293/ giraffedata <blockquote> You just can't improve throughput by deliberately letting the disk sit idle. <blockquote> In fact, you can. Think of avoiding some seeks and issuing sequential operations with shorter-than-seek-time delays in between. </blockquote> </blockquote> <p> You'll have to be more specific. <p> It sounds like you're talking about a strategy for improving response time for a bursty workload, whereas throughput is meaningful only for a non-bursty unlimited supply of work. 
Fri, 09 Oct 2009 15:48:37 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356203/ https://lwn.net/Articles/356203/ Yenya <div class="FormattedComment"> <font class="QuotedText">&gt; You just can't improve throughput by deliberately letting the disk sit idle. </font><br> <p> In fact, you can. Think of avoiding some seeks and issuing sequential operations with shorter-than-seek-time delays in between.<br> </div> Fri, 09 Oct 2009 07:12:23 +0000 The CFQ "low latency" mode https://lwn.net/Articles/356155/ https://lwn.net/Articles/356155/ giraffedata <blockquote> Normally the scheduler will try to delay many new I/O requests for a short time in the hope that they can be joined with other requests which may come shortly thereafter. This behavior will minimize disk seeks and maximize I/O request size, so it is clearly good for throughput. </blockquote> <p> No, that's not clear at all. Minimizing disk seeks and maximizing I/O request size is clearly good for disk <b>efficiency</b> -- minimizing disk <b>utilization</b> for a given workload -- but for <b>throughput</b> to be meaningful, utilization has to be about 100%. When that's the case, I/O backs up into the Linux I/O queue, so that no extra delays are necessary in order to join requests with other requests. <p> You just can't improve throughput by deliberately letting the disk sit idle. 
Thu, 08 Oct 2009 23:20:18 +0000 What the 'low_latency' knob does https://lwn.net/Articles/356018/ https://lwn.net/Articles/356018/ Yenya <div class="FormattedComment"> Do you think the low-latency knob could also help in situations like ordinary filesystem traffic versus a background md-RAID resync?<br> <p> I had a pretty bad experience with CFQ on my FTP server (ftp.linux.cz, SW RAID-5 over 8x 1TB SATA drives) - the resync of the array with CFQ took about 3 days with the overall system responsiveness being pretty bad, while with the deadline iosched (which is what I am running now) it takes less than a day, and even then the system latency for things like typing commands into an ssh session is good (read: no noticeable change against the fully reconstructed array).<br> <p> -Yenya<br> </div> Thu, 08 Oct 2009 12:44:32 +0000 What the 'low_latency' knob does https://lwn.net/Articles/355987/ https://lwn.net/Articles/355987/ axboe <div class="FormattedComment"> The description of the 'low_latency' mode isn't very accurate, unfortunately; perhaps Jon didn't have his coffee before reading over it :-). Let me attempt to rectify that.<br> <p> The low_latency knob doesn't impact delays or merging; one of the key aspects of getting low latency for a series of operations (like starting your firefox while other IO is happening) is actually making sure we get the delays right. If we take the classic case of reader vs writer, the writer's dirtying speed will greatly exceed the writeback speed. So we always have tons of dirty pages waiting to be written. The normal reader, however, does dependent reads that are serialized by each other. When one read finishes, another will be issued by the reader very shortly. Achieving good throughput and latency for the reader in CFQ is accomplished by briefly waiting for another IO when one has completed. 
In CFQ, this is called idling.<br> <p> The two primary changes in behaviour for CFQ in -rc3 are letting seeky IO also idle, even if the hardware does command queuing (which most hardware does these days), and limiting the damage that async IO can do while sync IO is also happening. With the 'low_latency' knob switched on, CFQ will only slowly build up a queue depth of async writes. This greatly helps reduce the impact that a writer will have on system interactivity, since when we only slightly miss a sync idle window, the amount of async writeback sent to the device will be limited by the time since that last sync IO.<br> <p> The end result is that the desktop experience should be less impacted by background IO activity. It's also worth mentioning that the 'low_latency' setting defaults to on.<br> </div> Thu, 08 Oct 2009 07:11:32 +0000
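The pause-vs-seek arithmetic behind Yenya's [A]-pause-[A]-seek-[B] argument can be sketched numerically. This is an illustrative model, not a measurement: the 8 ms seek, 0.5 ms idle pause, and 0.1 ms sequential-read times are round-number assumptions chosen only so that a seek costs far more than a pause.

```python
# Illustrative model of Yenya's two schedules. All timings are
# made-up round numbers; only their ratios matter. The point is that
# when a seek costs far more than a short idle pause, batching reads
# per area (and deliberately idling between them) lowers ms-per-read.

SEEK_MS = 8.0    # assumed cost of moving the heads between [A] and [B]
PAUSE_MS = 0.5   # assumed idle wait for the reader's next dependent read
READ_MS = 0.1    # assumed cost of a sequential read within one area

def time_per_read_no_pause():
    # [A]-seek-[B]-seek-[A]-...: every read pays a full seek.
    return SEEK_MS + READ_MS

def time_per_read_with_pause(batch=4):
    # [A]-pause-[A]-...-seek-[B]-...: 'batch' reads share one seek;
    # each read after the first costs only the short idle pause.
    total = SEEK_MS + READ_MS + (batch - 1) * (PAUSE_MS + READ_MS)
    return total / batch

if __name__ == "__main__":
    print(f"no-pause schedule: {time_per_read_no_pause():.2f} ms/read")
    print(f"pause schedule:    {time_per_read_with_pause():.2f} ms/read")
```

With these assumed numbers, the no-pause schedule costs 8.1 ms per read while a batch of four costs about 2.5 ms per read, which is the sense in which a deliberately idle disk moves more sectors per unit time.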
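The STOP/CONT round-robin that efexis describes was a bash script; the following is a hypothetical Python rendering of the same idea, with the load threshold, rotation interval, and pid list all made-up for illustration. The signaling is separated from the rotation logic so the scheduling step is testable on its own.

```python
# Sketch of efexis's backup throttler: when load is high, freeze all
# backup processes with SIGSTOP and let exactly one run at a time,
# rotating every few seconds. Threshold and interval are assumptions.
import os
import signal
import time

LOAD_THRESHOLD = 4.0   # assumed 1-minute load-average trigger
ROTATE_SECONDS = 5     # assumed time slice given to each backup

def throttle_step(pids, turn, kill=os.kill):
    """One rotation: STOP every backup, then CONT only the pid whose
    turn it is, so the disk serves a single mostly-sequential reader.
    Returns the pid allowed to run this round."""
    running = pids[turn % len(pids)]
    for pid in pids:
        kill(pid, signal.SIGSTOP)
    kill(running, signal.SIGCONT)
    return running

def throttle_loop(pids):
    """Rotate through the backups while the load average stays high,
    then resume everything once the system has calmed down."""
    turn = 0
    while os.getloadavg()[0] > LOAD_THRESHOLD:
        throttle_step(pids, turn)
        time.sleep(ROTATE_SECONDS)
        turn += 1
    for pid in pids:
        os.kill(pid, signal.SIGCONT)
```

The `kill` parameter defaults to `os.kill` but can be swapped for a recording stub, which makes it easy to verify the STOP-everyone-then-CONT-one ordering without real processes.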