LWN: Comments on "Smarter write throttling"
https://lwn.net/Articles/245600/

This is a special feed containing comments posted to the individual LWN article titled "Smarter write throttling".

Smarter write throttling
https://lwn.net/Articles/264169/
Posted Tue, 08 Jan 2008 12:00:10 +0000 by richlv

Wouldn't this also be significant for desktops? It seems the grandparent was talking about local disks only, even though the article explicitly mentions other use cases, like NFS over GPRS :). Desktops tend to have all kinds of relatively slow things connected to them - USB drives, cameras, portable players, all kinds of slow network stuff... If I understood the idea correctly, these could theoretically block/limit the local HDD, which would otherwise be the fastest device in an average desktop.

Smarter write throttling
https://lwn.net/Articles/257535/
Posted Wed, 07 Nov 2007 13:48:10 +0000 by mangoo

This patch is more about servers than desktops. This is a server with 1700 MB RAM, and it does quite a bit of IO:

    # while true; do grep Dirty /proc/meminfo; sleep 2s; done
    Dirty:       172404 kB
    Dirty:       173100 kB
    Dirty:       173332 kB
    Dirty:       173260 kB
    Dirty:       173692 kB
    Dirty:       173684 kB
    Dirty:       173724 kB
    Dirty:       174256 kB
    Dirty:       174772 kB
    Dirty:       175120 kB
    (...)

Device-level IO blocking
https://lwn.net/Articles/252685/
Posted Tue, 02 Oct 2007 17:30:54 +0000 by njs

I doubt that would make much difference -- the goal is that the rate at which writes are queued should match the rate at which writes are completed. Since there is a buffer, these rates don't have to match in fine detail; they only have to match on average.

Even over long time periods, different devices will be more or less fast; but the differences within a single device tend (I believe) to wash out over any time period longer than a few hundred milliseconds.

Smarter write throttling
https://lwn.net/Articles/252628/
Posted Tue, 02 Oct 2007 11:13:11 +0000 by gat3way

Man, I just wonder how much write I/O you would need on a modern-day system with, say, 2 GB of RAM for the dirty cache to cause significant memory pressure - that is, before pdflush wakes up and writes the pages back.

On my desktop system with 1 GB RAM - running an Apache server with a PHP application for network monitoring that does lots of MySQL INSERTs, two testing Tomcat 5 instances, KDE, Evolution, and Firefox - the memory occupied by dirty pages rarely exceeds 10 MB, according to /proc/meminfo.

Device-level IO blocking
https://lwn.net/Articles/252620/
Posted Tue, 02 Oct 2007 09:11:00 +0000 by Jel

Device-level IO blocking doesn't seem granular enough. Isn't there some way to manage disk writes with awareness of platters/heads and access times, so that you essentially gain the performance benefits of RAID?

That would be a much nicer layer to queue IO at.
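The per-device limits under discussion in the thread above can be sketched roughly: the article describes giving each backing device a share of the global dirty limit in proportion to how quickly it has recently completed writeback, so that queueing (as njs puts it) tracks completion on average. The toy C program below is a minimal sketch of that proportional arithmetic only, not the actual patch set - the real code uses decaying "floating proportions" rather than this naive sum, and every name and number here is made up:

    /*
     * Toy model of proportional per-device dirty limits: each device's
     * share of the global dirty limit follows its share of recently
     * completed writeback.  Hypothetical names and numbers throughout.
     */
    #include <stdio.h>

    struct bdi {                     /* stand-in for a backing device */
        const char *name;
        unsigned long completed;     /* recent writeback completions, pages */
    };

    static unsigned long per_bdi_limit(unsigned long global_limit,
                                       const struct bdi *dev,
                                       unsigned long total_completed)
    {
        if (total_completed == 0)
            return global_limit;     /* no history yet: no per-device cap */
        return (unsigned long)((double)global_limit *
                               dev->completed / total_completed);
    }

    int main(void)
    {
        struct bdi devs[] = {
            { "fast-sata", 90000 },  /* completes writeback quickly  */
            { "usb-stick",   500 },  /* slow device, few completions */
        };
        unsigned long total = devs[0].completed + devs[1].completed;
        unsigned long global_limit = 100000;   /* pages; arbitrary */

        for (int i = 0; i < 2; i++)
            printf("%-9s may hold up to %lu dirty pages\n",
                   devs[i].name,
                   per_bdi_limit(global_limit, &devs[i], total));
        return 0;
    }

With these inputs the slow USB stick is allowed only a sliver of the dirty limit, so - to richlv's point - a slow device can no longer pin most of memory while the fast local disk sits idle.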
Smarter write throttling
https://lwn.net/Articles/246060/
Posted Sun, 19 Aug 2007 01:48:33 +0000 by giraffedata

> What is throttled here is the speed of dirtying pages,

Actually, nothing is throttled here. Rather, the amount of memory made available for writebehind buffering is restricted, so folks have to wait for it. Tightening resource availability slows down the users of a resource, as throttling would, but it's a fundamentally different concept. Resource restriction pushes back from the actual source of the problem (scarce memory), whereas throttling makes the requester of the resource simply ask for less, based on some higher-level determination that the resource manager would otherwise give it too much.

I think the article is really about smarter memory allocation for writebehind data, not smarter write throttling.

(To understand the essence of throttling, I think it's good to look at the origin of the word. It means to choke, as in squeezing a tube (sometimes a person's throat) to reduce flow through it. The throttle of a classic gas engine prevents the fuel system from delivering fuel to the engine even though the engine is able to burn it.)

Streamed writes
https://lwn.net/Articles/246058/
Posted Sun, 19 Aug 2007 00:56:25 +0000 by i3839

What would be good to have is something similar to readahead, but for writes and in reverse: when streaming writing is detected, the process should be blocked aggressively so that it has very few dirty pages outstanding - just enough to get near-optimal write speed. Such streamed dirty pages could also be moved to the (head of the) LRU list so they leave the page cache quicker after they're written.

Pity I don't have the time to implement it now. :-(

Smarter write throttling
https://lwn.net/Articles/246056/
Posted Sun, 19 Aug 2007 00:46:44 +0000 by i3839

Looking at it as a feedback system, the current negative feedback that limits writing at all is the amount of memory (or a part of it, thanks to tunables, but in the end it's memory that limits it). Practically, this means you can have a huge number of dirty pages outstanding, all waiting on one slow device.

Now combine the fact that you can't get rid of dirty memory quickly with the fact that those pages probably aren't used anymore (rewrites don't happen that often, I think), and you've just managed to throw away all your free memory, slowing everything down. So basically you're using most of your memory for buffering writes, while the write speed doesn't increase significantly beyond a certain buffer size.

What is throttled here is the speed of dirtying pages, not the speed of writeout, to keep the number of dirty pages limited. (Although the dirtying is throttled, the average should in both cases be the write speed of the device, so perhaps "throttling" isn't quite the right word.)

The deadlock mentioned is, AFAIK, the situation where the system is under heavy memory pressure and wants to free some memory, which it tries to do by writing out dirty pages - but doing that might itself require allocating some memory, which can't be done because memory is exactly what's lacking; hence a deadlock. But I don't know if that's the LVM example mentioned, or another case.
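The allocate-in-order-to-free deadlock i3839 describes is what the kernel's mempool machinery exists to prevent, and it matches the "pools of memory at various levels" that giraffedata recommends below: reserve a minimum number of objects up front so the write-out path can always make progress. The fragment below is a kernel-style sketch; the mempool calls (mempool_create_slab_pool(), mempool_alloc()) are the real API, but struct wb_request and everything around it are hypothetical, with module boilerplate and error paths trimmed:

    /*
     * Sketch: pre-reserve write-out request objects so the flush path
     * can proceed even when the allocator has nothing left to give.
     */
    #include <linux/init.h>
    #include <linux/list.h>
    #include <linux/mempool.h>
    #include <linux/slab.h>

    struct wb_request {          /* hypothetical; stands in for e.g. a bio */
        struct list_head list;
        struct page *page;
    };

    static struct kmem_cache *wb_req_cache;
    static mempool_t *wb_req_pool;

    static int __init wb_pool_init(void)
    {
        wb_req_cache = KMEM_CACHE(wb_request, 0);
        if (!wb_req_cache)
            return -ENOMEM;
        /* Keep 16 requests in reserve, beyond the reach of reclaim. */
        wb_req_pool = mempool_create_slab_pool(16, wb_req_cache);
        return wb_req_pool ? 0 : -ENOMEM;
    }

    static struct wb_request *wb_req_get(void)
    {
        /*
         * mempool_alloc() tries the slab first; under pressure it
         * falls back to the reserve, and as a last resort sleeps until
         * another request is freed.  GFP_NOIO keeps reclaim from
         * recursing back into the I/O path, so the task flushing dirty
         * pages never waits on reclaim that is, in turn, waiting on
         * the flush.
         */
        return mempool_alloc(wb_req_pool, GFP_NOIO);
    }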
Smarter write throttling
https://lwn.net/Articles/245940/
Posted Fri, 17 Aug 2007 17:17:47 +0000 by giraffedata

> A lot of I/O through the LVM level could create large numbers of dirty pages destined for the physical device. Should things hit the dirty thresholds at the LVM level, however, the process could block before the physical drive starts writeback. In the worst case, the end result here is a hard deadlock of the system.

I don't see any deadlock in this description, but maybe I'm supposed to know something more about how LVM works. (It doesn't make sense to me, for example, that a physical drive starts writeback; I expect a physical drive to be the target of a writeback.)

But I've seen the pageout-based memory deadlock plenty of times in action, when a process that has to move in order for memory to get clean (maybe it holds a lock on some dirty file) blocks on a memory allocation. At least at one time, these were possible in Linux in various ways - network-based block devices and filesystems being the worst offenders - and people addressed them with tuning policies that sought to make it unlikely that the amount of clean, allocatable memory ever shrank to zero.

"Throttling" usually refers to that sort of tuning -- a more or less arbitrary limit placed on how fast something can go. I hope that's not what this is about, because it is actually possible to fix deadlocks -- to make them mathematically impossible while still making the full resources of the system available to be apportioned any way you like. It's not easy, but the techniques are well known (never wait for a level-N resource while holding a level-N+1 resource; have pools of memory at various levels).

I hate throttling. Throttling is waiting for something artificial. In a well-designed system, you can wait for a real resource and never suffer deadlock or unfairness.

I/O scheduler
https://lwn.net/Articles/245762/
Posted Thu, 16 Aug 2007 12:51:07 +0000 by corbet

Because there is a gap between when a page is dirtied and when it gets to the I/O scheduler. Pages are not scheduled for writeback immediately upon dirtying - that's the page cache in action. So, by the time the I/O scheduler gets involved, it's too late to keep more pages from being dirtied.

Smarter write throttling
https://lwn.net/Articles/245734/
Posted Thu, 16 Aug 2007 08:48:39 +0000 by edschofield

Why wouldn't the I/O scheduler be a good place to perform write throttling?
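The gap corbet describes is easy to observe from userspace. The demo below is a hypothetical sketch - the path /tmp/dirty-demo and the 256 MiB size are arbitrary, and /tmp must sit on a disk-backed filesystem, since tmpfs pages are not counted in Dirty. Each write() returns as soon as the pages are dirtied, long before any request reaches the I/O scheduler; only fsync() forces the I/O through:

    /*
     * Watch Dirty: in /proc/meminfo jump after buffered writes and
     * fall again after fsync(), showing the dirtying/writeback gap.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Parse the "Dirty:" line out of /proc/meminfo. */
    static long dirty_kb(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;

        while (f && fgets(line, sizeof line, f))
            if (sscanf(line, "Dirty: %ld kB", &kb) == 1)
                break;
        if (f)
            fclose(f);
        return kb;
    }

    int main(void)
    {
        static char buf[1 << 20];            /* 1 MiB of junk */
        int fd = open("/tmp/dirty-demo",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(buf, 'x', sizeof buf);

        printf("Dirty before writes: %ld kB\n", dirty_kb());

        /* Dirty 256 MiB of page cache.  Every write() returns before
         * the data goes anywhere near the I/O scheduler -- unless the
         * dirty thresholds kick in and throttle us, which is exactly
         * what the article is about. */
        for (int i = 0; i < 256; i++)
            if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                perror("write");
                return 1;
            }
        printf("Dirty after writes:  %ld kB\n", dirty_kb());

        fsync(fd);                  /* now the I/O actually happens */
        printf("Dirty after fsync:   %ld kB\n", dirty_kb());
        close(fd);
        return 0;
    }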