
Smarter write throttling

Posted Aug 19, 2007 0:46 UTC (Sun) by i3839 (guest, #31386)
In reply to: Smarter write throttling by giraffedata
Parent article: Smarter write throttling

Looking at it as a feedback system, the only negative feedback that currently limits writing at all is the amount of memory (or a fraction of it, thanks to tunables, but in the end it's memory that limits it). Practically this means you can have a huge number of dirty pages outstanding, all waiting on one slow device.

Now combine two facts: dirty memory can't be freed quickly, and those pages probably won't be reused (rewrites don't happen that often, I think). The result is that you've just thrown away all your free memory, slowing everything down. So basically you're using most of your memory for buffering writes, even though write speed doesn't increase significantly beyond a certain buffer size.
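To make that concrete, here is a toy Python model (all rates and sizes are made-up numbers, not actual kernel behaviour): once the device is the bottleneck, enlarging the dirty-page buffer 100x leaves sustained throughput unchanged.

```python
# Toy model: a writer dirties pages at `produce_rate`; a device cleans
# them at `device_rate`. Sustained throughput never exceeds the device
# rate, no matter how large the dirty-page buffer is allowed to grow.

def sustained_throughput(produce_rate, device_rate, buffer_limit, seconds):
    """Pages actually written out per tick, averaged over `seconds` ticks."""
    dirty = 0
    written = 0
    for _ in range(seconds):
        # Dirty new pages, but never beyond the buffer limit.
        dirty += min(produce_rate, buffer_limit - dirty)
        # The device cleans at most `device_rate` pages per tick.
        cleaned = min(dirty, device_rate)
        dirty -= cleaned
        written += cleaned
    return written / seconds

# Growing the buffer 100x does not change sustained throughput:
small = sustained_throughput(produce_rate=500, device_rate=100,
                             buffer_limit=1_000, seconds=1_000)
large = sustained_throughput(produce_rate=500, device_rate=100,
                             buffer_limit=100_000, seconds=1_000)
```

Both runs settle at the device rate of 100 pages per tick; the only thing the larger buffer buys is far more memory tied up in dirty pages.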

What is throttled here is the speed of dirtying pages, not the speed of writeout, in order to keep the number of dirty pages bounded. (Although the dirtying is throttled, the long-run average should in both cases equal the write speed of the device, so perhaps "throttling" isn't quite the right word.)
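A minimal Python sketch of that, with hypothetical rates: the writer is only blocked when it hits the dirty limit, so its instantaneous rate is bursty, but the long-run average it achieves is the device's writeout rate, not the rate it asked for.

```python
# Toy illustration: the writer is only held back when dirty pages hit
# the limit, so its instantaneous dirtying rate is bursty, yet its
# long-run average must match the device's writeout speed.

DEVICE_RATE = 100    # pages the device writes back per tick
DIRTY_LIMIT = 1_000  # threshold at which dirtying is blocked
WANT_RATE = 700      # pages the writer would like to dirty per tick

dirty = 0
dirtied_per_tick = []
for _ in range(10_000):
    allowed = min(WANT_RATE, DIRTY_LIMIT - dirty)  # blocked at the limit
    dirty += allowed
    dirtied_per_tick.append(allowed)
    dirty -= min(dirty, DEVICE_RATE)               # writeback

average = sum(dirtied_per_tick) / len(dirtied_per_tick)
# `average` converges to DEVICE_RATE, not WANT_RATE.
```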

The deadlock mentioned is, AFAIK, the situation where the system is under heavy memory pressure and wants to free some memory. It tries to do so by writing out dirty pages, but doing that may itself require allocating some memory, which can't be done because memory is exactly what's lacking: hence a deadlock. But I don't know whether that is the LVM example mentioned, or a different case.
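The circular dependency can be sketched as a toy Python model (the function name and the one-page I/O overhead are invented for illustration; real kernels avoid this trap with reserved memory such as mempools and PF_MEMALLOC allocations):

```python
# Toy model of the writeback deadlock: reclaiming memory requires
# writing dirty pages out, but submitting that I/O itself needs a
# small allocation; with zero free pages, neither step can proceed.

def try_reclaim(free_pages, dirty_pages, io_overhead=1):
    """Return (free, dirty) after one reclaim step, or None if stuck."""
    if dirty_pages == 0:
        return free_pages, dirty_pages   # nothing to write out
    if free_pages < io_overhead:
        return None                      # can't even build the I/O request
    # Spend the overhead, write one dirty page, then get both pages back.
    return free_pages + 1, dirty_pages - 1

# With one free page, reclaim makes progress; with none, it is stuck.
assert try_reclaim(free_pages=1, dirty_pages=50) == (2, 49)
assert try_reclaim(free_pages=0, dirty_pages=50) is None
```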


Smarter write throttling

Posted Aug 19, 2007 1:48 UTC (Sun) by giraffedata (subscriber, #1954)

What is throttled here is the speed of dirtying pages,

Actually, nothing is throttled here. Rather, the amount of memory made available for writebehind buffering is restricted, so writers have to wait for it. Tightening resource availability slows down users of the resource, as throttling would, but it's a fundamentally different concept. Resource restriction pushes back from the actual source of the problem (scarce memory), whereas throttling makes the requester of the resource simply ask for less, based on some higher-level determination that the resource manager would otherwise give it too much.
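The distinction can be sketched in Python (a toy contrast, not kernel code; all names and parameters are invented): resource restriction is a bounded pool you wait on, while throttling is a rate limiter that slows you down even when the resource is plentiful.

```python
import threading
import time

# Resource restriction: a bounded pool of (hypothetical) writeback
# buffers. A writer that wants one simply waits until one is free;
# the pushback comes directly from actual scarcity.
buffers = threading.BoundedSemaphore(value=4)

def restricted_write():
    with buffers:   # blocks only when all 4 buffers are in use
        pass        # ... fill the buffer, queue it for writeout ...

# Throttling: a token bucket slows the requester to a target rate,
# even when buffers happen to be plentiful at that moment.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens added per second
        self.burst = burst      # maximum stored tokens
        self.tokens = burst
        self.last = time.monotonic()

    def acquire(self):
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)  # wait for refill
```

A writer behind the semaphore runs at full speed whenever a buffer is free; a writer behind the bucket is held to `rate` operations per second regardless of how much memory is available.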

I think the article is really about smarter memory allocation for writebehind data, not smarter write throttling.

(To understand the essence of throttling, I think it helps to look at the origin of the word. It means to choke, as in squeezing a tube (sometimes a person's throat) to reduce the flow through it. The throttle of a classic gasoline engine prevents the fuel system from delivering fuel to the engine even though the engine is able to burn it.)

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds