
The io.weight I/O-bandwidth controller

Posted Jun 30, 2019 2:58 UTC (Sun) by marcH (subscriber, #57642)
Parent article: The io.weight I/O-bandwidth controller

> More commonly in recent times, though, the focus has shifted to latency: a process should be able to count on completing an I/O request within a bounded period of time. The controller should be able to provide those guarantees while still driving the underlying device at something close to its maximum rate.

Bounded latency for every stream/process while driving the underlying device close to its maximum rate: that's practically word for word the objective the people fixing bufferbloat set for themselves. I realize there are some differences; the main one is probably that packet loss is not just allowed in networking, it's the main signal. Yet I suspect there's a fair amount of overlap in the approaches. Are these two crowds connected to each other? Networking wasn't mentioned once in this article.

BTW, maybe more networking people would be reading this article if the keyword "latency" had been in the title, or even better in the name of the scheduler itself. While on the "marketing" topic: why call it a "controller"? Is that straight from some legacy name in the kernel code?


The io.weight I/O-bandwidth controller

Posted Jun 30, 2019 14:40 UTC (Sun) by Paf (subscriber, #91811) [Link]

If not controller, what would you call it?

The io.weight I/O-bandwidth controller

Posted Jun 30, 2019 22:35 UTC (Sun) by mtaht (subscriber, #11087) [Link]

I read the article and made popcorn! :) So many aspects of queueing theory apply to processor and I/O scheduling (and your local supermarket), and I'd like it if more folks had at least this book of Kleinrock's on their shelf ( https://www.amazon.com/Queueing-Systems-Vol-Computer-Appl... ), though all his work is worth reading. "Algorithms to Live By" is also good.

There was even an attempt once at applying an fq_codel-like technique to queue up commands for a graphics card, which worked really well, except that one of the possible commands included resetting the pipeline.

Anyway, given the deep buffers on an SSD device itself, something along the lines of BQL, using the completion interrupt to keep those buffers from getting too deep, might be good. For spinning rust, instead of SSDs, you'd have to weight the seek cost somehow, and in that case you actually do want any seeks "along the way" to get inserted into the on-device queue... aggh, I'm rooting for y'all to sort it out.
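For readers unfamiliar with BQL, the idea above can be sketched as a toy model (all names here are hypothetical, and this is a drastic simplification of the kernel's dynamic-queue-limits logic, not actual kernel code): cap the bytes in flight to the device, grow the cap when the device goes idle while work was still waiting, and shrink it when plenty remains queued at completion time.

```python
class ByteQueueLimit:
    """Toy BQL-style limiter for in-flight bytes to a device queue.

    A hypothetical sketch: the real kernel dql code is more subtle
    (hysteresis, slack tracking), but the feedback loop is the same shape.
    """

    def __init__(self, limit=64 * 1024, min_limit=4096, max_limit=1 << 20):
        self.limit = limit          # current cap on in-flight bytes
        self.min_limit = min_limit
        self.max_limit = max_limit
        self.inflight = 0           # bytes submitted but not yet completed

    def can_submit(self, nbytes):
        # Admit a request only if it fits under the current byte limit.
        return self.inflight + nbytes <= self.limit

    def submit(self, nbytes):
        assert self.can_submit(nbytes)
        self.inflight += nbytes

    def complete(self, nbytes, had_backlog):
        # Called from the completion-interrupt path. had_backlog says
        # whether the submitter still had work it couldn't queue.
        self.inflight -= nbytes
        if self.inflight == 0 and had_backlog:
            # Device ran dry while work was waiting: limit too small, grow.
            self.limit = min(self.limit * 2, self.max_limit)
        elif self.inflight > self.limit // 2:
            # Still more than half full at completion time: shrink.
            self.limit = max(self.limit - nbytes, self.min_limit)
```

The point of the feedback loop is that the on-device queue stays just deep enough to keep the device busy, so latency-sensitive requests don't sit behind a huge standing buffer.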


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds