The io.weight I/O-bandwidth controller
Posted Jun 30, 2019 2:58 UTC (Sun) by marcH (subscriber, #57642)
Parent article: The io.weight I/O-bandwidth controller
Bounded latency for every stream/process while driving the underlying device close to its maximum rate: that's practically word for word the objective the people fixing bufferbloat set for themselves. Now I realize there are some differences. The main one is probably that packet loss is not just allowed in networking: it's the main signal. Yet I suspect there's a fair amount of overlap in the approaches. Are these two crowds connected to each other? Networking isn't mentioned once in this article.
BTW, maybe more networking people would be reading this article if the keyword "latency" had been in the title. Or, even better, in the name of the scheduler itself. Again on the "marketing" topic: why call it a "controller"? Straight from some legacy name in the kernel code, maybe?
Posted Jun 30, 2019 14:40 UTC (Sun) by Paf (subscriber, #91811)

Posted Jun 30, 2019 22:35 UTC (Sun) by mtaht (subscriber, #11087)

There was even an attempt once at applying an fq_codel-like technique to queue up commands for a graphics card - which worked really well, except that one of the possible commands was resetting the pipeline.
Anyway, given the deep buffers on an SSD itself, something along the lines of BQL, using the completion interrupt to keep those buffers from getting too deep, might be good. For spinning rust, instead of SSDs, you'd have to weight the seeks somehow, and in that case you actually do want any seeks "along the way" to get inserted into the on-device queue... aggh, I'm rooting for y'all to sort it out.
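
Roughly, the BQL analogy might look like the sketch below. Every name in it (struct inflight_limit, il_try_submit, il_complete) is made up for illustration rather than taken from the kernel, and real BQL does its accounting with a more careful dynamic-queue-limits algorithm than this grow/shrink heuristic; the point is just that completion feedback, not a fixed queue depth, drives the budget of bytes allowed in flight.

    /* Rough sketch of a BQL-style in-flight byte limit for a block device
     * queue.  All names are hypothetical, invented for this example.
     * Idea: grow the budget only when completions show the device went
     * idle while submitters were being throttled; shrink it when the
     * budget was never exhausted, keeping on-device queueing shallow. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct inflight_limit {
            size_t limit;      /* current budget of in-flight bytes      */
            size_t inflight;   /* bytes submitted but not yet completed  */
            bool   throttled;  /* did we refuse a submission recently?   */
    };

    /* Called before dispatching a request of 'bytes' to the device. */
    static bool il_try_submit(struct inflight_limit *il, size_t bytes)
    {
            if (il->inflight + bytes > il->limit) {
                    il->throttled = true;   /* push back on the submitter */
                    return false;
            }
            il->inflight += bytes;
            return true;
    }

    /* Called from the completion path when the device finishes 'bytes'. */
    static void il_complete(struct inflight_limit *il, size_t bytes)
    {
            il->inflight -= bytes;
            if (il->inflight == 0 && il->throttled) {
                    /* Device drained while we were still refusing work:
                     * the budget is too small, so grow it. */
                    il->limit += il->limit / 4;
            } else if (!il->throttled) {
                    /* The whole budget was never needed: shrink toward
                     * what the device actually keeps busy with. */
                    il->limit -= il->limit / 8;
                    if (il->limit < 4096)
                            il->limit = 4096;
            }
            il->throttled = false;
    }

    int main(void)
    {
            struct inflight_limit il = { .limit = 64 * 1024 };

            /* Toy usage: submit and complete two 48 KiB requests and
             * watch the budget adapt downward. */
            for (int i = 0; i < 2; i++) {
                    if (il_try_submit(&il, 48 * 1024))
                            il_complete(&il, 48 * 1024);
            }
            printf("limit is now %zu bytes\n", il.limit);
            return 0;
    }

The seek-weighting problem for rotating drives doesn't fit this byte-counting sketch at all, which is exactly the complication noted above.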
     