
*Ouch*.

Posted Apr 19, 2007 6:09 UTC (Thu) by smurf (subscriber, #17840)
Parent article: Schedulers: the plot thickens

I can certainly understand Con.

But, on the other hand, runqueue handling is an issue that, so far, every scheduler has shown problems with at one time or another. The radical idea of doing away with them altogether certainly deserves a closer look.

I'm curious how the single-RB-tree idea will perform on a machine with a lot of CPUs. Off to check gmane.lists.linux.kernel ...



rbtree is per-CPU

Posted Apr 19, 2007 6:18 UTC (Thu) by axboe (subscriber, #904) [Link]

Hi,

The rbtree task timeline is per CPU, it's not a global entity. The latter would naturally never fly.
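As an illustrative sketch (not code from the thread): one ordered timeline per CPU, with the leftmost, least-run task picked next. Python's `bisect` on a sorted list stands in here for the kernel's red-black tree (both give O(log n) insertion and cheap leftmost lookup); `CpuTimeline`, the numeric keys, and the task names are all invented for the example.

```python
import bisect

class CpuTimeline:
    """Hypothetical per-CPU run timeline; the key plays the role of
    accumulated runtime, so the smallest key is the most deserving task."""

    def __init__(self):
        self._tasks = []  # kept sorted by (key, name)

    def enqueue(self, key, name):
        bisect.insort(self._tasks, (key, name))

    def pick_next(self):
        # The leftmost entry -- the task that has run the least -- goes next.
        return self._tasks.pop(0)[1] if self._tasks else None

# One timeline per CPU, so the hot path needs no cross-CPU locking.
timelines = [CpuTimeline() for _ in range(4)]
timelines[0].enqueue(10, "editor")
timelines[0].enqueue(3, "compiler")
print(timelines[0].pick_next())  # prints "compiler"
```

The point of keeping one such structure per CPU, as axboe notes, is exactly that each CPU operates on its own tree; only migration between CPUs ever touches another CPU's timeline.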

rbtree is per-CPU

Posted Apr 19, 2007 6:28 UTC (Thu) by kwchen (guest, #13445) [Link]

Has anyone experimented with one scheduling entity per numa-node, or one per physical CPU package etc, instead of current one per CPU?

rbtree is per-CPU

Posted Apr 19, 2007 6:47 UTC (Thu) by smurf (subscriber, #17840) [Link]

Gah. Forgive my sloppy use of language. By "single RB tree" I meant "a single tree to replace the former run queues structure", not "a single tree on the whole system". Process migration between CPUs is, after all, not going to go away.

On another front, despite Con being rather miffed by Ingo's original patch, the subsequent dialog between them is a model of mutual respect that lots of people can learn from. Myself included.

*Ouch*.

Posted Apr 26, 2007 21:58 UTC (Thu) by slamb (guest, #1070) [Link]

But, on the other hand, runqueue handling is an issue that, so far, every scheduler has shown problems with at one time or another. The radical idea of doing away with them altogether certainly deserves a closer look.

I'm surprised by the combination of no one doing this before and someone doing it now. Before reading the recent article about the scheduler, I'd only seen priority queues implemented as an array of plain queues in cases where there were only a handful of priorities. When there's one per nice level (140?) or many more (priority is a function of nice level and timeslice left), it seems like trees or heaps would be an obvious choice. Having a sorted structure seems much simpler than doing these minor and major rotations to the array, with this secondary "expired" array.
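For illustration (my sketch, not code from the thread), the array-of-plain-queues design works roughly like this: one FIFO per priority level plus a bitmap of non-empty levels, so picking the next task is a find-first-bit plus a dequeue, independent of how many tasks are runnable. The 140-level constant matches the O(1) scheduler's priority range; `PrioArray` and everything else here is invented for the sketch.

```python
from collections import deque

NUM_PRIOS = 140  # one queue per priority level, as in the O(1) scheduler

class PrioArray:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIOS)]
        self.bitmap = 0  # bit i set => queue i is non-empty

    def enqueue(self, prio, task):
        self.queues[prio].append(task)
        self.bitmap |= 1 << prio

    def pick_next(self):
        if not self.bitmap:
            return None
        # Isolate the lowest set bit: lowest index = highest priority.
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        task = self.queues[prio].popleft()
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)
        return task

pa = PrioArray()
pa.enqueue(5, "background job")
pa.enqueue(2, "interactive task")
print(pa.pick_next())  # prints "interactive task"
```

The trade-off slamb describes is visible here: `pick_next` costs a bit-scan over a fixed number of levels regardless of task count, but expressing priority as a fine-grained, continuously changing quantity would need a level per distinct value, which is where a sorted tree starts to look simpler.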

So given that they originally did this a different way, the logical question is why. Was it so the time complexity could be O(p) [*] rather than O(log n)? Well, now Ingo's apparently thinking that's not important. How many processes was the O(1) scheduler designed to support? How long does Ingo's scheduler take to run in that situation?
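A back-of-envelope comparison (my arithmetic, not from the comment): a balanced tree's depth grows so slowly that even implausibly long runqueues stay well below the fixed 140 levels the array-based design reserves.

```python
import math

# Depth of a balanced tree over n runnable tasks vs. the constant 140.
for n in (100, 10_000, 1_000_000):
    print(n, math.ceil(math.log2(n)))
# Even at a million runnable tasks per CPU, the tree is only ~20 levels
# deep -- small next to the 140 priority levels of the array design.
```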

If O(log n) does turn out to be a problem, I wonder whether a mostly-sorted soft heap, with its amortized O(1) operations, would do better. It could be faster as a whole, and "corrupting" a fixed percentage of priorities might not be a bad way to avoid total starvation of low-priority processes, but the amortized (rather than worst-case) bounds might make it too jerky to use.

[*] - p = number of priorities...from skimming the description, it doesn't look like it was ever O(1) like the cool kids said. They just considered p to be a constant 140.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds