Posted Apr 26, 2007 21:58 UTC (Thu) by slamb
In reply to: *Ouch*.
Parent article: Schedulers: the plot thickens
But, on the other hand, runqueue handling is an issue that, so far, every scheduler
has shown problems with, one time or another. The radical idea of doing away with them
altogether certainly deserves a closer look.
I'm surprised by the combination of no one doing this before and someone doing it now.
Before reading the recent article about the scheduler, I'd only seen priority queues implemented
as an array of plain queues in cases where there were only a handful of priorities. When there's
one per nice level (140?) or many more (priority is a function of nice level and timeslice left), it
seems like trees or heaps would be an obvious choice. Having a sorted structure seems much
simpler than doing these minor and major rotations to the array, with this secondary "expired" array.
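To make the contrast concrete, here is a minimal sketch (my own toy code, not anything from the kernel) of the heap alternative: a binary min-heap keyed on priority, giving O(log n) insert and pop with no active/expired array swapping. The `struct task` fields and the fixed capacity are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch: a binary min-heap keyed on priority (lower key = runs
 * sooner), the kind of sorted structure suggested above. Not kernel code. */
struct task { int key; int pid; };

struct heap {
    struct task slot[256];  /* fixed capacity, fine for a sketch */
    size_t n;
};

static void heap_push(struct heap *h, struct task t)
{
    size_t i = h->n++;
    /* Sift up: O(log n). */
    while (i > 0 && h->slot[(i - 1) / 2].key > t.key) {
        h->slot[i] = h->slot[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    h->slot[i] = t;
}

static struct task heap_pop(struct heap *h)
{
    struct task top = h->slot[0];   /* caller must ensure n > 0 */
    h->slot[0] = h->slot[--h->n];
    size_t i = 0;
    /* Sift down: O(log n). */
    for (;;) {
        size_t l = 2 * i + 1, r = l + 1, m = i;
        if (l < h->n && h->slot[l].key < h->slot[m].key) m = l;
        if (r < h->n && h->slot[r].key < h->slot[m].key) m = r;
        if (m == i) break;
        struct task tmp = h->slot[i];
        h->slot[i] = h->slot[m];
        h->slot[m] = tmp;
        i = m;
    }
    return top;
}
```

Picking the next task is always a pop of the minimum key, so there is nothing to rotate or recycle when a task's timeslice expires; you just push it back with a new key.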
So given that they originally did this a different way, the logical question is why. Was it so
the time complexity could be O(p) [*] rather than O(log n)? Well, now Ingo's apparently thinking
that's not important. How many processes was the O(1) scheduler designed to support? How long
does Ingo's scheduler take to run in that situation?
If O(log n) does turn out to be a problem, I wonder if a mostly-sorted soft heap, with its
amortized O(1) operations, would do better. It might be faster as a whole, and "corrupting" a
fixed percentage of priorities might not be a bad way to avoid total starvation of low-priority
processes, but the amortized bounds might make it too jerky to be usable.
[*] p = number of priorities. From skimming the description, it doesn't look like it was
ever O(1) like the cool kids said; they just considered p to be a constant 140.
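The footnote's point can be shown with a toy version of the array-of-queues idea (again my own sketch, not the kernel's implementation, which uses a multi-word bitmap and `sched_find_first_bit`): finding the next task means scanning for the first non-empty priority level, which is linear in p even though p is fixed at 140.

```c
#include <assert.h>

/* Toy sketch of an array-of-queues runqueue: one slot per priority plus
 * an occupancy bitmap. For brevity each level holds a single pid; the
 * real scheduler keeps a full list per level. */
#define NPRIO 140

struct prio_array {
    unsigned char occupied[NPRIO];  /* 1 if queue[prio] holds a task */
    int queue[NPRIO];
};

static void pa_enqueue(struct prio_array *a, int prio, int pid)
{
    a->queue[prio] = pid;
    a->occupied[prio] = 1;
}

/* Dequeue the highest-priority (lowest-numbered) task, or -1 if none.
 * The scan is O(p): "constant" only because p is pinned at 140. */
static int pa_pick_next(struct prio_array *a)
{
    for (int prio = 0; prio < NPRIO; prio++)
        if (a->occupied[prio]) {
            a->occupied[prio] = 0;
            return a->queue[prio];
        }
    return -1;
}
```

Hardware find-first-set instructions shrink the constant factor considerably, but the work still grows with the number of priority levels, which is exactly the O(p)-dressed-as-O(1) point above.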