kernel developers who care about the desktop
Posted Jul 25, 2007 10:49 UTC (Wed) by mingo (guest, #31122)
In reply to: Oh, please by JoeBuck
Parent article: Interview with Con Kolivas (APC)
Yes. I for one am very much interested in the desktop, and i know that many other kernel developers are too - including Linus. During the past 3 years hundreds of patches (and a handful of major features) went from the -rt tree (which i maintain together with Thomas Gleixner) into the upstream kernel. [ And there are still 380 patches to go as of v2.6.23-rc1-rt2 - some of them have been waiting for upstream integration nearly 4 years - ouch! :-/ ]
Most of those patches were inspired by problems that are primarily related to the desktop. One of the largest features of v2.6.21 (dynticks and high resolution timers) is a feature that is mainly relevant on the desktop (laptops in particular).
It never hurts if a kernel feature helps all categories of Linux use - but in general, just about anything sane that helps the desktop helps in the server space too, if formulated generally enough. Inotify (now upstream) helps servers too. Dynticks (now upstream) helps servers too, high-resolution timers (now upstream) help servers too, PREEMPT_BKL (getting rid of the big kernel spinlock, now upstream too) helps servers as well, etc.
In terms of desktop latencies, the CPU scheduler was never really a huge issue in practice; we had lots of much graver latency problems that were patiently tracked down and fixed. (And, as many have pointed out already, I/O schedulers and the VM policies have a much bigger role in the everyday latencies that a desktop user sees - the CPU/task scheduler is a distant third in that regard.)
But CPU schedulers do stir emotions: everyone who is interested in OSs has some notion of what a CPU scheduler is, everyone is affected by it, so everyone has an opinion about how it should (or should not) be done. Fixes for serious but otherwise dull latencies deep in the kernel are hard to understand, so they stir a lot less emotion. (Which is a big benefit: it makes them all that much easier to merge ;-)
I can understand Con's frustration with the kernel patch process, and while i disagree with Con about some technological issues related to CPU scheduling (and have conceded others, not only in the SD/CFS scheduler discussion but in the past too: of the 250+ modifications made to sched.c in the past 2 years alone [authored by over 80 different kernel hackers], 15 were from Con), i actually fully agree with Con's points about swap-prefetch and disagree with the MM maintainers' decision about swap-prefetch.
I'd not be surprised if the core problem that lies behind swap-prefetch is something that we wanted to see fixed on servers too.
In any case, regardless of the technological merits, it never feels good to have a patch rejected - i was upset about it myself whenever it happened, and i'm upset about it even today when it happens. Rejection of my patches still happens quite frequently. (And if anyone knows about some insider old boy's network that gets kernel patches past Linus easily then please let me know, i'd like to join! ;-)
Posted Jul 26, 2007 1:52 UTC (Thu)
by ras (subscriber, #33059)
[Link] (3 responses)
The response time of the machine when a disk-bound task is running is a major issue. To the extent that in 2.6.8 you could literally kill a machine with a mke2fs, because it flooded the block cache and invoked the OOM killer, which then took out some important process or other. Newer kernels don't do that, but a mke2fs can still delay the start-up of vi by a minute or two.
Think about it. A single mke2fs, nice'd to 19, can bring a 4-CPU machine to its knees. You can tell the CPU scheduler that you want mke2fs to take only a small percentage of the available CPU time, but apparently there is no way to tell the I/O scheduler that it's not allowed to hog 95% of memory by swamping the block cache. Yuk!
Posted Jul 26, 2007 14:36 UTC (Thu)
by mingo (guest, #31122)
[Link] (2 responses)
ionice will solve that problem for you - it has been available since 2.6.13.
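For instance, the formatting run from the complaint above can be moved to ionice's "idle" I/O scheduling class. This is a sketch, not a transcript from the thread: the image path and sizes are illustrative, and a file-backed image is used so the demo touches no real disk.

```shell
# Create a small file-backed image so the demo formats no real device.
dd if=/dev/zero of=/tmp/ionice-demo.img bs=1M count=16 status=none

# Run mke2fs in the "idle" class (-c 3): it only gets disk bandwidth
# when nothing else is doing I/O.  Requires util-linux's ionice and
# kernel 2.6.13+ with the CFQ I/O scheduler.  Contrast with
# "nice -n 19 mke2fs ...", which only limits CPU time, not I/O.
ionice -c 3 mke2fs -F -q /tmp/ionice-demo.img

# A milder option is the best-effort class (2) at lowest priority (7),
# which can also be applied to an already-running process:
#   ionice -c 2 -n 7 -p <pid>

rm -f /tmp/ionice-demo.img
```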
Posted Jul 26, 2007 23:08 UTC (Thu)
by conman (guest, #14830)
[Link] (1 responses)
Have you tried using ionice? You'll find it has never worked.
Posted Jul 30, 2007 9:17 UTC (Mon)
by mingo (guest, #31122)
[Link]
Con Kolivas wrote:
> Have you tried using ionice? You'll find it has never worked.
hi Con - this is the first time i've seen you characterise ionice as "never working", so your statement is quite surprising to me - could you please elaborate on this criticism? Have you reported your problems with ionice to lkml?
I've used ionice myself and it works well within its boundaries.
Your point about IO & VM schedulers being the dominant cause of slowdowns rings true, and will become more so with the move to multiple CPUs. Most desktop machines have a load factor of less than 4, so with 4 CPUs every task that wants to run can have its own CPU and there is nothing for the scheduler to do.
