LWN: Comments on "Interview: the return of the realtime preemption tree" https://lwn.net/Articles/319544/ This is a special feed containing comments posted to the individual LWN article titled "Interview: the return of the realtime preemption tree". en-us Wed, 03 Sep 2025 03:11:51 +0000 Wed, 03 Sep 2025 03:11:51 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Another Hard real time Linux https://lwn.net/Articles/320211/ https://lwn.net/Articles/320211/ razb <div class="FormattedComment"> <font class="QuotedText">&gt; Another Hard real time Linux</font><br> <font class="QuotedText">&gt; [Kernel] Posted Feb 20, 2009 13:25 UTC (Fri) by saffroy</font><br> &gt;<br> <font class="QuotedText">&gt; Another approach is to use a real-time hypervisor: you can have</font><br> <font class="QuotedText">&gt; real-time scheduling, (almost) full access to the bare-metal, and even</font><br> <font class="QuotedText">&gt; (more or less) friendly APIs to communicate with the other OS. You can</font><br> <font class="QuotedText">&gt; even have a full-featured RTOS running there.</font><br> Funny you mention it. I actually thought of using this technology to have a solution for single-CPU machines. But it turned out that hyper-threading is good enough for offsched, so I did not try it. But I very much agree, we do not utilize the machines enough. <br> <font class="QuotedText">&gt; BTW, is it reasonable to imagine the RT-preempt tree running kvm running</font><br> <font class="QuotedText">&gt; a RTOS ?</font><br> I don't know.<br> </div> Fri, 20 Feb 2009 22:26:46 +0000 Another Hard real time Linux https://lwn.net/Articles/320209/ https://lwn.net/Articles/320209/ razb <div class="FormattedComment"> <font class="QuotedText">&gt; Another Hard real time Linux</font><br> <font class="QuotedText">&gt; [Kernel] Posted Feb 19, 2009 10:12 UTC (Thu) by i3839</font><br> &gt;<br> <font class="QuotedText">&gt;&gt; correct. but it is limited only to:</font><br> <font class="QuotedText">&gt;&gt; 1. accessing ***vmalloc**** space ***directly*** . You can access any</font><br> <font class="QuotedText">&gt;&gt; kmalloc'ed address directly , and access vmalloc'ed space by walking</font><br> <font class="QuotedText">&gt;&gt; on the pages. what I mean is that you can access everything.</font><br> <font class="QuotedText">&gt;&gt; 2. unable to kmalloc</font><br> <font class="QuotedText">&gt;&gt; 3. unable to free memory. ( For example : kfree ).</font><br> &gt;<br> <font class="QuotedText">&gt; What's dangerous about accessing vmalloced space directly if it's</font><br> <font class="QuotedText">&gt; pinned? Or did I misunderstand?</font><br> vmalloc mappings are installed in the kernel master page table, in the<br> VMALLOC area, and are copied into a processor's active page tables only when<br> its MMU touches them and takes a fault. But, hey, offsched cannot fault.<br> kmalloc'ed pages are mapped statically and do not require faults.<br> <font class="QuotedText">&gt;&gt; You can access any facility in the kernel. you can send or receive</font><br> <font class="QuotedText">&gt;&gt; packets. and I do it on AMD-Intel machines successfully.</font><br> &gt;<br> <font class="QuotedText">&gt; Though those facilities may not access vmalloc space directly, nor</font><br> <font class="QuotedText">&gt; allocate/free memory? 
Seems very fragile, because you can't know if they</font><br> <font class="QuotedText">&gt; will in the future (assuming you audited all the code that may be</font><br> <font class="QuotedText">&gt; executed by those facilities, which is a lot of tricky work).</font><br> vmalloc memory is rarely used: it is used in audio drivers, and for<br> loading modules, which is no more than an annoying problem.<br> <p> <font class="QuotedText">&gt; How can you send and receive packets if you can't allocate the space</font><br> <font class="QuotedText">&gt; needed for them? Not with the standard networking stack, can you?</font><br> Recv: offsched is used for mere packet parsing. Once the parsing is done, the<br> packet is either moved to the kernel or dropped.<br> Send: pre-allocate all you need.<br> I am using a private UDP stack; UDP is not a big deal.<br> <p> <font class="QuotedText">&gt;&gt; gettimeofday is not a timer, it is a clock. try and schedule a task to</font><br> <font class="QuotedText">&gt;&gt; be run T microseconds from now, you will skew, and the more tasks, it</font><br> <font class="QuotedText">&gt;&gt; will skew more.</font><br> &gt;<br> <font class="QuotedText">&gt; Right, totally different, sorry. But you only run one task, so the timer</font><br> <font class="QuotedText">&gt; is just a more efficient way of not doing anything in the meantime?</font><br> Only one task? Why not have both recv and transmit? Why do you think<br> an OS processor is fully utilized?<br> Benchmarks show a speedup of 2.8 on an 8-core machine.<br> <font class="QuotedText">&gt;&gt; even with NAPI you may get your system to be jammed, and worst of all</font><br> <font class="QuotedText">&gt;&gt; even with unrelated traffic, offsched suggests another approach of</font><br> <font class="QuotedText">&gt;&gt; containing incoming traffic to a single or more cores. This way cpu0,</font><br> <font class="QuotedText">&gt;&gt; the main operating system processor, will not be at risk.</font><br> &gt;<br> <font class="QuotedText">&gt; This is a generic problem: Any (user or kernel) process can use too many</font><br> <font class="QuotedText">&gt; resources, slowing down the machine as a whole. Offsched doesn't solve</font><br> With NAPI we consume the entire system's computation power; with offsched we don't. I decided to call this the offsched containment concept.<br> <font class="QuotedText">&gt; that at all, except for some explicit kernel cases which are 'ported' to</font><br> <font class="QuotedText">&gt; offsched, which is a lot of work.</font><br> Yes, it is a lot of work, unfortunately. Currently I do not know how<br> much work it is to bring up a TCP stack in offsched context. Do you know of a good RT TCP stack?<br> Also, the 80-20 rule suggests that 20% of the code can handle 80% of the<br> cases, so I may find myself fixing only 20% of the TCP code. It very much depends on whether offsched will ever reach mainline. <br> <font class="QuotedText">&gt; realtime preemption, on the other hand, tries to solve this problem in a</font><br> <font class="QuotedText">&gt; more generic way.</font><br> <p> <font class="QuotedText">&gt; And moving networking to offsched may contain the damage to one core,</font><br> <font class="QuotedText">&gt; but it doesn't solve the real problem, e.g. sshing into the box doesn't</font><br> <font class="QuotedText">&gt; work quicker or better in any way. If the NIC generates more packets</font><br> <font class="QuotedText">&gt; than can be handled, the right solution is to drop some early. 
Basically</font><br> <font class="QuotedText">&gt; what you always do in an overload situation: Don't try to do everything,</font><br> <font class="QuotedText">&gt; drop some stuff.</font><br> Why a single NIC? Many appliances, if not most, are shipped with an<br> administration interface and a public interface.<br> The public one is the exposed interface: if it is under attack, the entire<br> system is under attack, especially in a world of 10G interfaces.<br> In offsched, we assign OFFSCHED-NAPI to the 10G interface....<br> <font class="QuotedText">&gt; Now the nasty thing is that it's hard to see the difference between a</font><br> <font class="QuotedText">&gt; DoS and just a very high load.</font><br> &gt;<br> <font class="QuotedText">&gt; Besides, handling the network packets with all cores instead of one may</font><br> <font class="QuotedText">&gt; be the difference between being DoSed and just slowed down.</font><br> Who says a single OFFSCHED core is used?<br> <font class="QuotedText">&gt;&gt; you cannot run user space with interrupts disabled. So you probably</font><br> <font class="QuotedText">&gt;&gt; meant kernel space, and it will look something like this:</font><br> &gt;<br> <font class="QuotedText">&gt; Bad wording on my part, sorry. No, I meant that all interrupt handlers</font><br> <font class="QuotedText">&gt; are executed on other cores than the "special" one, and the few that</font><br> This is soft real time. User space cannot do hard real time: you can<br> never guarantee meeting deadlines because you are in ring 3. If you want to use a high-priority kernel thread, you probably pre-allocate memory (well... I do). So? Better to use offsched.<br> <font class="QuotedText">&gt; would happen anyway are disabled semi-permanently. (The scheduling clock</font><br> <font class="QuotedText">&gt; can be disabled because a rt task is running and no involuntary</font><br> <font class="QuotedText">&gt; scheduling should happen. Easier now with dynticks though.)</font><br> It is a good idea; why not wrap the offsched timer with clockevents?<br> Thanks.<br> <font class="QuotedText">&gt; Basically moving the special kernel task running on that core to a</font><br> <font class="QuotedText">&gt; special user space task running on that core. Or at least add it as an</font><br> <font class="QuotedText">&gt; option. Add some special syscalls or character drivers to do the more</font><br> <font class="QuotedText">&gt; esoteric stuff and voila, all done.</font><br> <font class="QuotedText">&gt;&gt; but you will fail.</font><br> <font class="QuotedText">&gt;&gt; a processor must walk trough a quiescent state ; if you try it, you</font><br> <font class="QuotedText">&gt; will</font><br> <font class="QuotedText">&gt;&gt; have RCU starvation, and I have been there... :) . one of my papers</font><br> <font class="QuotedText">&gt;&gt; explains that.</font><br> &gt;<br> <font class="QuotedText">&gt; This problem is still there though. But it seems like a minor adjustment</font><br> <font class="QuotedText">&gt; to RCU to teach it that some cores should be ignored, or to keep track</font><br> <font class="QuotedText">&gt; if some cores did any RCU stuff at all (perhaps it already does that</font><br> <font class="QuotedText">&gt; now, didn't check).</font><br> &gt;<br> <font class="QuotedText">&gt; All in all what you more or less have is standard Linux kernel besides a</font><br> <font class="QuotedText">&gt; special mini-RT-OS, running on a separate core. 
Only, you extend the</font><br> <font class="QuotedText">&gt; current kernel to include the functionality of that RT-OS, and use other</font><br> <font class="QuotedText">&gt; bits and pieces of the kernel when convenient. This is better than a</font><br> <font class="QuotedText">&gt; totally separate RT-OS, but still comes with the disadvantages of one:</font><br> <font class="QuotedText">&gt; Very limited and communication with the rest of the system is tricky. If</font><br> <font class="QuotedText">&gt; done well it's a small step forwards, but why not think bigger and try</font><br> <font class="QuotedText">&gt; to solve the tougher problems?</font><br> Correct. I decided to call it a "hybrid system", because you<br> enjoy both the stability of a Linux server and OFFSCHED. If A is the size of<br> your software, and B is the size of the real-time code, B/A is likely<br> to be small. Why mess with a big RT system for such a small fraction?<br> You are more than welcome to suggest other strategies.<br> </div> Fri, 20 Feb 2009 22:19:12 +0000 Another Hard real time Linux https://lwn.net/Articles/320146/ https://lwn.net/Articles/320146/ saffroy <div class="FormattedComment"> Another approach is to use a real-time hypervisor: you can have real-time scheduling, (almost) full access to the bare metal, and even (more or less) friendly APIs to communicate with the other OS. You can even have a full-featured RTOS running there.<br> <p> BTW, is it reasonable to imagine the RT-preempt tree running kvm running an RTOS?<br> </div> Fri, 20 Feb 2009 13:25:52 +0000 yum repo? https://lwn.net/Articles/320053/ https://lwn.net/Articles/320053/ bkoz <div class="FormattedComment"> Looking for kernel-rt as well, but I don't see details on a new yum repo for the renewed realtime work.<br> </div> Thu, 19 Feb 2009 19:41:01 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/320008/ https://lwn.net/Articles/320008/ Lovechild <div class="FormattedComment"> Back in the day there was a very handy yum repo available. This made it trivially easy for users to test and was a good way to detect problem scenarios. I am hoping to see something like that return with this reinvigorated rt effort.<br> </div> Thu, 19 Feb 2009 14:59:07 +0000 Another Hard real time Linux https://lwn.net/Articles/319981/ https://lwn.net/Articles/319981/ i3839 <div class="FormattedComment"> <font class="QuotedText">&gt; correct. but it is limited only to:</font><br> <font class="QuotedText">&gt; 1. accessing ***vmalloc**** space ***directly*** . You can access any</font><br> <font class="QuotedText">&gt; kmalloc'ed address directly , and access vmalloc'ed space by walking</font><br> <font class="QuotedText">&gt; on the pages. what I mean is that you can access everything.</font><br> <font class="QuotedText">&gt; 2. unable to kmalloc</font><br> <font class="QuotedText">&gt; 3. unable to free memory. ( For example : kfree ).</font><br> <p> What's dangerous about accessing vmalloced space directly if it's pinned? Or did I misunderstand?<br> <p> <font class="QuotedText">&gt; You can access any facility in the kernel. you can send or receive</font><br> <font class="QuotedText">&gt; packets. and I do it on AMD-Intel machines successfully.</font><br> <p> Though those facilities may not access vmalloc space directly, nor allocate/free memory? 
Seems very fragile, because you can't know if they will in the future (assuming you audited all the code that may be executed by those facilities, which is a lot of tricky work).<br> <p> How can you send and receive packets if you can't allocate the space needed for them? Not with the standard networking stack, can you?<br> <p> <font class="QuotedText">&gt; gettimeofday is not a timer, it is a clock. try and schedule a task to</font><br> <font class="QuotedText">&gt; be run T microseconds from now, you will skew, and the more tasks, it</font><br> <font class="QuotedText">&gt; will skew more.</font><br> <p> Right, totally different, sorry. But you only run one task, so the timer is just a more efficient way of not doing anything in the meantime?<br> <p> <font class="QuotedText">&gt; even with NAPI you may get your system to be jammed, and worst of all</font><br> <font class="QuotedText">&gt; even with unrelated traffic, offsched suggests another approach of</font><br> <font class="QuotedText">&gt; containing incoming traffic to a single or more cores. This way cpu0,</font><br> <font class="QuotedText">&gt; the main operating system processor, will not be at risk.</font><br> <p> This is a generic problem: Any (user or kernel) process can use too many resources, slowing down the machine as a whole. Offsched doesn't solve that at all, except for some explicit kernel cases which are 'ported' to offsched, which is a lot of work.<br> <p> realtime preemption, on the other hand, tries to solve this problem in a more generic way.<br> <p> And moving networking to offsched may contain the damage to one core, but it doesn't solve the real problem, e.g. sshing into the box doesn't work quicker or better in any way. If the NIC generates more packets than can be handled, the right solution is to drop some early. Basically what you always do in an overload situation: Don't try to do everything, drop some stuff.<br> <p> Now the nasty thing is that it's hard to see the difference between a DoS and just a very high load.<br> <p> Besides, handling the network packets with all cores instead of one may be the difference between being DoSed and just slowed down.<br> <p> <font class="QuotedText">&gt; you cannot run user space with interrupts disabled. So you probably</font><br> <font class="QuotedText">&gt; meant kernel space, and it will look something like this:</font><br> <p> Bad wording on my part, sorry. No, I meant that all interrupt handlers are executed on other cores than the "special" one, and the few that would happen anyway are disabled semi-permanently. (The scheduling clock can be disabled because a rt task is running and no involuntary scheduling should happen. Easier now with dynticks though.)<br> <p> Basically moving the special kernel task running on that core to a special user space task running on that core. Or at least add it as an option. Add some special syscalls or character drivers to do the more esoteric stuff and voila, all done.<br> <p> <font class="QuotedText">&gt; but you will fail.</font><br> <font class="QuotedText">&gt; a processor must walk trough a quiescent state ; if you try it, you will</font><br> <font class="QuotedText">&gt; have RCU starvation, and I have been there... :) . one of my papers</font><br> <font class="QuotedText">&gt; explains that.</font><br> <p> This problem is still there though. 
But it seems like a minor adjustment to RCU to teach it that some cores should be ignored, or to keep track of whether some cores did any RCU stuff at all (perhaps it already does that now, didn't check).<br> <p> All in all what you more or less have is a standard Linux kernel alongside a special mini-RT-OS, running on a separate core. Only, you extend the current kernel to include the functionality of that RT-OS, and use other bits and pieces of the kernel when convenient. This is better than a totally separate RT-OS, but still comes with the disadvantages of one: it is very limited, and communication with the rest of the system is tricky. If done well it's a small step forwards, but why not think bigger and try to solve the tougher problems?<br> <p> </div> Thu, 19 Feb 2009 10:12:44 +0000 Another Hard real time Linux https://lwn.net/Articles/319972/ https://lwn.net/Articles/319972/ razb <div class="FormattedComment"> <font class="QuotedText">&gt;If I understood your idea correctly, you basically just run some very limited kernel code on a dedicated core with all unrelated interrupts etc. disabled?</font><br> <p> Correct, but it is limited in only these ways:<br> 1. unable to access ***vmalloc*** space ***directly***. You can access any kmalloc'ed address directly, and access vmalloc'ed space by walking the pages; what I mean is that you can access everything. <br> 2. unable to kmalloc<br> 3. unable to free memory (for example: kfree).<br> <p> <font class="QuotedText">&gt;- 1us accurate timer.</font><br> <font class="QuotedText">&gt;Standard gettimeofday gives me that. The system has plenty very accurate timers, the problem is transferring that info fast enough to where it's needed.</font><br> gettimeofday is not a timer, it is a clock. Try to schedule a task to be run T microseconds from now: you will skew, and the more tasks there are, the more it will skew.<br> <p> <font class="QuotedText">&gt;- Firewall/routing/etc. offloading.</font><br> <font class="QuotedText">&gt;This is totally real-time unrelated. Basically it wastes one whole core on doing that instead of letting that core do also other things, and adds extra communication overhead between cores/subsystems (still need to get the packets from somewhere and tell which ones go where etc). It seems the same can be achieved by pinning the NIC interrupts to one core and &gt;giving all network related stuff highest priority.</font><br> <p> First, you are correct: it is unrelated to real time. offsched is not just for real-time use, but for many other things. Having high ingest traffic means you will probably enable NAPI, and NAPI disables incoming interrupts to reduce interrupt overhead; yet even with NAPI you may get your system jammed, and worst of all even with unrelated traffic. offsched suggests another approach of containing incoming traffic to one or more cores. This way cpu0, the main operating system processor, will not be at risk. Also, in regard to the waste of processors, again you are correct; but offsched is not meant to be used on your laptop, but on appliances with several cores, which unfortunately never achieve linear speed-up.<br> <p> <font class="QuotedText">&gt;You basically replace standard processes with very limited kernel code running on dedicated core. I don't say this is a bad idea in itself, but for this to make sense you want to have many (independent, low power) cores. I suspect that PC hardware isn't very suitable for this, because too much is shared by cores. 
It probably makes more sense for embedded systems, but even there it's questionable because of the kernel code only limitation.</font><br> You can access any facility in the kernel. You can send or receive packets, and I do it successfully on AMD and Intel machines.<br> <p> <font class="QuotedText">&gt;What's the advantage of offsched compared to running a user space process at real-time priority pinned on a core with interrupts disabled?</font><br> You cannot run user space with interrupts disabled. So you probably meant kernel space, and it will look something like this:<br> cli<br> foo()<br> sti<br> But you will fail:<br> a processor must walk through a quiescent state; if you try it, you will have RCU starvation, and I have been there... :) One of my papers explains that. <br> <p> <font class="QuotedText">&gt;Or in other words, what problem does your approach solve?</font><br> I merely suggest a different approach to real time and security for machines with several cores or hyper-threading.<br> I am using offsched on my appliances for network work.<br> </div> Thu, 19 Feb 2009 08:34:38 +0000 Another Hard real time Linux https://lwn.net/Articles/319964/ https://lwn.net/Articles/319964/ i3839 <div class="FormattedComment"> If I understood your idea correctly, you basically just run some very limited kernel code on a dedicated core with all unrelated interrupts etc. disabled?<br> <p> That seems so limited that it has not much practical use. The biggest problems are that it can't run user code and that any communication with other cores easily breaks the real-time guarantee.<br> <p> Examples:<br> <p> - 1us accurate timer.<br> Standard gettimeofday gives me that. The system has plenty of very accurate timers; the problem is transferring that info fast enough to where it's needed.<br> <p> - Firewall/routing/etc. offloading.<br> This is totally real-time unrelated. Basically it wastes one whole core on doing that instead of letting that core also do other things, and adds extra communication overhead between cores/subsystems (still need to get the packets from somewhere and tell which ones go where etc). It seems the same can be achieved by pinning the NIC interrupts to one core and giving all network related stuff highest priority.<br> <p> You basically replace standard processes with very limited kernel code running on a dedicated core. I don't say this is a bad idea in itself, but for this to make sense you want to have many (independent, low power) cores. I suspect that PC hardware isn't very suitable for this, because too much is shared by cores. It probably makes more sense for embedded systems, but even there it's questionable because of the kernel-code-only limitation.<br> <p> What's the advantage of offsched compared to running a user space process at real-time priority pinned on a core with interrupts disabled?<br> <p> Or in other words, what problem does your approach solve?<br> <p> </div> Thu, 19 Feb 2009 06:44:40 +0000 Another Hard real time Linux https://lwn.net/Articles/319911/ https://lwn.net/Articles/319911/ razb <div class="FormattedComment"> Hello<br> I have written a piece of software called the offline scheduler (offsched). It is based on Linux's ability to offload a running processor. What I do is very simple:<br> 1. offload a processor.<br> 2. let this processor wander in my hook.<br> <p> Currently, I have written a 1-us timer. I will be happy for any criticism.<br> It is my master work. 
<br> <p> <a href="http://sos-linux.svn.sourceforge.net/viewvc/sos-linux/offsched/trunk/Documentation/OFFSCHED.pdf">http://sos-linux.svn.sourceforge.net/viewvc/sos-linux/off...</a><br> <p> Raz<br> <p> <p> <p> </div> Wed, 18 Feb 2009 21:51:36 +0000 Whatever? https://lwn.net/Articles/319895/ https://lwn.net/Articles/319895/ man_ls What do you mean, "whatever"? So despite all his good work (and his personal friendship with Linus), Ingo is just a minion of Microsoft? This is completely outrageous! We should discuss this issue because argh argh aaarrrghhh... Wed, 18 Feb 2009 19:54:19 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319760/ https://lwn.net/Articles/319760/ SEJeff <div class="FormattedComment"> Good point, then instead of being paid to work on Linux, a lot of them <br> would just do it for fun... Oh wait, a lot of them ALREADY work on Linux <br> for fun. Hmmmm Microsoft is in a heap of trouble then.<br> </div> Wed, 18 Feb 2009 01:11:03 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319659/ https://lwn.net/Articles/319659/ drag <div class="FormattedComment"> Whatever. <br> <p> He is just throwing a hint out to Microsoft. Basically he is saying:<br> <p> "Hey Microsoft, if you want stop Linux domination all you have to do is keep all the Linux developers fat, happy, drunk, rich, and hand out beach homes like candy"<br> </div> Tue, 17 Feb 2009 16:26:06 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319640/ https://lwn.net/Articles/319640/ nix <div class="FormattedComment"> That's just what you think. In fact Ingo is an emissary of the Dark Side, <br> but this goes unknown as any attempt to discuss it on public fora leads to <br> argh argh aaarrrghhh...<br> </div> Tue, 17 Feb 2009 14:25:57 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319632/ https://lwn.net/Articles/319632/ rahulsundaram <div class="FormattedComment"> He is working for Red Hat for a long time now. He is obviously just joking.<br> </div> Tue, 17 Feb 2009 13:37:41 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319626/ https://lwn.net/Articles/319626/ dambacher <div class="FormattedComment"> <a href="http://en.wikipedia.org/wiki/Irony">http://en.wikipedia.org/wiki/Irony</a><br> /dambacher<br> </div> Tue, 17 Feb 2009 12:34:32 +0000 Interview: the return of the realtime preemption tree https://lwn.net/Articles/319619/ https://lwn.net/Articles/319619/ mtk77 <div class="FormattedComment"> Interesting article. But I don't quite follow "All paid for by the nice folks from Microsoft btw". Is Ingo working there now?<br> </div> Tue, 17 Feb 2009 11:42:38 +0000 subscriber links https://lwn.net/Articles/319600/ https://lwn.net/Articles/319600/ garrison <p>From the LWN FAQ:</p> <blockquote><p>Where is it appropriate to post a subscriber link?</p> <p>Almost anywhere. Private mail, messages to project mailing lists, and weblog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.</p></blockquote> Tue, 17 Feb 2009 05:14:46 +0000 at last! https://lwn.net/Articles/319599/ https://lwn.net/Articles/319599/ quotemstr <div class="FormattedComment"> Erm, I would have asked before sending out that link.<br> </div> Tue, 17 Feb 2009 05:04:58 +0000 at last! 
https://lwn.net/Articles/319566/ https://lwn.net/Articles/319566/ nettings <div class="FormattedComment"> wow. this is major news for the linux audio crowd. i took the liberty of posting a subscriber's link to a public mailing list, i hope that's ok. thanks for this coverage!<br> </div> Mon, 16 Feb 2009 21:27:02 +0000
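As a concrete sketch of the user-space alternative i3839 describes in the thread above (a SCHED_FIFO task pinned to a reserved core, with its memory locked and its deadlines expressed as absolute times, which avoids the cumulative skew razb points out when sleeping "T microseconds from now"), the short C program below uses only standard Linux APIs. It is an illustration, not offsched itself: offsched runs in kernel context on a processor taken away from the scheduler, whereas this sketch assumes the core has already been isolated (for example with isolcpus=3) and that interrupt affinities have been steered to other cores by separate configuration. The CPU number, priority, and period are arbitrary placeholders.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define RT_CPU    3         /* assumed reserved via isolcpus= and IRQ affinity */
#define PERIOD_NS 100000L   /* 100 us period, arbitrary */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int main(void)
{
    cpu_set_t mask;
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec next;

    /* Pin this process to the reserved core and keep all of its pages resident. */
    CPU_ZERO(&mask);
    CPU_SET(RT_CPU, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Absolute deadlines: each wakeup is computed from the previous deadline,
     * not from "now", so timing errors do not accumulate. */
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 10000; i++) {
        timespec_add_ns(&next, PERIOD_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* ... periodic work here, touching only pre-allocated memory ... */
    }
    return 0;
}

Even with all of that, this stays at the soft-real-time end of the spectrum: as the thread notes, the task still shares caches, memory bandwidth and (absent further tuning) the timer tick with the rest of the system, and offsched's kernel-context approach additionally has to take care of RCU quiescent states explicitly.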