Realtime and interrupt latency
[Posted June 14, 2005 by corbet]
The realtime Linux patches, covered at length (too much length, according
to some) on these pages, have been aimed primarily at reducing scheduling
latency: the amount of time it takes to switch control to a high-priority
process in response to an event which makes it runnable. Scheduling
latency is important, but the harder end of the realtime spectrum also
places a premium on interrupt latency: how long the system takes to respond
to a hardware interrupt. In many realtime situations, the processor must
answer quickly when the hardware asks for attention; excessive latency can
lead to lost data and failure to respond quickly enough to external
events. A Linux-based, realtime beer monitoring system may only have a
few milliseconds to deal with a "refrigerator door opened" interrupt before
one's roommate has swiped a bottle and left the scene. In this sort of
high-stakes deployment, interrupt latency is everything.
One of the biggest sources of interrupt latency is periods when the
processor has simply disabled interrupt delivery. Device drivers often
disable interrupts - on the local processor at least - to avoid creating
race conditions with themselves. Even (or especially) when spinlocks are
used to control concurrency with interrupt handlers, interrupts must be
disabled. Imagine a driver which duly acquires a spinlock before working
with its data structures. One of that driver's devices raises an interrupt while
the lock is held, and the interrupt handler runs on the same CPU. That
interrupt handler will try to acquire the same spinlock, and, finding it
busy, will proceed to spin until the lock becomes free. But, since the
interrupt handler has preempted the only thread which can ever release the
lock, it will spin forever. That is a different sort of interrupt latency
altogether, and one which even general-purpose kernels try to avoid. The
usual technique is simply to disable interrupts while holding a spinlock
which might be acquired by an interrupt handler. Disabling interrupts
solves the immediate problem, but it can lead to increased interrupt
latency.
Ingo Molnar's realtime preemption patches improve the situation by moving
interrupt handlers into their own processes. Since interrupt handlers are
scheduled with everything else, and since "spinlocks" no longer spin with
this patch set, the sort of deadlock described in the previous paragraph
cannot happen. So there is no longer any need to disable interrupts when
acquiring spinlocks. Changing the locking primitives thus eliminated most
of the kernel code which runs with interrupts disabled.
Daniel Walker recently noticed that one could do a little better - and
followed up with a patch showing how.
Fixing the locking primitives got rid of most of the driver code which runs
with interrupts turned off, but it did nothing for all of the places where
drivers explicitly disable interrupts themselves with a call to
local_irq_disable(). In most of these cases, the driver is simply
trying to avoid racing with its interrupt handler. But when interrupt handlers
run in their own threads, all
that is really needed to avoid concurrency problems is to disable
preemption. So Daniel's patch reworks local_irq_disable() to turn
off preemption while leaving the interrupt
configuration alone. For the few cases where it is truly necessary to
disable interrupts at the hardware level, hard_local_irq_disable()
(later renamed to raw_local_irq_disable())
has been provided.
One might argue that disabling preemption is counterproductive, given that
any code which runs with preemption disabled will contribute to the
scheduling latency problem. But any code which disables interrupts already
runs with preemption turned off, so the situation is not made any worse by
this patch. It could, in fact, be improved: all that really needs to be
protected against is preemption by one specific interrupt handler thread.
The extra scheduler complexity which would be required to implement that
solution is unlikely to be worth it, however; better to just fix the
drivers to use locks. So Ingo picked up Daniel's patch, spent a few
minutes completely reworking it, and added it to his realtime preemption
patch set.
Meanwhile, Karim Yaghmour was heard
wondering:
I'm not sure exactly why you guys are reinventing the wheel. Adeos
already does this soft-cli/sti stuff for you, it's been available
for a few years already, tested, and ported to a number of
architectures, and is generalized, why not just adopt it?
It does seem that not everybody understands what the Adeos patch (available
from the Gna server) does.
The description of Adeos, in its current form, as a "nanokernel" probably
does this work a disservice; what Adeos really comes down to is a patch to
the kernel's interrupt handling code.
To reduce interrupt latency, Adeos takes the classic approach of adding a
layer of indirection. The patch adds an "interrupt pipeline" to the
low-level, architecture-specific code. Any "domain" (read "piece of code")
can register itself with this interrupt pipeline, providing a priority as
it does so. Whenever a hardware interrupt arrives, Adeos works its way
down the pipeline, calling into the handler of each domain which has
expressed an interest in that interrupt. The higher-priority handlers are,
of course, called first.
In this world, the regular Linux interrupt subsystem is registered as just
another Adeos domain. Any code which absolutely, positively must have its
interrupts arrive within microseconds can register itself as a
higher-priority domain. When interrupt time comes, the high-priority code
can respond to the interrupt before Linux even hears about it. Since
nothing in Linux can possibly get in the way (unless it does evil things to
the hardware), there is no need to worry about which parts of Linux might
create latency problems.
Some benchmark results were recently
posted; they showed generally better performance from Adeos than from the
realtime preemption patch. Some issues have been raised, however, with how
those numbers were collected; the tests are set to be rerun in the near
future.
Meanwhile, a slow debate over inclusion of the realtime work continues,
with some participants pushing for the code to be merged eventually, others
being skeptical, and a few asking for the realtime discussion to be removed
from linux-kernel altogether. One viewpoint worth considering can be found
in this posting from Gerrit Huizenga, who
argued that the realtime patches of today resemble the scalability patches
from a few years ago, and that they must follow a similar path toward
inclusion:
I believe that any effort towards mainline support of RT has to
follow a similar set of guidelines. And, I believe strongly that
*most* of the RT code should be crafted so that every single laptop
user is running most of the code *and* benefiting from it. If most
of the RT code goes unused by most of the population, and the only
way to get an RT kernel of any reasonable level is to ask the
distros to build yet another configuration, RT will always be a
poor, undertested, underutilized ugly stepchild of Linux.
Ingo Molnar clearly understands this; he has consistently worked toward
making the realtime patches minimally intrusive and useful in many
situations. Parts of the realtime work have already been merged, and this
process may continue. There may come a time when developers will be
surprised to discover that most of the realtime preemption patch can be
found in the mainline.