
An API for specifying latency constraints

Modern processors support a number of power states. When there is nothing of any real interest going on, they can be instructed to power down to one of potentially several different levels. Since the processors on most systems are idle much of the time, this capability can be used to bring about a significant reduction in power use. Cutting power demand is most helpful on systems with limited power sources - laptops, portable music players, Linux-powered penguin robots, etc. - but it is a good thing to do in most other environments as well.

Powering down the CPU becomes an even more useful thing to do once a dynamic tick mechanism is in use - something which appears possible for the Linux i386 port in 2.6.19. The elimination of the periodic clock interrupt will allow the processor to sleep for longer periods of time when there is nothing to do. Longer sleeps can translate into deeper power saving modes, reducing consumption even further.

The problem that can come up, however, is that the more aggressive power management modes will, by their nature, cause the processor to take longer to get back into an operating state. So, as the processor is put more deeply to rest, the system's latency in responding to external events will increase. In some situations, that latency can cause the system to fail to operate properly. Audio or video data might get dropped, a network adapter may start to see errors, or that robotic penguin could fail to respond in time to a cyber-walrus threat. The usual response to that problem, beyond hunting walruses to extinction, is simply to disable the power-saving behavior, but such a drastic response should not really be necessary.

Various devices in the system, when operating in certain modes, will need to obtain responses from the system within a given period of time. The drivers for those devices know how the device is being operated at any given moment, so they know what the latency requirements are. If the system as a whole had that information, it could tune its operations to the minimum latency requirements in effect at the moment, and could change its operations as the requirements change. But there is no mechanism in the system for handling - and reacting to - this information.

Arjan van de Ven has set out to change this situation with a latency tracking infrastructure patch. This work adds a set of new functions which may be used by drivers to indicate their latency requirements:

    #include <linux/latency.h>

    void set_acceptable_latency(char *identifier, int usecs);
    void modify_acceptable_latency(char *identifier, int usecs);
    void remove_acceptable_latency(char *identifier);

When a driver enters a mode where it has specific latency requirements (a camera driver starts acquiring frame data, say), it can tell the system about the maximum latency it can handle with set_acceptable_latency(). The identifier parameter is only used for identifying the request later on; usecs is the maximum latency in microseconds. The latency requirement can be changed with modify_acceptable_latency(), or eliminated altogether with remove_acceptable_latency().
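As a rough sketch of how a driver might use these calls, consider the camera example above; the function names and the "cam0" identifier below are invented for illustration and are not part of the patch:

    #include <linux/latency.h>

    /* Hypothetical camera driver; only the set/modify/remove calls
     * come from the proposed API. */
    static void cam_start_capture(void)
    {
            /* While frames are being acquired, the system must respond
             * within 500 microseconds or data will be dropped. */
            set_acceptable_latency("cam0", 500);
    }

    static void cam_set_low_rate(void)
    {
            /* A slower capture mode tolerates a longer wakeup latency. */
            modify_acceptable_latency("cam0", 2000);
    }

    static void cam_stop_capture(void)
    {
            /* No constraint once the device is idle again. */
            remove_acceptable_latency("cam0");
    }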

The back end of the latency infrastructure includes a notifier chain for letting interested subsystems know when the maximum acceptable latency has changed. The current consumer of this information is the ACPI subsystem, which can use it to adjust the processor's idle state to meet that requirement. One could imagine that a smart dynamic tick implementation could use this information as well.
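The consumer side is not shown here; as a sketch, an interested subsystem would attach a standard notifier block to that chain. The register_latency_notifier() call below reflects the interface in the posted patch, but should be treated as an assumption in this example:

    #include <linux/latency.h>
    #include <linux/notifier.h>

    /* Called whenever the tightest acceptable latency changes; "latency"
     * carries the new maximum acceptable latency in microseconds. */
    static int example_latency_event(struct notifier_block *nb,
                                     unsigned long latency, void *data)
    {
            /* An idle-state consumer would compare "latency" against the
             * exit latency of each sleep state and avoid the slower ones. */
            return NOTIFY_OK;
    }

    static struct notifier_block example_latency_nb = {
            .notifier_call = example_latency_event,
    };

    static int __init example_init(void)
    {
            return register_latency_notifier(&example_latency_nb);
    }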

In the current patch, only one subsystem (the IPW2100 wireless network driver) declares its latency requirements. This version of the patch has been proposed for inclusion in the -mm kernel, however, with the idea that other driver maintainers could start to make use of it. Unless some sort of surprising objection comes up, the latency management infrastructure looks likely to be a part of the 2.6.19 kernel.

Index entries for this article
Kernel: ACPI
Kernel: Latency



Things like this just tickle me pink :-)

Posted Aug 31, 2006 6:13 UTC (Thu) by felixfix (subscriber, #242) [Link] (1 responses)

This kind of response to things just can't be beat. No doubt commercial OS vendors (Sun, Microsoft) could act this fast internally, but then it would be ages before it hit the customers and before the new API got some traction with other developers. Even the old kernel dev model, with even numbers = stable and odd numbers = dev, wasn't as fast. There are a few hiccups with this fast release cycle, but they are pretty minor overall, and it constantly amazes me how well the kernel team handles this new dev cycle.

Go Linux!

Things like this just tickle me pink :-)

Posted Aug 31, 2006 14:37 UTC (Thu) by cventers (guest, #31465) [Link]

Absolutely. I still hear people very external to the Linux community gripe about the 2.6 process from time to time (leaving me to wonder why they care, since they're not users). Being on LKML, it's damn easy to notice just how effective 2.6 really is. With 2.6, everyone wins - users, distributors and developers.

An API for specifying latency constraints

Posted Aug 31, 2006 7:16 UTC (Thu) by ncm (guest, #165) [Link] (1 responses)

I wonder how this can be transitioned in. It seems to me that any driver that doesn't declare its latency requirement must be assumed to need the minimum possible latency. The result is that the last driver to be updated holds up implementing power-down. It's not as bad as it might be, as lots of drivers aren't even open much of the time, and most of us won't have the hardware they drive, but I wonder how we will know which driver is keeping our own machine from sleeping.

The problem seems similar to discovering which daemon is keeping the disk from ever spinning down.

An API for specifying latency constraints

Posted Aug 31, 2006 7:46 UTC (Thu) by arjan (subscriber, #36785) [Link]

Actually, it's sort of the other way around. Right now the power savings already happen, including the latency hit; it's just that they are blind to the actual requirements and assume anything goes.

With this patch, drivers and subsystems can basically put a brake on that behavior (which also means the power savings can be more aggressive, since they're less likely to run into bad behavior - but that's for a bit later).

Also, the good news is that for ALSA at least, the latency can be calculated in the subsystem, meaning there is no need to adjust all of the sound drivers - just a few lines of code in the subsystem are enough... (the ALSA patch is going to Andrew Morton today).

An API for specifying latency constraints

Posted Aug 31, 2006 19:28 UTC (Thu) by sepreece (guest, #19270) [Link]

I like this a lot.

However, I'd like to see something similar exposed to user space, since applications may also have latency requirements. This is especially true in consumer devices, where a lot of essential and relatively low-level functionality is likely to be built as user-space applications rather than as drivers.

Walrus vs. penguins

Posted Oct 19, 2006 12:45 UTC (Thu) by bjornen (guest, #38874) [Link] (2 responses)

Walruses (and polar bears) live in the cold Arctic seas of the Northern Hemisphere; penguins are aquatic, flightless birds living in the Southern Hemisphere.

Thus, they will never meet.

(Important stuff!)

Walrus vs. penguins

Posted Oct 19, 2006 19:18 UTC (Thu) by bronson (subscriber, #4806) [Link] (1 responses)

Oh yeah? How do you explain this then?

http://www.flakmag.com/jim/short/walrus.html

Walrus vs. penguins

Posted Nov 29, 2006 10:08 UTC (Wed) by jarkao2 (guest, #41960) [Link]

I've some suspicion this diary could be partly faked.
Are you sure any Scandinavians hunted walruses in 2001?

