At FOSDEM 2009 (Free and Open
Source Software Developers' European Meeting) in Brussels, there were a
number of interesting talks about the state of power management in
Linux. Matthew Garrett from Red
Hat talked at length about aggressive power management for graphics
hardware. People tend to forget that graphics hardware is more than a
processor: it is not just the GPU that draws power, the graphics card's
memory, outputs, and, of course, the displays themselves all draw power as
well. Until now, most of the work on power management has focused on the
GPU, but if you want really good power management, you have to attack the
problem on all these fronts. And that's what Garrett is doing at Red Hat
and shared in his FOSDEM presentation.
The power consumption of the GPU can be decreased by two techniques:
clock gating or reclocking. Clock gating means that different bits of the
chip are disconnected from the clock when not in use, and thus less power
is drawn. However, this functionality has to be hardwired in the chip
design and it must be supported in the graphics driver. And that's where
Linux is still lagging behind, according to Garrett: "For a long time
Linux graphics support has focused on getting a picture. We can go further
now, but we just need the documentation to adapt the drivers." Clock
gating has no negative effect whatsoever on the performance of the GPU.
Reclocking is another story: when the GPU is running at a frequency of 600 MHz and you reclock/underclock it to 100 MHz, this results in a massive reduction in power usage, but it also means that the performance is reduced accordingly. Garrett cited a difference of 5 W when clock gating and reclocking are combined on Radeon graphics hardware.
The second component that can be optimized is memory: each memory access
draws power. So what can we do about power consumption of memory? Read less
often (which is essentially reclocking) or read less memory. Reducing the
memory clock can save you again around 5 W, but it introduces visual
artifacts on the screen if reclocking happens while the screen is being
scanned. The other interesting route (read less memory) comes down to
compressing the framebuffer. Most recent Intel graphics chipsets implement
this with run-length encoding (RLE) of the screen contents on a
line-by-line basis. Garrett notes that this means your desktop background can make
a difference in battery life: vertical gradients compress very well using
this scheme, but horizontal gradients do not.
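A minimal sketch shows why line-based RLE favors vertical gradients; the actual compression scheme in Intel's hardware is more involved, and the pixel values and resolution here are purely illustrative:

```python
def rle_line(pixels):
    """Run-length encode one scanline as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

WIDTH, HEIGHT = 1024, 768

# Vertical gradient: every pixel in a scanline has the same value,
# so each line collapses to a single run.
vertical = [[y] * WIDTH for y in range(HEIGHT)]

# Horizontal gradient: the value changes at every pixel along the
# line, so line-based RLE gains nothing.
horizontal = [list(range(WIDTH)) for _ in range(HEIGHT)]

v_runs = sum(len(rle_line(line)) for line in vertical)
h_runs = sum(len(rle_line(line)) for line in horizontal)
print(v_runs)   # 768    (one run per line)
print(h_runs)   # 786432 (one run per pixel)
```

The vertical gradient compresses to one run per line, while the horizontal gradient produces as many runs as there are pixels, which is why the choice of wallpaper can show up in memory traffic and, ultimately, battery life.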
Another interesting consequence of the memory component is that periodic screen updates are really bad for power consumption. According to Garrett, moving the mouse cursor around causes an instantaneous increase in power consumption of 15 W. A blinking cursor draws 2 W, and even the display of seconds or a blinking colon in the system tray clock draws unnecessary power. Garrett adds philosophically: "The whole point of a blinking cursor is attracting your attention. But when you're typing, your attention is already going to your text input, and when you're not typing, it doesn't need your attention. So is it really needed to blink the cursor?"
The third component where power management can make a difference is the
outputs. Just powering off an unneeded output port saves around 0.5
W. If you know for sure that you don't need the external output on your
laptop, you can safely turn it off and gain a bit of battery time. However,
if you need to connect an external monitor or video projector afterward,
you will first need to power on the output port explicitly. It all comes
down to a tradeoff between functionality and power consumption.
The last (but not least) component of graphics hardware is the
displays. This is another place where reclocking can save some
watts. For example, the LVDS (Low-voltage differential signaling) link to
a laptop's LCD screen uses power at each clock transition. Reducing the
refresh rate reduces the power consumption. While CRT screens begin to
flicker if the refresh rate is too low, TFTs don't have this
problem. According to Garrett, most TFT screens can be driven at 30 Hz, but
then they tend to display visual artifacts. Garrett only recommends this
LVDS reclocking when the screen is idle, which saves around 0.5 W. If the
screen becomes active again, the system should return to a normal refresh
rate of 60 Hz. Another solution is DPMS (Display Power Management
Signaling): just turn off the screen when it's idle. Even a screensaver
drawing a black screen draws power, while DPMS really turns off the
screen.
So what's the current state of this "aggressive power management"?
Dynamic clock gating is implemented in most recent graphic cards. Future
developments will implement even more aggressive dynamic state management:
graphics hardware will power on functionality when the system needs it and
power it off when it's not used. Graphics drivers and the operating system
should control this without irritating the user. Garrett stresses that
power management has to be as invisible as possible, otherwise the user
will not be happy and stop caring about "green" computing. Garrett is now
working on the Radeon code to get some prototype functionality. As it
stands now, the combination of dynamic GPU and memory reclocking can save
10 to 15 W, and LVDS reclocking can save 0.5 W. For a laptop, this doesn't
make a huge difference, but it is still a significant increase in battery
life.
Power management in Nokia's next Maemo device
In the embedded track of FOSDEM, Peter De Schrijver of Nokia gave an insightful but very technical talk about advanced power management for OMAP3. This integrated chip platform made by Texas Instruments is based on an ARM Cortex-A8 processor and has a GPU, DSP (digital signal processor) and ISP (image signal processor). Because the chip is targeted at mobile devices, some advanced power management functionality is built in: the chip is divided into different voltage domains, and in each module the interface clock and functional clock can be turned off independently.
Nokia used an OMAP1 chip in the N770 internet tablet, and an OMAP2 chip in the N800 and N810 internet tablets. The devices use Nokia's Maemo platform, based on Debian GNU/Linux. Last year Nokia executive Ari Jaaksi revealed that their next Maemo device would use an OMAP3 chip. De Schrijver talked about the power management architecture of OMAP3, but also about the Linux support Nokia is developing for this functionality.
Power management on the OMAP3 can be subdivided into two types. On the
one hand, there is active power management. It's essentially the same
principle as reclocking in graphics hardware: with a lower clock frequency,
the chip is running on a lower voltage, resulting in less power
consumption. With dynamic voltage frequency scaling this can be handled
automatically. In Linux, the frequency scaling of the CPU is implemented in
the cpufreq driver, while for the core (the interconnects between different
blocks of the chip and some peripherals) there is a new API call for
drivers, named set_min_bus_tput(), which sets the minimum bus
throughput needed by a device.
On the other hand, when the chip is idle, there are solutions such as clock control, which can be implemented in software (by a driver) or hardware (an auto idle function). Moreover, clocks of different modules of the chip can be turned off selectively: if the interface clock is off, the core can sleep; if the functional clock is off, the module can sleep. The implementation of clock control in the OMAP3 chip is done in the clock framework of the linux-omap kernel, and Nokia is adding the patches to linux-arm now.
The OMAP3 chip supports four power states per domain: "on", "inactive",
"retention" and "off". In the "inactive" state, the chip works at normal
voltage but the clocks are stopped, while in the "retention" state the
chip works at a lower voltage. This means that the "inactive" state uses
more power than the "retention" state, but has a lower wakeup latency. The
shared resource framework (SRF) that determines the power state for each
domain of the chip is implemented by Texas Instruments and is hidden from
the driver programmer by an API. This API has to be implemented by the
power management framework and has to be used by the drivers. The API
documentation is not yet released, but De Schrijver said this will be added
into the kernel Documentation directory soon.
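The tradeoff between "inactive" and "retention" can be thought of as a break-even calculation: "retention" draws less power but costs extra energy on wakeup. The sketch below uses made-up power and wakeup-energy numbers (real figures depend on the silicon and operating voltages), and the function and constants are hypothetical, not part of the OMAP framework:

```python
# Hypothetical figures, chosen only to illustrate the tradeoff.
P_INACTIVE = 5.0    # mW drawn in "inactive" (clocks stopped, normal voltage)
P_RETENTION = 1.0   # mW drawn in "retention" (lower voltage)
E_WAKEUP = 40.0     # extra energy (uJ) needed to wake from "retention"
                    # compared with waking from "inactive"

def best_state(idle_us):
    """Pick the state with the lower total energy for one idle period.

    mW * us = nJ, so both terms below are in nanojoules; the
    wakeup penalty in uJ is scaled by 1000 to match.
    """
    e_inactive = P_INACTIVE * idle_us
    e_retention = P_RETENTION * idle_us + E_WAKEUP * 1000.0
    return "retention" if e_retention < e_inactive else "inactive"

print(best_state(1_000))    # short idle period -> "inactive"
print(best_state(100_000))  # long idle period  -> "retention"
```

For short idle periods the wakeup penalty dominates and the higher-latency, lower-power state never pays for itself; a real power management framework makes essentially this decision, with measured numbers, every time a domain goes idle.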
The "off" mode has some challenges: while the power management framework can handle saving and restoring the state of the CPU, memory controller, and other components, each driver has to handle its own module. This means reinitializing the module when the functional clock is enabled, and saving and restoring the context and shadow registers in memory.
In his talk, De Schrijver also gave a status update on the work. The
"retention" state works. Basic "off" mode works on most development
boards; drivers are being adapted for "off" mode now and will be ready at
the end of February. All this code is being merged into the linux-arm
kernel tree, but eventually it will be merged into the mainline
kernel. According to De Schrijver, all these power management techniques
will be used in the next Nokia Maemo device: the long-awaited successor of
the N810.
Release engineering for a large project is always a tricky task. Balancing
the needs of new features, removing old cruft, and bug fixing while still
producing releases in a timely fashion is difficult. Python is currently
struggling with this as it is trying to determine which things go into a 3.0.1
release versus those that belong in 3.1.0. The discussion gives a glimpse
into the thinking that must go on as projects decide how, what, and
when to release.
It is very common to find bugs shortly after a release that would seem to
necessitate a bug fix release. Ofttimes these are bugs that would have been
considered show-stopping had they been found before the release. But what
about features that were supposed to be dropped, after having been
deprecated for several releases, but were mistakenly left in? That is one
of the current dilemmas facing Python.
One of the changes made in Python 3.0 was a change
to comparisons and, in particular, removing the cmp()
function. That function takes two arguments, returning -1, 0, or 1 based
on whether the first argument is less than, equal to, or greater than the
second. Python 3.0 set out to clean up some of the "warts" of the language;
cmp() could be handled in other, more efficient ways. The only
problem is: cmp() didn't really get removed from the Python
3.0 release in December.
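As an illustration of those "other, more efficient ways": Python 3 replaces comparison functions with key functions, and the three-way comparison itself can be recovered with a widely cited idiom.

```python
# Python 2 style: sort with a cmp() comparison function, e.g.
#   sorted(words, cmp=lambda a, b: cmp(len(a), len(b)))
# Python 3 removed both cmp() and the cmp= argument. A key function
# expresses the same ordering more efficiently, since it is called
# once per element instead of once per comparison.
words = ["kernel", "io", "cmp", "python"]
print(sorted(words, key=len))   # ['io', 'cmp', 'kernel', 'python']

# Code that truly needs a three-way result can define its own
# replacement for the removed builtin:
def cmp(a, b):
    return (a > b) - (a < b)

assert cmp(1, 2) == -1 and cmp(2, 2) == 0 and cmp(3, 2) == 1
```

This is exactly why leftover cmp() calls are a trap: a private definition like the one above keeps old code working, but code that silently relied on the builtin breaks the moment it is actually removed.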
It was recognized quite quickly (the bug report shows it being
reported three days after the release), but it wasn't exactly clear what to
do about it. There may now exist "valid" Python 3.0 programs that use
cmp() and function correctly. This led Guido van Rossum to say: "Bah. That means
we'll have to start deprecating cmp() in 3.1, and won't
be able to remove it until 3.2 or 3.3. :-)" He seems to have only
been half-serious, as the smiley might indicate, eventually concluding: "OK, remove it
in 3.0.1, provided that's released this year." Unfortunately, the
"this year" he was referring to is 2008.
Because Python 3 was such a major shift in the language, the 2to3
tool was created to help fix old code to work with the new interpreter.
But 2to3 did not change calls to cmp(), so code created by
that tool will run in Python 3.0. That makes for a bit of a tangle, as van
Rossum explained:
Well, since 2to3 doesn't remove cmp, and it actually works, it's
likely that people will be accidentally depending on it in code
converted from 2.x. In the past, where there was a discrepancy between
docs and code, we've often ruled in favor of the code using arguments
like "it always worked like this so we'll break working code if we
change it now". There's clearly an argument of timeliness there, which
is why we'd like to get this fixed ASAP. The alternative, which nobody
likes, would be to keep it around, deprecate it in 3.1, and remove it
in 3.2 or 3.3.
As of this writing, Python 3.0.1 is intended
for release on February 13 with the removal of cmp(). There
seem to be a number of reasons that the release slipped into 2009, not
least the holiday season that tends to eat up a fair chunk of December.
But it was also more complicated to remove cmp() than it at first
appeared. There were several standard libraries and tests that were still
using it as well as Python internals that still referred to it. Inevitably,
as those things were getting worked out, other problems cropped up.
There are some fairly serious performance problems with the new I/O
library, with some experiencing read performance three orders of magnitude
slower on Python 3.0. There are also problems with chunked HTTP responses
when using urllib. Both of these require fairly extensive fixes,
though, which also require lots of testing. It all adds up to a lot of
work, so folks start to wonder if much or all of the work shouldn't get
pushed into the 3.1 release which is targeted at an April or May time frame.
There are others who argue that the 3.0 series should be abandoned entirely
in the near term. Rather than have a 3.0.1 with substantial changes from
3.0—including the incompatible removal of cmp()—3.1
should be released quickly so that it is the release targeted by
developers. As Raymond Hettinger put it:
My preference is to drop 3.0 entirely (no [incompatible] bugfix release)
and in early February release 3.1 as the real 3.x that migrators ought
to aim for and that won't have [incompatible] bugfix releases. Then at
PyCon, we can have a real bug day and fix-up any chips in the paint.
There are some fairly important new features—notably moving the
new I/O to C for performance reasons—that will not make it for a
release in February, though. Since a 3.2 release would be quite a ways
off, those features would languish for too long. 3.1 release manager
Benjamin Peterson would rather see an
immediate 3.0.1 release:
However, it seems to me that there are
two kinds of issues: those like __cmp__ removal and some silly IO bugs
that have been fixed for a while and [are] waiting to be released.
There's also projects like io in c which are important, but would not
make the schedule you and I want for 3.0.1/3.1. It's for those longer
term features that I want 3.0.1 and 3.1. If we [immediately] released 3.1,
when would those longer term projects that are important for migration
make it to stable? 3.2 is probably a while off.
There are also concerns that an immediate release called 3.1 might lead to
confusion and unhappiness for users. Martin Löwis voiced those fears to general agreement:
I would fear that then 3.1 gets the same fate as 3.0. In May, we will
all think "what piece of junk was that 3.1 release, let's put it to
history", and replace it with 3.2. By then, users will wonder if there
is ever a 3.x release that is any good.
Part of the problem is the "no new features" rule for bug fix
releases—those that are typically numbered by bumping the third digit
of the version number. Python established that rule in the 2.x series, to try to protect
the "most conservative users" as van Rossum points out. Those users have not moved to
Python 3 yet, so van Rossum argues that the rule can be suspended:
Frankly, I don't really believe the users for whom those rules were
created are using 3.0 yet. Instead, I expect there to be two types of
users: people in the educational business who don't have a lot of
bridges to burn and are eager to use the new features; and developers
of serious Python software (e.g. Twisted) who are trying to figure out
how to port their code to 3.0. The first group isn't affected by the
changes we're considering here (e.g. removing cmp or some obscure
functions from the operator module). The latter group *may* be
affected, simply because they may have some pre-3.0 code using old
features that (by accident) still works under 3.0.
This argument seemed to help crystallize a consensus of sorts. There were
some other discussions of exactly which "features" should make an
appearance in 3.0.1, but the push for numbering the bug fix release as 3.1
seemed to fade. The 3.0.1 release is currently scheduled for February
13th, while other new features—undoubtedly along with additional
fixes—will come with the 3.1 release in April or May.
Part of what was considered in the deliberations was the impact on users
and what they will expect from how the releases are numbered. It is a
difficult problem, as KDE found
out a year ago. Users have certain expectations based on release
numbering, which are largely outside of a project's control. But, some
kinds of changes, especially those that are not backward compatible,
necessitate a "large enough" numeric change to indicate that.
It is a fine line, which is why Python has struggled with it. One hopes
that any development for Python 3—a large, incompatible language
overhaul itself—avoided using cmp(), and will then be
unaffected. If not, the relatively small window in time should keep the
number of affected programs to a minimum.
KDE's KRunner and GNOME's Do are both
descendants of the Run tools that have been part of desktop environments
for years. However, instead of allowing you to enter a single command, both
Do and KRunner are rapidly evolving into full-scale application launchers
that rival main menus as a tool from which to control the desktop. Both
require practice to use well, but their compactness on the screen may
appeal to intermediate to advanced users — especially those who
prefer keyboard shortcuts to using the mouse.
A new version of KRunner has just been released along with KDE 4.2, and
should be available soon in your distribution's repositories along with the
rest of the new version, although some distributions may not include it in
the default KDE installation.
By contrast, Do is less tightly integrated into its desktop's development
cycles, but version 0.8.0 of Do was released in late January. You can find
installation instructions on the Do project site. However, many of the distributions
listed do not yet have the latest version in their repositories, so, in
many cases, the best option is to compile the source code, after first
installing Mono support.
Like Do, KRunner opens in a small window. To use it, you press Alt+F2 to
start the program, then start typing. In response, KRunner displays a list
of programs that could complete your input, rather like tab completion in
the BASH shell, except in visual form.
In the simplest cases, what you type can be a command. On
this level, KRunner differs little from a Run command, aside from the fact
that you can tab to a selection or click it with the mouse.
However, two dozen plugins that are installed along with the basic program
extend KRunner's capabilities far beyond those of a Run command. Provided
that the calculator plugin is installed and enabled, you can enter basic
calculations in KRunner, using an asterisk (*) for a multiplication sign
and a forward slash (/) for division, along with the plus and minus
signs. Similarly, you can use KRunner to convert units of measurement, or to
open a web site for currency conversion. Other plugins allow you to open a
web search or to search bookmarks, contacts, recent
documents or your web browser history.
The one catch with many plugins is that you need to learn a simple syntax
in order to use them. For example, if you want to do a web search for "LWN"
using Google, you would enter "gg:LWN". In much the same way, if
you wanted to convert the average human body temperature from the
Fahrenheit to the Celsius scale, you would enter "98.6 F. in
C.". Fortunately, KRunner is well-documented, so you
should have little trouble learning the syntax for your favorite commands.
A small complication is that KRunner includes task-oriented and
command-oriented views. But apart from the positioning of suggestions, the
difference is chiefly what sort of completions KRunner offers. The main
advantage of the different views is that by carefully selecting them and
enabling or disabling plugins, you can make the completions more likely to
be the ones you want.
In addition to the two views, KRunner also offers a view of currently
running processes that you can use to kill misbehaving applications. Short
of a link to other system settings, KRunner could hardly be more of a
command center for desktop activities.
Do works in approximately the same way as KRunner, differing mostly in the
details. To invoke Do one generally uses the "Super + Space" (typically
Windows key along with space bar) combination. Like KRunner, Do works on
the most basic level by suggesting
completions for the shell command, binary, or task that you type. When the
completion you want appears, a Run button opens in a right-hand pane that
you can navigate to via the Tab key.
One of Do's main differences from KRunner is in some of the
plugins you can use.
As you would expect, Do uses
GNOME applications like Evolution and Rhythmbox to handle requests, while
KRunner uses KDE choices such as KMail or Amarok. Besides having thumbnail
file previews, Do is also noticeably more web-oriented than KRunner, with
plugins for blogging, RSS feeds, and Google Contacts. In fact, if you
choose, you can even use Do to write a tweet or short email.
The latest version of Do also includes support for themes. One of the most
useful of these themes is Docky, which
converts Do into a launchpad with configurable application icons, making it
more of a main menu replacement than ever.
Both KRunner and Do are convenient tools, and run almost as well under
other desktops as they do on their native ones. Both, too, amount to a
control center that is often more convenient than hunting down the
individual program in the sub-menus.
All the same, neither is a tool for a beginner. True, both support task
completions, so that you can, for instance, write an email without having
to remember what program is the default for emails on your desktop.
However, I suspect that most users are oriented to programs more than
tasks. Since neither of these programs offers a complete list of available
programs, new users may find either KRunner or Do hard to use. While a
traditional menu can be cumbersome, it does have the advantage of
displaying a complete list of possibilities. By comparison, in KRunner or
Do, you need to already know the possibilities. Otherwise, you can hardly
begin to enter one or search for it. And, to further complicate matters,
some users may not remember the necessary syntax to use certain plugins
unless they use the plugins constantly. This limitation affects both
KRunner and Do, although Do has a simpler interface.
But for more experienced users, after a brief learning period, programs
like KRunner or Do are probably more efficient than menus — not least
because you can use them while keeping both hands on the keyboard rather
than one straying to the mouse. You might compare the two programs to
learning touch-typing: Although neither is immediately accessible, the way
that a mouse and a menu are, once you are comfortable, both offer
significantly enhanced ease of use and efficiency.
In recent months, growing recognition for OpenStreetMap has led to an
explosion in imports of public and private data. Mapping every street,
lake, skiing piste and pizza takeaway in the world might sound like a fun
hobby, but being able to pull in your government's basic street network
makes the job a whole lot easier. This mix of "crowd sourced" map data from
volunteer efforts, private and public donations of data, and commercial
developments based on the results, is a classic open source story.
OpenStreetMap was founded by Steve Coast in 2004, borne of a frustration
with the prevailing preference for proprietary data in the UK. The Ordnance
Survey, which can trace its roots back to 1747, is the partly
government-funded agency behind some of the world's most detailed and
best-loved maps. Unfortunately, it charges an arm and a leg for the
underlying vector data. So out stepped Coast, equipped with a GPS, notepad
and pen,
followed by tens of thousands of volunteers all manually gathering the data
to enter into OpenStreetMap's database. To get a feel for the explosion of
data over the past year, look at the project's visualizations of edit activity.
Thankfully, help for the crowd was at hand from the start. Coast quickly
secured an import of GPS traces from a courier company for central London;
the donation cost the courier company nothing but was very helpful for
OpenStreetMap. Much more impressive imports began more recently, with the
US census bureau's TIGER
database bringing data for the entire street network for the United
States of America in late 2007. The Netherlands appeared in even finer
detail around the same time, thanks to a donation by a Dutch company.
This process has now rapidly accelerated. You can get an idea of the scale of the
import activity from these incomplete wiki pages on importing
government data, the catalogue of
major imports and the enormous list of
potential data sources. These imports vary from quite comprehensive
-- such as the Canadian Geobase -- to very
specific datasets like NAPTAN (UK public
transport access points) and UK
oil wells. Importing vectors for buildings in addition to roads has
been popular; examples include Boston in the USA and a
city in the Philippines.
Of course, most of these imports have come from governments and public
agencies who are empowered or required to release the data into the public
domain. Any import needs to be carefully reviewed to ensure that copyrights
- and database rights in Europe - aren't infringed. For those of us mapping
in countries like the UK, this means more walking and cycling, with only
occasional negotiations opening up niche data such as oil wells and bus
stops. Politics still holds the project back -- or makes for more fun,
depending on your perspective.
Politics was a driving force behind one of the most interesting recent
collaborations between volunteers and public / non-governmental
agencies. Whilst the world was watching the Israel-Palestine conflict on
TV, long-time OpenStreetMap volunteer and geospatial activist Mikel Maron
was attempting to produce high quality maps of the Gaza
strip. Maron worked with UN and aid agencies to obtain data,
gain the funds to buy aerial imagery that volunteers could trace, and
locate Palestinian expatriates who could fill in details from memory.
With commercial uses for OpenStreetMap emerging, such as recently
launched developer products, and free software projects like Marble integrating the maps into their
interfaces, OpenStreetMap is gaining clout. In the country that started it
all, a government-commissioned
study found that there would be more economic benefits for the UK if
map data was released into the public domain than under the current
proprietary model. Under pressure from a growing campaign, and these compelling
examples of the benefits of open collaboration, we might just see the
terrain shifting from a few interesting imports to a major change in
mainstream attitudes towards public data.
At the very least, you'll have a lot of high quality map data to play
with at your leisure in the future.
(Interested in adding data to OpenStreetMap? Tom Chance will be returning
in the near future with a look at how that process works.)
Page editor: Jonathan Corbet