Nobody likes binary blobs. When those blobs come in the form of firmware
for peripheral devices, though, it is probably fair to say that most of us
are able to live with their existence. Yes, it would be nicer to have the
source and to be able to create modified firmware loads, but, as long as
the firmware itself is distributable, Linux users usually do not worry
about it. That is not true of all users, though, as can be seen from an
effort underway in the GTA04 project.
GTA04 bills itself as the
next-generation OpenMoko phone. The end product is intended to be a board
that can fit inside an existing OpenMoko handset case but which provides a
number of features the OpenMoko phone never had: 3G/HSPA connectivity, USB
2.0 on-the-go support, Bluetooth, an FM transceiver, various sensors
(barometer, gyroscope, ...), and the obligatory flashlight device. Also
provided will be a Debian port to make the whole thing work. The hardware
design is reasonably far along, with boards from an
early adopter program about to make their way to developers. There is,
though, one nagging problem that the project would still like to solve.
The GTA04 uses a Marvell 8686 "Libertas" chip for its WiFi connectivity;
that is the same chip used in the first generation OLPC XO laptop. That
chip requires a significant firmware blob to be loaded before it can
function. One of the earliest bugs filed at OLPC called for
the replacement of this blob with free software, but nobody stepped up and
got the job done. So, five years later, the GTA04 project, whose goal is
the creation of a 100% free handset, is stuck shipping that same firmware blob.
Some projects would shrug their collective shoulders, treat the blob as
part of the hardware, and move on to the long list of other problems in
need of solution. Others might bite the bullet, get hardware documentation
from somewhere, and write a replacement blob. Yet others might decide that
Marvell's hardware is incompatible with their goals and find a different
WiFi chip for their board. GTA04, though, has done none of the above.
What the GTA04 developers would like to do is documented on this page:
The task is to develop a prototype of a microcontroller that sends
an immutable firmware program through an SDIO interface into a
Marvell 8686 based WLAN chip independently from the main CPU. The
goal is to isolate the non-free firmware binary from the main CPU
so that it becomes effectively circuitry.
The architecture we want to use this with is a TI OMAP3 with a
level shifter and a Wi2Wi W2CBW003 chip connected to it.
Our idea is to use a MSP430F5528IYFF controller that sits between
the OMAP3 and the level shifter so that either of the MCUs can
control the SDIO interface and right after reset, the firmware is
injected from the MSP430 into the Wi2Wi chip.
In other words, the project proposes to add another microcontroller to its
board for the sole purpose of shoving the firmware blob into the Marvell
WiFi controller. By so doing, they turn the blob into "effectively
circuitry" and, seemingly, no longer feel quite so dirty about it. The
Free Software Foundation agrees with
this goal, having apparently set it as a condition for their endorsement of the project:
The current thinking is that by removing the ability for this
software to be upgraded, we can effectively treat the wireless
networking hardware as a circuit, and importantly, not a malicious
circuit that can be upgraded.
One might object that, if the firmware blob contains malicious code, that
code will still exist even if it does not pass through the CPU as data
first. The GTA04 is meant to be a free device; the operating software
will be under the user's control. So one would expect that the firmware
blob will not be spontaneously replaced by something nastier. If the
firmware blob were somehow able to maliciously "upgrade" itself
against the will of the Linux system running the phone, it would be able to
do any of a number of other unpleasant things regardless of which processor
loads it into the WiFi controller. Meanwhile, if Marvell comes out with a
new blob offering better performance or fixing bugs, users will no longer
have the option of installing it on their phones.
It is, in other words, hard to see the benefits that
are being bought through this exercise.
In fact, it would seem that the GTA04 project wants to saddle itself
with a more complex design, higher hardware costs, less flexibility in the
future, and increased power
usage to create a system that runs the exact same binary blob on the same
controller. That's a high price to pay for the comfort that comes from
having swept the blob under the rug where it can't be seen. This whole
thing might be humorous if it weren't for the little fact that it would
really be nice to see this project succeed; more freedom in this area is
sorely needed. What they are trying to do is challenging enough without
the addition of artificial obstacles. It will be a sad day if the pursuit
of a truly free handset is impeded by an exercise in papering over an
inconvenient binary blob.
(Thanks to Paul Wise for the heads-up).
Desktop Linux users used to be easily divided into two camps when it
came to color management: there were the graphics geeks — who
painstakingly profiled their scanners, printers, and displays and manually
set up color management in each of their creative applications — and
there was everyone else, who saw color management as an esoteric hobby with
no practical value. The color geeks are quietly taking over, however,
thanks to recent work that automates color management at the system and
session levels.
The root of the color management problem is that no two devices have
exactly the same color reproduction characteristics: monitors and printers
vary wildly in the tonal range and gamut that they can reproduce; similarly
cameras and scanners vary wildly in what they can pick up. If you spend
all week staring at a single display device it is easy to forget this, but
anyone who has set their netbook on the desk beside a desktop LCD display
recognizes immediately how different they can be. As a practical matter,
when a user makes a printout and finds it too dark, or orders a piece of
clothing online and is surprised at its color when it arrives, lack of
color management is the problem.
But color management is essentially a solved problem that has yet to be
implemented system-wide on Linux. Every display or input device can be
profiled — that is, its color characteristics measured and
saved in a standardized format like an ICC color profile.
With profiles in hand, applications need only to perform a transformation
on RGB data to map it from one profile (say, a camera's) to another (a
display's). Most of the time, a mapping with a perceptual rendering
intent is used, which means the final image will appear the same to the
human eye on the display as it did through the camera viewfinder.
The old way
Several free software packages exist for applications to perform these
transformations. By far the most popular is LittleCMS, an MIT-licensed library
used by GIMP, Krita, ImageMagick, Inkscape, Scribus, and many other
graphics applications. Another well-known alternative is Argyll, which is licensed under the
AGPL. Argyll consists of a collection of command-line tools, however,
rather than a shared library designed for use by application programs.
LittleCMS support simply gives each application the ability to
perform profile-to-profile transformations. Users are still required to open
the preferences dialogs in each application and specify the .icc
file for each profile used in their hardware setup. As a result, "color
management" came to be a feature that could be enabled or disabled on a
per-application basis, since only a small percentage of users cared enough
to have profiles for their displays and printers.
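To make the mechanics concrete, here is a minimal sketch (not taken from any of the applications above) of the kind of per-application transformation LittleCMS 2 performs; the profile file names are placeholders for whatever the user has configured:

    #include <lcms2.h>

    int main(void)
    {
        /* Placeholder profile paths; a real application would take these
         * from its own preferences dialog. */
        cmsHPROFILE camera  = cmsOpenProfileFromFile("camera.icc", "r");
        cmsHPROFILE display = cmsOpenProfileFromFile("display.icc", "r");

        /* Build a camera-to-display transform with a perceptual
         * rendering intent. */
        cmsHTRANSFORM xform = cmsCreateTransform(camera, TYPE_RGB_8,
                                                 display, TYPE_RGB_8,
                                                 INTENT_PERCEPTUAL, 0);

        /* Transform four 8-bit RGB pixels in place. */
        unsigned char pixels[4 * 3] = { 0 };
        cmsDoTransform(xform, pixels, pixels, 4);

        cmsDeleteTransform(xform);
        cmsCloseProfile(camera);
        cmsCloseProfile(display);
        return 0;
    }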
A far better approach would be for the system to keep track of the
relevant profiles (since displays, scanners, and printers tend to stay put
for long periods of time), and have applications automatically retrieve
them as needed — no user intervention required. That is the approach
taken by both Windows and Mac OS X.
The technicolor dawn
This is where Richard Hughes's colord and Gnome
Color Manager (GCM) come into play. Colord is a framework for automating
the storage and retrieval of color profiles. It includes a system daemon
that maintains an SQLite database to track which profiles map to which
devices; it also
provides a D-Bus interface for other programs to query (or in the case of
GCM, add and change) profiles.
It can retrieve scanner and printer device IDs through Udev and present
a simple interface with which users can select their preferred profile for
each device. Multiple profiles per device are supported, to allow separate
profiles for different paper stocks, ink options, flatbed and transparency
adapters, and other parameters. An application can ask for the profile for
a specific device using dot-separated qualifiers, such as
RGB.Glossy.600dpi. If colord does not have a stored profile for
the requested qualifiers, it can fall back one parameter at a time to find
the best match, e.g., to RGB.Glossy.* or RGB.*.600dpi,
and eventually back to RGB.*.* and the default * for a last-resort match.
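The matching logic itself lives inside colord, but the fallback order just described can be sketched with a purely hypothetical lookup (an illustration only, not colord's code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Pretend the only profile stored for this device was assigned
         * the qualifiers RGB.Glossy.*. */
        const char *stored = "RGB.Glossy.*";

        /* Candidates tried in order, from most to least specific. */
        const char *candidates[] = {
            "RGB.Glossy.600dpi",    /* the requested qualifiers */
            "RGB.Glossy.*",         /* relax one parameter at a time */
            "RGB.*.600dpi",
            "RGB.*.*",
            "*",                    /* last resort: the device default */
        };

        for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
            if (strcmp(candidates[i], stored) == 0) {
                printf("best match: %s\n", candidates[i]);
                return 0;
            }
            printf("no profile for %s, falling back\n", candidates[i]);
        }
        return 1;
    }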
The other half of the framework is GCM, which is a session-level process
implemented as a GNOME settings daemon plug-in. GCM talks to the X.org
session to retrieve the display device information through XRandR and,
where possible, to set display settings such as video card gamma tables
(VCGTs). The session-level process can also read ICC profile files stored
in the user's home directory (as opposed to the system-wide ICC profile
directories /usr/share/color and /var/lib/color, which
are watched by the colord daemon).
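Programming a gamma table ultimately means handing per-channel ramps to XRandR. A minimal sketch of that step follows (not GCM's actual code; a real VCGT loader would compute the ramp from the profile's vcgt tag rather than the flat linear ramp used here, and would handle errors and multiple CRTCs):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        XRRScreenResources *res =
            XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
        RRCrtc crtc = res->crtcs[0];

        /* Allocate a ramp of the size the hardware expects and fill it
         * with a linear curve. */
        int size = XRRGetCrtcGammaSize(dpy, crtc);
        int denom = size > 1 ? size - 1 : 1;
        XRRCrtcGamma *gamma = XRRAllocGamma(size);
        for (int i = 0; i < size; i++) {
            unsigned short v = (unsigned short)(65535L * i / denom);
            gamma->red[i] = gamma->green[i] = gamma->blue[i] = v;
        }

        XRRSetCrtcGamma(dpy, crtc, gamma);

        XRRFreeGamma(gamma);
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }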
Hughes said in a
recent talk that the system/session split, though it sounds confusing at
first, is required for several reasons. First, the system daemon needs to
talk to other system-level processes such as CUPS (for printers) and SANE
(for scanners), and needs to work even when there is no active user session
(consider shared printers, printers that print directly from memory cards,
and GDM login screens, for example). But the session-level process also
needs to be separated from privileged operations for other reasons.
SELinux will not allow a system daemon to load files (in this case, ICC
profiles) from a user's home directory because of security concerns, and
colord would not even be able to read them if the home directory is
encrypted. The separation also makes it possible for KDE, LXDE, or other
environments to write their own colord-compatible session management code
independent of GCM.
The application's viewpoint
The framework provided by colord and GCM is very high-level;
applications must still do the color transformations using LittleCMS or
another library. But by relying on colord to provide the correct profile
information for a particular hardware device, there is less for the
application developer to worry about.
GCM provides an interface for the user to manage his or her profiles,
which alleviates the problem of manually setting the same preferences in
every application. But applications do need to be updated to work with
colord. At the moment, the list of colord-compatible programs is short:
according to Alexandre Prokoudine at Libre Graphics World, it includes CUPS,
Foomatic, Simple Scan, and Compiz-CMS. The GTK+ toolkit is also compatible,
he said, and KDE support is on the way.
General-purpose GTK+ or KDE applications can decide to entrust colord
with their entire color management workflow, but creative applications may
still want to provide additional options. For example, users may want to
simulate an output device with a different profile to "soft proof" before
printing or rendering images. Colord can still help, because it can
maintain profiles for "virtual devices" in addition to the local hardware,
but of course presenting that functionality to the user requires pulling
back the curtain on the "it just works" approach favored by simpler
applications.
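For applications that do offer it themselves, LittleCMS exposes this case directly through proofing transforms; a minimal sketch (the profile file names are again placeholders) might look like this:

    #include <lcms2.h>

    int main(void)
    {
        /* The image's working space, the monitor, and the printer whose
         * output is being simulated on screen. */
        cmsHPROFILE working = cmsOpenProfileFromFile("srgb.icc", "r");
        cmsHPROFILE monitor = cmsOpenProfileFromFile("display.icc", "r");
        cmsHPROFILE printer = cmsOpenProfileFromFile("printer.icc", "r");

        /* Render to the monitor while simulating the printer; gamut
         * warnings could be added with cmsFLAGS_GAMUTCHECK. */
        cmsHTRANSFORM proof =
            cmsCreateProofingTransform(working, TYPE_RGB_8,
                                       monitor, TYPE_RGB_8,
                                       printer,
                                       INTENT_PERCEPTUAL,
                                       INTENT_RELATIVE_COLORIMETRIC,
                                       cmsFLAGS_SOFTPROOFING);

        unsigned char pixels[4 * 3] = { 0 };
        cmsDoTransform(proof, pixels, pixels, 4);

        cmsDeleteTransform(proof);
        cmsCloseProfile(working);
        cmsCloseProfile(monitor);
        cmsCloseProfile(printer);
        return 0;
    }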
Perhaps understandably, support for colord among the already-color-aware
Linux creative suite appears to be growing slowly. Krita has integration plans,
but it may be a while before the other applications that already implement
color management adopt colord, on the grounds that the users who already
care have also already configured their software.
On the other hand, Hughes has been busy submitting patches to other
projects to enable colord support. He added the colord support now found
in CUPS' gstoraster filter (which converts PostScript and PDF content to
raster data) and foomatic (which can directly generate printer-language
files) and has reportedly been patching other tools as well. Colord
debuted in 2010, and was available in GNOME 3.0, but is now a hard
dependency for the imminently-due GNOME 3.2, so for the first time,
application developers can count on its availability.
Where profiles come from
As a practical matter, the big lingering question for end users is where
the all-important device profiles come from. After all, it makes no
difference how easily the applications can retrieve the default profile for
a display or printer if there is no profile available.
Hughes has designed colord to support profiles from a variety of
sources, and always takes the "the user is right" approach: a
manually-selected profile is assumed to be better than a generic one. The
best possible profiles are those created with a proper measuring device: a
tristimulus colorimeter for displays, a high-quality target for scanners or
digital cameras, or a spectrophotometer for a printer.
But all of these methods involve some outlay of cash. The price of
colorimeters has fallen sharply in recent years, but even the cheapest
devices run close to US $100, which can sound excessive for a hardware
device only used on isolated occasions. There are non-profit producers of
quality IT 8.7 targets, such as Wolf Faust, but they are still
not free. Spectrophotometers, which measure reflected light, remain
expensive, although there are paid services to which users can mail a test
printout and get a high-quality ICC profile in return. But the one-time
cost of buying such a profile rises when you consider the need to have each
ink and paper combination measured separately.
Still, GCM does support creating a display profile with a USB
colorimeter, importing camera or scanner images of calibration targets for
scanner or camera profiles, and scanning printouts to generate printer
profiles. GCM uses the low-level profiling tools from Argyll for much of
this process, but Hughes has also been adding native drivers for common
colorimeters like the Pantone Huey.
The next rung down on the quality ladder is manufacturer-provided
profiles. These vary depending on device type and quality control. An
expensive film scanner might ship with an accurate profile on the
accompanying CD, while an inexpensive laptop display is completely hit or
miss. As Hughes explained,
I have a T61 laptop with a matte 15"
screen, but as any T61 owner [who's smashed their screen] knows, the T61
can have about half a dozen different panel types from factories in a few
different countries. So trying to come up with the ultimate T61 laptop
profile is going to be really tricky.
Where manufacturer-provided profiles do exist, however, Linux distributions
generally do not have the rights to ship them, so they are inaccessible to
an "it just works" framework.
As an alternative, Hughes said, GCM can probe XRandR devices for Extended
Display Identification Data (EDID) and auto-generate a basic display
profile based on it.
If (and this is a big "if") the vendor puts
semi-sane data into the EDID then we can get some chromaticity and
whitepoint co-ordinates which allows us to generate a profile that's
certainly better than the default.
Here again, Hughes warned that
the quality of the device in question correlates to the quality of the EDID
data one should expect. Manufacturers who regularly swap in LCD panels
from Korea, China, and Taiwan without indicating a difference in the model
number are likely to burn in generic EDID data that covers the average
characteristics of the product line.
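For reference, the raw EDID block itself is easy to get at; the following sketch (not GCM's code) reads it for the first output over XRandR, and the chromaticity and white-point bytes that a profile generator would parse sit at fixed offsets (bytes 25-34 in the EDID 1.3 layout) within it:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        XRRScreenResources *res =
            XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
        Atom edid = XInternAtom(dpy, "EDID", False);

        Atom type;
        int format;
        unsigned long nitems, after;
        unsigned char *prop = NULL;

        /* Read the EDID property of the first output; outputs without
         * EDID and error handling are ignored in this sketch. */
        XRRGetOutputProperty(dpy, res->outputs[0], edid, 0, 128,
                             False, False, AnyPropertyType,
                             &type, &format, &nitems, &after, &prop);

        if (nitems >= 128)
            /* Bytes 25-34 carry the RGB chromaticities and white point. */
            printf("first chromaticity byte: 0x%02x\n", prop[25]);

        if (prop)
            XFree(prop);
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }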
Unfortunately, how useful the provided EDID data is can also depend on
extraneous factors like the video driver. For example, the NVIDIA binary
drivers only support XRandR 1.2, which in turn does not include support for
sending separate VCGTs to multiple monitors. Thus regardless of the
quality of the data, at least one display will be forced to use gamma
tables that are only accurate for a different monitor. It is also possible
to run into older monitors that do not provide EDID information at all, or
to have KVM switches or splitters lose it en route to the video card.
Kai-Uwe Behrmann of the Oyranos
project, a competing color management framework, recently undertook an
effort to build an open, user-contributed database of device profiles.
The system was developed in the 2011 Google Summer of Code by student
Sebastian Oliva. There is a web front-end running at icc.opensuse.org, which returns both raw
.icc files and JSON-formatted metadata about the source device. It is an
interesting approach, though one with its own set of problems. Display
characteristics change over time, for example, as the backlight ages, and
there is no way to verify the conditions under which another person's
profile was created.
In the final analysis there is no automated substitute for a good
profile created with quality test materials; a fallback profile will always
be better than nothing, but unpredictable. Perhaps the best thing the
community at large can do if it wishes to spread good, non-generic
profiling is to make sure that the distributions and desktop environments
pack a colorimeter and IT 8.7 target into the "booth box" when heading to
community Linux shows. It certainly would not hurt.
The colord daemon and GCM session manager provide the framework on which
future applications can build end-to-end, "it just works" color management
for the Linux desktop environment. That future is a big step up from the
recent past, where each color-aware application needed to pester the user
individually for a collection of profile settings, but it is still not
color nirvana. For that, one needs to look ahead to full-screen color
management.
Hughes outlined the situation in his interview at Libre Graphics World.
A full-screen color managed system would need a color-aware compositing
manager, but would also allow applications to mark specific regions of the
screen to "opt out" from the transformation — UI widgets are assumed
to have sRGB pixels, for example, but a photograph would have a camera
profile instead. But by pushing the transformation into the compositor, it
becomes possible to offload the calculations to the GPU, potentially
speeding up workflow dramatically.
The Compiz-CMS plug-in is an attempt to do this, although an incomplete one: because the
applications are not able to mark specific regions with different
transformation needs, the plug-in assumes all images are sRGB. For Compiz
users, however, it is an intriguing glimpse at the possibilities.
Behrmann has developed a draft specification for communicating color
regions, formerly called net-color but recently renamed X Color Management
(currently in the draft stage for version 0.3). Hughes has concerns about the
specification for use with colord, including its network transparency and
its ease of implementation for application developers, but so far there is
not a competing proposal. As he told Libre Graphics World, however, he is
hoping to put together a plan for GNOME 3.4 that will incorporate colord
support into Mutter, the GNOME compositing manager, but it is likely to
involve changes to GTK+ and other parts of the stack as well.
Color management is not something that can be completely
automated, but desktop Linux is on track toward the next best thing:
automating the choice of sane defaults. Colord effectively moves
color-awareness out of the application space and down to the session level.
For most people, on modern video hardware, an automatically-detected
display profile will provide a "good enough" color balanced experience.
The list of applications taking advantage of it is short but growing, and
the color geeks need not frown — as was the case before, those who
are interested in hand-tweaking their color matching can still dig into the
details and customize the experience.
Security is a much-discussed subject at the moment; it has become clear
that security needs to be improved throughout our community - and, indeed,
in the industry as a whole. But anybody who has lived through the last
decade does not need to be told that many actions carried out in the name
of improving security are, at best, intended to give control to somebody
else. At worst, they can end up reducing security at the same time. A
couple of examples from the hardware world show how "security" often
conflicts with freedom - and with itself.
UEFI secure boot
LWN first wrote about the UEFI "secure boot"
feature last June. At that point, the potential for trouble was clear,
but it was also mostly theoretical. More recently, it has been revealed that Microsoft intends to require the
enabling of secure boot for any system running the client version of
Windows 8. That makes the problem rather more immediate and concrete.
The secure boot technology is not without its value. If an attacker is
able to corrupt the system's firmware, bootloader, or kernel image, no
amount of good practice or security code will be able to remedy the
situation; that system will be owned by the attacker. Secure boot makes
such attacks much harder to carry out; the system will detect the corrupted
code and refuse to run it. An automated teller machine should almost
certainly have this kind of feature enabled, for example. Many LWN readers
would find that the amount of time they have to put into family technical support
would drop considerably if certain family members had their systems
protected in this way.
Secure boot requires trust in whatever agency applies its signature to the
code. A better name for the feature might be "restricted boot," since it
restricts the system to booting code that has been signed by a trusted
key. The idea is sound enough, except for one little problem: who decides
which keys are trusted? Hardware vendors seeking Microsoft certification
will create a secure boot implementation that trusts Microsoft's keys.
They need not trust any others - not even from other hardware vendors
selling Windows-compatible hardware.
Secure boot would not be a big problem if users were guaranteed the right
to install their own keys or to disable the feature altogether. The owner
of a specific computer may well want to restrict the system to booting
kernels signed by Red Hat, SUSE, or OpenBSD. They might also want to say
that Windows is not a trusted system - but only as long as the driver
firmware needed to boot is signed by somebody other than Microsoft. The owners
may want to build their own
kernels signed with their own keys. Or they may decide that secure boot is
a pain that they would rather do without. With this freedom, secure boot
could be a beneficial feature indeed.
But nobody is guaranteeing that freedom. The ability to disable secure
boot, at least, may come standard on traditional "desktop PC"
systems, but the role of those systems in the market is declining.
Microsoft very much wants to push Windows into tablets, handsets,
refrigerators, and other new systems. Such machines do not have a stellar
record with regard to enabling owner control even now; it does not seem
likely that Microsoft's certification requirements will improve that
situation. Just as things seemed to be getting better in that area, we may
be about to see them get worse again.
That said, loss of control over our systems is not a foregone conclusion.
Microsoft will have to be very careful about monopoly concerns in the areas
where it is dominant. In the areas where Microsoft has failed to gain
dominance, there is no guarantee that it ever will. And, even then, users
have been clear enough about their desire for access to their own systems
to gain the attention of some big handset manufacturers. Lockdown via
secure boot looks, in fact, like a battle
we should be able to win. But we must certainly keep our eyes on the
situation.
The pointer to this paper by
Alan Dunn et al [PDF] came via
Alan Cox. These investigators have figured out a way to use the
trusted platform module (TPM) found in most systems to hide malware from
anybody trying to investigate it. In essence, the TPM can be used to
create a trusted botnet capable of resisting attempts to determine what the
hostile code is actually doing.
The TPM provides a number of cryptographic functions along with a set of
"platform configuration registers" (PCRs) that can be used to make
guarantees about the state of the system. As long as the boot path is
trusted, the TPM can sign a message containing PCR values proving that a
specific set of software is running on the system. Fears that this "remote
attestation" capability would be used to lock down systems from afar have
not generally come true - so far. The TPM can also perform encryption and
decryption of data, optionally tied to specific PCR values.
One other TPM-supported feature is "late launch," a mechanism by which code
can be executed in an uninterruptable and unobservable manner. Late launch
is used to enable mechanisms like Intel
TXT; it is another way of ensuring that only "trusted" code can run on
a given system.
The attack described in the paper requires gaining control of the TPM, an
act which may or may not be easy (even after the system itself has been
compromised) depending on how the TPM is being used. Once that has been
done, the compromised software will be able to attest to a remote controlling
node that it is in full control of the system. That node can then send
down encrypted code to be run in the late launch mode. This code is
limited in what it can do - it cannot call into the host operating system
for anything, for example - but it can make important policy decisions
controlling how the malware will operate.
Understanding - and defeating - malware often depends on the ability to
observe it in action and reverse engineer its decision making. If it
proves impossible to observe malware in operation or to run it in a
virtualized mode, that malware will be harder to stop. The attack is not
easy, but experience has shown that the world does not lack for capable,
motivated, and well-funded attackers who might just take up the challenge.
That would not bode well for the future security of the net as a whole.
Needless to say, the protection of botnets seems counter to the objectives
that led to the creation of the TPM in the first place. It has always been
clear that technology imposed in the name of "security" has the potential
to cost us control over our own systems. Now it seems that technology
could even hand control over to overtly hostile organizations. That does
not seem like a more secure situation, somehow.