Your editor had the honor of speaking at MontaVista's Vision 2008 conference
recently. This conference - a gathering of MontaVista's customers -
provided an opportunity to observe how (part of) the embedded industry sees
itself and its role in the larger Linux community. Relations between
embedded systems and Linux as a whole have often been a little uneasy; a
situation which probably will not change in the near future. That said,
there are signs that
embedded developers are starting to think about the value of engaging more
directly with the development community that they depend on.
William Mills is the Chief Technologist for Open Linux Solutions at Texas
Instruments; his brief presentation at Vision was an interesting
demonstration of how attitudes in the industry are changing. According to
Mr. Mills, TI's method for developing Linux drivers for its products
involved doing the work behind closed doors, then distributing the result
through MontaVista. That approach has changed, though. TI now does its
driver work in a public git tree, with a focus on merging the code upstream
as a first priority. Customers who want to work directly with upstream
kernels can get the code directly.
In a sense, it would appear that TI has removed MontaVista as the
intermediary which distributes drivers for TI hardware. But TI still
distributes code through MontaVista, so customers looking for a supported,
integrated offering can still get a distribution which suits their needs.
There's no shortage of embedded systems vendors who lack the skills and the
desire to support a Linux distribution themselves; for those vendors,
buying a supported system makes a lot of sense. For everybody else, the
software is free and part of the mainline kernel, as it should be.
MontaVista founder Jim Ready discussed "the state of embedded Linux,"
focusing on areas where there is a bit of a mismatch between what the Linux
community is providing and what the embedded industry needs. Certain kinds
of functionality are missing; the ability to do user-space interrupt
synchronization was one example. The rate of change in the kernel is very
high, presenting embedded vendors with the difficult choice of backporting
fixes or upgrading to a more recent kernel. Tracing and profiling tools
are not up to the level needed by the industry.
Jim also talked some about realtime functionality, which currently must be
patched into the kernel separately. He complained that changes made to the
mainline kernel often break the realtime patch sets, leaving developers
scrambling to make things work again. Keeping these patches in a working
state requires constant effort; it is a significant cost.
All of this may sound like whining from an industry which
has earned a reputation for taking more from Linux than it is willing to
put back in. But Jim put the blame directly on the embedded industry
itself; embedded vendors, he says, still haven't quite gotten it. While
taking some pride in MontaVista's position in the list of top contributors
to the kernel, he suggested that MontaVista should be enjoying the company
of more embedded systems firms. The embedded industry should be
contributing more to the kernel than it is.
What it comes down to, says Jim, is that the center of gravity in the Linux
development world can be found in enterprise computing. Vendors in that
industry are contributing heavily to the kernel and, as a result, the
kernel tends to fit their needs better. The embedded community needs to
get together and figure out how it, too, can become a more prominent
contributor and work to drive the kernel in directions which suit its needs.
Judging from the response in the room, many of those in the audience seem
to agree with this point of view. Some see it differently, though. During
your editor's talk, a member of the audience asked whether the embedded
community should stop using a kernel developed by enterprise system vendors
and, instead, make its own version of the kernel suited to its needs.
Needless to say, your editor discouraged this approach; the cost of forking
the kernel and fragmenting the development community would vastly exceed
the value of any benefits gained. But the questioner seemed unconvinced.
The clear conclusion to be made from that exchange is that there are still
people in the embedded industry who do not see the value of working with
the larger Linux development community. It is easy to fault the embedded
community for its failure to contribute back, but it also makes sense to
look in the mirror and ask if we couldn't make a more persuasive case for
joining in. There has been a sustained effort to encourage the embedded
systems industry to become a full participant in our community; over the
years, that work has yielded a steady stream of successes. By continuing
and improving this work, we'll continue the process of bringing our
community together. Then we'll truly have a single system that runs on
everything from wrist watches to supercomputers.
Comments (8 posted)
Almost one year ago, LWN examined
the GCC plugin mechanism
- or, more exactly, the lack of such a
mechanism. Despite the increasing level of interest in adding
special-purpose modules to the GCC compiler, GCC has no API which allows
this addition to be done. So developers working on GCC extensions are
faced with the daunting prospect of patching their code directly into the
compiler. This situation looked unlikely to change; the Free Software
Foundation's fears that a plugin mechanism would be used by proprietary
extensions were just too strong. One year later, though, things look a
little different; there may be a plugin-capable GCC available in the
(relatively) near future.
There are a lot of good reasons for wanting to add plugins to the GCC
compiler. The implementation of better optimization techniques is an
obvious example, but there is more than that. The EDoc++ project has put together a
static analysis tool which performs checking of exception handling in C++
code - and generates documentation while it's at it. Mozilla uses its Dehydra tool to find
potential problems in the browser's code base. The LLVM compiler can, in its
current form, be thought of as a sort of GCC plugin. The Middle End Lisp
Translator project is working on a Lisp-like language which, in turn,
can be used within plugins for static analysis and code transformations.
The list goes on; just about any project working on
the processing of programs can benefit from hooking into the GCC platform.
The concern that has long been expressed by the FSF (which owns the
copyrights on GCC) is that a general plugin mechanism would make it
possible for companies to traffic in binary-only GCC modules. Rather than
contribute a new analysis or optimization tool - or a new language - to the
community, companies might have an incentive to distribute their work
separately under a restrictive license. That runs very much counter to
what the FSF is trying to accomplish, so opposition from that direction is
not particularly surprising.
But the pressure for some sort of plugin API is not going away, so the GCC
developers have been thinking about ways to make it possible without
upsetting Richard Stallman. One alternative which has been discussed is to
require plugins to be written in a high-level scripting language - Python
or Perl, perhaps. Then plugins would, for all practical purposes, have to
be distributed in source form. Even if they carried a hostile license, it
would be possible to study them and learn how they actually work.
Another possibility is to take a page from the Linux kernel's book and keep
the plugin API unstable. If the API changed with every GCC release, GCC
would become a moving target which would be much harder for proprietary
vendors to keep up with. An unstable API may be the way things go in any
case - there may be no other way to allow GCC itself to continue to
progress quickly - but experience with the kernel shows that an unstable
API is not, by itself, enough to scare off a determined proprietary
software vendor. It might reduce the number of proprietary GCC modules,
but it would not eliminate them.
Alternatively, one could require plugin modules to declare their license to
the GCC core, which could then reject plugins that lack a suitable
license. Again, experience with the kernel suggests that there are limits
to how far one can get with this approach. Proprietary plugin vendors
could distribute a version of GCC with the license check patched out - or
just have their plugin lie about its license.
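The model being invoked here is the kernel's MODULE_LICENSE() declaration:
every loadable module is expected to carry one, and the kernel uses it to
decide, for example, whether to export GPL-only symbols to that module. A
minimal sketch of that precedent (ordinary kernel module API, nothing
GCC-specific) looks like this:

    /*
     * Minimal sketch of the kernel's license-declaration precedent: a
     * module declares its license with MODULE_LICENSE(), and the kernel
     * withholds GPL-only symbols from (and taints itself for) modules
     * whose declared license is not GPL-compatible.  A GCC plugin API
     * could ask for a similar declaration.
     */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init license_demo_init(void)
    {
            printk(KERN_INFO "license declaration demo loaded\n");
            return 0;
    }

    static void __exit license_demo_exit(void)
    {
            printk(KERN_INFO "license declaration demo unloaded\n");
    }

    module_init(license_demo_init);
    module_exit(license_demo_exit);

    /* The declaration that the loading code inspects. */
    MODULE_LICENSE("GPL");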
Yet another possibility is to not worry about the problem at all; it is not
clear that the world is full of vendors waiting for an opportunity to abuse
a GCC plugin API. As GCC developer Ian Lance Taylor puts it:
The FSF doesn't want plugins because they are concerned that people
will start distributing proprietary plugins to gcc. I personally
think this is a fear from twenty years ago which shows a lack of
understanding of today's compiler market, but, that said, the FSF
wants to cover themselves for the future as well.
Someday, perhaps, the FSF will feel sufficiently confident to allow
unrestricted plugin access to GCC, but that does not appear to be in the
cards at this time.
What does appear to be happening, though, is an attempt to enable
plugins by way of some licensing trickery. The GCC suite is covered by the
GPL, a fact which does not, in itself, affect the licensing of any program
which is compiled by GCC. But GCC is more than just the compiler; it also
includes a runtime library needed to make most GCC-compiled programs
actually run. Linking to the runtime library could cause the resulting
program to be a derived product of that library; since the runtime library
is licensed under the GPL, that could be a concern for anybody compiling
non-GPL-licensed code. To address that concern, the runtime code has long
carried an exception to the GPL:
As a special exception, you may use this file as part of a free
software library without restriction. Specifically, if other files
instantiate templates or use macros or inline functions from this
file, or you compile this file and link it with other files to
produce an executable, this file does not by itself cause the
resulting executable to be covered by the GNU General Public
License. This exception does not however invalidate any other
reasons why the executable file might be covered by the GNU General Public
License.
That is the language which enables the distribution of proprietary software
built with GCC. The plan, said to be under consideration currently,
is to change the wording of that exception; essentially, it would no longer
apply to code compiled with the use of proprietary GCC plugins. The new
license is not finalized, but Mr. Taylor guesses it will look something like this:
[I]f you modify gcc by adding GPL-incompatible software used to
generate code, it is likely that you will not be granted any
exception to the GPL when using the runtime library. In other
words, if you 1) add an optimization pass to gcc using the
(hypothetical) plugin architecture, and 2) that optimization pass
is not licensed under a GPL-compatible license, and 3) you generate
object code using that optimization pass, and 4) you link that
generated object code with the gcc runtime library (e.g., libgcc or
libstdc++-v3), then you will not be permitted to distribute the
resulting executable except under the terms of the GPL.
The actual wording of the new runtime license has been a long time in
coming; the FSF's lawyers want to get it right so that it discourages
undesired conduct while staying out of the way for everybody else. It also
does not appear to be the FSF's highest priority at the moment. So
nobody really knows when it might become official - though there have been
notes to the list suggesting that it could happen in the near future.
What we do seem to know is that it will happen, sooner or later, and the
addition of a plugin mechanism to GCC will become possible. So the
developers are starting to think about how the API will work. There are a
couple of existing GCC plugin frameworks already, and plenty of thoughts on
how they could be improved; see, for example, this discussion for an idea of what is being
talked about. But the details are likely to be of interest mostly to GCC
hackers, while the end result will be beneficial to a much wider community
of developers and users.
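For those curious about the shape such an API might take, here is a purely
hypothetical sketch; no plugin interface exists in GCC today, and every name
below (plugin_init(), register_pass_callback(), plugin_license) is invented
for illustration:

    /*
     * Hypothetical GCC plugin sketch, for illustration only.  The idea is
     * that a shared object declares its license and registers callbacks
     * with the compiler core; all of these names are made up.
     */
    #include <stdio.h>

    struct plugin_context;          /* opaque handle from the compiler core */

    /* Registration hook the core might provide to plugins. */
    extern void register_pass_callback(struct plugin_context *ctx,
                                       const char *after_pass,
                                       void (*run)(void *function_body));

    /* A toy "analysis pass" which just announces each function it sees. */
    static void check_function(void *function_body)
    {
        printf("plugin: analyzing a function body\n");
    }

    /* License declaration the core could check before loading the plugin. */
    const char *plugin_license = "GPL-compatible";

    /* Entry point called when the plugin's shared object is loaded. */
    int plugin_init(struct plugin_context *ctx)
    {
        register_pass_callback(ctx, "ssa", check_function);
        return 0;
    }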
Comments (73 posted)
The Linux kernel recently saw the addition of a "basic Braille screen reader",
and thus, the addition of a drivers/accessibility
subdirectory and its
corresponding CONFIG_ACCESSIBILITY option. It is worth noting that one of the
first reactions was "what the heck is accessibility?", which shows how
unfamiliar the idea still is to many developers.
And yet the issue of GNU/Linux accessibility, i.e. the usability of GNU/Linux
by disabled people (e.g. blind people) is, of course, not new. Work in that
area has been conducted for a long time: the speakup speech screen reader
saw its 0.07 version against Linux 2.2.7 in 1999, and the brltty Braille
reader started in 1995. The basic Braille screen reader that has just been
added to the Linux kernel is just the visible tip of work that has been going
on ever since.
With the popularization of GNU/Linux among non-technical people, there has
been renewed interest in mainline accessibility support: the GNOME desktop,
OpenOffice.org, and Firefox 3 can now be rendered via Braille and speech
synthesis thanks to the AT-SPI framework and the Orca screen reader. KDE will
soon follow when these technologies get rebased on D-BUS. In addition,
accessibility menus have
started appearing in the upstream distributions.
One of the main concerns has been web browser and office suite support.
With more and more companies and
governments migrating to Linux—particularly since some states require
accessibility of tools used in government—renewed development effort
was becoming more and more of a must. In Massachusetts, people had even signed
a petition against the migration to libre software because it was not yet
accessible at the time!
What is Accessibility?
Accessibility, sometimes abbreviated a11y, means making software usable by
disabled people. That includes blind people of course, but also people who
have low vision, are deaf, colorblind, have only one hand, can move only a few
fingers, or even only the eyes. It also includes people with (even mild)
cognitive troubles, or who are simply not familiar with the language. Last
but not least,
it includes elderly people, who often have a bit of all these disabilities.
Yes, that actually means everybody is concerned, eventually. That means support
for special devices, but also general care during development, like not assuming
that an audible alarm will be heard or a transient message will be read.
Maybe one of the most obvious accessibility techniques is speech synthesis,
which turns text into audio that can be sent to speakers or headphones.
There used to be hardware speech synthesizers (supported by the speakup
drivers), but these have often been replaced by software speech synthesis.
While the quality of commercial software speech synthesis is very good these
days, the quality of libre software varies a lot. While there is very good
libre English speech synthesis,
the support of other languages is quite diverse. For instance, the Festival
and eSpeak libre engines easily support a wide range of languages,
but their sound is rather robotic. There are better phoneme libraries like
mbrola, but they are often not completely libre. To better handle all these
potential speech synthesis backends, the speech dispatcher daemon takes care
of automatically choosing the appropriate synthesis according to the desired
language and style.
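A client does not talk to a particular engine directly; it hands its text to
the daemon and lets the daemon pick a suitable backend. A minimal sketch
using Speech Dispatcher's C client library is shown below; the function names
(spd_open(), spd_set_language(), spd_say()) come from libspeechd and should
be checked against the headers of the installed version:

    /*
     * Minimal Speech Dispatcher client sketch: hand text to the daemon,
     * which chooses a synthesis engine for the requested language.
     * Assumes the libspeechd C client library; link with -lspeechd.
     * (The header may also live under <speech-dispatcher/libspeechd.h>.)
     */
    #include <stdio.h>
    #include <libspeechd.h>

    int main(void)
    {
        /* Open a connection to the daemon for this client. */
        SPDConnection *conn = spd_open("a11y-demo", "main", NULL,
                                       SPD_MODE_SINGLE);
        if (conn == NULL) {
            fprintf(stderr, "could not connect to speech-dispatcher\n");
            return 1;
        }

        /* Ask for a language; the daemon selects a matching engine. */
        spd_set_language(conn, "en");

        /* Queue a message at ordinary text priority. */
        spd_say(conn, SPD_TEXT, "Hello from Speech Dispatcher");

        spd_close(conn);
        return 0;
    }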
Another very popular kind of device is the Braille terminal. These "show"
text by raising and lowering little pins which thus form Braille patterns.
Since the cost is very high, a Braille terminal often has room for only 40
characters,
or even 20 or 12. They integrate keys to navigate around the screen, so the
user ends up
reading it piece by piece. Compared to speech synthesis, the reading accuracy
is far better, but not everybody can read Braille, and the cost remains very
high (on the order of $5,000). The support of the various existing devices
is quite good: both the brltty and suseblinux screen readers support a very
wide range of them.
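Applications normally do not drive these devices directly; they go through
brltty, whose BrlAPI client library (mentioned in the author's bio below)
lets a program put text on the display. A minimal sketch follows, assuming
the BrlAPI 0.5.x C interface; the exact names should be checked against the
installed brlapi.h:

    /*
     * Minimal BrlAPI client sketch: connect to the local brltty daemon,
     * take over the current virtual terminal's Braille output, and show
     * a short message.  Link with -lbrlapi.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <brlapi.h>

    int main(void)
    {
        /* Connect to brltty with default settings. */
        if (brlapi_openConnection(NULL, NULL) < 0) {
            brlapi_perror("brlapi_openConnection");
            return 1;
        }

        /* Claim Braille output for the current virtual terminal. */
        if (brlapi_enterTtyMode(BRLAPI_TTY_DEFAULT, NULL) < 0) {
            brlapi_perror("brlapi_enterTtyMode");
            return 1;
        }

        /* Show a message on the (typically 12-40 cell) display. */
        brlapi_writeText(BRLAPI_CURSOR_OFF, "Hello from BrlAPI");
        sleep(5);

        brlapi_leaveTtyMode();
        brlapi_closeConnection();
        return 0;
    }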
Blind people will actually often use a combination of speech synthesis and
Braille devices. As for other kinds of disabilities, the kind of devices varies
a lot. It ranges from joysticks (natively supported by X.org) to eye-tracking
systems (managed by dasher), via press buttons (supported by the GNOME Onscreen
Keyboard) or mere screen magnification (implemented by gnome-mag).
The eternal Command Line Interface vs Graphical User Interface flamewar actually
also holds for people using a Braille terminal or speech synthesis. The
contrast is perhaps even exacerbated by the inherent difficulty of doing
anything with a computer when one is disabled.
The old traditional way of using a GNU/Linux system, the text console, has
been working well with Braille devices and speech synthesis for a long time.
The principle is indeed quite simple: there are 25 lines
of 80 characters and text appears sequentially. Screen readers for Braille
terminals would thus just automatically display what was last written and
permit the user
to navigate among these 25 lines. Screen readers for speech synthesis (e.g.
speakup or yasr) would speak text as it appears on the screen, and have some
review facilities similar to what Braille screen readers have. This works quite
well because applications are limited to the TTY interface: they cannot have
non-accessible fancy features such as graphical buttons. Some applications may
still not be so easy to read, e.g. if they draw ASCII art or use colors to show
active buttons, but they often have options to make them more accessible; a
collection of tips can be found on this wiki.
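One reason the console is so tractable is that its contents can simply be
read back. The kernel's /dev/vcsa* devices (documented in vcs(4)) expose
each virtual console as a four-byte header (lines, columns, cursor x, cursor
y) followed by character/attribute pairs, which is how userspace screen
readers can get at the text. A minimal sketch that prints the line the
cursor is on, instead of sending it to a Braille display or a synthesizer:

    /*
     * Read the text of the cursor's line from the first virtual console
     * via /dev/vcsa1 (see vcs(4)).  A real screen reader would forward
     * this to a Braille terminal or a speech synthesizer.  Needs read
     * access to /dev/vcsa1 (typically root or the tty group).
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char hdr[4];               /* lines, cols, x, y */
        int fd = open("/dev/vcsa1", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/vcsa1");
            return 1;
        }
        if (read(fd, hdr, 4) != 4) {
            perror("read header");
            return 1;
        }

        int cols = hdr[1], row = hdr[3];

        /* Seek to the start of the cursor's row and print its characters. */
        lseek(fd, 4 + 2 * row * cols, SEEK_SET);
        for (int i = 0; i < cols; i++) {
            unsigned char cell[2];          /* character, then attribute */
            if (read(fd, cell, 2) != 2)
                break;
            putchar(cell[0]);
        }
        putchar('\n');
        close(fd);
        return 0;
    }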
Accessibility of graphical desktops is, on the other hand, a rather recent
matter, in part because the issue is technically much less simple: while
applications on the text console are limited to producing text, these days
graphical applications usually render text as bitmaps themselves, so that
the textual information is
not available outside of the application for screen readers. There have been
application adaptation attempts in the past (like ultrasonix), but they never
really got popular. The GNOME project has been developing AT-SPI (Assistive
Technology Service Provider Interface) for the past decade, and that has become
really promising with the advent of the Orca screen reader. AT-SPI can be
understood as a protocol between screen readers (e.g. Orca) and applications.
To be "accessible", applications thus have to implement AT-SPI, or use a toolkit
that implements it (like GTK and soon Qt), so that screen readers can get the
logical and textual content of the application. Orca is not yet as good as
what mature, proprietary Windows screen readers can achieve, but it is already
usable for everyday work. It is progressing rapidly, notably thanks to the
support of Sun and the involvement of the Accessibility Free Software Group. At the
time of writing, only gtk+ 2 (and thus the GNOME desktop and gtk+ 2
applications), Java/Swing, the Mozilla suite, OpenOffice.org, and Acrobat Reader
implement AT-SPI and thus are accessible. Qt (and thus the KDE desktop) is
expected to support it once it gets rebased on D-BUS. To get the best results,
the latest versions of applications should be used: for instance, Firefox is
really usable only starting from version 3.
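In practice, an application built on such a toolkit gets most of this for
free, but it still has to describe widgets that carry no text of their own.
A small GTK+ 2 sketch (standard GTK+/ATK calls; the button and its labels
are just an example) shows the kind of care involved:

    /*
     * GTK+ 2 / ATK sketch: an icon-only button has no text for a screen
     * reader to speak, so we give its ATK object a name and description,
     * which AT-SPI then exposes to assistive technologies such as Orca.
     * Build with: gcc demo.c `pkg-config --cflags --libs gtk+-2.0`
     */
    #include <gtk/gtk.h>
    #include <atk/atk.h>

    int main(int argc, char **argv)
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *button = gtk_button_new();
        gtk_container_add(GTK_CONTAINER(button),
                          gtk_image_new_from_stock(GTK_STOCK_SAVE,
                                                    GTK_ICON_SIZE_BUTTON));

        /* Describe the button through its ATK object. */
        AtkObject *accessible = gtk_widget_get_accessible(button);
        atk_object_set_name(accessible, "Save document");
        atk_object_set_description(accessible,
                                   "Save the current document to disk");

        gtk_container_add(GTK_CONTAINER(window), button);
        g_signal_connect(window, "destroy",
                         G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }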
Another approach is the use of self-reading applications. For instance, Firevox
is a version of Firefox that integrates a dedicated screen reader. That permits
a tighter interaction between the reader and the application, but
that is of course limited to that particular application. Another example is
emacspeak, which is a vocalized version of emacs. Some people simply use
emacspeak and nothing else, as emacs already meets all their needs.
All in all, as usual, mileage varies. Some people will be very happy with
the mature, efficient screen reading of the text console, while others
consider that a regression (like going back to DOS) and prefer using more
intuitive environments such as the GNOME desktop, even if the Orca screen
reader is still quite young. It is actually quite common to use both: for
instance, the text console for the usual work, and the graphical environment
for tasks that cannot be done in text mode.
Now, how can all of that be installed? Most distributions already provide most
of the useful packages, but they often lack documentation on which tools are
useful according to the various disabilities. The Linux Accessibility Resource
Site is a quite complete source of information on the various tools that one
could use. There is also a wiki page meant
for administrators to get started with accessibility needs.
A point worth noting, however, is that some distributions have accessibility
components built into their installation CDs. For instance, starting from
Etch (aka Debian GNU/Linux 4.0), the Debian installer automatically detects
Braille terminals and, if one is found,
switches to text mode, runs brltty, and makes sure that brltty
gets installed and configured on the target system. Other distributions have
often been unofficially adapted into so-called "Braillified"
installation images. The very important point is that it permits disabled
people to be completely independent from the help of sighted people, even
when the (re)installation of a system has to be done! That is clearly one
area in which
Windows is far behind GNU/Linux achievements.
To sum it up, "accessible" GNU/Linux is going through its own
democratization as well, just a bit behind the democratization of Linux in
general.
There are, of course, things that could be improved. Even though distributions
usually contain accessibility software, it is hard for accessibility-newcomers
to know which software will be useful for the various kinds of disabilities
users can have, so distributions will have to develop wizards to help them.
In the meantime, websites such as the Linux Accessibility Resource Site can
be used as sources of information. In any case, discussion with the disabled
users is essential to establish a suitable solution (setting up Braille output
would be useless if the user can not read Braille for instance).
Beyond the mere use of GNU/Linux or its installation, one area that still is not
really accessible at all is the early stages of the boot process. With future
development of the recently added basic Braille screen reader, the Linux kernel
should eventually be able to provide basic feedback even before user space
reader daemons can be started from the hard disk. Bootloaders like lilo and
grub are able to emit basic beeps, but being able to accurately edit the
kernel command line, for example, would require some support. Last but not
least, dealing with BIOS settings is currently possible for disabled people
only on high-end
machines that can drive a serial console. The democratization of the EFI
platform could be an opportunity to embed basic screen reading functionalities.
[Samuel Thibault has been working on accessibility since 2002, when he and
a colleague designed the BrlAPI client/server Braille output engine, now
used by Orca for Braille support. Since then he has worked on various
tasks, from the Debian installer support to Braille standardization. In his
professional life, he conducted a PhD on thread scheduling on high-end
machines, and is now a lecturer at the University of Bordeaux.]
Comments (19 posted)