Supporting multiple platforms in a free software project can be difficult; even
more so when the software needs to closely interact with the underlying
hardware. The GNOME project is currently struggling with that issue a bit,
as some would like to see a definitive statement that the GNOME desktop
environment is targeted for
Linux exclusively, while others see supporting Solaris and the various
flavors of BSD as essential. But, because the majority of GNOME
developers are Linux-based, there will always be something of a Linux bias,
as most new features, especially low-level features, get their start on Linux.
We have seen this kind of thing crop up before. The DRI/DRM project for
supporting 3D graphics for the X Window System ran into a similar problem last September. When
the bulk of the development community is based on just one of the target
platforms, it is difficult to fully support the minority targets. For
GNOME, that means that the BSDs and Solaris have to play catch-up on some
low-level features like HAL or, more recently, DeviceKit.
Christian Schaller started things off with
a request on the gnome-desktop-devel mailing list: "So I would like
to ask the GNOME release team to please come forward
and clearly state that the future of GNOME is to be a linux desktop
system as opposed to a desktop system for any Unix-like system."
His point was that it was already a fait accompli, but that the
GNOME community—and release team—should formalize the
decision, rather than just continue to handle things that way.
As one might guess, there was far from uniform agreement with that idea.
Sun folks, in particular, were not particularly enamored with officially
proclaiming GNOME to be "Linux only". Sun is a long-time contributor to
GNOME and would rather see the multi-platform nature of GNOME continue. As one of them put it:
Anyway, if anything, I guess I'd argue that it's time to actually
reinforce the notion that the GNOME desktop is intended for use on any
Unix-like system, and to figure out how to better distribute the
development and QA workload to make that happen, so that non-Linux
contributors have more chance to make significant contributions upstream
again instead of spending most of their time treading downstream water.
One of the problems with that approach is the testing burden that it
causes. Developers would need to check that their code works on multiple
different systems, many of which are either not available or not
particularly interesting to those developers. Those who want to see GNOME
supported on their OS will clearly need to do the bulk of the work to make
that happen. But there is an additional problem, as David Zeuthen pointed out:
You know, maybe if the non-Linux platforms actually participated in
_designing_ and _developing_ the core plumbing bits, threads like this
wouldn't have to happen.
In that message, Zeuthen outlined how he had seen several GNOME
features get added to Solaris long after there were Linux implementations,
which resulted in a lot more pain for Solaris. He would much rather see
Sun (and other interested parties) start working on these new features as
they are being developed, so that portability and other problems are
identified earlier and fixed—before they become set in stone. Benson agreed:
"Oh, there's no doubt Sun and our ilk have to do much better as
well". Artem Kachitchkine, who did the initial HAL port to Solaris,
also agreed, but thinks that it is still
possible to do timely multi-platform releases:
To give a simplified example,
what we had during HAL development sometimes, say, 0.x.y was released
based on Linux exclusively and we had to follow that up with a 0.x.y.1
release to fix FreeBSD/Solaris issues. With an established N-way
commitment from all interested platforms, I believe such issues could be
resolved upfront, leading to higher quality releases (less iterations)
and a more even cost distribution, with little effect on schedule.
So from a bystander's point of view, maintaining GNOME's platform
neutrality requires effort from both sides: from the ideological
leaders, maintaining portability as a core requirement, built in not
screwed on; and from interested platforms, continuous participation.
Though the Sun folks participating in the discussion made it clear they
weren't necessarily representing the company's views, the discussion does
show that some Sun engineers are aware of the issues—and would like to see them
get resolved. On the other hand, no one from the BSD camp spoke up, or
provided any glimpse into the thinking of the other main GNOME desktop
platforms. If Kachitchkine's vision is to come about, the BSDs would need
to get on board as well.
Somewhat ironically, supporting GNOME on Windows and Mac OS X is quite a
bit easier, as they do not require the desktop functionality. As Jason
Clinton points out, those two platforms are
"application target platforms" as opposed to "desktop
target platforms" like Solaris, Linux, and the BSDs. He also notes
that the BSD situation is rather different than that for Solaris:
On the *BSD side of things, the desktop-related driver situation is
lamentable. However, *BSD has a huge thing going for it: vast parts of the
user space are nearly identical to Linux. So with exception given to the
absence of udev, it really isn't all that different. Indeed, there is even a
semi-official *BSD kernel for Debian.
OpenSolaris, however, suffers from a legacy of esoterically cathedral-like
design on some fundamental sub-systems. The work to make all the things
mentioned above work is so, so much more than any other platform for GNOME.
Clinton floated the idea that Sun should just drop Solaris and move to
Linux, though no one really wanted to see yet another Solaris vs. Linux
flamewar. But his point about Solaris standing out from the rest of the
desktop target platforms rings true, and it will be up to Sun—or the
OpenSolaris community—to put the effort into making GNOME work on
that platform. The right way to approach that, as Zeuthen and others said,
is for Solaris folks to be
working with the GNOME community, not just making GNOME work on their
OS. Zeuthen cites a specific example
of what he means:
The perception, at least from me personally, is that Sun isn't doing a
very good job at *working* with the GNOME community. Case in point, if
RBAC or Visual Panels are oh-so-much-better, why the heck are you guys
not trying to push it for non-Linux? And actually do the integration
work inside GNOME instead of bolting your work on after the fact? That
would benefit both Sun, the rest of the GNOME community and it would
make you guys look a lot better. At least in my eyes.
In the end, though, it is the evolution of what a "desktop environment"
encompasses that underlies much of the difficulty with portability. With
desktop environments taking on more and more of the functionality typically
handled by the kernel and other low-level plumbing, it will be difficult to
keep it easily portable to different platforms. Colin Walters sums it up this way:
Here's the fundamental problem as I see it - GNOME filled the "Unix
like system desktop" checkbox over 10 years ago, on top of POSIX, X11,
and some random bits. A lot of what we've been doing since is filling
in the stuff for a *complete operating system*, because POSIX and X
cover so little. Stuff like having USB devices work, power management,
and networking are hard problems that cross every layer from the
kernel to the desktop UI.
Those kinds of problems are only going to be solved—at least in a
cross-platform manner—by all of the stakeholders working together,
from the outset, on a solution. Currently, that doesn't seem to be
happening, so the Linux-oriented solutions dominate. As GNOME
continues to move more into the system-level services, which traditionally
have been handled by the platform itself, there is clearly a need for the
Solaris and BSD communities to get involved.
Until that happens,
we are likely to continue to see the "Linux first" style of GNOME
development, either officially or tacitly.
The saga of the GCC runtime library has been covered here
a couple of times
in the past. The library's license is a legal hack which tries to accomplish a set
of seemingly conflicting goals. The GCC runtime library (needed by almost
all GCC-compiled programs) is licensed under GPLv3; that notwithstanding,
the Free Software Foundation wants this library to be usable by proprietary
programs - but only if no proprietary GCC plugins have been used in the
compilation process. The runtime library
published by the FSF appears to have accomplished those
objectives. But now it seems that, perhaps, the GCC runtime licensing has
put distributors into a difficult position.
The problem has to do with programs which are licensed exclusively under
version 2 of the GPL. Examples of such programs include git and udev,
but there are quite a few more. The GPLv3 licensing of the GCC runtime
library (as of version 4.4) would normally make that library impossible to
distribute in combination with a GPLv2-licensed program, since the two
licenses are incompatible. The runtime library exception is intended to
make that problem go away; the relevant text is:
You have permission to propagate a work of Target Code formed by
combining the Runtime Library with Independent Modules, even if
such propagation would otherwise violate the terms of GPLv3,
provided that all Target Code was generated by Eligible Compilation
Processes. You may then convey such a combination under terms of
your choice, consistent with the licensing of the Independent Modules.
So, as long as the licensing of the "Independent Modules" (the
GPLv2-licensed code, in this case) allows it, the GCC runtime library can
be distributed in binary form with code under a GPLv3-incompatible
license. So there should not be a problem here.
But what if the licensing of the "Independent Modules" does not allow this
to happen? That is the question which
Florian Weimer raised on the GCC mailing list. The GCC runtime library
exception allows that code to be combined with programs incompatible with
its license. But, if the program in question is covered by GPLv2, the
problem has not been entirely resolved: GPLv2 still does not allow the
distribution of a derived work containing code with a GPLv2-incompatible license. The
GPLv3 licensing of the runtime library is, indeed, incompatible with GPLv2,
so combining the two and distributing the result would appear to be a
violation of the program's license.
The authors of version 2 of the GPL actually anticipated this problem; for that reason,
that license, too, contains an exception:
However, as a special exception, the source code distributed need
not include anything that is normally distributed (in either source
or binary form) with the major components (compiler, kernel, and so
on) of the operating system on which the executable runs, unless
that component itself accompanies the executable.
This is the "system library" exception; without it, distributing binary
copies of GPLv2-licensed programs for proprietary platforms would not be
allowed. Even distributing a Linux binary would risk putting the
people distributing the program in a position where they would have to be
prepared to provide (under a GPLv2-compatible license)
the sources for all of the libraries used by the binary. This exception is
important; without it, distributing GPLv2-licensed programs in binary form
would be painful (at best) or simply impossible.
But note that the exception itself contains an exception: "unless
that component itself accompanies the executable." This says that,
if somebody distributes GCC together with a GPLv2-licensed program, the
system library exception does not apply to the code which comes from GCC.
And that includes the GCC runtime library. One might think that tossing a
copy of the compiler into the distribution of a binary program would be a
strange course of action, but that is
exactly what distributors do. So,
on the face of it, distributors like Debian (which, naturally, turned up
this problem) cannot package GPLv2-licensed code with the GCC 4.4 runtime
library without violating the terms of GPLv2.
This is a perverse result that, probably, was not envisioned or desired by
the FSF when it wrote these licenses. But Florian reports that attempts to get clarification
from the FSF have gone unanswered since last April. He adds:
If the FSF keeps refusing to enter any discussion on this matter
(I'm not even talking about agreeing on a solution yet!), our
options for dealing with the GCC 4.4 relicensing fallout at Debian
are pretty limited. It's also likely that any unilateral action
will undermine the effect of some of the FSF's licensing policies.
One could argue that the real problem is with the GPLv2 system library
exception-exception. That (legal) code was written in a world where there
were no free operating systems or distributors thereof, and where nobody
was really thinking that there could be conflicting versions of the GPL.
Fixing GPLv2 is not really an option, though; this particular problem will
have to be resolved elsewhere. But it's not entirely clear where that
resolution might come from.
A statement from the FSF that, in its view, distributing GPLv2-licensed
binaries with the GPLv3-licensed GCC runtime library is consistent with the
requirements of both licenses might be enough. But such a statement would
not be binding on any other copyright holders - and it is probable that the
bulk of the code which is not making the move to GPLv3 is not owned by the
FSF. A loosening of the licensing on the GCC runtime library could help,
but this is a problem which could return, zombie-like, every time a body of
library code moves to GPLv3. It's a consequence of the fundamental
incompatibility between versions 2 and 3 of the license.
This has the look of the sort of problem that might ordinarily be
studiously ignored into oblivion. If one avoids the cynical view that the
FSF desires this incompatibility as a way of pushing code toward GPLv3,
it's hard to see a situation where a copyright holder would actually
challenge a distributor for shipping this particular combination. But the
Debian Project is not known for ignoring this kind of issue. So we may
well be hearing more about this conflict in the coming months.
(Thanks to Brad Hards for the heads-up on this issue).
It is hard to have an overriding "theme" at an event as large as
O'Reilly's Open Source
Convention (OSCON), but during the 2009 convention, one subject that
came up again and again was increasing the number of connections between
open source and government. There are three basic facets to the topic:
adoption of open source products by government agencies, participation in
open source project development by governments and their employees, and
using open source to increase transparency and public access to
governmental data and resources. Though much of the discussion
(particularly in the latter category) sprang from the new Obama
administration's interest in open data and government transparency, very
few of the issues are US-centric: the big obstacles to
government adoption of open source technology are the same around the
world, from opaque procurement processes to
fears about secrecy and security.
O'Reilly CEO Tim O'Reilly was the first to broach the subject, in his
Wednesday morning keynote, and over the next three days, no fewer than
three talks and three panel discussions dealt with government and open
source interaction. The Open Source
Initiative's (OSI) Danese Cooper led the "Open Source, Open Government"
panel, which addressed all three dimensions of the issue in turn.
Deborah Bryant of Oregon State University's Open Source Lab (OSL) led the panel
discussion "Bureaucrats, Technocrats and Policy Cats: How the Government is
turning to Open Source, and Why," which focused on adoption and
transparency. Adina Levin of Socialtext led the "Hacking the Open
Government" panel in a discussion centering on open data access.
Clay Johnson's "Apps for America" session dealt with open source
adoption and open data, courtesy of Sunlight Labs' involvement in the US
government's Data.gov service. Gunnar
Hellekson of Red Hat emphasized
government participation in his "Applying Open Source Principles to Federal
Government" talk, and the "Computational Journalism" session by Nick
Diakopoulos and Brad Stenger dealt with practical examples of turning open
access government data into a usable form. Finally, Sunlight Labs led
all-day hackathon sessions Wednesday through Friday, helping attendees
build applications that use government data sources.
Government usage of open source
The open source community has two reasons to encourage increased usage
of open source code by government agencies: because it believes in the
inherent value of open source, and because using free software instead of
proprietary software means less taxpayer money is spent on IT
infrastructure. Several of the OSCON sessions addressed the barriers to
entry faced by open source as a product. Some are well-known, such as
long-time government contractors' larger presence in the bidding process and
the lingering perception that open source code leaves no one to blame when something goes wrong.
Other issues, however, are less frequently raised but just as real. For
example, several panelists at "Open Source, Open Government" agreed that
some government entities put up fierce resistance to free software because
they do not want to run afoul of ethics laws that prohibit them from
accepting gifts — if free software has value, then government
officials are not allowed to receive the code without paying for it. That
objection elicited a small amount of laughter from the audience, but all on
stage agreed that it is a genuine concern.
Solutions to these barriers to entry involve both new ideas and
old-fashioned legwork. OSI's Michael Tiemann observed that government's
distinctive buying habits afford open source some additional advantages
over proprietary software, for those who are looking for them. He cited
the example of product retirement: government agencies are often restricted
in how and when they can dispose of old technology (for security and
budgetary reasons). In contrast, open source products that are deemed failed
experiments or simply no longer needed can be disposed of easily. Hellekson concurred, noting that the US Department of Defense
has recently acknowledged
that breaking projects into smaller, modular chunks is more successful than the traditional monolithic approach.
As O'Reilly pointed out in his keynote, though, getting open source
products considered during the bidding process for most government
contracts is primarily a challenge of persistence. There are many people
with the skills to navigate the procurement processes, he said, but
considering the specialization required, few are able or willing to make
selling to a single customer (such as a national government) their entire business.
Government contributions to open source
Once a government agency has adopted an open source package for its own
internal use, there is often another battle to get the agency to
participate in the open source development model, sending patches or even
bug reports back upstream. Digium's
John Todd noted that, in his experience with the Asterisk project, public employees
often are not permitted to contribute code to open source projects, or they
find that there is no process in place to get approval to contribute.
Bryant responded to Todd's story by saying that OSL had some resources
that could prove useful in talking to public employees. OSL also hosts the
Government Open Source Conference
(GOSCON), which emphasizes participation in open source development.
Hellekson cited several examples of government agencies that are
participating in open source development, notably NASA's CoLab, the Department of Energy, the US Navy, and the
National Consortium for Offender Management
Systems, a coalition of state correctional agencies.
Enhancing government with open source
Using open source software to improve government transparency and access
was the most popular aspect of the government/open source connection
— in large part encouraged by the recent appointment of two open
source-friendly people to prominent technology positions in the US
government: Aneesh Chopra for Federal Chief Technology Officer and Vivek
Kundra for Federal Chief Information Officer.
"Open government" as a political principle is not specific to software,
but many of the speakers and panelists at OSCON focused on the areas
where open source software could contribute to the broader goal: namely,
making government-produced and government-collected data easier to access
and mine, and building mash-ups and other applications on top of government
sources that expose new information to the public.
Several of the speakers, including the Sunlight Foundation's Greg Elin,
emphasized that the new US administration's present interest in open data
is a valuable opportunity to showcase the useful public applications that
open source software can produce — but that the window of opportunity
will not remain open for long, thanks to re-election cycles and waning
interest. By the end of 2009, said Johnson, if open source coders have not
built demonstrable success stories on top of the government's open data, it
will be harder to persuade Washington D.C. to open up additional data sources.
Sunlight Labs' focus is building applications that take advantage of
Data.gov, a new initiative that makes raw data catalogs publicly available in
machine- and human-readable form. The initial data sets released are
collected from 18 agencies such as the US Geological Survey, Environmental
Protection Agency, Patent and Trademark Office, and even the Department of
Homeland Security. Sunlight is sponsoring a development contest that
will award $25,000 in prizes to open source application developers who use the data.
The various OSCON panels discussed what tools and infrastructure are
needed to better take advantage of the data that governments do provide
— including query pre-processors to enable better searching,
document-to-data conversion utilities, reusable encapsulation APIs in
popular languages like Python and Ruby, and good simulation and prediction
models to analyze the data itself in more than a historical context.
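The "reusable encapsulation API" idea is simpler than it may sound: a small library that hides the raw catalog format behind a few convenient calls. The following sketch is purely hypothetical — the catalog fields and sample rows are invented for illustration and are not taken from any actual Data.gov schema:

```python
import csv
import io

# Hypothetical catalog rows standing in for a machine-readable agency
# data feed; real Data.gov catalogs define their own formats and fields.
SAMPLE_CSV = """agency,dataset,records
USGS,earthquakes,1200
EPA,air_quality,860
USGS,stream_gauges,450
"""

def datasets_by_agency(csv_text):
    """Group dataset names under the agency that publishes them."""
    catalog = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        catalog.setdefault(row["agency"], []).append(row["dataset"])
    return catalog

print(datasets_by_agency(SAMPLE_CSV))
```

A wrapper of this sort is what lets mash-up authors think in terms of agencies and datasets rather than the quirks of each agency's export format.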
Hellekson summarized what the open source community can do to better
work with government agencies making their first forays into open source
collaboration. His three points were to remember that "government
agencies" are actually just people, to allow those people to make mistakes
and learn from them, and to celebrate their successes.
Hobbyist, to enterprise, to government
From an open source developer's perspective, local, regional, and
national governments represent potential users, customers ... and
developers. Much of the OSCON discussion about open source and government
moved beyond such practical technical considerations to touch on
philosophy, too — open content from governments should lead to more
transparent processes, greater accountability, and better democracy, or so the argument goes.
However one feels about that question, though, working more closely with
government agencies can be a huge win for open source projects and
communities. Excitement over the possibilities was on display at OSCON;
with luck the increased engagement with the public sector will be just as
fruitful as it has been with the enterprise sector over the past few years.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: A desktop "secrets" API; New vulnerabilities in bind, firefox, kernel, mysql,...
- Kernel: Dynamic probes with ftrace; Finding buffer overflows with Parfait; A tempest in a tty pot.
- Distributions: Debian and time-based freezes; Omega (Pug) Release; openSUSE 11.2 Milestone 4; Tin Hat 20090727 is out; Ubuntu Karmic Koala alpha 3; Debian to adopt time-based releases.
- Development: Google releases Neatx NX server, new versions of MySQL, python-ldap, sqlmap, Django, Gnucash, PostGIS, guitarix, GCC, Rakudo Perl, ControlTier, GIT, Mercurial.
- Press: Open Source for America, Akademy-es and OSCON coverage, Splashtop/Yahoo partnership, NASA uses Linux, Jim Ready interview, LiVES video editor, reviews of KDE 4.3, Live Services Plug-in and rBuilder 5.
- Announcements: Amazon apologizes about Kindle, LF credit card, Red Hat joins S&P 500, Pro Git online, barriers to free software entry, SourceForge awards, White Camel awards, three event reports, new Linux-Kongress dates, LGM.