Supporting multiple platforms in a free software project can be difficult, even more so when the software needs to interact closely with the underlying hardware. The GNOME project is currently struggling a bit with that issue: some would like to see a definitive statement that the GNOME desktop environment targets Linux exclusively, while others see supporting Solaris and the various flavors of BSD as essential. But, because the majority of GNOME developers are Linux-based, there will always be something of a Linux bias, as most new features, especially low-level features, get their start on Linux.
We have seen this kind of thing crop up before. The DRI/DRM project, which supports 3D graphics for the X Window System, ran into a similar problem last September. When the bulk of the development community is based on just one of the target platforms, it is difficult to fully support the minority targets. For GNOME, that means that the BSDs and Solaris have to play catch-up on low-level features like HAL or, more recently, DeviceKit and PolicyKit.
Christian Schaller started things off with a request on the gnome-desktop-devel mailing list: "So I would like to ask the GNOME release team to please come forward and clearly state that the future of GNOME is to be a linux desktop system as opposed to a desktop system for any Unix-like system." His point was that it was already a fait accompli, but that the GNOME community—and release team—should formalize the decision, rather than just continue to handle things that way.
As one might guess, that idea was far from universally welcomed. Sun folks, in particular, were not enamored with officially proclaiming GNOME to be "Linux only". Sun is a long-time contributor to GNOME and would rather see the multi-platform nature of GNOME continue. As Calum Benson put it:
One of the problems with that approach is the testing burden that it causes. Developers would need to check that their code works on several different systems, many of which are either unavailable or simply uninteresting to them. Those who want to see GNOME supported on their OS will clearly need to do the bulk of the work to make that happen. But there is an additional problem, as David Zeuthen points out:
In that message, Zeuthen outlined how he had seen several GNOME features get added to Solaris long after there were Linux implementations, which resulted in a lot more pain for Solaris. He would much rather see Sun (and other interested parties) start working on these new features as they are being developed, so that portability and other problems are identified earlier and fixed—before they become set in stone. Benson agreed: "Oh, there's no doubt Sun and our ilk have to do much better as well". Artem Kachitchkine, who did the initial HAL port to Solaris, also agreed, but thinks that it is still possible to do timely multi-platform releases:
So from a bystander's point of view, maintaining GNOME's platform neutrality requires effort from both sides: from the ideological leaders, maintaining portability as a core requirement, built in not screwed on; and from interested platforms, continuous participation and timely response.
Though the Sun folks participating in the discussion made it clear they weren't necessarily representing the company's views, the discussion does show that some Sun engineers are aware of the issues—and would like to see them get resolved. On the other hand, no one from the BSD camp spoke up, or provided any glimpse into the thinking of the other main GNOME desktop platforms. If Kachitchkine's vision is to come about, the BSDs would need to get on board as well.
Somewhat ironically, supporting GNOME on Windows and Mac OS X is quite a bit easier, as they do not require the desktop functionality. As Jason Clinton points out, those two platforms are "application target platforms" as opposed to "desktop target platforms" like Solaris, Linux, and the BSDs. He also notes that the BSD situation is rather different from that of Solaris:
OpenSolaris, however, suffers from a legacy of esoterically cathedral-like design on some fundamental sub-systems. The work to make all the things mentioned above work is so, so much more than any other platform for GNOME.
Clinton floated the idea that Sun should just drop Solaris and move to Linux, though no one really wanted to see yet another Solaris vs. Linux flamewar. But his point about Solaris standing out from the rest of the desktop target platforms rings true, and it will be up to Sun—or the OpenSolaris community—to put the effort into making GNOME work on that platform. The right way to approach that, as Zeuthen and others said, is for Solaris folks to be working with the GNOME community, not just making GNOME work on their OS. Zeuthen cites a specific example of what he means:
In the end, though, it is the evolution of what a "desktop environment" encompasses that underlies much of the difficulty with portability. With desktop environments taking on more and more of the functionality typically handled by the kernel and other low-level plumbing, it will be difficult to keep it easily portable to different platforms. Colin Walters sums it up this way:
Those kinds of problems are only going to be solved—at least in a cross-platform manner—by all of the stakeholders working together, from the outset, on a solution. Currently, that doesn't seem to be happening, so the Linux-oriented solutions dominate. As GNOME continues to move into system-level services, which traditionally have been handled by the platform itself, there is clearly a need for the Solaris and BSD communities to get involved. Until that happens, we are likely to continue to see the "Linux first" style of GNOME development, either officially or tacitly.

The licensing of the GCC runtime library has been covered here a couple of times in the past. The library's license is a legal hack which tries to accomplish a set of seemingly conflicting goals. The GCC runtime library (needed by almost all GCC-compiled programs) is licensed under GPLv3; that notwithstanding, the Free Software Foundation wants this library to be usable by proprietary programs - but only if no proprietary GCC plugins have been used in the compilation process. The runtime library exception published by the FSF appears to have accomplished those objectives. But now it seems that, perhaps, the GCC runtime licensing has put distributors into a difficult position.
The problem has to do with programs which are licensed exclusively under version 2 of the GPL. Examples of such programs include git and udev, but there are quite a few more. The GPLv3 licensing of the GCC runtime library (as of version 4.4) would normally make that library impossible to distribute in combination with a GPLv2-licensed program, since the two licenses are incompatible. The runtime library exception is intended to make that problem go away; the relevant text is:
So, as long as the licensing of the "Independent Modules" (the GPLv2-licensed code, in this case) allows it, the GCC runtime library can be distributed in binary form with code under a GPLv3-incompatible license. So there should not be a problem here.
But what if the licensing of the "Independent Modules" does not allow this to happen? That is the question which Florian Weimer raised on the GCC mailing list. The GCC runtime library exception allows that code to be combined with programs incompatible with its license. But, if the program in question is covered by GPLv2, the problem has not been entirely resolved: GPLv2 still does not allow the distribution of a derived work containing code with a GPLv2-incompatible license. The GPLv3 licensing of the runtime library is, indeed, incompatible with GPLv2, so combining the two and distributing the result would appear to be a violation of the program's license.
The authors of version 2 of the GPL actually anticipated this problem; for that reason, that license, too, contains an exception:
This is the "system library" exception; without it, distributing binary copies of GPLv2-licensed programs for proprietary platforms would not be allowed. Even distributing a Linux binary would risk putting the people distributing the program in a position where they would have to be prepared to provide (under a GPLv2-compatible license) the sources for all of the libraries used by the binary. This exception is important; without it, distributing GPLv2-licensed programs in binary form would be painful (at best) or simply impossible.
But note that the exception itself contains an exception: "unless that component itself accompanies the executable." This says that, if somebody distributes GCC together with a GPLv2-licensed program, the system library exception does not apply to the code which comes from GCC. And that includes the GCC runtime library. One might think that tossing a copy of the compiler into the distribution of a binary program would be a strange course of action, but that is exactly what distributors do. So, on the face of it, distributors like Debian (which, naturally, turned up this problem) cannot package GPLv2-licensed code with the GCC 4.4 runtime library without violating the terms of GPLv2.
This is a perverse result that, probably, was not envisioned or desired by the FSF when it wrote these licenses. But Florian reports that attempts to get clarification from the FSF have gone unanswered since last April. He adds:
One could argue that the real problem is with the GPLv2 system library exception-exception. That (legal) code was written in a world where there were no free operating systems or distributors thereof, and where nobody was really thinking that there could be conflicting versions of the GPL. Fixing GPLv2 is not really an option, though; this particular problem will have to be resolved elsewhere. But it's not entirely clear where that resolution could be.
A statement from the FSF that, in its view, distributing GPLv2-licensed binaries with the GPLv3-licensed GCC runtime library is consistent with the requirements of both licenses might be enough. But such a statement would not be binding on any other copyright holders - and it is probable that the bulk of the code which is not making the move to GPLv3 is not owned by the FSF. A loosening of the licensing on the GCC runtime library could help, but this is a problem which could return, zombie-like, every time a body of library code moves to GPLv3. It's a consequence of the fundamental incompatibility between versions 2 and 3 of the license.
This has the look of the sort of problem that might ordinarily be studiously ignored into oblivion. If one avoids the cynical view that the FSF desires this incompatibility as a way of pushing code toward GPLv3, it's hard to see a situation where a copyright holder would actually challenge a distributor for shipping this particular combination. But the Debian Project is not known for ignoring this kind of issue. So we may well be hearing more about this conflict in the coming months.
(Thanks to Brad Hards for the heads-up on this issue).
It is hard to have an overriding "theme" at an event as large as O'Reilly's Open Source Convention (OSCON), but during the 2009 convention, one subject that came up again and again was increasing the number of connections between open source and government. There are three basic facets to the topic: adoption of open source products by government agencies, participation in open source project development by governments and their employees, and using open source to increase transparency and public access to governmental data and resources. Though much of the discussion (particularly in the latter category) sprang from the new Obama administration's interest in open data and government transparency, very few of the issues are US-centric: the big obstacles to government adoption of open source technology are the same around the world, from opaque procurement processes to fears about secrecy and security.
O'Reilly CEO Tim O'Reilly was the first to broach the subject, in his Wednesday morning keynote, and over the next three days, no fewer than three talks and three panel discussions dealt with government and open source interaction. The Open Source Initiative's (OSI) Danese Cooper led the "Open Source, Open Government" panel, which addressed all three dimensions of the issue turn by turn. Deborah Bryant of Oregon State University's Open Source Lab (OSL) led the panel discussion "Bureaucrats, Technocrats and Policy Cats: How the Government is turning to Open Source, and Why," which focused on adoption and transparency. Adina Levin of Socialtext led the "Hacking the Open Government" panel in a discussion centering on open data access.
Clay Johnson's "Apps for America" session dealt with open source adoption and open data, courtesy of Sunlight Labs' involvement in the US government's Data.gov service. Gunnar Hellekson of Red Hat emphasized government participation in his "Applying Open Source Principles to Federal Government" talk, and the "Computational Journalism" session by Nick Diakopoulos and Brad Stenger dealt with practical examples of turning open access government data into a usable form. Finally, Sunlight Labs led all-day hackathon sessions Wednesday through Friday, helping attendees build applications that use government data sources.
The open source community has two reasons to encourage increased usage of open source code by government agencies: because it believes in the inherent value of open source, and because using free software instead of proprietary software means less taxpayer money is spent on IT infrastructure. Several of the OSCON sessions addressed the barriers to entry faced by open source as a product. Some are well-known, such as long-time government contractors' larger presence in the bidding process and the lingering perception that open source code leaves no one to blame when problems arise.
Other issues, however, are less frequently raised but just as real. For example, several panelists at "Open Source, Open Government" agreed that some government entities put up fierce resistance to free software because they do not want to run afoul of ethics laws that prohibit them from accepting gifts — if free software has value, then government officials are not allowed to receive the code without paying for it. That objection elicited a small amount of laughter from the audience, but all on stage agreed that it is a genuine concern.
Solutions to these barriers to entry involve both new ideas and old-fashioned legwork. OSI's Michael Tiemann observed that government's distinctive buying habits permit open source some additional advantages over proprietary software, for those who are looking for them. He cited the example of product retirement: government agencies are often restricted in how and when they can dispose of old technology (for security and budgetary reasons). In contrast, open source products that are deemed failed experiments or simply no longer needed can be disposed of easily. Hellekson concurred, noting that the US Department of Defense has recently acknowledged that breaking projects into smaller, modular chunks is more successful than the traditional large contracts.
As O'Reilly pointed out in his keynote, though, getting open source products considered during the bidding process for most government contracts is primarily a challenge of persistence. There are many people with the skills to navigate the procurement processes, he said, but considering the specialization required, few are able or willing to make selling to a single customer (such as a national government) their entire career.
Once a government agency has adopted an open source package for its own internal use, there is often another battle to get the agency to participate in the open source development model, sending patches or even bug reports back upstream. Digium's John Todd noted that, in his experience with the Asterisk project, public employees often are not permitted to contribute code to open source projects, or they find that there is no process in place to get approval to contribute.
Bryant responded to Todd's story by saying that OSL had some resources that could prove useful in talking to public employees. OSL also hosts the Government Open Source Conference (GOSCON), which emphasizes participation in open source development.
Hellekson cited several examples of government agencies that are participating in open source development, notably NASA's CoLab, the Department of Energy, the US Navy, and the National Consortium for Offender Management Systems, a coalition of state correctional agencies.
Using open source software to improve government transparency and access was the most popular aspect of the government/open source connection — in large part encouraged by the recent appointment of two open source-friendly people to prominent technology positions in the US government: Aneesh Chopra as Federal Chief Technology Officer and Vivek Kundra as Federal Chief Information Officer.
"Open government" as a political principle is not specific to software, but many of the speakers and panelists at OSCON zeroed in on the areas where open source software could contribute to the broader goal: namely, making government-produced and government-collected data easier to access and mine, and building mash-ups and other applications on top of government sources that expose new information to the public.
Several of the speakers, including the Sunlight Foundation's Greg Elin, emphasized that the new US administration's present interest in open data is a valuable opportunity to showcase the useful public applications that open source software can produce — but that the window of opportunity will not remain open for long, thanks to re-election cycles and waning interest. By the end of 2009, said Johnson, if open source coders have not built demonstrable success stories on top of the government's open data, it will be harder to persuade Washington D.C. to open up additional data sets.
Sunlight Labs' focus is building applications that take advantage of Data.gov, a new initiative that makes raw data catalogs publicly available in machine- and human-readable form. The initial data sets released are collected from 18 agencies such as the US Geological Survey, Environmental Protection Agency, Patent and Trademark Office, and even the Department of Homeland Security. Sunlight is sponsoring a development contest that will award $25,000 in prizes to developers of open source applications that use Data.gov.
The various OSCON panels discussed what tools and infrastructure are needed to better take advantage of the data that governments do provide — including query pre-processors to enable better searching, document-to-data conversion utilities, reusable encapsulation APIs in popular languages like Python and Ruby, and good simulation and prediction models to analyze the data itself in more than a historical context.
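The "encapsulation API" idea is simple enough to sketch. What follows is a minimal, hypothetical example — not any actual Data.gov interface — showing how a raw CSV catalog of the kind released on Data.gov might be wrapped in a small, reusable query layer in Python:

```python
import csv
import io

def parse_catalog(text):
    """Turn a raw CSV catalog into a list of per-row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def query(rows, **criteria):
    """Return the rows whose fields match all of the given criteria."""
    return [row for row in rows
            if all(row.get(field) == value
                   for field, value in criteria.items())]

# A tiny stand-in for a real data set (the agencies and fields here
# are made up for illustration):
raw = "agency,dataset,format\nEPA,air-quality,csv\nUSGS,earthquakes,xml\n"
rows = parse_catalog(raw)
print(query(rows, agency="EPA"))
```

Real wrappers would, of course, have to cope with fetching the data, inconsistent field names, and much larger files — but the point of such libraries is precisely to hide that plumbing so application authors can concentrate on the mash-up itself.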
Hellekson summarized what the open source community can do to better work with government agencies making their first forays into open source collaboration. His three points were to remember that "government agencies" are actually just people, to allow those people to make mistakes and learn from them, and to celebrate their successes.
From an open source developer's perspective, local, regional, and national governments represent potential users, customers ... and developers. Much of the OSCON discussion about open source and government moved beyond such practical technical considerations to touch on philosophy, too — open content from governments should lead to more transparent processes, greater accountability, and better democracy, so the argument goes.
However one feels about that argument, working more closely with government agencies can be a huge win for open source projects and communities. Excitement over the possibilities was on display at OSCON; with luck, the increased engagement with the public sector will prove just as fruitful as the engagement with the enterprise sector has over the past few years.
Page editor: Jonathan Corbet
Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds