It has been nearly six years since the release of X11R7.0, the first major release done by the X.Org Foundation.
Among the many changes in this release was the change to a new modular
architecture that allowed individual graphics drivers to be built and
distributed independently from the server itself. Separating out the
drivers was seen as a way to make it easier to contribute to their
development and to get support for new hardware out to users as quickly as
possible. By all accounts, X.Org has been a success, and the modular
server architecture would seem to have worked out well.
So it is interesting to see a surge of new discussion of the architecture
of our graphics subsystems at all levels. At the kernel level, developers
are rethinking the relationships between graphics subsystems; that
discussion will be covered in a separate article. At the X.Org level,
instead, developers are considering undoing one of the headline changes
from X11R7.0 and pulling graphics drivers back into the server itself.
The topic was evidently discussed at the recently-concluded X.Org Developer
Conference; Jesse Barnes then brought it to the
mailing list with the goal of discussing the good and bad aspects of
returning to a monolithic server. At the top of his list of "pros" was
that it would make it easier to make API changes in the server and push
them into drivers. The kernel benefits from the ability to make internal
API changes quickly; X could also make use of this flexibility. Being able
to effect API changes immediately would also enable the removal of a bunch
of backward-compatibility code intended to make it easier to mix driver and server versions.
The down side, of course, is that life gets harder for developers
maintaining out-of-tree drivers - though it must be said that not everybody
saw that as a disadvantage. Perhaps a monolithic server will encourage
more drivers to move into the X.Org repository. But there is an
interesting twist: some drivers, like the Wacom
input driver, are licensed under the GPL. X, of course, is under the MIT
license; adding GPL-licensed code into the mix would raise some awkward
questions. What that means, in reality, is that some drivers will remain
outside of the X server repository regardless of what happens to the rest.
Alex Deucher made the claim that API
changes in the X server have been decreasing in frequency for some time, so
the ability to make them more easily may not be all that valuable. The
movement of much of the driver code into the kernel may have caused a
reduction in disruptive driver changes in the X.Org server. But it may
also be that, as Keith Packard said:
"We don't get ABI changes because they're nearly impossible to
handle." If those changes could be made more easily, chances are
that the server would see more of them. Dave Airlie noted that he has a scheme in the works to
make major changes to the driver API, but that he does not see how it can
be done with the current, modular model.
How merging the drivers would affect testing is not entirely clear. On the
one hand, testers working with new drivers would be building a new server
as well, improving testing of the whole system. On the other, testing just
a new driver without moving to a new server as well would become harder
with the effect that some users are likely to just not
bother. So the amount of driver testing could decrease. Nobody really
knows how that would settle out, though, without actually trying the experiment.
Distributors could end up working harder to backport drivers to older
servers, but the level of concern expressed by distributors in this
discussion has been relatively low. As Ubuntu maintainer Bryce Harrington
put it: "I favor just doing whatever
the upstream developers feel is best for their own needs."
A different concern expressed by driver developers has to do with the core X.Org
development process. The rules say that all patches must be reviewed
before they can be merged into the master branch. Like all other projects,
X.Org is short of reviewer time, so getting the requisite Reviewed-by tags
for obscure changes can be a long and frustrating process. Driver code is
typically understood by - and reviewed by - fewer developers than core
code; that suggests that making changes to drivers in a monolithic server
could be hard. The project might have to make some process changes in
response; perhaps, as is the case in the kernel, the bar for driver changes
could be set a little lower than for changes to the core code.
From your editor's reading of the discussion, it seems that there is
a stronger sentiment in favor of making this change than there is against
it. Those in favor see a way of making the development process more
efficient and more closely integrating all the pieces of the X server.
Those who have expressed opposition are mostly concerned about peripheral
issues: licensing, review process, etc. Even the developer who is arguably
the strongest opponent of the change - Luc Verhaegen - has argued that the real concerns are elsewhere.
A better "mindset" when it comes to the design of the core API, he said,
would have more far-ranging effects than the organization of the source tree.
As of this writing, no decision has been made public. It is not even 100%
clear how such a decision can be made in the X.Org environment, which lacks
a dictator (benevolent or otherwise) with the power to impose or block a
change. Any project wanting to be successful in the long term must
occasionally examine and tweak its institutions and processes. X.Org has
already achieved the "long term," but not without some significant changes
on the way. Relative to those changes, a decision to pull the drivers back
in - or not to do so - seems relatively insignificant. The project, its
processes, and its code have all improved considerably over the course of
the last six years or so; that trend seems likely to continue into the
future either way.
Open source and free software projects often encounter culture clash
whenever they have to work with standards bodies. The most obvious problem
is the secrecy that many proprietary-vendor-driven standards processes
demand of participants, but that is not the only challenge. The PostgreSQL
database project has been grappling with these challenges in recent weeks
in an effort to strike a balance between its needs as a project and the
closed structures and process of the ISO, which is the publisher of the
official standard for SQL.
The topic arose on the pgsql-hackers mailing list in mid-September, when Susanne Ebrecht lamented the apparent lack of interest in the SQL standards process among PostgreSQL developers, prompted by her experience having a conference talk proposal on the subject rejected. She noted that another ISO meeting was fast approaching, and although rules prevented her from disclosing new drafts of the standard to "the public," she was permitted to discuss them privately with the organization that supported her (PostgreSQL), and asked if there was sufficient interest to set up a private mailing list for such discussions.
It apparently came as a surprise to several on the list that Ebrecht was
an official representative in the ISO process. However, as she elaborated
to the list, her role is not a direct (or a particularly powerful) one.
The ISO has managed the SQL standard since 1987, as ISO/IEC
9075. But the ISO itself is composed of representatives — one
per country — from 162 separate national standards bodies. The
German standards body Deutsches Institut für Normung (DIN) solicited
Ebrecht's input for their own
work on SQL.
The final voting on changes to the ISO standard for SQL is done by the assembled national representatives, however. Thus, even though Ebrecht can present PostgreSQL's concerns to the DIN SQL committee, they are still several steps removed from making it into the eventual standard — steps where the vested interests of corporations and other nations gain more and more influence on the outcome. The real practical question posed to PostgreSQL is how Ebrecht could communicate about the process to the developers without running afoul of the committees' secrecy rules.
It might be possible to avoid violating the non-disclosure rule by discussing broad changes to the drafts on a public mailing list without going into detail. But in SQL as in so much of life, the devil is in the details, so the consensus eventually was that a private list would be set up, to which Ebrecht could forward updates from the standards-writing process. To keep the list traffic confidential, it would be limited to known PostgreSQL contributors.
Standards: who needs 'em?
On the plus side, there does seem to be a healthy interest among project
members in following the ISO standards process. As Heikki Linnakangas said,
the process may not have sparked much discussion over the years, but
"it's hard to get excited about something if you don't know what's
happening." As core team member Josh Berkus said in an email,
though, the non-disclosure rules are just one of several challenges.
These challenges are:
- Requirements of confidentiality around all proceedings of the
committee, which causes extreme difficulty for open source projects used to
making all internal decisions on public mailing lists;
- Requirements to designate specific, pre-cleared staff who need to
attend meetings by telephone or in person, around the world, adding expense
and time requirements open source projects have trouble meeting;
- Intense political atmosphere where all decisions are a matter of
vendor alliances and have little or nothing to do with technical merit.
The ISO SQL committee is a particularly egregious example of the first point. Not only are all of their internal drafts secret, but the final published SQL standard is not available freely; it's vended for a substantial fee with restrictive copyright. While there are reasons to keep the minutes of the meetings confidential, there's no really good reason for this level of secrecy over the drafts and final publication, except to support the incumbent proprietary vendors.
On the third point, Berkus offered a specific example where influential
vendors appear to have used the standards process as a weapon. Both
PostgreSQL and MySQL supported a simple syntax for the retrieval of a subset of
the rows returned by a query using the LIMIT and OFFSET
operators, he said, syntax which was well-understood and well-liked by
users. But the standards committee adopted a different syntax that was
more verbose, but which added no additional features or flexibility. He said:
While the minutes of the meetings in question are closed to me, I
suspect that the entire motivation for this was Oracle and Microsoft's
desire to specify something which would be incompatible with the leading
open source databases.
Open source projects are not the only players put at a disadvantage by this sort of tactic, either, he observed. The same hurdles affect startup companies, to the protection of entrenched players against competition.
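To make the contrast concrete, here is a small sketch of the two forms of pagination as they might be run from psycopg2; the connection settings and the "items" table are hypothetical. PostgreSQL happens to accept both spellings (the standard form since 8.4), so the two queries return the same rows.

    import psycopg2

    # Hypothetical local database and table, purely for illustration.
    conn = psycopg2.connect(dbname="testdb")
    cur = conn.cursor()

    # The widely-used PostgreSQL/MySQL form Berkus describes:
    cur.execute("SELECT id, name FROM items ORDER BY id LIMIT 10 OFFSET 20")
    rows_limit = cur.fetchall()

    # The more verbose form the committee standardized instead:
    cur.execute("SELECT id, name FROM items ORDER BY id "
                "OFFSET 20 ROWS FETCH FIRST 10 ROWS ONLY")
    rows_fetch = cur.fetchall()

    assert rows_limit == rows_fetch   # same result set, different spelling
    cur.close()
    conn.close()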
Distrust of the ISO process was visible from others in the project as
well. PostgreSQL's resident standards guru Peter Eisentraut commented in
an off-list email that, for end users, SQL is "pretty useless as a
'standard'" when compared to more complete specifications like C and
XML. SQL lacks specifications for important features like optimization and
administration, he said, and worse still, the language itself is
"baroque," with every new feature adopting a completely new
syntax. As a result, there is no clear way to extend the language in a
consistent fashion, which is problematic for PostgreSQL and other implementations.
Open source, proprietary vendors, and incompatibility
Joe Abbate mused that perhaps it was time for the open source database players to establish their own standard not controlled by incumbent vendors out to protect their business. Abbate's initial message to that effect came across as a call to form an "open source fork" of SQL itself, which most of the PostgreSQL team seemed to think was a bad idea. In addition to the confusion it would create for users, attempting a fork would require tremendous time and energy — and as Greg Smith commented, "standardization tends to attract lots of paperwork. Last thing you want to be competing with a big company on is doing that sort of big company work."
On the other hand, some, like Christopher Browne, pointed out that open source projects should consider participating in new standards processes that are just beginning, such as the UnQL specification proposed for NoSQL database queries. Darren Duncan suggested much the same thing with respect to the Muldis D language.
Abbate clarified his intention in a follow-up
message, saying he did not mean to propose embarking on a
standards-fork. "I only think it may be useful to discuss SQL
features, informally or otherwise, with other open source 'competitors'
such as SQLite, MySQL (brethren), Firebird, etc.."
With regard to Abbate's idea, Berkus affirmed the value of communication
between the various open source database projects, noting that they already
meet annually at OpenSQL Camp. But
there are essentially only three open source relational databases that
matter, he said: PostgreSQL, MySQL, and SQLite. Among those, MySQL is now
split into several competing fragments, the largest of which is owned by
Oracle. As a result, cross-project communication boils down to PostgreSQL
concurring with SQLite, he said, "which we already mostly do."
Realistically, though, Berkus does not feel that SQL users are demanding
more features and syntax:
I personally can't think of too many things I'd want to *add* to
the SQL standard. Simplify, yes, but add, no. Possibly the OpenSQL
group could work on more accessible syntax for stuff like windowing
and recursive queries. However, it's more likely that we'll be
working more on direct language interfaces in the future instead.
In the broader open source community, then, relational databases may have it easy because SQL is old enough that it is both well known and established (not to mention the fact that most users are resigned to incompatibility between competing vendors). Other software projects are not so lucky, from patent-driven fights about video codecs in HTML5 to supporting new hardware specification in the Linux kernel. The roadblocks Berkus mentioned are problematic no matter what the standard. Large projects or well-funded organizations may be fortunate enough to get a representative into the process (as PostgreSQL has), but a closed process dominated by proprietary vendors cannot be reformed in a day.
The GNOME project is currently readying its 3.2 release, the first
update to the re-vamped infrastructure and environment introduced in April
2011. Although many minor enhancements and changes are slated to roll
out with 3.2, the one with the greatest potential to affect end users
is the extensions mechanism for the GNOME Shell desktop interface.
When 3.0 was released, a large slice of the negative criticism it
received centered around the difficulty of customizing GNOME Shell when compared
to the GNOME 2.x panel and menu system. The placement and orientation of
GNOME Shell's interface elements was fixed; fonts, key bindings, and icons
could not be changed; popular informational applets and controls were
missing (and there was no facility for bringing them back); there were no
UI or window-manager themes, et cetera. A stopgap measure called Gnome Tweak Tool appeared
later that restored user control over a number of basic settings, but only
for a fixed set of options.
In the months since, that extension system has slowly begun to take shape. Initially, individual developers would announce extensions on their personal blogs, which were periodically rounded-up on third-party discussion-and-review blogs. That process made locating them difficult, and knowing which ones to trust dicey. An official collection is now hosted in the GNOME Git repository, currently consisting of nine extensions, but it became clear in recent months that a real extension infrastructure would be needed — to handle hosting public extensions, validating and reviewing code, and providing users with a simple way to install and uninstall their selections from within GNOME.
The foot with a sweet tooth
Sweet Tooth is the codename for the new GNOME Shell extension infrastructure project.
The user-facing front end of the system is a planned extension-hosting web
site à la Mozilla's addons.mozilla.org, at which visitors can
search for and download extensions. The URL for the site is variously
reported as extensions.gnome.org or extensions.gnome3.org, although neither
is active yet. There will be (at least) two substantive differences
between the GNOME extension site and Mozilla's Add-Ons repository, however.
While one possible solution would be to limit the installation of extensions
to a specialized application, the current scheme is to allow access to
the extension site from any modern browser and to use other measures to
detect and block malware. A sticking point in this approach is how to
permit the site's web application to safely learn which Shell extensions
are currently installed (and at which version numbers) in order to
present the correct options to the user in the UI (i.e., "Install" versus
"Uninstall / Upgrade"); it is also necessary to create a mechanism to
actually install extensions from the browser.
To provide those capabilities, some sort of go-between
is required, perhaps a local process that can speak HTTP to the browser,
although there are of course inherent security risks to running a local
server that responds to queries about the local filesystem. On the
extension site's side of the equation, the plan is to implement a code
review policy with cryptographically-signed uploads of each extension.
Reviewers will only be responsible for checking new extensions for
malicious behavior, not grading them on importance, functionality, or quality.
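As a rough illustration of the kind of go-between described above, the sketch below runs a small helper on the loopback interface that reports installed extensions as JSON. The port number and the reliance on metadata.json files under ~/.local/share/gnome-shell/extensions are assumptions made for the example, not the project's actual design.

    import json
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    EXT_DIR = os.path.expanduser("~/.local/share/gnome-shell/extensions")

    def installed_extensions():
        # Collect a uuid -> version map from each extension's metadata.json.
        found = {}
        if os.path.isdir(EXT_DIR):
            for uuid in os.listdir(EXT_DIR):
                try:
                    with open(os.path.join(EXT_DIR, uuid, "metadata.json")) as f:
                        found[uuid] = json.load(f).get("version", "unknown")
                except (OSError, ValueError):
                    continue   # ignore directories without readable metadata
        return found

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Answer every request with the JSON list of installed extensions.
            body = json.dumps(installed_extensions()).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Binding only to 127.0.0.1 limits, but does not eliminate, the
        # filesystem-exposure risk mentioned above.
        HTTPServer(("127.0.0.1", 8123), Handler).serve_forever()

Reporting what is installed is only half of the problem, of course; the web site would still need some way to trigger installation and removal, which is the harder part.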
The GNOME Shell discussion list has been debating several approaches to the extension-signing and review-process pieces of the puzzle. Owen Taylor's preferred plan involves two signatures: one from each reviewer, and a separate one from the site — although he noted that the manual steps could constitute a weak spot.
In theory signatures can offer a layer of security that is independent of the security of the hosting of extensions. It's hard for me to wrap my head around a way to make that practical, if we want to be able to review and approve extensions through the web UI.
Schemes I can come up with end up with something like:
- Reviewers download and review extensions locally, then sign them with their GPG key.
- An administrator takes the signed extensions, checks the reviewers signature and then adds the official GNOME signature.
That would be very secure, but it also creates manual steps and points of failure that would likely make the system just not work in practice.
We shouldn't forget either that our opinion about whether an extension is safe can change over time. A signature only means that that exact code base passed review at one point in time.
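For concreteness, the manual two-signature flow Taylor outlines might look roughly like the following, driven from Python with the gpg command line; the key IDs and file names are hypothetical.

    import subprocess

    def reviewer_sign(extension_zip, reviewer_key):
        # Step 1: a reviewer signs the reviewed archive with their own key.
        subprocess.check_call(["gpg", "--local-user", reviewer_key,
                               "--detach-sign", "--output",
                               extension_zip + ".reviewer.sig", extension_zip])

    def add_official_signature(extension_zip, reviewer_sig, gnome_key):
        # Step 2: an administrator verifies the reviewer's signature...
        subprocess.check_call(["gpg", "--verify", reviewer_sig, extension_zip])
        # ...and then attaches the official GNOME signature.
        subprocess.check_call(["gpg", "--local-user", gnome_key,
                               "--detach-sign", "--output",
                               extension_zip + ".gnome.sig", extension_zip])

The manual verification step in the middle is exactly the point of failure the quote worries about.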
Extension security is an issue, but as was noted elsewhere in the thread, it is not a greater security risk than that of running any other desktop application.
As the example extensions linked to above show, however, a fair number of GNOME Shell extension authors appear interested in substantially changing the desktop experience. The GNOME Shell team seems to be taking a hands-off approach (noting on the Sweet Tooth wiki page that the project will not endorse or support third-party extensions).
Hopefully making that policy prominent on the extensions site will appease the camp that worries about customization diluting the "GNOME brand," but it does leave open the possibility of mutually-conflicting extensions. So far there appear to be no safeguards in place, although there was some discussion about an SELinux-like permission system. Keeping track of permission sets is primarily a security policy tool, but it would also assist in managing collections of extensions.
The developer angle
Looking Glass offers a way for aspiring extension writers to explore the GNOME Shell environment. One can select items in GNOME Shell with the mouse and copy the underlying GObject structure to the Looking Glass evaluator, and there is a special function to slow down animations for easier debugging. But it still is not complete enough to serve as a full development environment.
A bigger issue is that GNOME Shell still lacks comprehensive documentation of its structure, methods, and conventions — despite the fact that such documentation is part of the official roadmap. The extension system is potentially powerful, but the only way for outsiders to learn how to write extensions is to hunt for tutorials and examples on individual blogs. Some of them are quite good, such as Finnbarr Murphy's. But without a real effort to maintain official documentation, they rapidly fall out of date.
Providing add-on developers with adequate resources is an area where Mozilla excels with its Mozilla Developer Network sites; GNOME will need to play catch-up in order to grow a healthy GNOME Shell extension community.
With the freeze for 3.2 upon us and the extensions site still not up and running, it appears that the framework will be relegated to a "soft-launch" in the 3.2 cycle. Taylor described it as "a bit of a stealth-beta for this release ... something we're still working on, something we don't advertise as a release feature, but something that you can already use."
Perhaps that is for the best. Although the extensibility of GNOME Shell is promising (perusing the extensions already written is quite an experience), the human side of the framework still needs work. One only needs to take a look at the public response to GNOME 3.0 to see how poor messaging can undermine a technical success.
The 3.0 public relations and marketing blitz completely failed to communicate to users and developers that an extension mechanism was in the works, or even a possibility for the future — it got no mention in the release notes, the talking points, or even the design documents. GNOME developers have been talking about GNOME Shell extensions in the run-up to 3.2, but with API documentation and developer outreach still missing in action, the risk is that yet another major release will pass with the project missing a chance to attract coders and strengthen its software ecosystem.