The X.Org Foundation released
xorg-server 1.7 on October 1st, in preparation for the imminent release of
X11R7.5. Users can look
forward to improvements in display configuration, screen transformation,
and input devices, including the much-anticipated Multi-Pointer X (MPX)
code that supports multiple independent keyboard focus points and mouse
pointers. At the same time, the development team is drawing up plans to
adopt a new release process to accommodate a predictable release schedule
and better testing.
What's new with 1.7 and 7.5
Lower-level changes in the new release include several new
display-oriented technologies: support for Enhanced Extended Display
Identification Data (E-EDID) and an update to the X Resize, Rotate and
Reflect Extension (XRandR). Another proposed update, the "Shatter"
enhancement to the EXA acceleration architecture, was deferred to a future
release.
E-EDID is a revision of the EDID format with which monitors provide a
machine-readable list of capabilities to attached graphics cards. E-EDID
supports longer strings than EDID, localization of strings, and adds fields
for aspect ratio changing and additional timing and frequency formulas.
E-EDID will eventually be superseded by a newer format called DisplayID,
but is of particular importance to home electronics users because of its
usage in HDMI devices.
XRandR 1.3 adds two new capabilities: projective transforms and panning.
Projective Transforms allow more generalized transformations of the image
buffer than the previously supported rotation and reflection. This will
allow transforms to correct for keystoning and other distortion, as well as
scaling of the image buffer. If the displayed desktop is smaller than the
virtual screen size, enabling panning will allow the display to follow the
pointer around the virtual screen.
The deferred project Shatter was one of
X.Org's Google Summer of Code projects, and when integrated will allow
screens to be split between multiple framebuffers.
Input devices also see changes in this release, most notably with MPX.
As the name indicates, MPX allows multiple input devices to be used at
once. That does not mean merely the ability to plug in two mice and two
keyboards physically; X has supported that for a long time. But without
MPX, multiple attached mice both control the same pointer, and multiple
keyboards both route keystrokes into the same input stream. MPX allows for
multiple, separate cursors, with separate focus behavior. Some X
applications and toolkits will require modification to work with MPX, as
they hard-code the assumption that there is only one keyboard and one
pointer.
MPX is part of a larger revision to the X input system named XInput2.
XInput2 builds on the previous XInput API, and adds other features such as
Device Properties, a mechanism through which generic properties can be
attached to input devices to report special characteristics to the X server
and client applications. Such properties might include mouse button
timeouts, pointer acceleration, or even logical names (such as
distinguishing between multiple attached mice).
Other updates to specific subsystems include changes for Mesa, SELinux,
and VGA arbitration, enhancements to the XQuartz server designed for Mac OS
X, as well as the deprecation of several obsolete and unmaintained modules.
The process for 1.8
Peter Hutterer proposed
reworking the X.Org release process in an email to the xorg-devel mailing
list on September 26. He cited three problems with the existing process:
an unpredictable schedule, too much development in the git master that
frequently leaves it broken and unusable, and a too-short testing cycle
that occurs late in the release process. He noted that the three problems
were tightly related, and proposed that the project adopt a timed,
predictable release schedule with separate windows for feature merging,
bug fixing, and final testing.
The proposed process begins with starting separate branches for new
features, rather than developing them as patch sets that could disrupt
master. For each release cycle, the project would then use a three month
merge window to integrate the feature branches into master, then enter into
a two-month bug fix window, and finally freeze master for a one-month
release window, during which time a release manager is in charge, and only
crucial fixes are merged in. The result, argued Hutterer, would be a
predictable six-month release cycle, and a much easier environment for
testing.
Keith Packard questioned
whether 3:2:1 was the best ratio for feature merging, bug fixing, and
release freezing, specifically noting that the feature merge window was
considerably larger than that used by the Linux kernel team. Hutterer replied
that he thought it was a good starting value, particularly due to the fact
that the entire process was new, but added that he thought every facet of
the process should be reviewed after the 1.8 cycle, including possibly
the merge window.
The effect on testing was particularly popular with the other developers
on the list during the subsequent discussion. Several contrasted X.Org's
differences from the Linux kernel, beginning with the relative scarcity of
X.Org testers. The consensus in the thread was that the history of an
unstable git master and lack of documentation to guide willing testers in
building and testing the code was to blame; a revised release process with
a stable master and individual feature branches could go a long way towards
building a community of active X.Org testers.
Hutterer made his proposal on the list because he was unable to attend
the 2009 X Developers'
Conference (XDC), held in Portland September 28-30. The XDC attendees
discussed the proposal, after which Daniel Stone posted
their decisions to xorg-devel. The group plans to adopt the basic proposed
model for the xorg-server 1.8 / X11R7.6 release cycle, with the addition of
choosing release managers for each cycle and asking developers to adopt
per-subsystem trees in the same manner that the Linux kernel developers use
for collecting and merging patches.
Stone's email generated its own controversy thanks to its suggestion
that if the new process is a success for the 1.8/7.6 cycle, then the
next step would be to merge graphics drivers into the main xorg-server tree
for the 1.10/7.7 cycle. The arguments against merging drivers into the
main xorg-server code base included license incompatibilities, but
ultimately more developers deemed the simplicity of maintaining drivers in
the same codebase as the server to be a long-term win. Still, that change
in source code management is just a proposal, and one slated for two
release cycles in the future.
Ultimately, the goal of the proposed new release process is to make the
main X.Org codebase more stable, more predictable, and as a result, easier
to test. As several on the xorg-devel list pointed out, xorg-server is
used on just as many systems as the Linux kernel, but has only a fraction
of the active testers that help make the kernel so robust. X.Org continues
to make improvements and enhancements with every release, and long gone are
the naysayers of a decade ago who proposed ditching X altogether. Hopefully,
X11R7.6 and xorg-server 1.8 will arrive on schedule six months from now,
and will show the fruits of a longer and more determined testing process.
The Real Time Linux Workshop
was held in Dresden, Germany, at
the end of September; it was attended by some 200 researchers and developers
working in that area. RTLWS was a well-organized event, with engaged
participants, interesting topics, and more than adequate amounts of German
beer. This article will be concerned with three sessions from that event;
other topics (deadline schedulers in particular) will be looked at separately.
Real time or real fast?
There is a certain amount of confusion surrounding realtime systems; most
commonly, people think that it is concerned with speed. The real focus of
realtime computing, though, is determinism: the fastest possible response
is far less important than knowing that the system will respond within a
bounded time period. In fact, realtime is often at odds with speed,
especially if speed is measured in system throughput; this conflict was
driven home by Paul McKenney's talk titled "Real fast or real time: how to
choose." Paul concluded that one should choose the "real fast" option in a
number of situations, including those where throughput is the primary
consideration, virtualization is in use, or hard deadlines are not
present. In other words, if realtime response is not needed, a realtime
kernel should not be used - not a particularly surprising conclusion.
Interestingly, though, the "real fast" option may sometimes be best in
hard-deadline situations as well. In particular, if the amount of
processing which must be done within the deadline is large enough, the
performance costs associated with hard realtime systems may become more of
an impediment to getting the work done in time than the non-deterministic
nature of general-purpose systems. The number Paul put out was 20ms; if
the system must do more computing than that within each deadline cycle, it
is likely to perform better on "real fast" machines. In other words, after
20ms of computation, a throughput-optimized system will have caught up
enough to make up for any extra latency which might delay the start of that
work. See Paul's paper [PDF] for the details.
Determinism is generally seen as a software issue; it is expected that hardware
always behaves in a consistent way. Some research [PDF]
presented by Peter Okech,
though, makes it clear that contemporary hardware is not as deterministic
as one might think. Today's computers incorporate a great deal of
complexity from many sources: multiple processors, multiple levels of
instruction-processing pipelines, instruction reordering, branch
prediction, system management interrupts, etc. From complexity, says
Peter, comes randomness. As a
demonstration of this fact, his group did extensive timings of simple
instruction sequences; even after long "warmup" cycles and with interrupts
disabled, these sequences
never did reach a point where they would execute in a constant or even
predictable amount of time.
For added fun, Peter's group coded a random number generator based on
hardware non-determinism. The resulting random number sequences were then
subjected to all of the tests they could come up with, from basic
mean-calculation and compression tests through to full entropy computation.
The results came out the same each time: instruction timings on contemporary
systems are truly random. There is no real need to buy special-purpose
hardware for random number generation; we are already running on such
hardware. Needless to say, there are implications for anybody looking for
strict determinism from their systems, especially on very small time scales.
Developers and academics
The closing event of the conference was a panel session on the disconnect
between academia and the development community; the panelists were James
H. Anderson, Thomas Gleixner, Hermann Härtig, Jan Kiszka, Doug
Niehaus, Ismael Ripoll, and Peter Zijlstra. The problem statement
asked: why are there dozens of papers on deadline schedulers, but no
implementation in Linux? How can somebody get a computer science degree
without learning about the problems posed by multicore processors? The
actual discussion was relatively unstructured, involving numerous members
of the audience, and it did not answer those specific questions. But it
was interesting nonetheless.
The session opened with an invitation to the panelists to make wishes, with
no real concern
for practicality. Developers and academics both wished that professors
could receive recognition and credit for patches which get merged into an
upstream project. The current system rewards the publication of papers
while ignoring practical contributions (including little details like
working code).
Without an incentive to get their work upstream, researchers tend to stop
working once their research reaches a publishable state.
It was noted that in some companies (Siemens was cited), employees get
credit for accepted patches in much the same way they get credit for
more traditional publications.
Another wish which was well received on both sides was the idea that
developers and researchers should attend each others' conferences. The two
groups tend to speak very different languages; for example, academics talk
about "deadlines" (a set period after which the work must be done) while
developers worry about "latency" (how long it takes the system to respond
to an event). Given fundamental concepts that differ in this way, it is not
entirely surprising that the two groups do not always communicate well.
Going over to the
other side and being immersed in the concerns and language found there
would be helpful for everybody working in this field.
Developers asked for the publication of papers which are more easily read on
their own. It is hard for busy developers to make time to read academic
papers; if they have to go look up a dozen other papers to make sense of
one, they are likely to just give up. The publication of more survey
papers was suggested as one way to help in this area. Another was to read
recent dissertations, which tend to start with relatively complete
summaries of the current state of academic understanding. The hosting of
summary tutorials at conferences was also suggested.
There was a request from academia for more example problems and tasks that
students could take on. Also requested was an easier way to hook research
code into the kernel and play with it. That might make it easier for
academics to push code upstream, but not all developers are convinced
that's a good idea. Instead, they say, it may be better if academics remain
focused on long-term problems, with the development community adapting the
best ideas for implementation and upstream merging.
If one gives academics the green light to be impractical, they will rarely
miss the opportunity. So, it was suggested, the best thing that could
happen would be that Linus Torvalds suddenly falls in love with
microkernels. Thomas Gleixner could then become the maintainer of the L4 microkernel system. The
underlying motivation here was not just that academics still think
microkernels are better (many certainly do); it's also the simple fact that
the Linux kernel has become so complex that it's getting hard for
researchers to play with.
There was some lamentation that the academic community is not really
producing students who are able to work with the development community.
They don't know how to get code upstream. Increasingly, it seems, they
don't really even know how to program - especially at the operating
systems level. The academic system was charged with churning out armies of
Java programmers who have little understanding of how computers actually
work and have no clue of the costs of things. The result is that they go
forth and create no end of highly bloated systems. The really good
developers, it was claimed, tend to come from an electrical engineering
background - though the prevalence of hardware engineers who churn out bad
code was also noted.
Some universities have experimented with "real-world programming" courses.
One of the things they have found is that registrations tend to be low -
there is not a great deal of interest in taking that kind of class. There
was also some special criticism directed toward the "Bologna process,"
which is trying to harmonize educational offerings across Europe. That
process calls for reducing the standard undergraduate program to three
years, which is not at all sufficient to teach people what they really need
to know.
A suggestion for students who are interested in learning community
development was to simply start with mailing list archives and spend some
time watching how things are done. Then dive in. The community is making
a real effort to avoid flaming people to a crisp these days, so jumping in
is safer than it once was. But, in the end, people join the development
community because they are interested in doing so; offering netiquette
lessons is unlikely to inspire more of them. There are very few students
who have the interest and the ability to become competent system-level
programmers. It has always been that way; things have not really changed
in that regard.
Internships at open source companies were suggested as a way to build both
interest and experience. Such internships exist at a number of companies,
though they tend to be fairly severely limited in number. What does exist,
though, is the Google Summer of Code program, which is, for all practical
purposes, an internship program on a massive scale. The problem here is
that the kernel and realtime communities are not really organized in a way
that lets them sign up to mentor summer of code students - this problem
should certainly be solvable.
But none of that will help if students do not want to learn to do real
development in the community. As strange as it seems, it appears to not be
an entirely attractive profession. It takes years of work to become a
competent engineer; many are simply unwilling to put in that time. Whether
things have gotten worse because people expect instant gratification now,
or whether it has always been this way was a matter of debate. One
panelist suggested that things will only get better when good engineers
make more money than good lawyers.
Another complaint was that universities have a certain tendency to actively
block free software users. Some use proprietary virtual private network
technology which is not available to Linux users. Homework submission
sites which only work with Internet Explorer were also mentioned.
The session ended with little in the way of specific action items, but
there was one: researchers requested a means by which they could easily
experiment with new scheduling algorithms in the kernel. It was agreed
that some sort of pluggable scheduler technology would be added to the
realtime tree, which has long served as a sort of playground for
interesting new approaches. A pluggable scheduler seems unlikely to make
it upstream, but presence in the realtime tree should make it sufficiently
available for researchers to make use of.
The conference adjourned with the announcement of the venue for next year's
event. The Real Time Linux Workshop has tended to move around more than
most conferences; past events have been held all over Europe as well as
China, Mexico, and the US. The 2010 Workshop will continue that practice
by moving to Nairobi, Kenya, in the latter part of October. That should be
an interesting place to discuss what's happening in the rapidly developing
realtime Linux area.
Linux-based mobile phone platforms are really just specialized
distributions. Like other distributions, phone platforms will live or die
based on how well they meet the needs of their users. The Android platform
has a high profile at the moment as the result of the entry of more
handsets into the market, but also as a result of Google's actions toward
derived distributions. Android is clearly not meeting the needs of all its
users currently, but changes are afoot which may improve the situation.
The dust has mostly settled after Google's shutdown of the Cyanogen build for Android phones.
Nobody can really dispute Google's core claim that Cyanogen was
redistributing proprietary software in ways not allowed by the license.
But numerous people have disputed Google's good sense; those
applications are freely downloadable elsewhere and can only run on phones
which already shipped with a copy. So shutting down their redistribution does
Google little (if any) good, but it has had a harsh chilling effect over the
enthusiastic communities that were promoting Android and trying to make it
better. Now those communities are trying to regroup and continue their
work, but the rules of the game have changed.
The most community-friendly representative within Google has long been
Jean-Baptiste Queru; he clearly puts quite a bit of time into helping other
developers work with Android. He is now at the center of an effort to turn
Google's "Android Open Source
Project" (AOSP) into something deserving of that name.
Jean-Baptiste has (belatedly, one might say) figured out one of the major obstacles to
contributing to the platform: the difficulty of actually running one's own
build as a day-to-day system:
The primary target form factor for Android is a phone. That means
that, deep inside, a fundamental part of allowing writers to play
their part is to allow the Android Open-Source Project to be used
on phones. And, by that, I don't just mean that it needs to compile
and boot, i mean that it has to be usable as a day-to-day
phone. Right now, it's not. The range of applications is too
limited, the applications that are in there don't all work, and
there are quite a few system glitches along the way.
Another aspect is that it makes no sense to expect every
contributor to have to apply the same set of manual patches to get
to a basic working state. Android Open-Source Project should be
usable "out of the box" on commonly available hardware.
Anybody who has tried to build and install Android knows that this "out of
the box" experience is certainly not available today. Part of the problem
is the massive size and complexity of the Android platform as a whole;
there is not a whole lot to be done about that. But even owners of the
"Android Developer Phone," who might reasonably expect to be able to
develop for their
phones, have to locate a set of proprietary components and incorporate them
into the build. And then there's the problem of those proprietary
applications. A purely-free Android build lacks the maps, gmail, calendar,
and market applications - and the synchronization backends which keep
things current with the mothership. Such a build does not equip a
handset to be "usable as a day-to-day phone."
The first step, according to Jean-Baptiste, is to get to where an Android
build just works on the target hardware - the ADP1, naturally. Once the
hardware-level hassles have been overcome, it might make sense to talk
about filling in the missing applications. But until developers can easily
create a build that runs on a real handset, there's not much point in
looking at the bigger goals. With the upcoming
AOSP 1.0 release, it looks like this preliminary phase is nearing
completion.
Solving the rest of the problems should not be all that hard. If the gmail
application never becomes available, mail can be read through IMAP instead
- and that might just inspire some people to help improve the somewhat
painful email application currently shipped with Android. There is a lot
of interest in free mapping utilities, including tools like AndNav which have the potential to
surpass Google's maps program. AndNav works from OpenStreetMap data and
has the ability to do turn-by-turn navigation - something that the Google
tool is unlikely to ever be able to do. SlideME has been offered as a free replacement for the market
application. And so on.
The harder part might be the tools requiring synchronization with Google's
services; those protocols are not always open. It has been made clear that
the Android Open Source Project - hosted at Google - is not going to host
software developed for reverse-engineered protocols. So, if Google
continues to refuse to make the gmail, calendar, and market backends
available, those applications simply will not be supported in free builds.
There is, of course, nothing preventing the implementation of applications
which synchronize to services hosted elsewhere.
The other place where Google will make its presence felt in this project is
in the area of licensing:
(L)GPLv3 is out of the question in all circumstances - it scares
the phone industry so much that we'd be hurting the entire Android
ecosystem if such code made it anywhere into the Android tree.
GPLv2 might be allowed for new components, maybe, but given the extent to
which Android has gone out of its way to avoid GPLv2 software as well, it
could still be a hard sell.
Those looking for a more independent effort may be interested in the Open Android Alliance, which is
working to make a fully-free version of Android outside of Google. The
page (on Google Code, ironically) states that new work will be licensed
under GPLv3. It looks like the developers behind the Alliance are not
strongly tied to that license, but there are certainly developers out there
who would like to see some sort of copyleft license used. If Google is
going to hold back and make them reimplement applications, they reason,
Google should not be able to take the resulting code and distribute it as
another proprietary application.
The Open Android Alliance has a number of developers
who are said to be working on various aspects of the problem. It does not,
however, appear to have a mailing list or any code available for download.
This is a newborn project; its long-term viability is yet to be determined.
What is clear is that people take the "open handset" idea seriously. It is
not enough to dump a bunch of code into an online git server; many of us
actually want to mess with our devices. Google, perhaps, is starting to
understand that, even if it is still having a hard time balancing pressures
from the development community, wireless carriers, hardware manufacturers,
and its own lawyers. It is not yet clear whether that understanding
will translate into sufficient openness for the Android project, but it
appears that things might be headed in the right direction.
It seems that Linux World Domination in the handset market is within our
grasp. But which Linux distributions will participate in that success? There
are a number of Android handsets out there, but there are still more based on
other Linux distributions and the LiMo platform. Soon (not soon enough,
for many of us) there will be Maemo-based handsets to play with, and it
would not be entirely surprising to see Moblin-based handsets in the
not-too-distant future. Some of these platforms will do better than others
in the market. It may well be that the platform which is the most open,
and which draws the most developer interest, will win out.
Page editor: Jonathan Corbet