The X window system is the kernel of the desktop Linux experience; if X
does not work well, nothing built on top of it will work well either. Despite
its crucial role, X suffered from relative neglect for a number of years
before being revitalized by the X.org project. Two talks at linux.conf.au
covered the current state of the X window system and where we can expect
things to go in the near future.
Keith Packard is a fixture at Linux-related events, so it was no surprise
to see him turn up at LCA. His talk covered X at a relatively high,
feature-oriented level. There is a lot going on with X, to say the least.
Keith started, though, with the announcement that Intel had released
complete documentation for some of its video chips - a welcome move, beyond
any doubt.
There are a lot of things that X.org is shooting for in the near future.
The desktop should be fully composited, allowing software layers to provide
all sorts of interesting effects. There should be no tearing (the
briefly inconsistent windows which result from partial updates). We need
integrated 2D and 3D graphics - a goal which is complicated by the fact
that the 2D and 3D APIs do not talk to each other. A flicker-free boot
(where the X server starts early and never restarts) is on most
distributors' wishlist. Other desired features include fast and secure
user switching, "hotplug everywhere," reduced power consumption, and a
reduction in the (massive) amount of code which runs with root privileges.
So where do things stand now? 2D graphics and textured video work well.
Overlaid video (where video data is sent directly to the frame
buffer - a performance technique used by some video playback applications)
does not work with compositing, though. 3D graphics does not always work
that well either; Keith put up the classic example of glxgears running
while the window manager is doing the "desktops on a cube" routine - the 3D
application runs outside of the normal composite mechanism and so cannot be
rotated with all the other windows.
On the tearing front, only 3D graphics supports tear-free operation now.
Avoiding tearing is really just a matter of waiting for the vertical
retrace before making changes, but the 2D API lacks support for that.
The integration of the 2D and 3D APIs is an area which still requires some
work. One problem is that Xv (video) output cannot be drawn
offscreen - again, a problem for compositing. Some applications still use
overlays, which really have no place on the contemporary desktop. It is
impossible to do 3D graphics to or from pixmaps, which defeats any attempt
to pass graphical data between the 2D and 3D APIs. Going the other way,
2D operations cannot draw into 3D buffers.
Fast user switching can involve switching between virtual terminals, which
is "painful." Only one user session can be running 3D graphics at a time,
which is a big limitation. On the hotplug front, there are some
limitations on how the framebuffer is handled. In particular, the X server
cannot resize the framebuffer, and it can only associate one framebuffer
with the graphics processor. Some GPUs have maximum line widths, so the
one-framebuffer issue limits the maximum size of the internal desktop.
With regard to power usage: Keith noted that using framebuffer compression
in the Intel driver saves 1/2 watt of power. But there are a number of
things to be fixed yet. 2D graphics busy-waits on the GPU, meaning that a
graphics-intensive program can peg the system's CPU, even though the GPU is
doing all of the real work. But the GPU could be doing more as well; for
example, video playback does most of the decoding, rescaling, and color
conversion in the CPU. But contemporary graphics processors can do all of
that work - they can, for example, take the bit stream directly from a DVD
and display it. The GPU requires less power than the CPU, so shifting that
work over would be good for power consumption as well as system
performance.
Having summarized the state of the art, Keith turned his attention to the
future. There is quite a bit of work being done in a number of areas - and
not being done in others - which leads toward a better X for everybody. On
the 3D compositing front, what's needed is to eliminate the "shared back
buffers" used for 3D rendering so that the rendered output can be handled
like any other graphical data.
Eliminating tearing requires providing the ability to synchronize with the
vertical retrace operation in the graphics card. The core mechanism to do
this is already there in the form of the X Sync extension. But, says
Keith, nobody is working on bringing all of this together at the moment.
Getting rid of boot-time flickering, instead, is a matter of getting the X
server properly set up sufficiently early in the boot process; that is
mostly a job for the distributors.
To further integrate APIs, one thing which must be done is to get rid of
overlays and to allow all graphical operations (including Xv operations) to
draw into pixmaps. There is a need for some 3D extensions to create a
channel between GLX and pixmaps.
Supporting fast user switching means adding the ability to work with
multiple DRM masters. Framebuffer resizing, instead, means moving
completely over to the EXA acceleration architecture and finishing the
transition to the TTM memory
manager. In the process, it may become necessary to break all existing
DRI applications, unfortunately. And multiple framebuffer support is the
objective of a project called "shatter," which will allow screens to be
split across framebuffers.
Improving the power consumption means getting rid of the busy-waiting with
2D graphics (Keith says the answer is simple: "block"). The XvMC protocol
should be extended beyond MPEG; in particular, it needs work to be able to
properly support HDTV. All of this stuff is currently happening.
Finally, on the security issue, Keith noted the ongoing work to move
graphical mode setting into the kernel. That will eliminate the need for
the server to directly access the hardware - at least, when DRM-based 2D
graphics are being done. In that case, it will become possible to run the
X server as "nobody," eliminating all privilege. There are few people who
would argue against the idea of taking root privileges away from a massive
program like the X server.
In a separate talk, Dave Airlie covered the state of Linux graphics at a
lower level - support for graphics adapters. He, too, talked about moving
graphical mode setting into the kernel, bringing an end to a longstanding
"legacy issue" and turning the X server into just a rendering system. That
will reduce security problems and help with other nagging issues (graphical
boot, suspend and resume) as well.
Mode setting is the biggest area of work at the moment. Beyond that, the
graphics developers are working on getting TTM into the kernel; this will
give them a much better handle on what is happening with graphics memory.
Then, graphics drivers are slowly being reworked around the Gallium3D
architecture. This will improve and simplify these drivers significantly,
but "it's going to be a while" before this work is ready. The upcoming
DRI2 work will improve buffering and fix the "glxgears on a cube" problem.
Moving on to graphics adapters: AMD/ATI has, of course, begun the process
of releasing documentation for its hardware. This happened in an
interesting way, though: AMD went to SUSE in order to get a driver
developed ahead of the documentation release; the result was the "radeonhd"
driver. Meanwhile, the Avivo project, which had been reverse-engineering
ATI cards, had made significant progress toward a working driver. Dave
took that work and the AMD documentation to create the
improved "radeon" driver. So now there are two competing projects writing
drivers for ATI adapters. Dave noted that code is moving in both
directions, though, so it is not a complete duplication of work. (As an
aside, from what your editor has heard, most observers expect the radeon
driver to win out in the end).
The ATI R500 architecture is a logical addition to the earlier (supported)
chipsets, so R500 support will come relatively quickly. R600, instead, is
a totally new processor, so R600 owners will be "in for a wait" before a
working driver is available.
Intel has, says Dave, implemented the "perfect solution": it develops free
drivers for its own hardware. These drivers are generally well done and
well documented. Intel is "doing it right."
NVIDIA, of course, is not doing it right. The Nouveau driver is coming
along, now, with 5-6 developers working on it. Dave had an RandR
implementation in a state of half-completion for some time; he finally
decided that he would not be able to push it forward and merged it into the
mainline repository. Since then, others have run with it and RandR support
is moving forward quickly. It was, he says, a classic example of why it is
good to get the code out there early, whether or not it is "ready."
Performance is starting to get good, to the point that NVIDIA suddenly
added some new acceleration improvements to its binary-only driver.
Dave is still hoping that NVIDIA might yet release some documents - if it
happens by next year, he says, he'll stand in front of the room and dance.