
LCA: Two talks on the state of X

By Jonathan Corbet
February 8, 2008
The X window system is the kernel of the desktop Linux experience; if X does not work well, nothing built on top of it will work well either. Despite its crucial role, X suffered from relative neglect for a number of years before being revitalized by the X.org project. Two talks at linux.conf.au covered the current state of the X window system and where we can expect things to go in the near future.

Keith Packard is a fixture at Linux-related events, so it was no surprise to see him turn up at LCA. His talk covered X at a relatively high, feature-oriented level. There is a lot going on with X, to say the least. Keith started, though, with the announcement that Intel had released complete documentation for some of its video chips - a welcome move, beyond any doubt.

There are a lot of things that X.org is shooting for in the near future. The desktop should be fully composited, allowing software layers to provide all sorts of interesting effects. There should be no tearing (the briefly inconsistent windows which result from partial updates). We need integrated 2D and 3D graphics - a goal which is complicated by the fact that the 2D and 3D APIs do not talk to each other. A flicker-free boot (where the X server starts early and never restarts) is on most distributors' wishlists. Other desired features include fast and secure user switching, "hotplug everywhere," reduced power consumption, and a reduction in the (massive) amount of code which runs with root privileges.

So where do things stand now? 2D graphics and textured video work well. Overlaid video (where video data is sent directly to the frame buffer - a performance technique used by some video playback applications) does not work with compositing, though. 3D graphics does not always work that well either; Keith put up the classic example of glxgears running while the window manager is doing the "desktops on a cube" routine - the 3D application runs outside of the normal composite mechanism and so cannot be rotated with all the other windows.

On the tearing front, only 3D graphics supports tear-free operation now. Avoiding tearing is really just a matter of waiting for the vertical retrace before making changes, but the 2D API lacks support for that.
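
The underlying primitive is simple. Here is a minimal sketch - not code from the X tree - of waiting for the next vertical retrace through the kernel's DRM interface using libdrm's drmWaitVBlank(); the device path and error handling are assumptions for illustration, but it is exactly this kind of hook that the 2D paths lack.

    /* Hedged sketch: block until the next vertical retrace via libdrm.
     * Build against libdrm (pkg-config --cflags --libs libdrm). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);    /* assumed device node */
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        drmVBlank vbl = {
            .request = {
                .type     = DRM_VBLANK_RELATIVE,    /* relative to the current count */
                .sequence = 1,                      /* wait for one more retrace */
            },
        };

        if (drmWaitVBlank(fd, &vbl) == 0)
            printf("retrace %u reached; safe to push the new frame\n",
                   vbl.reply.sequence);
        return 0;
    }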

The integration of APIs is an area requiring some work still. One problem is that Xv (video) output cannot be drawn offscreen - again, a problem for compositing. Some applications still use overlays, which really just have no place on the contemporary desktop. It is impossible to do 3D graphics to or from pixmaps, which defeats any attempt to pass graphical data between the 2D and 3D APIs. On the other side, 2D operations do not support textures.

Fast user switching can involve switching between virtual terminals, which is "painful." Only one user session can be running 3D graphics at a time, which is a big limitation. On the hotplug front, there are some limitations on how the framebuffer is handled. In particular, the X server cannot resize the framebuffer, and it can only associate one framebuffer with the graphics processor. Some GPUs have maximum line widths, so the one-framebuffer limitation also caps the size of the virtual desktop.

With regard to power usage: Keith noted that using framebuffer compression in the Intel driver saves half a watt of power. But there are a number of things yet to be fixed. 2D graphics busy-waits on the GPU, meaning that a graphics-intensive program can peg the system's CPU even though the GPU is doing all of the real work. The GPU could also be doing more; for example, video playback does most of the decoding, rescaling, and color conversion on the CPU. Contemporary graphics processors can do all of that work - they can, for example, take the bit stream directly from a DVD and display it. The GPU requires less power than the CPU, so shifting that work over would be good for power consumption as well as system responsiveness.

Having summarized the state of the art, Keith turned his attention to the future. There is quite a bit of work being done in a number of areas - and not being done in others - which leads toward a better X for everybody. On the 3D compositing front, what's needed is to eliminate the "shared back buffers" used for 3D rendering so that the rendered output can be handled like any other graphical data. Eliminating tearing requires providing the ability to synchronize with the vertical retrace operation in the graphics card. The core mechanism to do this is already there in the form of the X Sync extension; but, says Keith, nobody is working on bringing all of this together at the moment. Getting rid of boot-time flickering, instead, is a matter of getting the X server properly set up sufficiently early in the boot process. That's mostly a distributor's job.

To further integrate APIs, one thing which must be done is to get rid of overlays and to allow all graphical operations (including Xv operations) to draw into pixmaps. There is a need for some 3D extensions to create a channel between GLX and pixmaps.
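
One form that channel is taking is the GLX_EXT_texture_from_pixmap extension relied upon by compositing managers. The fragment below is a hedged sketch of the idea rather than code from any driver: it assumes a GLXFBConfig already chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT and a current GL context, and binds a pixmap's contents directly as a GL texture.

    #include <GL/gl.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    /* Hedged sketch: 'config' must support binding to an RGBA texture and a
     * GL context must already be current. */
    GLuint bind_pixmap_as_texture(Display *dpy, GLXFBConfig config, Pixmap pixmap)
    {
        const int pixmap_attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
            None
        };
        GLXPixmap glx_pixmap = glXCreatePixmap(dpy, config, pixmap, pixmap_attribs);

        /* The extension entry point has to be fetched at run time. */
        PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)
                glXGetProcAddress((const GLubyte *) "glXBindTexImageEXT");

        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        /* The pixmap's contents now back the texture; no copy passes
         * through the client. */
        glXBindTexImageEXT(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);
        return texture;
    }

A compositing manager would do something like this for each redirected window, then draw the resulting textures however it pleases - on the faces of a spinning cube, for example.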

Supporting fast user switching means adding the ability to work with multiple DRM masters. Framebuffer resizing, instead, means moving completely over to the EXA acceleration architecture and finishing the transition to the TTM memory manager. In the process, it may become necessary to break all existing DRI applications, unfortunately. And multiple framebuffer support is the objective of a project called "shatter," which will allow screens to be split across framebuffers.

Improving power consumption means getting rid of the busy-waiting in 2D graphics (Keith says the answer is simple: "block"). The XvMC protocol should be extended beyond MPEG; in particular, it needs work to properly support HDTV. All of this stuff is currently happening.

Finally, on the security issue, Keith noted the ongoing work to move graphical mode setting into the kernel. That will eliminate the need for the server to directly access the hardware - at least, when DRM-based 2D graphics are being done. In that case, it will become possible to run the X server as "nobody," eliminating all privilege. There are few people who would argue against the idea of taking root privileges away from a massive program like the X server.
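
Once mode setting and device access live in the kernel, the remaining step is the familiar privilege-drop dance. The sketch below is purely illustrative - the dedicated "xserver" account name is an assumption, not anything current servers do - but it shows the shape of the change: perform the privileged setup, then shed the root identity for good.

    /* Hedged sketch: drop from root to an unprivileged account once the
     * privileged setup (opening the DRM device, etc.) is complete. */
    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges(const char *user)
    {
        struct passwd *pw = getpwnam(user);
        if (!pw) {
            fprintf(stderr, "no such user: %s\n", user);
            exit(1);
        }
        /* Order matters: supplementary groups first, then gid, then uid. */
        if (setgroups(0, NULL) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            exit(1);
        }
    }

    int main(void)
    {
        /* ... privileged setup would happen here ... */
        drop_privileges("xserver");    /* assumed account name */
        printf("now running as uid %d\n", (int) getuid());
        return 0;
    }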

In a separate talk, Dave Airlie covered the state of Linux graphics at a lower level - support for graphics adapters. He, too, talked about moving graphical mode setting into the kernel, bringing an end to a longstanding "legacy issue" and turning the X server into just a rendering system. That will reduce security problems and help with other nagging issues (graphical boot, suspend and resume) as well.
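
For the curious: from user space, kernel mode setting is driven through a handful of ioctl() wrappers in libdrm. The interface was still taking shape at the time of these talks, so the drmMode* calls below should be read as a hedged sketch of the approach rather than a finished API: enumerate the connectors, pick a mode, and point a CRTC at a framebuffer.

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Hedged sketch: fb_id is assumed to be a framebuffer created earlier
     * (e.g. with drmModeAddFB()); error handling is minimal. */
    int set_first_mode(int fd, uint32_t fb_id)
    {
        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return -1;

        int ret = -1;
        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0) {
                /* Point the first CRTC at our framebuffer, using the
                 * connector's first (usually preferred) mode. */
                ret = drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                                     &conn->connector_id, 1, &conn->modes[0]);
                drmModeFreeConnector(conn);
                break;
            }
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return ret;
    }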

Mode setting is the biggest area of work at the moment. Beyond that, the graphics developers are working on getting TTM into the kernel; this will give them a much better handle on what is happening with graphics memory. Then, graphics drivers are slowly being reworked around the Gallium3D architecture. This will improve and simplify these drivers significantly, but "it's going to be a while" before this work is ready. The upcoming DRI2 work will improve buffering and fix the "glxgears on a cube" problem.

Moving on to graphics adapters: AMD/ATI has, of course, begun the process of releasing documentation for its hardware. This happened in an interesting way, though: AMD went to SUSE in order to get a driver developed ahead of the documentation release; the result was the "radeonhd" driver. Meanwhile, the Avivo project, which had been reverse-engineering ATI cards, had made significant progress toward a working driver. Dave took that work and the AMD documentation to create the improved "radeon" driver. So now there are two competing projects writing drivers for ATI adapters. Dave noted that code is moving in both directions, though, so it is not a complete duplication of work. (As an aside, from what your editor has heard, most observers expect the radeon driver to win out in the end).

The ATI R500 architecture is a logical extension of the earlier (supported) chipsets, so R500 support will come relatively quickly. R600, instead, is a totally new processor, so R600 owners will be "in for a wait" before a working driver is available.

Intel has, says Dave, implemented the "perfect solution": it develops free drivers for its own hardware. These drivers are generally well done and well documented. Intel is "doing it right."

NVIDIA, of course, is not doing it right. The Nouveau driver is coming along, now, with 5-6 developers working on it. Dave had an RandR implementation in a state of half-completion for some time; he finally decided that he would not be able to push it forward and merged it into the mainline repository. Since then, others have run with it and RandR support is moving forward quickly. It was, he says, a classic example of why it is good to get the code out there early, whether or not it is "ready." Performance is starting to get good, to the point that NVIDIA suddenly added some new acceleration improvements to its binary-only driver. Dave is still hoping that NVIDIA might yet release some documents - if it happens by next year, he says, he'll stand in front of the room and dance a jig.



ATI + xrandr

Posted Feb 8, 2008 20:33 UTC (Fri) by hathawsh (guest, #11289) [Link]

It may be old news to some, but I just found out that xrandr and S-video are working
beautifully in the latest radeon driver.  Using Gentoo's package
(x11-drivers/xf86-video-ati-6.7.197), my Thinkpad T43, with a built-in Mobility Radeon X300,
can now do more video layout tricks in Linux than Windows lets me do.  I learned about the
TV-out capability here:

  http://www.x.org/wiki/radeonTV

And here's a lot of info about the latest xrandr:

  http://www.thinkwiki.org/wiki/Xorg_RandR_1.2

It's obvious that great things are happening to X.  Thank you, guys.

LCA: Two talks on the state of X

Posted Feb 8, 2008 20:46 UTC (Fri) by flewellyn (subscriber, #5047) [Link]

The only problem I see with moving the low-level graphics mode setting stuff into the kernel is that it makes X less portable. If X depends on kernel drivers, then the kernel of any OS that wants to run X will have to have those drivers, or something similar that talks the same interface.

I don't see this being a huge issue with the BSDs, or with OpenSolaris: just write the kernel code, BSD license it, and port it over to each kernel. But what about other platforms?

LCA: Two talks on the state of X

Posted Feb 8, 2008 20:57 UTC (Fri) by drag (subscriber, #31333) [Link]

Gallium3D should help with that, I believe. It's a very nice idea, at least. Should support
all OSes, including Windows if you really want it to.

http://www.tungstengraphics.com/gallium3D.htm

It should reduce complexity for drivers and thus the OS-specific stuff will be easier to make.

LCA: Two talks on the state of X

Posted Feb 9, 2008 6:26 UTC (Sat) by flewellyn (subscriber, #5047) [Link]

Well, I guess it's true: you can always add another layer of indirection...

LCA: Two talks on the state of X

Posted Feb 9, 2008 17:57 UTC (Sat) by drag (subscriber, #31333) [Link]

Especially when it makes sense... like making it easier to write simpler drivers that are as
effective as the old ones (and making that code more portable). Or making it possible to
support APIs other than just OpenGL. Or making it easier for programmers to utilize the GPU
for things other than just making games go fast.

The thing is that the video card is no longer just useful for accelerating 3D rendering or
making movie playback smooth.

The GPU is more and more a co-processor for your computer than anything else. A GPU has a
massive amount of floating point performance and it usually has a fat wad of memory with an
extraordinary amount of bandwidth between it and the GPU. For certain tasks a 300 dollar video
card can blow away a cluster of x86 machines.

LCA: Two talks on the state of X

Posted Feb 10, 2008 2:47 UTC (Sun) by flewellyn (subscriber, #5047) [Link]

This is true. I've read about projects to use the GPU's vector processing for general purpose computing. It's interesting, but it has me wondering when the vendors are going to realize the inefficiency of having two powerful but dissimilar processors on the machine, and fold GPU functions back into the CPU. IBM's Cell architecture may well be an example of this already.

But as far as what you pointed to, I'm a little bit worried about the attempts to cover over differences between OpenGL and D3D. The only reason I'm worried is because, of course, D3D is a moving target, and one for which Microsoft has every reason NOT to cooperate with portability efforts.

LCA: Two talks on the state of X

Posted Feb 10, 2008 17:27 UTC (Sun) by stevenj (guest, #421) [Link]

> It's interesting, but it has me wondering when the vendors are going to realize the inefficiency of having two powerful but dissimilar processors on the machine, and fold GPU functions back into the CPU. IBM's Cell architecture may well be an example of this already.

This has been going on for decades... see Sutherland's wheel of reincarnation.

LCA: Two talks on the state of X

Posted Feb 10, 2008 22:40 UTC (Sun) by flewellyn (subscriber, #5047) [Link]

Oh, indeed.  That's why I brought it up.

LCA: Two talks on the state of X

Posted Feb 11, 2008 14:08 UTC (Mon) by michaeljt (subscriber, #39183) [Link]

Is that such a problem, given that mode setting is definitely not performance critical?

LCA: Two talks on the state of X

Posted Feb 12, 2008 19:22 UTC (Tue) by rfunk (subscriber, #4054) [Link]

It has nothing to do with being performance-critical, and everything to 
do with low-level hardware interaction.  The Unix philosophy is to have 
the kernel do all direct hardware interaction.

LCA: Two talks on the state of X

Posted Feb 13, 2008 14:19 UTC (Wed) by daenzer (✭ supporter ✭, #7050) [Link]

> It should reduce complexity for drivers and thus the OS-specific stuff will
> be easier to make.

True, but Gallium3D doesn't really have anything to do with modesetting.

LCA: Two talks on the state of X

Posted Feb 8, 2008 21:46 UTC (Fri) by iabervon (subscriber, #722) [Link]

There's no need to lose the mode setting code entirely. Just leave it in an OS-interaction
section that's used for OSes that don't have mode setting and do allow direct access to the
graphics device. (On the other hand, are there any platforms that aren't open-source and give
the open-source X server direct access to the video card?)

LCA: Two talks on the state of X

Posted Feb 9, 2008 9:01 UTC (Sat) by LWN_net (guest, #47938) [Link]

As far as I know the BSDs are using XFree86, not X.org. Others that have a problem with that can
also just use XFree86.

Nope

Posted Feb 9, 2008 11:03 UTC (Sat) by khim (subscriber, #9252) [Link]

"As of May 19th 2007, X.org 7.2 has been merged into the FreeBSD Ports Tree. Read /usr/ports/UPDATING for upgrade instructions." "The XFree86 implementation is the default on NetBSD but X.Org from pkgsrc can be used alternatively." And so on: the BSDs have either switched to X.org as the default or support it as an alternative - and in most cases it's only a matter of legacy hardware support.

LCA: Two talks on the state of X

Posted Feb 9, 2008 12:30 UTC (Sat) by ernest (subscriber, #2355) [Link]

Hmmm. I doubted that, so I did a quick google search.

If I combine bsd and x.org I get much more recent pages, some of them with content indicating
x.org is being used.

When combining bsd and xfree86 I get many more hits, but most of them from the xfree86 web
site, and also usually older hits.

Specifically:
- when searching the openbsd web site faqs, x.org is mentioned in many places, not so xfree
- the apple web site says that X on OSX is x.org based
- from the freebsd wiki we can see that x.org has been the default for about a year (but xfree
is also being used)
- similarly, the netbsd build system explains about x.org, but mentions that xfree can also be
used

From all that I gather that xfree usage is being dropped by the bsd community in favor of x.org.

Ernest.

LCA: Two talks on the state of X

Posted Feb 9, 2008 16:35 UTC (Sat) by rsidd (subscriber, #2582) [Link]

FreeBSD and OpenBSD switched to Xorg very early (in fact OpenBSD switched before most linux
distros did, and Theo de Raadt was one of the earliest and most vocal condemners of the
XFree86 licence change that prompted the Xorg fork).  For technical reasons (relating to their
build system) NetBSD stuck with XFree86 in their base system, but have xorg in their package
system (pkgsrc) and will be removing XFree86 soon.

LCA: Two talks on the state of X

Posted Feb 9, 2008 12:02 UTC (Sat) by airlied (subscriber, #9104) [Link]

I'm just wondering what other platforms... X drivers are mainly Linux, the BSDs and OSOL.

I'm not sure of another kernel that people might care about hw drivers on.

LCA: Two talks on the state of X

Posted Feb 9, 2008 17:36 UTC (Sat) by jayavarman (guest, #19600) [Link]

I don't understand why this issue is always brought up. Technical improvements _can_not_
become dependent on least common denominator or your so called "portability". I say f#$%
portability! There are people motivated to improve _currently_broken_ things on Linux. If
other people have problems with that they'd better get _working_ and not whine that people are
not doing their work.

Sorry for the harsh words but it's the reality.

Rui

X portability is also a trap

Posted Feb 10, 2008 21:32 UTC (Sun) by arjan (subscriber, #36785) [Link]

In the name of cross-OS portability, for the last 2 decades, X has done its own hardware
management. [*]
The effect of this is that while X is nicely portable from one OS to another, the portability
from one generation of hardware to the next got severely compromised, and more than a few
times X needed expanding or changing for this reason. And even more times, the OS kernel, such
as Linux, grew nasty workarounds to keep older versions of X working on newer HW or newer
kernels.

Let's face it:
It's the kernel's job to manage system resources; doing that twice is a bad mistake. It's also
the kernel's job to abstract hardware to a large degree.

For all intents and purposes, this makes the current X part of the kernel (logically). In that
light, the current split is rather bad, and needs to be fixed (and is getting fixed
thankfully). 


[*] Yes I know this is getting fixed.

X portability is also a trap

Posted Feb 10, 2008 22:42 UTC (Sun) by flewellyn (subscriber, #5047) [Link]

Oh, I agree.  I'm just hoping that this is solvable by, say, exporting a uniform interface for
the driver-level stuff on all OSes.

LCA: Two talks on the state of X

Posted Feb 14, 2008 11:36 UTC (Thu) by etienne_lorrain@yahoo.fr (guest, #38022) [Link]

 Another solution is to keep the video mode set by the bootloader, using the manufacturer
BIOS, before switching to protected mode.
 The video manufacturer can do whatever he likes at init in his BIOS, the kernel just manages
framebuffers, the X server just enables optimisations over framebuffer once it has recognised
the hardware - if X does not recognise the hardware it still works - but slower.
 Another advantage for CRT users (vs LCD screen) is that parameters like screen size/position
do not change depending on the operating system.

LCA: Two talks on the state of X

Posted Feb 21, 2008 15:49 UTC (Thu) by lgb (guest, #784) [Link]

Nice, but users may want to alter the video mode without a reboot :) However, video card vendors
could ship cards with data tables in their video BIOS to explain the register settings for video
modes and the like. Please note that it's possible to use the 'int 10h' (real mode) interrupt
service to set up a video mode even from Linux with some kind of emulation. Also, the video BIOS
may contain code that can be used from protected mode directly as well?

LCA: Two talks on the state of X

Posted Feb 8, 2008 22:15 UTC (Fri) by tjc (subscriber, #137) [Link]

A question that I've been asking for years: are there any plans to allow window managers to
intercept client requests to raise themselves?

I find it really obnoxious when Firefox, for example, is raised to the top of the stack over
whatever I happen to be working on just because a page finally loaded.  It would be really
nice if the window manager could intercept this request and do something a bit less rude, like
maybe place the window second from top.

LCA: Two talks on the state of X

Posted Feb 8, 2008 22:34 UTC (Fri) by jospoortvliet (subscriber, #33164) [Link]

Isn't this just a firefox issue? I know KWin has focus stealing 
prevention, which works fine with most apps - just not with some, like 
firefox.

LCA: Two talks on the state of X

Posted Feb 8, 2008 23:01 UTC (Fri) by tjc (subscriber, #137) [Link]

In this particular case it is a Firefox issue, but I guess the point I'm raising is that if
the window manager can't intercept an attempt to raise a window (or grab the focus), then
there will probably always be badly behaved apps that do this.  Relying on apps to do the
right thing seems futile.

LCA: Two talks on the state of X

Posted Feb 9, 2008 16:40 UTC (Sat) by rsidd (subscriber, #2582) [Link]

If it is a firefox issue, here's what you do (and what I do): Edit -> Preferences -> Content
-> Enable Javascript "advanced" button -> uncheck "Allow scripts to raise or lower windows".

LCA: Two talks on the state of X

Posted Feb 9, 2008 16:43 UTC (Sat) by rsidd (subscriber, #2582) [Link]

Got to add though: it would be very nice to see the window manager implement "raise windows only to top of stack of windows for that application". So the Firefox window will be the topmost firefox window and you will see it on top if you're already focussed on firefox, but not if you're focussed on an xterm or something else.

LCA: Two talks on the state of X

Posted Feb 9, 2008 20:30 UTC (Sat) by tjc (subscriber, #137) [Link]

> it would be very nice to see the window manager implement "raise windows only to top of
stack of windows for that application".

I think sawfish does something like this?

The Firefox problem seems to have been fixed, if only recently.  I booted to a new install of
Ubuntu 7.10, and Firefox 2.0.0.12 doesn't raise itself on page loads anymore.  It still maps
new windows over the top of the currently focused window, but it's a start anyway.

LCA: Two talks on the state of X

Posted Feb 9, 2008 1:19 UTC (Sat) by njs (guest, #40338) [Link]

> are there any plans to allow window managers to intercept client requests to raise
themselves?

Window managers have always had that capability -- in fact, the *only* thing a client can do
is ask the window manager to raise it; the window manager is the only app capable of actually
issuing the raise operation on the server.  You may want to take this up with the authors of
your window manager, or of firefox.

Focus-stealing is another issue -- any client can assign keyboard focus to any window at any
moment, and the most a window manager can do is notice that it has happened and set it back
again, possibly after some keystrokes have been lost.  It'd be nice if this could be disabled,
but it's not clear it can be, given the quirks of X's focus API.  (Modern apps use a very
small subset of X's traditional focus model, but then there are the legacy apps...)
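
Roughly, the redirection mechanism looks like this - a minimal sketch of the idea, not any real window manager's code. The manager selects SubstructureRedirectMask on the root window, after which a client's XRaiseWindow() arrives as a ConfigureRequest event that the manager can honour, alter, or quietly drop:

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        /* Only one client (the window manager) may select this mask;
         * a second one gets a BadAccess error. */
        XSelectInput(dpy, DefaultRootWindow(dpy),
                     SubstructureRedirectMask | SubstructureNotifyMask);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type != ConfigureRequest)
                continue;

            XConfigureRequestEvent *req = &ev.xconfigurerequest;
            if (req->value_mask & CWStackMode) {
                /* Policy decision: refuse requests that try to restack. */
                printf("ignoring restack request from window 0x%lx\n",
                       (unsigned long) req->window);
                continue;
            }
            /* Pass everything else through unchanged. */
            XWindowChanges wc = {
                .x = req->x, .y = req->y,
                .width = req->width, .height = req->height,
                .border_width = req->border_width,
                .sibling = req->above, .stack_mode = req->detail,
            };
            XConfigureWindow(dpy, req->window, req->value_mask, &wc);
        }
    }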

LCA: Two talks on the state of X

Posted Feb 9, 2008 20:39 UTC (Sat) by tjc (subscriber, #137) [Link]

Thanks for the information.  It seems clear that I have been mistaking window raising for
focus stealing.

When a client steals the focus, is the window manager notified, or does it have to keep
polling to notice this?

LCA: Two talks on the state of X

Posted Feb 9, 2008 22:20 UTC (Sat) by njs (guest, #40338) [Link]

> When a client steals the focus, is the window manager notified, or does it have to keep
polling to notice this?

It is notified.

LCA: Two talks on the state of X

Posted Feb 14, 2008 13:51 UTC (Thu) by kaol (subscriber, #2346) [Link]

Firefox really doesn't raise itself, but it sends an extended window manager hint (EWMH for
short) and it's up to the window manager to decide what to do with the event.  There's nothing
to be intercepted there, but you should instead check your window manager's settings.

I use fvwm myself, where doing "DestroyFunc EWMHActivateWindowFunc" in the config file was
enough to make Firefox behave.  I did "DestroyFunc UrgencyFunc" too, for good measure. I can't
remember now if that had any further effect.

LCA: Two talks on the state of X

Posted Feb 14, 2008 18:46 UTC (Thu) by spitzak (guest, #4593) [Link]

Actually the "raise this window" is intercepted by the window manager. There is a method to
force the input focus to a window, but I don't think any modern programs use that, and no
program can rely on it, as many window managers change it on any mouse movement.

What I want to see is the reverse: I want the window manager to tell the program that it is
attempting to raise/resize/hide/etc the window and let the program do whatever it wants.

In particular the program can decide *not* to raise the window, or raise any child windows
above it. This would eliminate the big mess of transient-for type hints, and would finally
allow floating windows/palettes above more than one main window. And it would allow a program
to not raise itself when you just try to move the insertion cursor or hit a button, but still
raise itself when you click in the empty area.

Also if programs could directly resize the windows and draw immediately then it would
eliminate the still-horrible resize behavior of X. The composite extension does *not* fix
this. In addition programs could implement arbitrarily complex restrictions on the resize and
position.



LCA: Two talks on the state of X

Posted Feb 15, 2008 2:32 UTC (Fri) by RobertBrockway (guest, #48927) [Link]

Hi tjc.  That's beautifully put.

I don't know why it is taking so long for a very simple idea to get understood: "Interrupting
a user by shifting the window focus without their consent is a bad idea because it breaks the
user's concentration".

what about....

Posted Feb 9, 2008 4:02 UTC (Sat) by mattdm (guest, #18) [Link]

Reducing network latency and unneeded overhead when using X over relatively slow network
links? (A.k.a "what in the world do people keep using VNC for?")

what about....

Posted Feb 9, 2008 13:18 UTC (Sat) by liljencrantz (guest, #28458) [Link]

Network transparency was dropped as a priority a long time ago. Just look at the new font
rendering system. Basically, all text is rendered in the X client and sent over the socket to
the server these days. This improves rendering performance for the brain-dead widget
implementations that insist on rendering text and then applying filters to it on the client
side before putting it on screen, at the expense of network transparency.

what about....

Posted Feb 9, 2008 17:35 UTC (Sat) by nix (subscriber, #2304) [Link]

Um, the entire purpose of the render extension is to make such things 
not-insanely-expensive. (It's certainly not noticeable bandwidth-wise to 
me, and almost all my X interaction is with remote clients.)

X's real network problem is latency and roundtrips (especially when 
mapping windows), not bandwidth.

what about....

Posted Feb 10, 2008 12:21 UTC (Sun) by rossburton (subscriber, #7254) [Link]

So you didn't see the ancient benchmarks which shows that Xft/Render is faster than core fonts
then?

what about....

Posted Feb 10, 2008 13:49 UTC (Sun) by nix (subscriber, #2304) [Link]

To be fair it is *dog* slow if your app only supports client-side fonts 
and your server is some old thing that doesn't support the render 
extension.

what about....

Posted Feb 11, 2008 8:31 UTC (Mon) by liljencrantz (guest, #28458) [Link]

Like I said in my original post, rendering on the client side is faster in implementations
that end up transferring the text back to the client anyway. Unfortunately, this includes GTK
and Qt.

what about....

Posted Feb 11, 2008 9:40 UTC (Mon) by jamesh (guest, #1159) [Link]

Pretty much any application ends up needing font metrics on the client side to handle things like wrapped text, or sizing a widget to fit a string of text.

The old font system had a number of problems related to this, namely:

  • To determine the metrics of a font at a particular size, it would rasterise it. This gets expensive for large fonts or large font sizes.
  • As i18n has improved, today's fonts have more glyphs than those in the past. This increases the cost of determining the metrics ...
  • While the font information provided by the old font system was okay for basic rendering of latin text, it completely fails for certain scripts that require complex shaping. The information needed to render these fonts is usually found in OpenType tables that are not exposed.

Now if you are running into speed problems with client side font rendering to remote displays, there are a few things you should check. The most important one is to check if the display supports the RENDER extension.

If it does not, then you should get a noticeable speed increase by disabling anti-aliasing. With anti-aliasing enabled but without RENDER, the X client downloads the background pixmap underneath where it will render the text, composites the text locally then sends the result back to the server. With anti-aliasing disabled, it can simply send the glyphs to the server to draw directly (which is comparable to what happens in the RENDER case, anti-aliased or not).
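
To make the split concrete, here is a hedged sketch - not taken from any particular toolkit - of what a well-behaved client does: measure the string locally with Xft to size its widget, then hand the glyphs to the server, which composites them through RENDER when the extension is present and falls back to core requests when it is not.

    /* Hedged sketch; build with `pkg-config --cflags --libs xft x11`. */
    #include <X11/Xft/Xft.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        int scr = DefaultScreen(dpy);
        const char *text = "Hello, RENDER";
        XftFont *font = XftFontOpenName(dpy, scr, "Sans-12");

        /* Client side: measure the string to lay out the UI. */
        XGlyphInfo extents;
        XftTextExtentsUtf8(dpy, font, (const FcChar8 *) text,
                           (int) strlen(text), &extents);
        printf("string advances %d pixels\n", extents.xOff);

        /* Server side: let the server composite the glyphs into a window. */
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 300, 60, 0, 0,
                                         WhitePixel(dpy, scr));
        XMapWindow(dpy, win);

        XftDraw *draw = XftDrawCreate(dpy, win, DefaultVisual(dpy, scr),
                                      DefaultColormap(dpy, scr));
        XftColor color;
        XftColorAllocName(dpy, DefaultVisual(dpy, scr),
                          DefaultColormap(dpy, scr), "black", &color);
        XftDrawStringUtf8(draw, &color, font, 20, 35,
                          (const FcChar8 *) text, (int) strlen(text));
        XFlush(dpy);
        /* ... event loop and cleanup omitted ... */
        return 0;
    }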

what about....

Posted Feb 11, 2008 20:45 UTC (Mon) by nix (subscriber, #2304) [Link]

Why can't Xft (or Xrender) make this check for you at client connection 
time? There's really no excuse for requiring users to know this sort of 
black magic just because they want to connect to an old X server.

what about....

Posted Feb 12, 2008 2:31 UTC (Tue) by jamesh (guest, #1159) [Link]

Well, running an application locally against an old RENDER-less X server gives reasonable
performance, so it isn't clear that always disabling anti-aliasing is the best default on such
servers -- it really depends on the bandwidth and latency.

I guess the real answer to your question is that it isn't a high priority of the developers.
They are putting their effort into improving performance when used against non-obsolete X
servers (effort that helps in the remote X case as well).  Given that there is only so much
they can do, I don't blame them.

what about....

Posted Feb 12, 2008 8:19 UTC (Tue) by nix (subscriber, #2304) [Link]

Oh yes, agreed: this is a whinge that's getting rapidly less important 
over time. Probably against the installed base of X servers it's entirely 
insignificant by now, not least because exceed and other incredibly 
obsolete X11R4/5 Windows-based X servers have been comprehensively 
outcompeted by Cygwin's X server :)

what about....

Posted Feb 11, 2008 23:11 UTC (Mon) by liljencrantz (guest, #28458) [Link]

My bad, I was unclear in what I was saying. I did not mean to say that the old font system was
superior, as it clearly lacked many features needed for fast, high-quality text rendering. What
I do believe is that a font rendering system that rasterizes fonts on the server side but gives
access to font metrics on the client side as well should be possible to create, thus giving
you full performance and full access to graphical goodies. The best way to do this with as
little latency as possible might be to allow the client to calculate font metrics but leave
all actual rendering to the server.

what about....

Posted Feb 12, 2008 2:42 UTC (Tue) by jamesh (guest, #1159) [Link]

Sun tried this with STSF:

    http://stsf.sourceforge.net/about.html

It was not exactly a success, but did offer some benefits over fontconfig/Xft if you were
running hundreds of X servers from a single machine.  When running just a single X server,
these benefits were not apparent and the increased complexity for developers is a clear minus.

what about....

Posted Feb 14, 2008 9:21 UTC (Thu) by liljencrantz (guest, #28458) [Link]

I completely missed the existence of stsf, thank you for the pointer.

I read up a bit on why it failed, and the main reason (judging from mailing list posts, at
least) seems to be that the various X hackers (e.g. Havoc Pennington and various GTK hackers)
felt that fontconfig was good enough. And in all fairness, that is probably completely true if
you don't care about network transparency. I've never heard any other serious criticism
against fontconfig except for the client side rendering issue.

LCA: Two talks on the state of X

Posted Feb 10, 2008 11:45 UTC (Sun) by modernjazz (guest, #4185) [Link]

Very interesting. Just a wishlist comment: should LWN's revenues get to 
the point of being able to support expansion, I'd personally vote for 
more regular coverage of what's going on in X...a "weekly X page" would 
be ideal, but even monthly would be great. (Obviously, you do some X 
features already, and they are appreciated.) X seems to be a part of the 
greater Linux ecosystem that is particularly ripe for progress, and it 
would be fun to have LWN's help in monitoring that progress.

LCA: Two talks on the state of X

Posted Feb 10, 2008 22:34 UTC (Sun) by justi9 (guest, #7522) [Link]

Seconded!  I too would very much like a section devoted to X and related topics.

The part of LWN I personally enjoy the most is the kernel section, but I think an X section
would compete for my interest.

Indeed, LWN's taking it up might attract more developers to improving X.

Covering X

Posted Feb 10, 2008 23:02 UTC (Sun) by corbet (editor, #1) [Link]

I've been trying for years to improve our coverage of X. It's hard, though - there's a lot that goes on out of sight. That's part of why I tend to write so many X-related articles at conferences; that's when I can really learn about what's going on.

Covering X

Posted Feb 14, 2008 12:48 UTC (Thu) by nix (subscriber, #2304) [Link]

This article was xcellent.

(sorry, but it had to be said)

Covering X

Posted Feb 14, 2008 22:18 UTC (Thu) by dbnichol (subscriber, #39622) [Link]

One of the places I've found that has a lot of interesting content (after filtering out the
bugzilla noise) is the dri-devel list.

http://news.gmane.org/gmane.comp.video.dri.devel

Here's an interesting thread Dave Airlie started yesterday that features many of the prominent
players:

http://thread.gmane.org/gmane.comp.video.dri.devel/29341

"nobody"

Posted Feb 11, 2008 22:31 UTC (Mon) by rfunk (subscriber, #4054) [Link]

Don't run X as "nobody" -- that's reserved to be the owner of 
NFS-available files where they're owned by root on the NFS server.

X should run as its own user, e.g. "xserver".

"nobody"

Posted Feb 11, 2008 23:39 UTC (Mon) by bronson (subscriber, #4806) [Link]

"nobody" has come to mean the generic unprivileged user.  Lots of distros run server software
as nobody these days.

If NFS really wants to make this distinction, why don't they call their user "nfs-not-root" or
"squashed-root" or something a little less generic?

"nobody"

Posted Feb 12, 2008 1:21 UTC (Tue) by nix (subscriber, #2304) [Link]

Because NFS was the first thing to need such a user, so it's become 
traditional?

(Isn't `daemon' the ID you run otherwise-low-privilege daemons as, 
anyway?)

"nobody"

Posted Feb 13, 2008 20:43 UTC (Wed) by cortana (subscriber, #24596) [Link]

It's the UID you run a daemon as when you want it to have privileges to interfere with other
daemons running as 'daemon'. :)

Seriously... adduser --system is not hard! Give everything its own user!

"nobody"

Posted Feb 14, 2008 12:49 UTC (Thu) by nix (subscriber, #2304) [Link]

But what if I had over 64K daemons running?

... oh, 32-bit uids. What if I had over four billion daemons running?

(seriously, with KDE it's only a matter of time! I suppose those all run as the KDE desktop
user though.)

;}

"nobody"

Posted Feb 14, 2008 18:58 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

That's the problem. If everyone uses the same generic "unprivileged" user, that user actually
becomes quite powerful. Better would be to separate out users for each application.

"nobody"

Posted Feb 22, 2008 8:34 UTC (Fri) by goaty (guest, #17783) [Link]

The practice of running all daemons as "nobody" was a historical mistake. This has been
corrected in Debian and (I hope) most other Linux distributions. Each daemon runs as a
separate user, which prevents both accidental and malicious interference with other daemons.

The extreme example is qmail, which has four user accounts and no license.

I think "run as nobody" has become a shorthand for "run as an unprivileged user instead of
root", and that is how the article was using it. I think the LWN guys are clueful enough that
they wouldn't suggest we literally run the X server as the "nobody" user.

LCA: Two talks on the state of X

Posted Feb 14, 2008 12:45 UTC (Thu) by nix (subscriber, #2304) [Link]

I hope video overlays don't get completely removed. On my 9250, I see 5% CPU usage when
rendering full-screen full-motion video through Xv (mostly because of rescaling): I see >80%
CPU usage at about five frames a second if I'm using textured video playback, even with the
DRI to help out.

Obviously, I never ever use textured video playback. My preferred window manager doesn't
support compositing, and even if it did, I'm obviously staid and conventional in actually
*watching* video when it's playing rather than spinning it around and applying bling to it.

Yes, the 9250 is a bit old, but it's not *that* old, and the main CPU (an Athlon 4) is not
*that* old either. It would be bad to require everyone to upgrade to video cards requiring
non-free drivers (or a new motherboard: Intel cards or cards on PCIe) just because of an
architectural revision to enable video playback to interoperate with features that are of much
less importance than actually playing back the video at reasonable speed.

LCA: Two talks on the state of X

Posted Feb 14, 2008 15:09 UTC (Thu) by daenzer (✭ supporter ✭, #7050) [Link]

As the xf86-video-ati radeon driver doesn't provide a textured XVideo adaptor yet, I assume
you're referring to using an OpenGL output plugin in your video player. The reason this
performs so badly is that the Mesa r200 driver is very inefficient in getting texture data
from the application to the GPU, involving several memory copies and possibly other
processing.

A textured XVideo adaptor (as implemented in xf86-video-intel e.g.) shouldn't require
significantly more CPU cycles than an overlay.

LCA: Two talks on the state of X

Posted Feb 15, 2008 9:20 UTC (Fri) by ranmachan (subscriber, #21283) [Link]

A textured XVideo adapter could also have the advantage that you can play more than just one
video at the same time.  NVidia had that ages ago in their binary driver (because their cards
could do the color conversion on stretchblit AFAIK).

LCA: Two talks on the state of X

Posted Feb 15, 2008 21:13 UTC (Fri) by nix (subscriber, #2304) [Link]

How... useful.

Do you have a multistreamed consciousness, or something? How on earth can 
you pay *attention* to two videos at the same time? (And if you're only 
paying attention to one, isn't the other one annoying, flickering away in 
the background?)

LCA: Two talks on the state of X

Posted Feb 19, 2008 7:20 UTC (Tue) by daenzer (✭ supporter ✭, #7050) [Link]

Think several browser tabs with embedded video players. Only one of them may actually play at
any time, but only one of them can get the single overlay port.

LCA: Two talks on the state of X

Posted Feb 19, 2008 7:55 UTC (Tue) by nix (subscriber, #2304) [Link]

Argh, yes, I see, because they won't even display a paused image without 
an overlay.

LCA: Two talks on the state of X

Posted Feb 15, 2008 21:16 UTC (Fri) by nix (subscriber, #2304) [Link]

I completely failed to realise that Xv could turn the incoming video into 
textures and slam it at the video card. Obviously if it did that things 
would be about as fast as slamming it at the video card without 
texturizing it first, with the advantage that modern cards are designed to 
render big textures fast.

And, yes, if the thing's doing copies performance will suck. From your 
wording it sounds like it's not something fundamental to the hardware :) 
When I get another machine to use as a desktop box later this year (so can 
afford to restart X on this one without losing all my working state) I 
might have a look at that. Someone else will probably get to it first, 
though...

Dissenter's voice on overlays, compositing, etc.

Posted Feb 14, 2008 23:07 UTC (Thu) by kamil (subscriber, #3802) [Link]

I am concerned about this "war on overlays" and pushing us all towards compositing whether we
want to or not.

I recently tried Compiz-fusion, and while I'm ready to admit that the transparent desktop cube
looks very cool, I found it too slow to use on a daily basis, even though I have a relatively
modern machine (1 year old laptop with 2GHz Core2duo and Intel 945 GM).  The performance of
terminal windows or Firefox was acceptable, but e.g. Gimp was uncomfortably slow when doing
screen updates.  Not to mention that the X server died several times on one day, something it
has never done on that machine before.  After a day of playing with it, I was relieved to turn
compositing back off.

With respect to overlaid XVideo, as you might be aware, recent Intel drivers use texturing by
default.  One of the first things I do after downloading a new driver is reverting to
overlays.  Why?  Intel's textured XVideo does not support brightness/contrast adjustment (or
didn't last time I tried).  Videos often require a different brightness setting than the
desktop, so to me this lack is basically a show-stopper.  Plus, last time I tried, textured
XVideo badly suffered from tearing; I've never seen that problem with overlaid XVideo.

I get this uncomfortable feeling, and I know I'm not the only one, that Intel developers keep
pushing half-baked ideas at us.  Sure, I can see the shortcomings of overlays, but I refuse to
switch so long as the alternative pitched is nowhere near as usable.  I think Intel developers
would do many people a big favour if they slowed down with adding new features and focused on
improving the ones already there.

Dissenter's voice on overlays, compositing, etc.

Posted Feb 15, 2008 0:51 UTC (Fri) by jamesh (guest, #1159) [Link]

Have you reported a bug?

Dissenter's voice on overlays, compositing, etc.

Posted Feb 15, 2008 4:20 UTC (Fri) by kamil (subscriber, #3802) [Link]

Well, when I first learned of the lack of brightness/contrast adjustment issue, I found a
discussion in the archives of the developers mailing list that made it clear that they were
well aware of this shortcoming, but had no intention to fix it any time soon.

This was over a year ago.  Last I checked, the problem was still there, although I can't test
with the very latest Intel driver, because it is unstable in combination with my X server (I'm
assuming that once I update the X server, it will get stable).

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds