LWN: Comments on "Danjou: Thoughts and rambling on the X protocol"
https://lwn.net/Articles/390389/
This is a special feed containing comments posted to the individual LWN article titled "Danjou: Thoughts and rambling on the X protocol".

Ranting on the X protocol
https://lwn.net/Articles/413609/
foom, Fri, 05 Nov 2010 21:11:19 +0000:

Yeah, it'd sure be nice if DBus Just Worked with a remote display. It's a pretty significant step backwards in terms of being able to run apps on remote machines... it worked fine for KDE's DCOP for years before.

Ranting on the X protocol
https://lwn.net/Articles/413603/
mgedmin, Fri, 05 Nov 2010 20:51:06 +0000:

Can I please have ssh -X forward my DBUS session too?

Ranting on the X protocol
https://lwn.net/Articles/392315/
nix, Wed, 16 Jun 2010 20:27:09 +0000:

OK, that's mystifying. Cross-host buses will never be implemented, only, uh, they are. And they're even documented in the manpage! :) (grep for 'tcp' in dbus-daemon(1).)

I guess they found a secure way to do it, or decided the tradeoffs were worthwhile. Yay, cross-system KDE4 IPC calls, here we come!
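The tcp transport nix mentions is indeed enough for a client to reach a remote bus with stock libdbus. Below is a minimal sketch of that; the host and port are made up, and it assumes the remote dbus-daemon was started with a matching tcp: listen address and an authentication mechanism the client can satisfy:

    /* Minimal sketch: connect to a D-Bus daemon listening on TCP, as
     * described under 'tcp' in dbus-daemon(1). Host and port are
     * hypothetical; the remote daemon must be listening on a matching
     * <listen>tcp:...</listen> address and accept our authentication.
     * Build against libdbus-1. */
    #include <dbus/dbus.h>
    #include <stdio.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        /* Open a connection to the remote bus and register with it. */
        DBusConnection *conn =
            dbus_connection_open("tcp:host=192.168.1.10,port=55556", &err);
        if (conn == NULL || !dbus_bus_register(conn, &err)) {
            fprintf(stderr, "connect failed: %s\n",
                    dbus_error_is_set(&err) ? err.message : "unknown");
            dbus_error_free(&err);
            return 1;
        }
        printf("unique name on remote bus: %s\n",
               dbus_bus_get_unique_name(conn));
        dbus_connection_unref(conn);
        return 0;
    }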
Ranting on the X protocol
https://lwn.net/Articles/392020/
nix, Mon, 14 Jun 2010 20:12:13 +0000:

Yes, some locking is still necessary, but perhaps not so much that the whole thing would be a dead loss (as happens if, e.g., you try to handle each X client with a separate thread in the X server).

Ranting on the X protocol
https://lwn.net/Articles/391762/
renox, Fri, 11 Jun 2010 11:02:21 +0000:

I didn't know what CCNx was, so I investigated. Interesting, but I'm quite surprised by a reference to CCNx in a discussion about X's network transparency, as CCNx seems to solve a very different problem: duplicating existing 'fixed' data.

Can you elaborate?

Ranting on the X protocol
https://lwn.net/Articles/391605/
renox, Thu, 10 Jun 2010 12:37:59 +0000:

>> Perhaps it is worth splitting input handling out from a SIGIO handler into a separate thread

Could you explain? The input thread needs to read the display state to pass events to the correct applications, so some locking must be done here too: wouldn't this create the same issue as before?

Ranting on the X protocol
https://lwn.net/Articles/391545/
mfedyk, Thu, 10 Jun 2010 05:08:22 +0000:

VirtualBox uses RDP, which can export one application.

Can you give an example of using VNC to forward one application without the surrounding desktop (i.e., rootless)?

Ranting on the X protocol
https://lwn.net/Articles/391390/
Cyberax, Wed, 09 Jun 2010 11:07:59 +0000:

No.

DRI1 passes commands directly to the card, but it can't be composited (no Compiz for you with DRI1).

AIGLX (which is used _without_ DRI1) passes commands through the X server, but can be composited.

DRI2 passes commands directly to the card and can be composited.

Ranting on the X protocol
https://lwn.net/Articles/391349/
rsidd, Tue, 08 Jun 2010 22:15:49 +0000:

> Can anyone think of *any* application (other than e.g. talking to Windows systems on the far side, or handling disconnected operation, neither of which X was designed for) for which VNC is preferable to X? I can't think of any.

Here's one: a slow network connection with latency (e.g., connecting to the office from my home). VNC is usable; X11 forwarding is not. I am willing to believe that X11 beats VNC on gigabit networks. And, of course, if the connection goes down you lose nothing (except time). You seem to be saying X was designed for LANs (where disconnected operation is relatively uncommon) and not wider networks. If so, that's another argument for VNC.

Ranting on the X protocol
https://lwn.net/Articles/391341/
nix, Tue, 08 Jun 2010 20:23:32 +0000:

Yes, with things that don't use Render much, or which render mostly monocoloured backgrounds (like most text editors), VNC's simpleminded 'just transmit giant bitmaps' scheme is intolerably inefficient and laggy. It's laggy even over a gigabit network... tightvnc makes it a bit better, but not much.

Apps that render backgrounds which consist of repeating elements can use Render to transmit those elements only once.

With things that have heavily coloured/textured backgrounds which are different at every point, maybe, just maybe, both would be equally efficient... although surely you'd send that to the X server as a pixmap.

Can anyone think of *any* application (other than e.g. talking to Windows systems on the far side, or handling disconnected operation, neither of which X was designed for) for which VNC is preferable to X? I can't think of any.
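nix's point about Render and repeating elements can be made concrete. In the sketch below (error handling, the event loop, and actually drawing into the tile are all omitted, and a 24-bit default visual is assumed), the client uploads an 8x8 tile once; setting the picture's repeat attribute then lets a single small composite request paint the whole window from it, with no further pixels on the wire:

    /* Sketch: upload a small tile once, then let the server replicate
     * it. The tile crosses the wire a single time; the composite that
     * paints the whole window is one short Render request. Assumes a
     * 24-bit default visual; a real client would draw into the tile
     * and wait for Expose first. Build with -lX11 -lXrender. */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 400, 300, 0, 0, 0);
        XMapWindow(dpy, win);

        XRenderPictFormat *fmt =
            XRenderFindStandardFormat(dpy, PictStandardRGB24);

        /* 8x8 pixmap holding the repeating element (contents left
         * undefined here; a real client would render the element). */
        Pixmap tile = XCreatePixmap(dpy, win, 8, 8, 24);

        XRenderPictureAttributes pa;
        pa.repeat = True;  /* the server tiles the picture for us */
        Picture src = XRenderCreatePicture(dpy, tile, fmt, CPRepeat, &pa);
        Picture dst = XRenderCreatePicture(dpy, win, fmt, 0, NULL);

        /* One small request fills the whole 400x300 window. */
        XRenderComposite(dpy, PictOpSrc, src, None, dst,
                         0, 0, 0, 0, 0, 0, 400, 300);
        XFlush(dpy);
        /* ... event loop would go here ... */
        XCloseDisplay(dpy);
        return 0;
    }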
Threads on X11 being useless
https://lwn.net/Articles/391310/
nix, Tue, 08 Jun 2010 17:52:54 +0000:

Agreed. I'd pay attention when jg says it's not the right way to do things, though: he's been hacking X for longer than almost anyone else, and possibly longer than *anyone* else continuously (the only other contender would, I think, be Keith Packard).

Ranting on the X protocol
https://lwn.net/Articles/391307/
nix, Tue, 08 Jun 2010 17:09:35 +0000:

There is nothing intrinsic in the X protocol that requires the server to be single-threaded. In fact, the X Consortium spent a lot of effort in the 1990s making a multithreaded X server.

It was abandoned because the locking overhead made it *slower* than a single-threaded server.

Perhaps it is worth splitting input handling out from a SIGIO handler into a separate thread (last I checked, work in that direction was ongoing). But more than that seems a dead loss, which is unsurprising given the sheer volume of shared state in the X server, all of which must be lock-protected and a lot of which changes very frequently.
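For readers wondering what moving input out of a SIGIO handler looks like: rather than doing work in a signal handler, where almost nothing is async-signal-safe, a dedicated thread can block in poll(2) on the device and forward events to the main loop over a pipe. A minimal sketch under that assumption; the device path is illustrative:

    /* Sketch: a dedicated input thread blocking in poll(2) instead of
     * a SIGIO handler. Events are forwarded to the main thread through
     * a pipe, so the main loop can multiplex them with client
     * requests. The device path is illustrative and normally needs
     * elevated privileges. Build with -lpthread. */
    #include <fcntl.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int notify_fd;  /* write end of the pipe to the main loop */

    static void *input_thread(void *arg)
    {
        int dev_fd = *(int *)arg;
        struct pollfd pfd = { .fd = dev_fd, .events = POLLIN };
        unsigned char buf[64];

        for (;;) {
            if (poll(&pfd, 1, -1) < 0)
                continue;                   /* e.g. EINTR */
            ssize_t n = read(dev_fd, buf, sizeof buf);
            if (n > 0)
                write(notify_fd, buf, n);   /* wake the main loop */
        }
        return NULL;
    }

    int main(void)
    {
        int pipefd[2];
        pipe(pipefd);
        notify_fd = pipefd[1];

        int dev_fd = open("/dev/input/event0", O_RDONLY); /* illustrative */
        if (dev_fd < 0) { perror("open"); return 1; }

        pthread_t tid;
        pthread_create(&tid, NULL, input_thread, &dev_fd);

        /* Main loop: poll pipefd[0] alongside client sockets... */
        struct pollfd pfd = { .fd = pipefd[0], .events = POLLIN };
        while (poll(&pfd, 1, -1) > 0) {
            unsigned char buf[64];
            ssize_t n = read(pipefd[0], buf, sizeof buf);
            printf("got %zd bytes of input\n", n);
        }
        return 0;
    }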
Ranting on the X protocol
https://lwn.net/Articles/391306/
nix, Tue, 08 Jun 2010 17:07:31 +0000:

Hang on. You're (IMHO correctly) promoting KMS/DRI2 over AIGLX/DRI1 because DRI1 passed commands directly to hardware, which is bad... but you're proposing direct hardware access as a way to fix the (unproven) 'slowness' of X?

I am confused. You seem to be contradicting yourself without need for any help from the rest of us.

Ranting on the X protocol
https://lwn.net/Articles/391305/
nix, Tue, 08 Jun 2010 17:05:38 +0000:

Yeah, remote GLX sucks. So hopelessly that I just ran World of Goo remotely (a 2D app which uses OpenGL to do its rendering) over a 1Gb LAN, without noticing until I quit that it was remote.

That's hopeless, that is.

X network transparency
https://lwn.net/Articles/391288/
jschrod, Tue, 08 Jun 2010 14:38:32 +0000:

Did it ever occur to you that a remote desktop is something different from a remote application, and that for many people it is not a good enough substitute? Use cases that cannot be replicated have already been cited. (In particular: many remote applications that integrate with one's WM, as opposed to remote desktops that need a full desktop setup at the remote system.)

But you think that all the world has to have the same needs as you, don't you?

Ranting on the X protocol
https://lwn.net/Articles/391271/
jschrod, Tue, 08 Jun 2010 09:02:13 +0000:

VNC a replacement for remote X? You're joking, right?

Maybe for people who just want to do a short action on a remote desktop, and don't care about consistent keyboard handling or working X selections. But for people like me who have X applications open from 5-10 remote systems all the time, network transparency at the application level (and not at the desktop level!) is one of THE reasons to use X.

Ranting on the X protocol
https://lwn.net/Articles/391184/
nye, Mon, 07 Jun 2010 15:13:21 +0000:

> Windows 95 could be started on a computer with 3Mb of RAM (official minimal requirement was 4Mb) and ran just fine on 8Mb

No.

Everyone not working for Microsoft stated a minimum of 16MB to run Windows 95. I actually tried running it on a computer with 8MB of RAM: as long as you had no applications running, you might just about be able to use Explorer to browse the filesystem, very slowly, but trying to launch an application would cause a five-minute swap storm.

Danjou: Thoughts and rambling on the X protocol
https://lwn.net/Articles/391151/
Wol, Sun, 06 Jun 2010 20:45:14 +0000:

>> 3. Small is beautiful. Lots of small programs are better than one big one.

> Again, you are proving the point. X11 is a bloated mess with lots of redundant functionality that nobody uses or needs anymore.

That WAS true of XFree, which was project-managed by a bunch of Windows fans. It's not true of Xorg, except insofar as there is legacy code they haven't yet managed to rewrite out of the system.

It's a sad fact that X stagnated for years, precisely because the people in charge thought MS-Windows was a better solution. Fortunately the project threw the main developer out, because he didn't agree with them, so now we have the Xorg fork and X is motoring along nicely again.

Cheers,
Wol

Ranting on the X protocol
https://lwn.net/Articles/391150/
Wol, Sun, 06 Jun 2010 19:58:56 +0000:

As someone who knows nothing whatsoever about VNC (other than that it's similar to X), but who currently uses one PC as an X-term into another all the time, you have one user here to whom X's network transparency is rather important...

Cheers,
Wol

Ranting on the X protocol
https://lwn.net/Articles/391146/
ortalo, Sun, 06 Jun 2010 16:02:03 +0000:

Really sounds mind-raising. Hope to hear from you again on this topic in the not-too-distant future.

Danjou: Thoughts and rambling on the X protocol
https://lwn.net/Articles/391145/
ortalo, Sun, 06 Jun 2010 15:58:37 +0000:

> But worse still, the old UNIX books are all but useless on a 'modern' linux and there are no replacements available, i.e. it is all scrounge around on obscure wikis and blogs for docs that are either out of date or describe an even newer version than your distro shipped.

You are probably right, but maybe there is hope not so far off... I really enjoyed spending the last decade reading something like LWN.net; I do not miss the old Unix books that much, as they went out of date at an incredibly fast pace (and were so expensive, btw).

Ranting on the X protocol
https://lwn.net/Articles/391124/
renox, Sat, 05 Jun 2010 20:09:23 +0000:

>> As for the second reason, "the context switching between the client thread and the server thread happens in an 'uncontrolled' manner", that is also the way it is for Unix domain sockets, but is that really a problem?

I would argue that, for latency purposes, the lack of cooperation between the scheduler and current IPCs can really be a problem.

>> then what would prevent a pair of malicious threads from hogging the CPU by constantly sending messages back and forth?

IMHO, there should be an IPC mechanism provided by the OS which would allow a process to say: deliver this message to this other process and run it as part of my 'runtime quota'. This wouldn't allow CPU hogging, and it would provide lower latency. Note that for this to truly work, a shared server must do its work on the message provided by the client that donated the scheduling time, and not on something else.
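What renox describes is sometimes called time-slice donation or directed yield. No such primitive exists in Linux; the sketch below is a purely hypothetical interface (send_with_quota() is invented for illustration), with a fallback body that degrades to ordinary socket IPC, and is meant only to make the idea concrete:

    /* Hypothetical interface for the primitive renox describes;
     * nothing like send_with_quota() exists in Linux. The idea: the
     * receiver is scheduled immediately to handle exactly this
     * message, charged to the sender's remaining time slice, so a
     * ping-pong pair can hog no more CPU than one busy loop would.
     * The fallback body below just does plain blocking IPC, losing
     * the scheduling guarantee; a real version would need kernel
     * support. */
    #include <sys/socket.h>
    #include <sys/types.h>

    ssize_t send_with_quota(int fd, const void *msg, size_t len,
                            void *reply, size_t reply_len)
    {
        /* Real version: tell the scheduler "switch to the receiver
         * now and charge its work to my quota". Fallback: ordinary
         * send+recv over an already-connected socket. */
        if (send(fd, msg, len, 0) < 0)
            return -1;
        return recv(fd, reply, reply_len, 0);
    }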
Ranting on the X protocol
https://lwn.net/Articles/391121/
daniels, Sat, 05 Jun 2010 18:25:52 +0000:

> For example, it's not possible to have a tiled frontbuffer - because all of the code in X.org would have to be rewritten. And X.org is LARGE.

This will probably come as a huge surprise to the >95% of desktop X users (all Intel, most AMD, all NVIDIA beyond G80) who have a tiled frontbuffer.

Ranting on the X protocol
https://lwn.net/Articles/391120/
drago01, Sat, 05 Jun 2010 18:13:32 +0000:

http://lists.x.org/archives/xorg-devel/2010-June/009571.html

Ranting on the X protocol
https://lwn.net/Articles/391089/
jd, Sat, 05 Jun 2010 00:27:04 +0000:

You're absolutely right, the 386SX-16 (a 16MHz 386 processor with a 387 co-processor) was not a "fast machine". It was also limited in memory - for some reason, the maximum was 5 megs of RAM.

Yet I was using X11R4, with OpenLook (BrokenLook?). I could open up to 20 EMACS sessions simultaneously before experiencing noticeable degradation in performance. When gaming, I'd have a Netrek server, a Netrek client and 19 Netrek AIs running in parallel on the same machine.

Compiling was no big deal. GateD would sometimes cause the OOM killer to kick in and kill processes, but I don't recall any huge difficulty. It was the machine I wrote my undergraduate degree project on (fractal image compression software).

Mind you, I paid huge attention to setup. The X11 config file was hand-crafted. Everything was hand-compiled - from the kernel on upwards - to squeeze the best performance out of it. Swap space was 2.5x RAM.

Ranting on the X protocol
https://lwn.net/Articles/391066/
rqosa, Fri, 04 Jun 2010 20:12:44 +0000:

> About 45,600 results (0.32 seconds)

How do those numbers prove anything? (Incidentally, a search for "Gallium3D" gives only "About 26,300 results".)

Also, if OpenVG is useless, then why are Qt and Cairo both implementing it?

> And who's going to rewrite X.org?

It's being rewritten all the time. Just look at how much has changed since X11R6.7.0.

> And it matters whether the server is single-threaded (because of input latency, for example).

I don't remember ever seeing users complain about the input latency of the current Xorg.
Ranting on the X protocol
https://lwn.net/Articles/391060/
anselm, Fri, 04 Jun 2010 19:12:33 +0000:

> On a fast machine - sure.

Hm. IIRC, in the 1990s the previous poster's 386SX-16 wasn't a »fast machine« by any stretch of the imagination. It probably would have struggled with Windows 95, considering that Windows 95, according to Microsoft, required at least a 386DX processor (i.e., one with a 32-bit external bus).

> Windows 95 could be started on a computer with 3Mb of RAM (official minimal requirement was 4Mb) and ran just fine on 8Mb. Linux with X struggled to start xterm on that configuration.

No. In 1994 my computer was a 33 MHz 486DX notebook with 8 megabytes of RAM - not a blazingly fast machine even by the standards of the time, but what I could afford. I used that machine to write, among other things, a TeX DVI previewer program (like xdvi but better in some respects, worse in others) based on Tcl/Tk, with the interesting parts written in C, of course, but even so. There was no problem running that program in parallel with TeX and Emacs, and in spite of funnelling all the user interaction through Tcl it felt just as quick as xdvi, even with things like the xdvi-style magnifier window.

Five years before that, a SPARCstation 1 was considered a nice machine to have, and that ran X just fine, thank you very much. I doubt that the machine I was hacking on at the time had anywhere *near* 8 megs of RAM.

Ranting on the X protocol
https://lwn.net/Articles/391061/
jonabbey, Fri, 04 Jun 2010 19:01:58 +0000:

On the other hand, I remember using dedicated X terminals against remote systems (VMS, early-90s Unix) that didn't even have graphics hardware.

Horses for courses.

Ranting on the X protocol
https://lwn.net/Articles/391047/
Cyberax, Fri, 04 Jun 2010 17:43:40 +0000:

> Moving graphics to the kernel, eh? So we can assume you've tried the X11 drivers for kernel framebuffers and/or for KGI? It's easy to say it would be faster, but context switches are expensive, and therefore you need to make sure that you place the division between kernel and userspace application at the point where there would be the fewest such switches. I've not seen any analysis of the code of this kind, so I'm not inclined to believe anyone actually knows where a split should be incorporated.

Read about KMS and DRI2, please.

> Personally, I remember X being considerably faster than Windows in the 1990s. I was using a Viglen 386SX-16 with maths co-processor, and X ran just fine. Windows was a bugbear. I was even able to relay X over the public Internet half-way across Britain and have no problems.

On a fast machine - sure. However, Windows 95 could be started on a computer with 3Mb of RAM (official minimal requirement was 4Mb) and ran just fine on 8Mb. Linux with X struggled to start xterm on that configuration.

Ranting on the X protocol
https://lwn.net/Articles/391032/
jd, Fri, 04 Jun 2010 16:54:36 +0000:

Moving graphics to the kernel, eh? So we can assume you've tried the X11 drivers for kernel framebuffers and/or for KGI?

It's easy to say it would be faster, but context switches are expensive, and therefore you need to make sure that you place the division between kernel and userspace application at the point where there would be the fewest such switches. I've not seen any analysis of the code of this kind, so I'm not inclined to believe anyone actually knows where a split should be incorporated.

Personally, I remember X being considerably faster than Windows in the 1990s. I was using a Viglen 386SX-16 with maths co-processor, and X ran just fine. Windows was a bugbear. I was even able to relay X over the public Internet half-way across Britain and have no problems.
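jd's point about context-switch cost invites at least a baseline measurement. The classic estimate, in the style of lmbench's lat_ctx, bounces one byte between two processes through pipes; each round trip forces at least two context switches, so the printed figure is an upper bound that also includes pipe overhead. A minimal sketch, with error handling mostly omitted:

    /* Rough context-switch cost estimate: two processes bounce one
     * byte through a pair of pipes; each round trip forces (at least)
     * two context switches, plus pipe read/write overhead, so treat
     * the result as an upper bound. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int p2c[2], c2p[2];   /* parent->child, child->parent */
        char b = 0;

        if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                     /* child: echo forever */
            while (read(p2c[0], &b, 1) == 1)
                write(c2p[1], &b, 1);
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {
            write(p2c[1], &b, 1);
            read(c2p[0], &b, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns per round trip (~2 switches)\n", ns / ROUNDS);
        return 0;
    }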
Ranting on the X protocol
https://lwn.net/Articles/391012/
Cyberax, Fri, 04 Jun 2010 13:59:21 +0000:

>> OpenVG is stillborn.
> Says who?

Me, obviously.

http://www.google.com/search?q=OpenVG
About 45,600 results (0.32 seconds)

Oh, and Google too.

>> It's already obsoleted by GL4
> And will ARM-based cell phones and netbooks be able to run OpenGL 4 with good performance?

In a few years - yep. There's nothing in GL4 which makes it intrinsically slow.

> Is there anything inherent in the X protocol that requires an X server to be single-threaded? I doubt it.

And who's going to rewrite X.org? And it matters whether the server is single-threaded (because of input latency, for example).

And the old legacy code in X.org does have its effect. For example, it's not possible to have a tiled frontbuffer - because all of the code in X.org would have to be rewritten. And X.org is LARGE.

Ranting on the X protocol
https://lwn.net/Articles/391005/
rqosa, Fri, 04 Jun 2010 12:58:35 +0000:

> OpenVG is stillborn.

Says who?

> It's already obsoleted by GL4

And will ARM-based cell phones and netbooks be able to run OpenGL 4 with good performance?

> The problem with X is that it's crufty. It's single-threaded

Is there anything inherent in the X protocol that requires an X server to be single-threaded? I doubt it.

Also, if the X server no longer does the rendering (replaced by GPU offscreen rendering), then does it really matter if the X server is single-threaded?

> and legacy code has a non-negligible architectural impact (do you know that X.org has an x86 emulator to interpret BIOS code for VESA modesetting? I'm kidding you not: http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/... ).

That's probably not used much any more (not used at all with KMS). And if you're already not using it, why should you even care whether it exists?

Ranting on the X protocol
https://lwn.net/Articles/390998/
Cyberax, Fri, 04 Jun 2010 12:00:42 +0000:

> If you put it that way, then it's pretty much already the case that "only a hollow shell of X remains"; XRender has replaced the core drawing protocol, and compositing window managers using OpenGL and AIGLX are already in widespread use. So the next step is probably to replace XRender with OpenVG for 2D applications, to offload more of the work onto the GPU. (An OpenVG backend for Cairo has been around for some time already.)

Applications skip directly to OpenGL. Qt right now almost works; try running Qt applications with the "-graphicssystem opengl" switch. GTK has something similar.

> And maybe another next step is to have GPU memory protection replace AIGLX, like you suggested, in the case of local clients.

AIGLX is not necessary any more with the open-stack drivers, which use DRI2. The main reason for AIGLX was the impossibility of compositing DRI1 applications - they pass commands directly to hardware. DRI2 fixed this by allowing proper offscreen rendering and synchronization.

> What about 2D apps using OpenVG? If desktop apps that don't require very high graphics performance (for example Konsole, OpenOffice, Okular, etc.) migrate from XRender to OpenVG, then it seems like it would be useful to make the OpenVG command stream network-transparent, because performance should be adequate to run these clients over a LAN.

Makes no sense; OpenVG is stillborn. It's already obsoleted by GL4 - you can get good antialiased rendering using shaders with double-precision arithmetic.

> As for remote-side rendering for remote GLX clients: what GLX clients would anyone actually want to run remotely? It seems like most apps that need fast 3D graphics would be run locally.

A nice text editor with 3D effects? :)

The problem with X is that it's crufty. It's single-threaded, and legacy code has a non-negligible architectural impact (do you know that X.org has an x86 emulator to interpret BIOS code for VESA modesetting? I'm kidding you not: http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/x86emu ). So IMO it makes sense to design an "X12" protocol that breaks away from the legacy, and just run a rootless X.org for compatibility.
Ranting on the X protocol
https://lwn.net/Articles/390994/
rqosa, Fri, 04 Jun 2010 11:19:12 +0000:

> And by that time only a hollow shell of X remains.

If you put it that way, then it's pretty much already the case that "only a hollow shell of X remains"; XRender has replaced the core drawing protocol, and compositing window managers using OpenGL and AIGLX are already in widespread use. So the next step is probably to replace XRender with OpenVG for 2D applications, to offload more of the work onto the GPU. (An OpenVG backend for Cairo has been around for some time already: http://lists.freedesktop.org/archives/cairo/2008-January/012833.html ) And maybe another next step is to have GPU memory protection replace AIGLX, like you suggested, in the case of local clients.

> So it makes sense to ditch X completely and use it only for compatibility with old clients. Like Apple did in Mac OS X.

An X server that supports OpenGL/OpenVG + off-screen rendering + compositing should be able to perform as well as anything else (when using clients that do their drawing with OpenGL or OpenVG), while at the same time retaining backwards compatibility with old clients that use XRender, and with even older clients that use the core protocol. So there's no good reason to drop the X protocol.

> And where's the place for network transparency? Remote GLX _sucks_ big time. It sucks so hopelessly that people want to use server-side rendering instead of trying to optimize it: http://www.virtualgl.org/About/Background

What about 2D apps using OpenVG? If desktop apps that don't require very high graphics performance (for example Konsole, OpenOffice, Okular, etc.) migrate from XRender to OpenVG, then it seems like it would be useful to make the OpenVG command stream network-transparent, because performance should be adequate to run these clients over a LAN.

As for remote-side rendering for remote GLX clients: what GLX clients would anyone actually want to run remotely? It seems like most apps that need fast 3D graphics would be run locally.
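The migration rqosa sketches depends on applications drawing through an abstraction such as Cairo rather than against XRender directly, so that the backend underneath can change. A minimal sketch of such backend-agnostic drawing, pinned here to Cairo's software image surface; an XRender-backed or (experimental) OpenVG-backed surface would leave the drawing calls untouched:

    /* Sketch: drawing through Cairo rather than XRender directly.
     * The rendering calls are backend-agnostic; only the
     * surface-creation line pins this to the software image backend.
     * Build with `pkg-config --cflags --libs cairo`. */
    #include <cairo.h>

    int main(void)
    {
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
        cairo_t *cr = cairo_create(surface);

        /* Antialiased vector drawing - the kind of 2D work at issue. */
        cairo_set_source_rgb(cr, 0.1, 0.3, 0.8);
        cairo_arc(cr, 128, 128, 100, 0, 2 * 3.14159265);
        cairo_set_line_width(cr, 6);
        cairo_stroke(cr);

        cairo_surface_write_to_png(surface, "out.png");
        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }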
If "the thread waiting on the signaled event is the next thread to be scheduled", then what would prevent a pair of malicious threads from hogging the CPU by constantly sending messages back and forth?"<br> <p> Quick-LPC had special hooks in scheduler. You could hog CPU, of course, but that was not a big deal at that time. It's so no big deal that you still can do this in Windows and up until several years ago in Linux: <a href="http://www.cs.huji.ac.il/~dants/papers/Cheat07Security.pdf">http://www.cs.huji.ac.il/~dants/papers/Cheat07Security.pdf</a> :)<br> <p> Also, Quick-LPC had some special hooks that allowed NT to do faster context switches. I remember investigating it for my own purposes - it was wickedly fast, but quite specialized.<br> <p> "X can work essentially that way too, except that compositing is done by another user-space process (a "compositing window manager")"<br> <p> And by that time only a hollow shell of X remains. About the same size as Wayland. So it makes sense to ditch X completely and use it only for compatibility with old clients. Like Apple did in Mac OS X.<br> <p> "Each application using OpenGL generates a command stream which is sent to the driver and rendered off-screen, and then the compositing window manager composites the off-screen surfaces together."<br> <p> And where's the place for network transparency? Remote GLX _sucks_ big time. It sucks so hopelessly that people want to use server-side rendering instead of trying to optimize it: <a href="http://www.virtualgl.org/About/Background">http://www.virtualgl.org/About/Background</a> <br> <p> "Alternately, 2D apps can use OpenVG instead of OpenGL (at least, they should be able to fairly soon; is OpenVG supported in current Xorg and current intel/ati/nouveau/nvidia/fglrx drivers?)"<br> <p> OpenVG is partially supported by Gallium3D.<br> </div> Fri, 04 Jun 2010 10:18:52 +0000 Ranting on the X protocol https://lwn.net/Articles/390983/ https://lwn.net/Articles/390983/ rqosa <p><font class="QuotedText">&gt; Quick-LPC in Windows NT ( http://windows-internal.net/Wiley-Undocumented.Windows.NT... ). It was waaaay better than anything at the time. It's phased out now, because overhead of real LPC is not that big for modern CPUs.</font></p> <p>The first reason why it says that Quick LPC is faster than regular LPC, "<font class="QuotedText">there is a single server thread waiting on the port object and servicing the requests</font>", does not exist for Unix domain sockets. For a SOCK_STREAM socket, you can have a main thread which does accept(2) and then hands each new connection off to a worker thread/process, or you can have multiple threads/processes doing accept(2) (or select(2) or epoll_pwait(2) or similar) simultaneously on the same listening socket (by "preforking"). For a SOCK_DGRAM socket, (unless I'm mistaken) you can have multiple threads/processes doing recvfrom(2) simultaneously on the same socket (again, preforking).</p> <p>As for the second reason, "<font class="QuotedText">the context switching between the client thread and the server thread happens in an "uncontrolled" manner</font>", that is also the way it is for Unix domain sockets, but is that really a problem? If "<font class="QuotedText">the thread waiting on the signaled event is the next thread to be scheduled</font>", then what would prevent a pair of malicious threads from hogging the CPU by constantly sending messages back and forth?</p> <p><font class="QuotedText">&gt; No, penalty is right there - in the architecture. 
Ranting on the X protocol
https://lwn.net/Articles/390978/
Cyberax, Fri, 04 Jun 2010 08:14:01 +0000:

> Do you have numbers from a benchmark to support that statement? And is there any property of the API that makes it inherently faster than the send(2)/recv(2) API? (When using Unix domain sockets, all that send(2)/recv(2) does is copy bytes from one process to another. How can you make it any faster, except by using shared memory to eliminate the copy operation?)

Quick-LPC in Windows NT ( http://windows-internal.net/Wiley-Undocumented.Windows.NT... ). It was waaaay better than anything at the time. It's phased out now, because the overhead of real LPC is not that big on modern CPUs.

>> X penalizes local applications; it's just that the workaround for this is so ancient that it is thought of as normal.
> You're contradicting yourself. If the "workaround" exists, then the "penalty" doesn't exist.

No, the penalty is right there - in the architecture. It just can be worked around.

> One of the major benefits of KMS is that it can get rid of the need for the X server to run as root and access the hardware directly. Moving the windowing system into the kernel, like you're suggesting, would throw away that benefit.

I'm not suggesting moving the whole windowing system; it makes no sense _now_.

Vista is 100% correct in its approach (IMO). Each application there, in essence, gets a virtualized graphics card, so it can draw everything directly on its surfaces. And the OS only manages compositing and IO.

So we get the best of both worlds - applications talk directly to hardware (it might even be possible to do userland command submission using memory protection on recent GPUs!) and the windowing system manages windows.

>> And AIGLX is a temporary hack; a good DRI2 implementation is better.
> How is it "better"?

Faster, fewer context switches, etc. And if you try to accelerate AIGLX, you'll get something isomorphic to DRI2.
Ranting on the X protocol
https://lwn.net/Articles/390974/
rqosa, Fri, 04 Jun 2010 07:19:55 +0000:

> Windows-style message passing, for example.

Do you have numbers from a benchmark to support that statement? And is there any property of the API that makes it inherently faster than the send(2)/recv(2) API? (When using Unix domain sockets, all that send(2)/recv(2) does is copy bytes from one process to another. How can you make it any faster, except by using shared memory to eliminate the copy operation?)

> X penalizes local applications; it's just that the workaround for this is so ancient that it is thought of as normal.

You're contradicting yourself. If the "workaround" exists, then the "penalty" doesn't exist.

> Ha. X used to be the most insecure part of the OS - several megabytes of code running with root privileges and poking hardware directly.
> KMS has finally fixed this by moving some functionality to the kernel, where it belongs.

One of the major benefits of KMS is that it can get rid of the need for the X server to run as root and access the hardware directly. Moving the windowing system into the kernel, like you're suggesting, would *throw away* that benefit.

> And AIGLX is a temporary hack; a good DRI2 implementation is better.

How is it "better"?
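rqosa's request for numbers is answerable in a few lines of C. The sketch below (illustrative, not a rigorous benchmark; error handling omitted) times two round trips: shipping a 1 MiB buffer through a Unix-domain socketpair, versus placing the data in shared memory and sending only a one-byte notification, which is exactly the 'eliminate the copy' case mentioned above:

    /* Sketch of the copy cost at issue. Round trip A ships a 1 MiB
     * buffer through a Unix-domain socketpair (copied through the
     * kernel each way); round trip B keeps the buffer in shared
     * memory and sends only a 1-byte "ready" token. Numbers are
     * machine-dependent; error handling omitted. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define SZ (1 << 20)
    #define ROUNDS 200

    static double ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        static char buf[SZ];
        char *shm = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        if (fork() == 0) {     /* child: echo full buffers, then tokens */
            for (int i = 0; i < ROUNDS; i++) {
                for (size_t got = 0; got < SZ; )
                    got += read(sv[1], buf + got, SZ - got);
                write(sv[1], "k", 1);
            }
            char t;
            for (int i = 0; i < ROUNDS; i++) {  /* shm variant */
                read(sv[1], &t, 1);
                shm[0] ^= 1;                    /* touch shared data */
                write(sv[1], &t, 1);
            }
            _exit(0);
        }

        struct timespec t0, t1;
        char t;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {      /* A: copy via socket */
            for (size_t sent = 0; sent < SZ; )
                sent += write(sv[0], buf + sent, SZ - sent);
            read(sv[0], &t, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("copy via socket: %.2f ms/round\n", ms(t0, t1) / ROUNDS);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {      /* B: shm + 1-byte token */
            memset(shm, i, SZ);                 /* produce the data once */
            write(sv[0], &t, 1);
            read(sv[0], &t, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("shm + token:     %.2f ms/round\n", ms(t0, t1) / ROUNDS);
        return 0;
    }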
Ranting on the X protocol
https://lwn.net/Articles/390972/
Cyberax, Fri, 04 Jun 2010 05:30:42 +0000:

> Compared to what? Unix domain sockets should have about the same performance as any other type of local IPC, except for shared memory.

Windows-style message passing, for example.

> But that's beside the point the two posters above were trying to make, which is this: contrary to what "roc" said, X's capability for network transparency does not "[penalize] the performance of local applications"

But it does. X penalizes local applications; it's just that the workaround for this is so ancient that it is thought of as normal.

> Didn't it move back to userspace in Vista?

Partially. They have now moved some parts to userspace to reduce the number of context switches. Vista still has the user32 subsystem in the kernel.

> It might reduce the number of context switches slightly, but it's bad for system stability and security.

Ha. X used to be the most insecure part of the OS - several megabytes of code running with root privileges and poking hardware directly.

KMS has finally fixed this by moving some functionality to the kernel, where it belongs.

And AIGLX is a temporary hack; a good DRI2 implementation is better.

Ranting on the X protocol
https://lwn.net/Articles/390968/
rqosa, Fri, 04 Jun 2010 04:15:32 +0000:

> Socket communication is fairly slow.

Compared to what? Unix domain sockets should have about the same performance as any other type of local IPC, except for shared memory.

> X becomes fast if shared memory is used to transfer images - i.e. if you forget about network transparency.

But that's beside the point the two posters above were trying to make, which is this: contrary to what "roc" said, X's capability for network transparency *does not* "[penalize] the performance of local applications".

> moving graphics to the kernel (like Windows did)

Didn't it move back to userspace in Vista?

> can make your system even faster

It might reduce the number of context switches slightly, but it's bad for system stability and security.

> Anyway, way forward seems to be client-side direct drawing using OpenGL.

What's wrong with the AIGLX way?
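The "shared memory to transfer images" this exchange keeps circling is, in X terms, the MIT-SHM extension: the client builds its image in a SysV shared-memory segment that the server attaches too, so XShmPutImage moves no pixel data over the socket - which is also exactly why it cannot work for remote clients. A minimal sketch, assuming a local server that advertises the extension; error handling is omitted:

    /* Sketch of the MIT-SHM path discussed above. The XImage lives in
     * a SysV shared-memory segment attached by both client and
     * server, so XShmPutImage passes no pixel data over the socket.
     * Local clients only; error handling omitted. Build with
     * -lX11 -lXext. */
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 320, 240, 0, 0, 0);
        XMapWindow(dpy, win);

        XShmSegmentInfo shminfo;
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, &shminfo, 320, 240);

        /* One shared segment, attached on both sides. */
        shminfo.shmid = shmget(IPC_PRIVATE,
                               img->bytes_per_line * img->height,
                               IPC_CREAT | 0600);
        shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);

        /* Client renders into img->data here (plain memory writes)... */

        /* ...then the server blits straight out of the segment. */
        XShmPutImage(dpy, win, DefaultGC(dpy, scr), img,
                     0, 0, 0, 0, 320, 240, False);
        XSync(dpy, False);

        XShmDetach(dpy, &shminfo);
        shmdt(shminfo.shmaddr);
        shmctl(shminfo.shmid, IPC_RMID, NULL);
        XCloseDisplay(dpy);
        return 0;
    }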