Danjou: Thoughts and rambling on the X protocol
"Two years ago, while working on awesome, I joined the Freedesktop initiative to work on XCB. I had to learn the arcana of the X11 protocol and all the mysterious old world that goes with it. Now that I've swum all these months in this mud, I just feel the need to share my thoughts about what has become a mess over the decades." (Thanks to Paul Wise)
Posted Jun 1, 2010 20:35 UTC (Tue)
by clugstj (subscriber, #4020)
[Link] (73 responses)
Kids these days... ;^)
Posted Jun 1, 2010 23:22 UTC (Tue)
by jg (guest, #17537)
[Link] (72 responses)
At this point, however, one of X's biggest features is moribund: network transparency is a really good thing, and it has been lost. And X11 was at best designed for campus-scale transparency; we did not do a good job of avoiding round trips, which cause major performance problems on WANs.
However, if I had a chance to do another distributed UI system, it would look <b>radically</b> different than X (though able to run X on top in the style of rootless applications, as has been done on both Mac and Windows).
I've been stewing hard about what this might look like, drawing both on work we did at CRL prior to its destruction and, much more intently recently, on exposure to the ideas in CCNx (see www.ccnx.org), which has to be the most interesting networking technology I've seen in 20 years.
Maybe this summer I'll have a few minutes to blog about the ideas; the last 18 months have been more than chaotic for me.
Posted Jun 1, 2010 23:41 UTC (Tue)
by rriggs (guest, #11598)
[Link] (46 responses)
It's really a sad state of affairs at the moment.
Posted Jun 2, 2010 0:26 UTC (Wed)
by nix (subscriber, #2304)
[Link] (45 responses)
Nice halfassed job, guys. :(
Posted Jun 2, 2010 6:18 UTC (Wed)
by paulj (subscriber, #341)
[Link] (44 responses)
Posted Jun 2, 2010 11:33 UTC (Wed)
by nix (subscriber, #2304)
[Link] (43 responses)
Posted Jun 2, 2010 13:42 UTC (Wed)
by paulj (subscriber, #341)
[Link] (1 responses)
Posted Jun 2, 2010 14:12 UTC (Wed)
by mjthayer (guest, #39183)
[Link]
If ssh were adapted to forward DBus too, would DBus even need to be patched?
Posted Jun 2, 2010 15:12 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (37 responses)
Nothing in D-Bus's design precludes network transparency. It's just that people are disenchanted with it after the CORBA, NFS, and X11 fiascoes.
Really, you should either design protocols for the WAN in the first place or not try to be 'network transparent' at all.
Posted Jun 2, 2010 16:42 UTC (Wed)
by nix (subscriber, #2304)
[Link] (36 responses)
I'd agree that the right way to do this would probably be to teach SSH to proxy the thing: is there an environment variable like DISPLAY that D-Bus clients can use to find their server, though?
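(For what it's worth, D-Bus does define such a variable: DBUS_SESSION_BUS_ADDRESS plays roughly the role DISPLAY plays for X. A minimal sketch of how a client, or an SSH-style forwarder, might read it; the parsing here is deliberately simplified, since real addresses can contain escapes and multiple semicolon-separated entries:)

```python
import os

def session_bus_address(env=None):
    """Locate the session bus the way DISPLAY locates the X server.

    D-Bus clients read DBUS_SESSION_BUS_ADDRESS, whose value is a
    transport name plus comma-separated key=value pairs, e.g.
    "unix:path=/run/user/1000/bus".  (Simplified illustrative parser.)
    """
    env = os.environ if env is None else env
    addr = env.get("DBUS_SESSION_BUS_ADDRESS")
    if addr is None:
        raise RuntimeError("no session bus address in the environment")
    transport, _, params = addr.partition(":")
    options = dict(p.split("=", 1) for p in params.split(",") if p)
    return transport, options

# An SSH-style forwarder could point clients at a proxied socket simply
# by rewriting this variable before launching them.
print(session_bus_address({"DBUS_SESSION_BUS_ADDRESS":
                           "unix:path=/run/user/1000/bus"}))
```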
Posted Jun 2, 2010 16:48 UTC (Wed)
by paulj (subscriber, #341)
[Link]
Posted Jun 2, 2010 17:04 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
Fixing these problems is not trivial.
So VNC over the network is way better. Nowadays, I prefer to just start vnc4server on the target host and use vncviewer to connect. It works everywhere, goes through firewalls easily, and has good enough latency.
Posted Jun 3, 2010 13:58 UTC (Thu)
by rriggs (guest, #11598)
[Link] (3 responses)
Posted Jun 3, 2010 14:56 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
It can. VirtualBox uses this functionality.
Posted Jun 10, 2010 5:08 UTC (Thu)
by mfedyk (guest, #55303)
[Link]
Can you give an example of using VNC to forward one application without the surrounding desktop (ie, rootless)?
Posted Jun 2, 2010 23:25 UTC (Wed)
by roc (subscriber, #30627)
[Link] (29 responses)
X penalizes the performance of local applications by forcing you to communicate with the X server over a socket, where other window systems can do the work with a user-space library, a kernel module, or customized fast IPC. X is effectively penalizing local applications in order to support network transparency.
But then X bungles network display by being horrendously bandwidth-inefficient and intolerant of latency, especially over WANs. That's why VNC and rdesktop are much more pleasant to use than remote X.
X11: The Worst Of Both Worlds.
Posted Jun 3, 2010 1:00 UTC (Thu)
by jonabbey (guest, #2736)
[Link]
The problem isn't the network transparency.
Posted Jun 3, 2010 9:25 UTC (Thu)
by modernjazz (guest, #4185)
[Link] (27 responses)
This myth has been debunked many times: profiling shows that socket communication accounts for an absolutely negligible fraction of the problem. For local applications, the real issues tend to be inadequacies in the way the video hardware is used, which is why this area has received so much attention (but still isn't complete) over the last few years.
I agree about the latency problems over WANs, though. For anything beyond a LAN I use NX, which I like much better than VNC. Back when I had a 56k modem, NX was still somewhat usable, whereas nothing else was. NX proves the (perhaps unsurprising) fact that there is some value in a protocol that doesn't just rasterize everything.
Posted Jun 3, 2010 12:02 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (26 responses)
No. Socket communication is fairly slow. X becomes fast if shared memory is used to transfer images - i.e. if you forget about network transparency.
And if you really want to go that way, then moving graphics into the kernel (like Windows did) can make your system even faster. I quite distinctly remember that X was unbearably slow compared to Windows in the '90s. Today the overhead is pretty small, but it's still there.
Anyway, the way forward seems to be client-side direct drawing using OpenGL. That way, X effectively only manages window frames and input events.
Posted Jun 4, 2010 4:15 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (19 responses)
> Socket communication is fairly slow.

Compared to what? Unix domain sockets should have about the same performance as any other type of local IPC, except for shared memory.

> X becomes fast if shared memory is used to transfer images - i.e. if you forget about network transparency.

But that's beside the point the two posters above were trying to make, which is this: contrary to what "roc" said, X's capability for network transparency does not "[penalize] the performance of local applications".

> moving graphics to the kernel (like Windows did)

Didn't it move back to userspace in Vista?

> can make your system even faster

It might reduce the amount of context-switches slightly, but it's bad for system stability and security.

> Anyway, way forward seems to be client-side direct drawing using OpenGL.

What's wrong with the AIGLX way?
Posted Jun 4, 2010 5:30 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (18 responses)
Windows-style message passing, for example.
"But that's beside the point the two posters above were trying to make, which is this: contrary to what "roc" said, X's capability for network transparency does not "[penalize] the performance of local applications"
But it does. X penalizes local applications; it's just that the workaround for this is so ancient that it is thought of as normal.
"Didn't it move back to userspace in Vista?"
Partially. They've now moved some parts to userspace to reduce the number of context switches. Vista still has the user32 subsystem in the kernel.
"It might reduce the amount of context-switches slightly, but it's bad for system stability and security."
Ha. X used to be the most insecure part of the OS - several megabytes of code running with root privileges and poking hardware directly.
KMS has finally fixed this by moving some functionality to the kernel, where it belongs.
And AIGLX is a temporary hack, good DRI2 implementation is better.
Posted Jun 4, 2010 7:19 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (17 responses)
> Windows-style message passing, for example.

Do you have numbers from a benchmark to support that statement? And is there any property of the API that makes it inherently faster than the send(2)/recv(2) API? (When using Unix domain sockets, all that send(2)/recv(2) does is to copy bytes from one process to another. How can you make it any faster, except for using shared memory to eliminate the copy operation?)

> X penalizes local application, it's just that the workaround for this is so ancient that it is though of as normal.

You're contradicting yourself. If the "workaround" exists, then the "penalty" doesn't exist.

> Ha. X used to be the most insecure part of the OS - several megabytes of code running with root privileges and poking hardware directly.

> KMS has finally fixed this by moving some functionality to the kernel, where it belongs.

One of the major benefits of KMS is that it can get rid of that need for the X server to run as root and access the hardware directly. Moving the windowing system into the kernel, like you're suggesting, would throw away that benefit.

> And AIGLX is a temporary hack, good DRI2 implementation is better.

How is it "better"?
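(The copy-versus-shared-memory distinction being argued here can be sketched in a few lines. This is a rough illustration, not a rigorous benchmark; the payload size and iteration count are arbitrary:)

```python
import socket
import time
from multiprocessing import shared_memory  # Python 3.8+

PAYLOAD = b"x" * 16384   # one "image" tile; size is arbitrary
N = 200

# send(2)/recv(2) over a Unix domain socketpair: the kernel copies the
# bytes out of the sender's buffer and again into the receiver's buffer.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
t0 = time.perf_counter()
for _ in range(N):
    a.sendall(PAYLOAD)
    received = b.recv(len(PAYLOAD), socket.MSG_WAITALL)
socket_time = time.perf_counter() - t0
a.close()
b.close()

# Shared memory: producer and consumer touch the same pages, so the
# per-transfer kernel copies disappear (the in-process reads and writes
# below are still copies, of course).
shm = shared_memory.SharedMemory(create=True, size=len(PAYLOAD))
t0 = time.perf_counter()
for _ in range(N):
    shm.buf[:len(PAYLOAD)] = PAYLOAD             # producer writes in place
    shared_view = bytes(shm.buf[:len(PAYLOAD)])  # consumer reads the same pages
shm_time = time.perf_counter() - t0
shm.close()
shm.unlink()

print(f"socketpair: {socket_time:.4f}s  shared memory: {shm_time:.4f}s")
```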
Posted Jun 4, 2010 8:14 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (16 responses)
Quick-LPC in Windows NT ( http://windows-internal.net/Wiley-Undocumented.Windows.NT... ). It was waaaay better than anything else at the time. It's phased out now, because the overhead of real LPC is not that big on modern CPUs.
>> X penalizes local application, it's just that the workaround for this is so ancient that it is though of as normal.
No, the penalty is right there - in the architecture. It just can be worked around.
>>One of the major benefits of KMS is that it can get rid of that need for the X server to run as root and access the hardware directly. Moving the windowing system into the kernel, like you're suggesting, would throw away that benefit.
I'm not suggesting moving the whole windowing system, it makes no sense _now_.
Vista is 100% correct in its approach (IMO). Each application there, in essence, gets a virtualized graphics card, so it can draw everything directly on its surfaces. And the OS only manages compositing and IO.
So we get the best of both worlds: applications talk directly to the hardware (it might even be possible to do userland command submission using memory protection on recent GPUs!) and the windowing system manages windows.
>> And AIGLX is a temporary hack, good DRI2 implementation is better.
Faster, fewer context switches, etc. And if you try to accelerate AIGLX, you'll get something isomorphic to DRI2.
Posted Jun 4, 2010 9:58 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (15 responses)
> Quick-LPC in Windows NT ( http://windows-internal.net/Wiley-Undocumented.Windows.NT... ). It was waaaay better than anything at the time. It's phased out now, because overhead of real LPC is not that big for modern CPUs.

The first reason it gives for Quick LPC being faster than regular LPC, "there is a single server thread waiting on the port object and servicing the requests", does not exist for Unix domain sockets. For a SOCK_STREAM socket, you can have a main thread which does accept(2) and then hands each new connection off to a worker thread/process, or you can have multiple threads/processes doing accept(2) (or select(2) or epoll_pwait(2) or similar) simultaneously on the same listening socket (by "preforking"). For a SOCK_DGRAM socket, (unless I'm mistaken) you can have multiple threads/processes doing recvfrom(2) simultaneously on the same socket (again, preforking).

As for the second reason, "the context switching between the client thread and the server thread happens in an "uncontrolled" manner": that is also the way it is for Unix domain sockets, but is that really a problem? If "the thread waiting on the signaled event is the next thread to be scheduled", then what would prevent a pair of malicious threads from hogging the CPU by constantly sending messages back and forth?

> No, penalty is right there - in the architecture. It just can be worked around.

If it's possible for local clients to be fast, then there's no "penalty". ("Penalty" would mean that the capability for clients to be non-local prevents local clients from being fast. But it doesn't, because local clients can use things that remote processes can't, like shared memory, etc.)

> Each application there, in essence, gets a virtualized graphics card so it can draw everything directly on its surfaces. And OS only manages compositing and IO.

X can work essentially that way too, except that compositing is done by another user-space process (a "compositing window manager").
Each application using OpenGL generates a command stream which is sent to the driver and rendered off-screen, and then the compositing window manager composites the off-screen surfaces together. Alternately, 2D apps can use OpenVG instead of OpenGL (at least, they should be able to fairly soon; is OpenVG supported in current Xorg and current intel/ati/nouveau/nvidia/fglrx drivers?)
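(The shared-listener pattern described above, with multiple workers blocking in accept(2) on one Unix domain socket, can be sketched with threads. The socket path and worker count here are arbitrary:)

```python
import os
import socket
import tempfile
import threading

# One listening SOCK_STREAM socket; several workers block in accept(2)
# on it, and the kernel hands each new connection to exactly one worker.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(path)
listener.listen(8)

served_by = []  # which worker handled each connection

def worker(wid):
    conn, _ = listener.accept()   # all workers wait on the same socket
    with conn:
        conn.sendall(b"worker %d" % wid)
        served_by.append(wid)

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in workers:
    t.start()

# Three clients connect; each is picked up by some worker.
replies = []
for _ in range(3):
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(path)
    replies.append(c.recv(64))
    c.close()

for t in workers:
    t.join()
listener.close()
os.unlink(path)
print(sorted(replies))
```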
Posted Jun 4, 2010 10:18 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (13 responses)
Quick-LPC had special hooks in the scheduler. You could hog the CPU, of course, but that was not a big deal at the time. It's such a non-issue that you can still do this in Windows, and could until several years ago in Linux: http://www.cs.huji.ac.il/~dants/papers/Cheat07Security.pdf :)
Also, Quick-LPC had some special hooks that allowed NT to do faster context switches. I remember investigating it for my own purposes - it was wickedly fast, but quite specialized.
"X can work essentially that way too, except that compositing is done by another user-space process (a "compositing window manager")"
And by that time only a hollow shell of X remains. About the same size as Wayland. So it makes sense to ditch X completely and use it only for compatibility with old clients. Like Apple did in Mac OS X.
"Each application using OpenGL generates a command stream which is sent to the driver and rendered off-screen, and then the compositing window manager composites the off-screen surfaces together."
And where's the place for network transparency? Remote GLX _sucks_ big time. It sucks so hopelessly that people want to use server-side rendering instead of trying to optimize it: http://www.virtualgl.org/About/Background
"Alternately, 2D apps can use OpenVG instead of OpenGL (at least, they should be able to fairly soon; is OpenVG supported in current Xorg and current intel/ati/nouveau/nvidia/fglrx drivers?)"
OpenVG is partially supported by Gallium3D.
Posted Jun 4, 2010 11:19 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (11 responses)
> And by that time only a hollow shell of X remains.

If you put it that way, then it's pretty much already the case that "only a hollow shell of X remains": XRender has replaced the core drawing protocol, and compositing window managers using OpenGL and AIGLX are already in widespread use. So the next step is probably to replace XRender with OpenVG for 2D applications, to offload more of the work onto the GPU. (An OpenVG backend for Cairo has been around for some time already.) And maybe another next step is to have GPU memory protection replace AIGLX, like you suggested, in the case of local clients.

> So it makes sense to ditch X completely and use it only for compatibility with old clients. Like Apple did in Mac OS X.

An X server that supports OpenGL/OpenVG + off-screen rendering + compositing should be able to perform as well as anything else (when using clients that do drawing with OpenGL or OpenVG), while at the same time retaining backwards compatibility with old clients that use XRender and with even older clients that use the core protocol. So there's no good reason to drop the X protocol.

> And where's the place for network transparency? Remote GLX _sucks_ big time. It sucks so hopelessly that people want to use server-side rendering instead of trying to optimize it: http://www.virtualgl.org/About/Background

What about 2D apps using OpenVG? If desktop apps that don't require very high graphics performance (for example Konsole, OpenOffice, Okular, etc.) migrate from XRender to OpenVG, then it seems like it would be useful to make the OpenVG command stream network-transparent, because performance should be adequate to run these clients over a LAN.

As for remote-side rendering for remote GLX clients: what GLX clients would anyone actually want to run remotely? It seems like most apps that need fast 3D graphics would be run locally.
Posted Jun 4, 2010 12:00 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
Applications skip directly to OpenGL. Qt right now almost works; try running Qt applications with the "-graphicssystem opengl" switch. GTK has something similar.
"And maybe another next step is to have GPU memory protection replace AIGLX, like you suggested, in the case of local clients."
AIGLX is already unnecessary with the open driver stack, which uses DRI2. The main reason for AIGLX was the impossibility of compositing DRI1 applications, which pass commands directly to the hardware. DRI2 fixed this by allowing proper offscreen rendering and synchronization.
"What about 2D apps using OpenVG? If desktop apps that don't require very high graphics performance (for example Konsole, OpenOffice, Okular, etc.) migrate from XRender to OpenVG, then it seems like it would be useful to make the OpenVG command stream network-transparent, because performance should be adequate to run these clients over a LAN."
Makes no sense; OpenVG is stillborn. It's already obsoleted by GL4: you can get good antialiased rendering using shaders with double-precision arithmetic.
"As for remote-side rendering for remote GLX clients: what GLX clients would anyone actually want to run remotely? It seems like most apps that need fast 3D graphics would be run locally."
A nice text editor with 3D effects? :)
The problem with X is that it's crufty. It's single-threaded and legacy code has a non-negligible architectural impact (do you know that X.org has an x86 emulator to interpret BIOS code for VESA modesetting? I'm kidding you not: http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/... ). So IMO it makes sense to design an "X12" protocol to break away from the legacy, and just run rootless X.org for compatibility.
Posted Jun 4, 2010 12:58 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (7 responses)
> OpenVG is stillborn.

Says who?

> It's already obsoleted by GL4

And will ARM-based cell phones and netbooks be able to run OpenGL 4 with good performance?

> The problem with X is that it's crufty. It's single-threaded

Is there anything inherent in the X protocol that requires an X server to be single-threaded? I doubt it. Also, if the X server no longer does the rendering (replaced by GPU offscreen rendering), then does it really matter if the X server is single-threaded?

> and legacy code has a non-negligible architectural impact (do you know that X.org has an x86 emulator to interpret BIOS code for VESA modesetting? I'm kidding you not: http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/... ).

That's probably not used much any more (not used at all with KMS). And if you're already not using it, why should you even care whether it exists?
Posted Jun 4, 2010 13:59 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Me, obviously.
http://www.google.com/search?q=OpenVG
Oh, and Google too.
>> It's already obsoleted by GL4
In a few years - yep. There's nothing in GL4 which makes it intrinsically slow.
>Is there anything inherent in the X protocol that requires an X server to be single-threaded? I doubt it.
And who's going to rewrite X.org? And it does matter if the server is single-threaded (because of input latency, for example).
And the old legacy code in X.org does have its effect. For example, it's not possible to have a tiled frontbuffer, because all of the code in X.org would have to be rewritten. And X.org is LARGE.
Posted Jun 4, 2010 20:12 UTC (Fri)
by rqosa (subscriber, #24136)
[Link]
> About 45,600 results (0.32 seconds)

How do those numbers prove anything? (Incidentally, a search for "Gallium3D" gives only "About 26,300 results".) Also, if OpenVG is useless, then why are Qt and Cairo both implementing it?

> And who's going to rewrite X.org?

It's being rewritten all the time. Just look at how much has changed since X11R6.7.0.

> And it matters if server is single-threaded (because of input latency, for example).

I don't remember ever seeing users complaining about the input latency of the current Xorg.
Posted Jun 5, 2010 18:25 UTC (Sat)
by daniels (subscriber, #16193)
[Link]
Posted Jun 8, 2010 17:09 UTC (Tue)
by nix (subscriber, #2304)
[Link] (2 responses)
It was abandoned, because the locking overhead made it *slower* than a singlethreaded server.
Perhaps it is worth splitting input handling out of the SIGIO handler into a separate thread (last I checked, work in that direction was ongoing). But more than that seems a dead loss, which is unsurprising given the sheer volume of shared state in the X server, all of which must be lock-protected and a lot of which changes very frequently.
Posted Jun 10, 2010 12:37 UTC (Thu)
by renox (guest, #23785)
[Link] (1 responses)
Could you explain?
Posted Jun 14, 2010 20:12 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Jun 8, 2010 17:07 UTC (Tue)
by nix (subscriber, #2304)
[Link] (1 responses)
I am confused. You seem to be contradicting yourself without need for any help from the rest of us.
Posted Jun 9, 2010 11:07 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
DRI1 passes commands directly to the card, but it can't be composited (no Compiz for you with DRI1).
AIGLX (which is used _without_ DRI1) passes commands through the X-server, but can be composited.
DRI2 passes commands directly to the card and can be composited.
Posted Jun 8, 2010 17:05 UTC (Tue)
by nix (subscriber, #2304)
[Link]
That's hopeless, that is.
Posted Jun 5, 2010 20:09 UTC (Sat)
by renox (guest, #23785)
[Link]
>>then what would prevent a pair of malicious threads from hogging the CPU by constantly sending messages back and forth?<<

I would argue that, for latency purposes, the lack of cooperation between the scheduler and current IPCs can be a real problem.
Posted Jun 4, 2010 16:54 UTC (Fri)
by jd (guest, #26381)
[Link] (5 responses)
It's easy to say it would be faster, but context switches are expensive and therefore you need to make sure that you place the division between kernel and userspace application at the point where there would be fewest such switches. I've not seen any analysis of the code of this kind, so I'm not inclined to believe anyone actually knows where a split should be incorporated.
Personally, I remember X being considerably faster than Windows in the 1990s. I was using a Viglen 386SX-16 with maths co-processor, and X ran just fine. Windows was a bugbear. I was even able to relay X over the public Internet half-way across Britain and have no problems.
Posted Jun 4, 2010 17:43 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
"It's easy to say it would be faster, but context switches are expensive and therefore you need to make sure that you place the division between kernel and userspace application at the point where there would be fewest such switches. I've not seen any analysis of the code of this kind, so I'm not inclined to believe anyone actually knows where a split should be incorporated."
Read about KMS and DRI2, please.
"Personally, I remember X being considerably faster than Windows in the 1990s. I was using a Viglen 386SX-16 with maths co-processor, and X ran just fine. Windows was a bugbear. I was even able to relay X over the public Internet half-way across Britain and have no problems."
On a fast machine - sure. However, Windows 95 could be started on a computer with 3MB of RAM (the official minimum requirement was 4MB) and ran just fine in 8MB. Linux with X struggled to start an xterm on that configuration.
Posted Jun 4, 2010 19:01 UTC (Fri)
by jonabbey (guest, #2736)
[Link]
Horses for courses.
Posted Jun 4, 2010 19:12 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (1 responses)
Hm. IIRC, in the 1990s, the previous poster's 386SX-16 wasn't a »fast machine« by any stretch of the imagination. It probably would have struggled with Windows 95, considering that Windows 95, according to Microsoft, required at least a 386DX processor (i.e., one with a 32-bit external bus).
No. In 1994 my computer was a 33 MHz 486DX notebook with 8 megabytes of RAM – not a blazingly fast machine even by the standards of the time but what I could afford. I used that machine to write, among other things, a TeX DVI previewer program (like xdvi but better in some respects, worse in others) based on Tcl/Tk, with the interesting parts written in C of course, but even so. There was no problem running that program in parallel with TeX and Emacs, and in spite of funneling all the user interaction through Tcl it felt just as quick as xdvi, even with things like the xdvi-like magnifier window.
Five years before that, a SPARCstation 1 was considered a nice machine to have, and that ran X just fine, thank you very much. I doubt that the machine I was hacking on at the time had anywhere near 8 megs of RAM.
Posted Jun 5, 2010 0:27 UTC (Sat)
by jd (guest, #26381)
[Link]
Yet I was using X11R4, with OpenLook (BrokenLook?). I could open up to 20 EMACS sessions simultaneously before experiencing noticeable degradation in performance. When gaming, I'd have a Netrek server, Netrek client and 19 Netrek AIs running in parallel on the same machine.
Compiling was no big deal. GateD would sometimes cause the OOM killer to kick in and kill processes, but I don't recall any huge difficulty. It was the machine I wrote my undergraduate degree project on (fractal image compression software).
Mind you, I paid huge attention to setup. The X11 config file was hand-crafted. Everything was hand-compiled - kernel on upwards - to squeeze the best performance out of it. Swap space was 2.5x RAM.
Posted Jun 7, 2010 15:13 UTC (Mon)
by nye (subscriber, #51576)
[Link]
No.
Everyone not working for Microsoft stated a minimum of 16MB to run Windows 95. I actually tried running it on a computer with 8MB of RAM: as long as you had no applications running, you might just about be able to use Explorer to browse the filesystem, very slowly, but trying to launch an application would cause a five-minute swap storm.
Posted Jun 16, 2010 20:27 UTC (Wed)
by nix (subscriber, #2304)
[Link] (2 responses)
I guess they found a secure way to do it, or decided the tradeoffs were worthwhile. Yay, cross-system KDE4 IPC calls, here we come!
Posted Nov 5, 2010 20:51 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link] (1 responses)
Posted Nov 5, 2010 21:11 UTC (Fri)
by foom (subscriber, #14868)
[Link]
Posted Jun 2, 2010 14:17 UTC (Wed)
by bronson (subscriber, #4806)
[Link] (23 responses)
But I disagree with you in one place: nobody cares anymore about X's network transparency! Even today, setting up xauth is so unreliable and enough of a hassle that everybody I know just fires up VNC instead. Can't wait for Spice.
With X, if there's a network blip, your apps all hang. With VNC, you can leave the office now and access your apps from the coffee shop 6 hours later. No forethought needed.
VNC feels MUCH faster, especially over today's 1Mbit 3G links. Remote X has weird terminology (which is the client and which is the server?), and a weird auth setup that nobody wants to understand anymore.
Sample size of one, don't shoot the messenger. All I'm saying is this: as long as VNC and Spice are easier to use, feel faster, and are more reliable out of the box, very few end users are going to give a hoot about X's network transparency. I think the KDE and Gnome guys can be forgiven for chucking a poorly-implemented, seldom-used feature out the window.
Posted Jun 2, 2010 15:25 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link] (5 responses)
It's not "weird terminology". The program which starts first, ends last, and provides access to system resources (display, keyboard, mouse) is the server. The shorter-lived applications which connect to the server to make use of these shared resources are the clients. This is perfectly consistent with every other client/server arrangement out there.
What throws people is that we're used to thinking of servers as a particular type of *hardware*, steadily ticking away the CPU-hours in some climate-controlled back room, out of sight, whereas "clients" are the visible workstation PCs at everyone's desk. However, the X client/server terminology is perfectly reasonable from a *software* point-of-view, and X is software, not hardware.
Anyway, so long as X uses Unix-domain sockets for IPC--a reasonable choice, even for purely local apps--it gets network transparency practically for free. I'd probably drop support for xauth and remote display managers in favor of SSH forwarding and Avahi service discovery, though, and extend SSH to forward D-Bus and PulseAudio by default along with X.
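(The "extend SSH to forward it" idea amounts to a byte pump between sockets. A toy single-shot relay in that spirit; the socket paths and the uppercasing "daemon" are made up for illustration, and a real forwarder would of course pump both directions concurrently until EOF:)

```python
import os
import socket
import tempfile
import threading

tmp = tempfile.mkdtemp()
upstream_path = os.path.join(tmp, "real.sock")        # stands in for the real daemon
forwarded_path = os.path.join(tmp, "forwarded.sock")  # what clients are pointed at

# Stand-in "daemon": answers one request, uppercased.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(upstream_path)
server.listen(1)

# The relay's listening socket, bound before any client connects.
proxy = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
proxy.bind(forwarded_path)
proxy.listen(1)

def serve_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

def relay_once():
    client, _ = proxy.accept()
    upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    upstream.connect(upstream_path)
    upstream.sendall(client.recv(1024))   # client -> daemon
    client.sendall(upstream.recv(1024))   # daemon -> client
    upstream.close()
    client.close()

threads = [threading.Thread(target=serve_once), threading.Thread(target=relay_once)]
for t in threads:
    t.start()

# A client that only knows about the forwarded socket.
c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
c.connect(forwarded_path)
c.sendall(b"tunnel me")
reply = c.recv(1024)
c.close()
for t in threads:
    t.join()
print(reply)
```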
Posted Jun 2, 2010 15:54 UTC (Wed)
by mgedmin (subscriber, #34497)
[Link] (2 responses)
Posted Jun 2, 2010 18:33 UTC (Wed)
by njs (subscriber, #40338)
[Link]
Posted Jun 2, 2010 18:36 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link]
I think it would be better to forward the PA connections via SSH, as with X11 tunneling, so that no one can eavesdrop and there are no PA-specific issues with firewalls, etc. There is a PULSE_SERVER environment variable one can set to override the X11 property.
Posted Jun 2, 2010 20:16 UTC (Wed)
by RCL (guest, #63264)
[Link] (1 responses)
Posted Jun 3, 2010 10:44 UTC (Thu)
by nix (subscriber, #2304)
[Link]
I'm not sure how you could fix this without dropping *all* server-side state, which would be a performance catastrophe.
Posted Jun 2, 2010 16:12 UTC (Wed)
by martinfick (subscriber, #4455)
[Link] (6 responses)
Perhaps you don't, but I do (and I suspect that many others still do too), so please don't speak for others. Yes, many people have cell phones, but you might be shocked to find out that land lines are still very common also. :)
Yes, it is a pain to re-enable kdm to allow remote logins every time I upgrade it (for some reason, the old settings are never enough), but that is hardly X's fault, and it surely doesn't make me want to throw out the baby...
As for VNC, it never seems to work as well for me; the last time I tried it, YouTube videos were all blue... If it works for you, I am happy, and I wouldn't want (or advocate) to take it away from you; I hope you can see your way to returning that sentiment.
Posted Jun 2, 2010 16:21 UTC (Wed)
by rsidd (subscriber, #2582)
[Link] (5 responses)
And youtube video works over X11 forwarding? How about the audio, over either X11 or VNC?
VNC works well for the cases it's meant for, and on slow links it's much faster than X11 forwarding.
Posted Jun 2, 2010 16:37 UTC (Wed)
by martinfick (subscriber, #4455)
[Link]
Posted Jun 2, 2010 21:36 UTC (Wed)
by daglwn (guest, #65432)
[Link] (2 responses)
That's not been my experience at all. VNC has always been slower than just running windowed emacs over the X protocol, for example.
Posted Jun 8, 2010 20:23 UTC (Tue)
by nix (subscriber, #2304)
[Link] (1 responses)
Apps that render backgrounds which consist of repeating elements can use Render to transmit those elements only once.
With things that have heavily coloured/textured backgrounds which are different at every point, maybe, just maybe both would be equally efficient... although surely you'd send that to the X server as a pixmap.
Can anyone think of *any* application (other than e.g. talking to Windows systems on the far side, or handling disconnected operation, neither of which X was designed for) for which VNC is preferable to X? I can't think of any.
Posted Jun 8, 2010 22:15 UTC (Tue)
by rsidd (subscriber, #2582)
[Link]
Here's one: a slow network connection with latency (e.g., connecting to the office from my home). VNC is usable; X11 forwarding is not. I am willing to believe that X11 beats VNC on gigabit networks. And, of course, if the connection goes down you lose nothing (except time). You seem to be saying X was designed for LANs (where disconnected operation is relatively uncommon) and not wider networks. If so, that's another argument for VNC.
Posted Jun 3, 2010 0:12 UTC (Thu)
by agriffis (guest, #10251)
[Link]
Posted Jun 2, 2010 16:51 UTC (Wed)
by jg (guest, #17537)
[Link] (6 responses)
However, it is also the case that toolkits/applications should be involved in a deep way. If I want to run an app on a phone display, it should in fact be a different UI than when I'm on the HD screen across the room.
Our current architecture is broken, in ways that are not recoverable. This is why I said I'd do it in a radically different way, if I got the chance to do it again.
Every other system out there is similarly (or worse) broken, by the way; there is nothing about this rant that is X-specific.
Posted Jun 2, 2010 23:47 UTC (Wed)
by obi (guest, #5784)
[Link] (5 responses)
Posted Jun 3, 2010 0:18 UTC (Thu)
by jg (guest, #17537)
[Link] (4 responses)
NeWS attempted the opposite: load the UI into the display server, and send messages. In some ways, they were right; except that the language chosen (PostScript) was horrid, and interesting extensions almost always would still have required modifications to that engine.
I think we (both NeWS and X) were wrong, and the rendering for a UI should be in an entirely separate (set of) address spaces; in fact, I think most applications are currently written at too low a level of abstraction.
The extra address-space transitions this would imply are just not important any more, and would enable lots of interesting scalability and other enhancements.
So, think of a system in which identity is a first class citizen along with security (as occurs in CCNx, out of the box), based around the idea of the fundamental operation just being compositing of pixels, and multiple engines that might generate these pixels (which might even be VM's, running on the same iron, but with possibly very different graphics models).
And it would be pull, rather than push, and time based. Everything would be named (potentially globally).
A fundamentally different sort of architecture.
As I said, I've been stewing about this for much of the last 10 years, and it is seeing CCNx that has caused the fog to (partially) clear. I'm trying to clarify these ideas for a workshop later this summer; until then, this is as far as I will go.
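jg is keeping the details for his workshop, but the shape of a "named, pull-based, time-based" compositing model can at least be sketched. Everything below is my own guess at what those words might mean, not his design: pixel generators publish timestamped frames under (potentially global) names, and the compositor pulls the newest frame for each name when it produces output, instead of being pushed updates.

```python
# A guess at a pull-based, named, time-based compositing model (NOT jg's
# actual design): generators publish timestamped frames under names; the
# compositor pulls whatever is newest when it composites.

from dataclasses import dataclass

@dataclass
class Frame:
    name: str         # globally meaningful name, CCN-style
    timestamp: float  # time-based versioning
    pixels: bytes     # stand-in for real pixel data

class FrameStore:
    """Holds the latest frame published under each name."""
    def __init__(self):
        self._latest = {}

    def publish(self, frame):
        cur = self._latest.get(frame.name)
        if cur is None or frame.timestamp > cur.timestamp:
            self._latest[frame.name] = frame  # newest frame wins

    def pull(self, name):
        return self._latest.get(name)

def composite(store, names):
    # The fundamental operation: composite the pixels of the named sources.
    return b"".join(f.pixels for n in names if (f := store.pull(n)))

store = FrameStore()
store.publish(Frame("/host/app/video", 1.0, b"v1"))
store.publish(Frame("/host/app/video", 2.0, b"v2"))  # supersedes v1
store.publish(Frame("/host/app/ui", 1.5, b"ui"))
print(composite(store, ["/host/app/video", "/host/app/ui"]))
```

Note the inversion: nothing is delivered until the compositor asks, and stale publishers simply never get pulled.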
Posted Jun 3, 2010 14:41 UTC (Thu)
by obi (guest, #5784)
[Link] (2 responses)
I think I can guess some aspects of the graphics architecture you have in mind, based on what you imply. I'd be very interested to hear more about it though - guess I'll have to wait until summer.
Posted Jun 3, 2010 20:57 UTC (Thu)
by jg (guest, #17537)
[Link]
Conceptually, however, I believe it and some similar projects elsewhere will change how we think about communications as much as TCP/IP (and packet networking in general) changed the view from the world of telephony.
Its view (which I am still working to get my head around) is a very different view of the world; there is some other work that has also affected my world view deeply (in particular the T-Time view found in Croquet, derived from Dave Reed's thesis of so many years ago).
It is time for something *really* new....
Posted Jun 3, 2010 21:35 UTC (Thu)
by dmk (guest, #50141)
[Link]
I like that. Kinda like the Matrix. :)
Posted Jun 6, 2010 16:02 UTC (Sun)
by ortalo (guest, #4654)
[Link]
Posted Jun 3, 2010 14:30 UTC (Thu)
by mebourne (guest, #50785)
[Link]
"Even today setting up xauth is so unreliable and enough of a hassle that everybody I know just fires up VNC instead" - ssh solved the xauth problem back in the nineties, you don't even have to know it's there.
"VNC feels MUCH faster, especially over today's 1Mbit 3G links" - native X11 is unusable on high latency links but then VNC sucks on high latency links too. VNC also gives an unpleasant user experience and forces you to remote a whole desktop preventing any useful integration with your local system. X11 makes remote programs behave like local ones and lets you use them with all the same convenience.
As to solving the latency issue, just use XPRA (http://en.wikipedia.org/wiki/Xpra). It removes round trips from the X11 protocol and makes very high latency links usable - more usable than with any other technology I've seen. It has the added bonus of surviving disconnections and letting you reconnect from somewhere else - all the same stuff you get from VNC. It's like having "screen" for X: as easy to use as VNC, but with all the advantages of normal remote X.
I use XPRA over a 270ms round-trip link for all my work apps, all day every day, and it's the only usable solution.
Posted Jun 6, 2010 19:58 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
Cheers,
Posted Jun 8, 2010 9:02 UTC (Tue)
by jschrod (subscriber, #1646)
[Link]
Maybe for people who just want to do a short action on a remote desktop, and don't care for consistent keyboard handling or working X selections. But for people like me who have X applications open from 5-10 remote systems all the time, network transparency on the application level (and not on the desktop level!) is one of THE reasons to use X.
Posted Jun 11, 2010 11:02 UTC (Fri)
by renox (guest, #23785)
[Link]
Can you elaborate?
Posted Jun 1, 2010 21:22 UTC (Tue)
by bboissin (subscriber, #29506)
[Link] (22 responses)
Posted Jun 1, 2010 23:03 UTC (Tue)
by drag (guest, #31333)
[Link] (19 responses)
For more traditional Linux environments, Red Hat has developed Wayland as an alternative to an X Window-based environment.
It depends heavily on the modern graphics infrastructure being built up (Gallium3D, kernel mode setting, etc.) for its acceleration. Unlike Xorg, it is designed from the ground up not to touch hardware directly, which should make things easier.
GTK has been ported to it. I think Qt has been as well.
Theoretically it can happily support running X11-based applications through composition. The idea is that you have an X server that draws entirely to an off-screen buffer, and then you bring in the application windows via desktop composition. A similar thing should be possible for Android's stuff.
That way you could still maintain backwards compatibility while gaining the benefits of a modern display design.
--------------------
I would not mind seeing an effort to modernize X by creating an X12 protocol.
I've read that the vast majority of X and its extensions are no longer used by most modern applications. So I figure you could massively simplify things and get X12 with only relatively minor changes needed for existing applications.
Then maybe they could add support for streaming video to remote systems using WebM, similar to how Red Hat's SPICE uses MJPEG to stream desktops from virtualized Windows environments.
Posted Jun 2, 2010 0:38 UTC (Wed)
by quartz (guest, #37351)
[Link] (15 responses)
Not to mention there is very little activity on it. Mind you, I'd love to have a simpler stack, but Wayland is pretty far away and slow to get there...
Posted Jun 2, 2010 6:21 UTC (Wed)
by drag (guest, #31333)
[Link] (14 responses)
As far as Wayland goes... there is really not much point in putting a huge amount of effort into something like that until the Linux graphics stack matures quite a bit.
We need to eliminate all direct access to your hardware from the X server, so that the X server is nothing more than software that redirects Linux graphics APIs and input to the applications. Maybe when we get to that point Wayland may not be needed, but it would be nice to get rid of the legacy crud that is hanging around "Linux on the desktop"'s neck.
Posted Jun 2, 2010 11:34 UTC (Wed)
by nix (subscriber, #2304)
[Link] (13 responses)
Posted Jun 2, 2010 15:22 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
It uses KMS and command submission infrastructure so it doesn't need to run as root and touch hardware directly.
If/when the KMS+Gallium3D layer matures enough, it should be fairly simple to move Qt/GTK directly onto this layer (they are less and less dependent on exotic X features, anyway) and run rootless X for compatibility with the old clients - like Apple did in the 2000s.
Posted Jun 2, 2010 17:04 UTC (Wed)
by nix (subscriber, #2304)
[Link] (9 responses)
To be honest I can't see any advantage whatsoever in having a direct-on-the-hardware Gtk/Qt port: you lose network transparency, which is useful, and gain a tiny bit of performance which is probably entirely imperceptible locally in any case.
Posted Jun 2, 2010 17:51 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
And there are better ways to create networked GUI applications now - VNC and SPICE. One can even use streaming video with SPICE, something which is impossible in pure networked X.
And X with KMS already can work as a simple user, without root access. It might happen in Ubuntu this year: https://blueprints.edge.launchpad.net/ubuntu/+spec/deskto... (Fedora might have it already, no idea)
Posted Jun 2, 2010 19:02 UTC (Wed)
by martinfick (subscriber, #4455)
[Link] (7 responses)
Wow, where is the logic in this statement? If KDE breaks it, it will be a great loss (more for KDE than for X). If another large project breaks some other standard, NFS, HTTP, TCP, would you simply conclude that it must therefore be "not a great loss"?
Everyone claiming that network transparency of X can currently be done in better ways is obviously NOT thinking about all the use cases that X supports.
If you are a single user with simple use cases, i.e. I want to see my home desktop from my laptop while at work, yes VNC is good. But if you have ever worked in a collaborative environment across many unixy machines, X is undeniably way more powerful than VNC.
If I simply want to log into a test machine and run one GUI app (perhaps a browser), remotely, without having an entire desktop setup, I can do that easily with X. I can have ten different apps on my desktop running across ten different remote hosts transparently. The fact that these apps might not eventually integrate perfectly with KDE or GNOME in the future is not going to kill X (and they would certainly integrate even less well with VNC). It will be a shame, but the desktops are not the end-all and be-all. X will survive just fine without their full support and will continue to be used in creative, powerful ways, even if some people can't envision the use cases that others have been (and will still be) using for decades.
Posted Jun 2, 2010 19:56 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
Is HTTP used by less than 1% of KDE users?
Also, what do you think about removal of Gopher support from FireFox?
"If you are a single user with simple use cases, i.e. I want to see my home desktop from my laptop while at work, yes VNC is good. But if you have ever worked in a collaborative environment across many unixy machines, X is undeniably way more powerful than VNC."
Yawn. I worked in a collaborative environment. X is still not superior to VNC.
"If I simply want to log into a test machine and run one GUI app (perhaps a browser), remotely, without having an entire desktop setup, I can do that easily with X. I can have ten different apps on my desktop running across ten different remote hosts transparently."
Yawn. So most people won't care about it. Apple's GUI is not network transparent, but I don't see people crying rivers about it. That's what is going to happen.
Also, you can certainly have only your application windows without complete desktop environment with VNC. VirtualBox does this, for example.
Posted Jun 2, 2010 20:27 UTC (Wed)
by sfeam (subscriber, #2841)
[Link] (3 responses)
Maybe all that yawning indicates you are too tired to think about how and why people still use this?
I agree strongly with martinfick. It is very nice to be able to ssh in to another machine, often a server not running a desktop at all, and fire up one X-based utility. Ditto for ssh access to an older or minimally configured machine that doesn't have enough memory to run vncserver.
And yes, it is a problem that Apple's GUI does not support this. It means, for example, that even though FileMaker running on an Apple server provides a nice http interface for user access, I still have to walk over to another lab in order to perform administrative tasks on it. I don't cry a river, but I'd switch to an X11+linux equivalent for FileMaker if I knew of one.
Posted Jun 2, 2010 21:49 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
And there are people who still use Gopher. Should we base all our browsers around a Gopher layer? Maybe, you know, by using big ASCII-art pictures transformed on the fly into 24-bit color images.
Because that's about the same way current X works.
"Ditto for ssh access to an older or minimally configured machine that doesn't have enough memory to run vncserver."
This excuse is quite thin. Vnc4server with a minimal DE takes about 4MB of RAM.
"And yes, it is a problem that Apple's GUI does not support this. It means, for example, that even though FileMaker running on an Apple server provides a nice http interface for user access, I still have to walk over to another lab in order to perform administrative tasks on it. I don't cry a river, but I'd switch to an X11+linux equivalent for FileMaker if I knew of one."
And I just use VNC for Apple. You know the one that Apple bundles with each copy of Mac OS X (http://www.wikihow.com/Setup-VNC-on-Mac-OS-X).
So maybe you should stop assuming that networked X is the only way to do networked graphics?
Posted Jun 8, 2010 14:38 UTC (Tue)
by jschrod (subscriber, #1646)
[Link]
But you think that all the world has to have the same needs as you, don't you?
Posted Jun 3, 2010 8:42 UTC (Thu)
by Janne (guest, #40891)
[Link]
Well, you could just use VNC, which works just fine on Macs. Apple even has a polished implementation of it available:
http://www.apple.com/remotedesktop/
But of course, standard free VNC would work as well, it just might not have all the bells and whistles of the Apple Remote Desktop.
And if you are running X11-aware UNIX-apps, then you could use X11 for MacOS, which is shipped with every copy of OS X....
Posted Jun 2, 2010 20:39 UTC (Wed)
by sbishop (guest, #33061)
[Link] (1 responses)
"Yawn. So most of people won't care about it. Apple's GUI is not network transparent, but I don't see people crying rivers about it. That's what is going to happen."
And if that's what you're looking for - a Unix GUI which isn't network transparent - then a solution, as you have already pointed out, is already available. I know from experience that X network transparency for a single app is extremely useful in a lab environment, where the rest of Linux is also useful. Inflexible operating systems which only cater to the average user are easy to find. I don't see any reason to throw out the very things which make Linux flexible and unique.
By the way, does anyone know how exactly KDE is supposed to be breaking this valuable feature of X? I don't follow KDE.
Posted Jun 2, 2010 21:17 UTC (Wed)
by drag (guest, #31333)
[Link]
Moving to OS X is not a good one because I really dislike Apple's corporate policies and I dislike how the Unix side of things and the Aqua side of things are two entirely different worlds. With Gnome on Linux I have full system integration from top to bottom.
But if I were to lose out on X network transparency in exchange for a more efficient system, less memory usage, and better-designed drivers (and the other benefits that brings), then it would be a happy trade-off.
There are a massive number of different ways to do remote GUI systems nowadays.
X is pretty good, but nobody ever fixed its deficiencies. Instead we have a shitload of hacks and workarounds at the toolkit level to make it usable.
Posted Jun 2, 2010 18:44 UTC (Wed)
by drag (guest, #31333)
[Link] (1 responses)
We are getting closer and closer, certainly.
What we will eventually need is a generic DDX that targets only "Linux" rather than individual video devices.
Graphics drivers are maturing: the open-source ATI and Nvidia drivers are progressing well and are in the process of providing mature Gallium drivers.
The open-source Intel driver developers do not seem to have received the 'Gallium religion' yet; they seem happy with their own UXA stuff happening in the X server. But I think they will get the religion rapidly once they realize that writing a new driver from the hardware on up for each API they want to support is a shitty idea.
--------------------------
For remote desktop/network transparency I envision something like SPICE, where you are able to stream video to client systems using modern compression techniques.
Maybe using WebM. Have the rendering happen on the remote machine; the buffers for the applications get encoded to VP8, and PulseAudio + GStreamer outputs the audio as Vorbis.
These get combined into a WebM stream that gets sent to the 'client' (your local machine).
Then all that is needed is a protocol for input control, where pointer, keyboard, and copy/paste-buffer input gets fed back to the remote machine in order to interact with the programs.
The client program to display the remote desktop then needs to be nothing more complicated than a web browser: the browser would render the WebM video, with Javascript capturing the input from the user and sending it back to the remote system.
With the built-in video acceleration of modern computer architectures, both the clients and the servers will be able to handle the compression/decompression with no sweat.
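The frames-out/input-back loop described above can be sketched abstractly. The encoder here is a placeholder tag, not a real VP8/WebM API, and the session object stands in for whatever transport (WebSocket, SPICE channel) would actually carry the data:

```python
# Sketch of the stream-out / input-back loop. encode_frame is a placeholder
# for real VP8 encoding; RemoteSession stands in for a real transport.

def encode_frame(pixels: bytes) -> bytes:
    # Placeholder: a real implementation would hand pixels to a VP8 encoder.
    return b"VP8:" + pixels

class RemoteSession:
    def __init__(self):
        self.outbound = []    # compressed frames headed to the client
        self.pointer = (0, 0) # last pointer position fed back by the client

    def render_and_send(self, pixels: bytes):
        # Server side: application buffers get encoded and streamed out.
        self.outbound.append(encode_frame(pixels))

    def handle_input(self, event: dict):
        # Client side: pointer/keyboard events come back over a side channel.
        if event["type"] == "pointer":
            self.pointer = (event["x"], event["y"])

session = RemoteSession()
session.render_and_send(b"frame0")
session.handle_input({"type": "pointer", "x": 10, "y": 20})
print(session.outbound, session.pointer)
```

The asymmetry is the design point: heavy pixel data flows one way as compressed video, while only tiny input events flow back.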
Posted Jun 3, 2010 16:24 UTC (Thu)
by Spudd86 (subscriber, #51683)
[Link]
Posted Jun 2, 2010 0:44 UTC (Wed)
by jreiser (subscriber, #11027)
[Link] (2 responses)
On my machine xdpyinfo shows 27 extensions [list follows] and over 20 of them are used intensively. Extensions:
BIG-REQUESTS
Composite
DAMAGE
DOUBLE-BUFFER
DPMS
DRI2
GLX
Generic Event Extension
MIT-SCREEN-SAVER
MIT-SHM
RANDR
RECORD
RENDER
SGI-GLX
SHAPE
SYNC
X-Resource
XC-MISC
XFIXES
XFree86-DGA
XFree86-DRI
XFree86-VidModeExtension
XINERAMA
XInputExtension
XKEYBOARD
XTEST
XVideo
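A list like the one above can be produced mechanically. A small sketch that extracts the extension names from `xdpyinfo`-style output; it parses a captured sample string here, since running the real tool needs a live display (the four-extension sample is invented for the example):

```python
# Extract extension names from xdpyinfo-style output. A captured sample is
# parsed here; on a real system you would run `xdpyinfo` against a live
# display and feed its stdout to the same function.

SAMPLE = """\
number of extensions:    4
    BIG-REQUESTS
    Composite
    DAMAGE
    RENDER
default screen number:    0
"""

def list_extensions(xdpyinfo_output: str):
    lines = xdpyinfo_output.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("number of extensions:"):
            count = int(line.split(":")[1])
            # The extension names follow as `count` indented lines.
            return [l.strip() for l in lines[i + 1 : i + 1 + count]]
    return []

print(list_extensions(SAMPLE))
```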
Posted Jun 2, 2010 2:42 UTC (Wed)
by jg (guest, #17537)
[Link]
Posted Jun 3, 2010 16:26 UTC (Thu)
by Spudd86 (subscriber, #51683)
[Link]
Posted Jun 2, 2010 3:11 UTC (Wed)
by eklitzke (subscriber, #36426)
[Link] (1 responses)
Based on this, my assumption is that the android display code probably isn't a viable replacement for the kind of things that xorg does, even if it is more efficient in some areas.
(disclaimer: I'm not an X11 or android hacker, so these are just my guesses)
Posted Jun 2, 2010 9:03 UTC (Wed)
by juanjux (guest, #11652)
[Link]
Not really; you receive a callback when the screen changes orientation, but you can explicitly ignore it and Android will resize your app just fine (if you don't have hard-coded sizes, of course). You can also provide alternative layouts in case you prefer to alter the disposition of widgets with different orientations, like hiding or showing some buttons instead of resizing the same set, but it's not required.
Posted Jun 2, 2010 18:21 UTC (Wed)
by did447 (guest, #49454)
[Link] (6 responses)
Posted Jun 2, 2010 21:47 UTC (Wed)
by daglwn (guest, #65432)
[Link] (5 responses)
The analogy is not correct. CVS had many deficiencies which could not be easily fixed. X has deficiencies but provides an extension mechanism to fill them. The primary reasoning behind most "dump X" comments I read ends up being "it's ugly." While that may be true and it may be worthwhile to clean up, it's not a reason to completely dump a system that's served us very well for over two decades. Old is not bad. Old is good, stable and useful. Most of the "dump X" people unfortunately don't seem to care that some of us actually use things like X's network transparency, for good reasons.
Posted Jun 3, 2010 6:02 UTC (Thu)
by jmorris42 (guest, #2203)
[Link] (4 responses)
And seem to have a lot of overlap with the set of Linux people who care not for UNIX and are hellbent on dumping everything that makes *NIX worth having.
1. The Network is the Computer. The Computer is the Network. SUN may be defunct but that idea is more than just a marketing slogan.... for some of us.
2. Everything is a file. Things like PulseAudio and the eternally changing *bus/*kit stuff tend to violate this one. Hint: if the primary interface is a library instead of a device, you are probably doing it wrong. Let's not bring over bad habits from Windows.
3. Small is beautiful. Lots of small programs are better than one big one.
4. X provides mechanism, not policy.
But worse still, the old UNIX books are all but useless on a 'modern' Linux and there are no replacements available; you have to scrounge around obscure wikis and blogs for docs that are either out of date or describe an even newer version than your distro shipped.
Posted Jun 3, 2010 6:40 UTC (Thu)
by intgr (subscriber, #39733)
[Link] (1 responses)
> Hint: if the primary interface is a library instead of a device you are
> probably doing it wrong.
That also includes X11.
> 3. Small is beautiful. Lots of small programs are better than one big one.
Again, you are proving the point. X11 is a bloated mess with lots of redundant functionality that nobody uses or needs anymore.
Posted Jun 6, 2010 20:45 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
>Again, you are proving the point. X11 is a bloated mess with lots of redundant functionality that nobody uses or needs anymore.
That WAS true of XFree86, which was project-managed by a bunch of Windows fans. It's not true of Xorg, except insofar as there is legacy code they haven't yet managed to rewrite out of the system.
It's a sad fact that X stagnated for years, precisely because the people in charge thought MS-Windows was a better solution. Fortunately the project threw the main developer out, because he didn't agree with them, so now we have the Xorg fork and X is motoring along nicely again.
Cheers,
Posted Jun 3, 2010 16:46 UTC (Thu)
by Spudd86 (subscriber, #51683)
[Link]
dbus is an IPC mechanism, that's all.
most of the *kit stuff is a small daemon doing one thing (replacing HAL, the huge daemon that did everything). It uses dbus because it's a good IPC mechanism that avoids re-implementing a bunch of stuff for each new thing and brings some nice features (like automatic activation - think inetd for IPC).
And if you're not interested in running a desktop environment, then all you need to look at is udev and a little bit of dbus (and you can leave dbus out if you really want to).
The other things are for desktop environments, where small programs that do one thing are not always best for good UI design; you have to put the splits in a different place (for one thing, pipes are no longer really an option for connecting your apps together - again, dbus comes in here).
dbus is basically a small daemon that does one thing well, and a small library for talking to it. It is an enabler for writing MORE small daemons that do one thing well, while at the same time avoiding the massive bloat of hundreds of small daemons by providing on-demand starting of them.
The modern Linux desktop is moving towards a GUI version of the UNIX philosophy of small programs doing one job well, as much as it possibly can, which is exactly where things like PulseAudio and *kit come in.
(pulse is biggish because it's solving a big problem; consolekit, polkit, udisks, etc. are all replacing some big monolithic things with lots of smaller programs - so what are you complaining about?)
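The "inetd for IPC" analogy can be sketched as a broker that maps well-known service names to launchers and starts a daemon only when the first message for it arrives. This toy broker illustrates the activation concept only; it is not the real D-Bus API:

```python
# Toy model of on-demand ("activation") service starting, the inetd-for-IPC
# idea: the broker launches a small daemon only when the first message for
# its well-known name arrives. Not the real D-Bus API, just the concept.

class Broker:
    def __init__(self):
        self.activatable = {}  # name -> factory that "starts" the service
        self.running = {}      # name -> live service handler

    def register(self, name, factory):
        # Like installing a .service activation file: nothing runs yet.
        self.activatable[name] = factory

    def send(self, name, message):
        if name not in self.running:                  # activation on demand
            self.running[name] = self.activatable[name]()
        return self.running[name](message)

broker = Broker()
broker.register("org.example.Disks", lambda: (lambda msg: f"disks: {msg}"))

# No daemon is running until the first message arrives:
assert not broker.running
print(broker.send("org.example.Disks", "list"))
```

This is how hundreds of small daemons can exist without hundreds of resident processes: only the ones actually messaged ever start.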
Posted Jun 6, 2010 15:58 UTC (Sun)
by ortalo (guest, #4654)
[Link]
Posted Jun 2, 2010 23:09 UTC (Wed)
by jonabbey (guest, #2736)
[Link] (1 responses)
X has come a very, very long way since then, both in the server space (thanks to XFree86 for carrying the torch and driving improvements during the 90's) and in toolkits, but it could certainly use a clean up and slim down of the protocol, with greater emphasis on asynchrony and maybe even some local server-side GUI event processing support, imho.
Of course, all of the projects that were structured in that way (NeWS, Fresco) seem to have fallen away, and everyone's efforts seem to be focused on optimizing the local desktop/phonetop experience, and using HTTP for everything network oriented.
Which I think is a shame. I'm one of those users who regularly uses the network portability of X for individual apps with great enthusiasm.
There's a whole lot of development under the bridge now, though, and it doesn't seem likely that there will ever be an X12.
Posted Jun 2, 2010 23:19 UTC (Wed)
by jonabbey (guest, #2736)
[Link]
Posted Jun 3, 2010 19:57 UTC (Thu)
by cdmiller (guest, #2813)
[Link] (1 responses)
Posted Jun 8, 2010 17:52 UTC (Tue)
by nix (subscriber, #2304)
[Link]
For example, it's not possible to have tiled frontbuffer - because all of the code in X.org has to be rewritten. And X.org is LARGE.
This will probably come as a huge surprise to the >95% of desktop X users (all Intel, most AMD, all NVIDIA beyond G80) who have a tiled frontbuffer.
The input thread needs to read the display state to pass the events to the correct applications, so there's also a kind of locking which must be done here: wouldn't this create the same issue as before?
IMHO, there should be an IPC provided by the OS which would allow a process to say: deliver this message to this other process and run it as a part of my 'runtime quota'.
This wouldn't allow CPU hogging, and would provide lower latency; but note that for this to truly work, you still need the shared server to work on the message provided by the client who gives the 'scheduling time', and not on something else.
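The proposal above can be sketched as a scheduler where handling a message debits the sender's runtime quota rather than the server's. This is entirely hypothetical - no mainstream kernel exposes exactly this interface:

```python
# Sketch of the quota-charging IPC idea: the server's work on a message is
# charged to the *sender's* CPU quota, so a shared server cannot be used to
# hog the CPU. Hypothetical; no mainstream kernel offers this interface.

class QuotaScheduler:
    def __init__(self):
        self.quota = {}  # process name -> remaining runtime units

    def deliver(self, sender: str, cost: int, handler):
        # The server runs `handler`, but the cost comes out of the sender.
        if self.quota.get(sender, 0) < cost:
            raise RuntimeError(f"{sender} is out of quota")
        self.quota[sender] -= cost
        return handler()

sched = QuotaScheduler()
sched.quota["client_a"] = 10

result = sched.deliver("client_a", cost=4, handler=lambda: "rendered")
print(result, sched.quota["client_a"])
```

The caveat in the comment is exactly the one raised in the thread: this only works if the server really spends the charged time on that client's message.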
On a fast machine - sure.
Windows 95 could be started on a computer with 3MB of RAM (the official minimum requirement was 4MB) and ran just fine on 8MB. Linux with X struggled to start xterm in that configuration.
We need to eliminate all direct access from the X server to your hardware
Isn't KMS very nearly there already? The only thing it needs root for now is to ensure that nobody else is snooping on your input devices.
You are probably right, but maybe hope is not so far off... I really enjoyed spending the last decade reading something like LWN.net - I do not miss so much the old Unix books, which were becoming outdated at an incredibly fast pace (and were so expensive, btw).