Posted Feb 12, 2013 14:20 UTC (Tue) by marduk (subscriber, #3831)
It does for specific use cases. If you are a developer working on a 3D first-person shooter in emacs, then it's probably fine. If you then want to *play* that game, then I think shared memory is not enough. Moreover, what if you had one of those hybrid laptops and wanted to switch from the Intel GPU to the Nvidia one mid-play (that may be possible now, but it sounds like maybe not)? If/when the server can talk directly to the GPU *and* hot-plug GPUs, then it becomes easier to do things like this (easier, albeit not easy) than if there is an intermediate server that can at best share memory with the server that owns the hardware.
The question though is what to do when there is no GPU attached and the clients are still running... perhaps you would need a virtual GPU layer or something like that.
Anyway that was my "dream". I realize that there are some solutions out there that kinda do what my dream is, but usually they do so inefficiently or require patches/workarounds for non-trivial use cases.
Posted Feb 12, 2013 16:20 UTC (Tue) by drag (subscriber, #31333)
Instead of the server doing the rendering, all the rendering is handled by the clients. The clients themselves talk to the GPU via whatever API they feel like using and then render their output to an off-screen buffer.
Then all the 'display server' has to do at this point, besides the grunt work of things like handling input and modesetting, is flip the client buffers and composite the display.
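The flip-and-composite flow described above can be sketched as a toy model. All names here are invented for illustration, and byte arrays stand in for what would really be GPU surfaces:

```python
# Toy model: each client renders into its own off-screen buffer, and the
# "display server" only has to blit (composite) those finished buffers
# into the final display image.

WIDTH, HEIGHT = 8, 4  # a tiny "display" for the sketch

def blank(w, h, fill=0):
    return [[fill] * w for _ in range(h)]

class Client:
    """A client that has already rendered into its own off-screen buffer."""
    def __init__(self, w, h, x, y, pixel):
        self.buffer = blank(w, h, pixel)  # the client-side rendering result
        self.x, self.y = x, y             # where the compositor places it

def composite(clients, w, h):
    """The display server's whole job here: combine finished client buffers."""
    display = blank(w, h)
    for c in clients:
        for row, line in enumerate(c.buffer):
            for col, px in enumerate(line):
                display[c.y + row][c.x + col] = px
    return display

clients = [Client(4, 2, 0, 0, 1), Client(3, 2, 4, 2, 2)]
frame = composite(clients, WIDTH, HEIGHT)
```

The point of the sketch is only that the server never renders application content itself; it just places buffers the clients produced.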
With a little bit of effort, it should then be possible to kill the 'display server' and have another 'display server' grab the client application's output buffer.
Extending the idea further, it should then be possible to take the buffer the client application is rendering to, perform some sort of efficient streaming image compression, and send that output over the network.
It may need slightly more bandwidth than traditional X, but it would still benefit from accelerated rendering (especially if your image compression can be accelerated) and would be virtually unaffected by latency... which, with X, is its Achilles' heel.
You could then have a single application on your desktop and then seamlessly transfer it to your phone or anything else running on any network. No special effort needed on the side of the application developer and you still get acceleration and all that happy stuff.
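A minimal sketch of the compress-and-send idea, using zlib as a stand-in for the "efficient streaming image compression" above (a real remoting layer would use a video codec and send only damaged regions, not whole frames; `encode_frame`/`decode_frame` are invented names):

```python
import zlib

def encode_frame(raw_pixels: bytes) -> bytes:
    # Compress a finished client buffer before putting it on the wire.
    return zlib.compress(raw_pixels)

def decode_frame(wire_data: bytes) -> bytes:
    # The remote end reconstructs the exact pixels and composites them locally.
    return zlib.decompress(wire_data)

# A flat 64x64 RGBA "frame": mostly uniform, so it compresses well.
frame = bytes([200] * 64 * 64 * 4)
wire = encode_frame(frame)
```

Because the application only ever renders to its local buffer, nothing in this scheme requires the application developer's cooperation; the transport sits entirely between the buffer and the remote compositor.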
All theoretical at this point, of course.
/end wayland troll
Posted Feb 12, 2013 18:58 UTC (Tue) by dlang (✭ supporter ✭, #313)
Posted Feb 12, 2013 20:42 UTC (Tue) by drag (subscriber, #31333)
Is it going to be a big win for you to distribute the rendering of the applications?
It seems to me that the sort of applications where the bulk of the resources goes into the final rendering (video games, video playback, and the like) are exactly the sort of things that X11 is terrible at, and that rendering is all done by direct rendering when possible.
Posted Feb 21, 2013 11:34 UTC (Thu) by mmarq (guest, #2332)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds