LCA: The ways of Wayland
Posted Mar 17, 2013 10:15 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
In reply to: LCA: The ways of Wayland by Serge
Parent article: LCA: The ways of Wayland
Something that doesn't really exist. Modern video cards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders, but as far as I know, no driver in X.org does this.
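To make that concrete, here is a minimal sketch (illustrative only, not code from any X.org driver) of the kind of per-pixel work the software fallback has to do for an antialiased thick line: compute each pixel's distance to the segment and turn it into a coverage value. A shader implementation would evaluate the same distance function per fragment on the GPU instead.

```c
/* Illustrative sketch (not actual X.org code): the per-pixel work a
 * software fallback must do for an antialiased thick line.  Coverage
 * is derived from each pixel's distance to the line segment. */
#include <math.h>
#include <stdint.h>

/* Distance from point (px,py) to segment (x0,y0)-(x1,y1). */
static double seg_dist(double px, double py,
                       double x0, double y0, double x1, double y1)
{
    double dx = x1 - x0, dy = y1 - y0;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((px - x0) * dx + (py - y0) * dy) / len2 : 0;
    if (t < 0) t = 0;
    if (t > 1) t = 1;
    return hypot(px - (x0 + t * dx), py - (y0 + t * dy));
}

/* Blend 8-bit coverage for a thick AA line into a grayscale buffer. */
void draw_thick_line_aa(uint8_t *fb, int stride, int w, int h,
                        double x0, double y0, double x1, double y1,
                        double width)
{
    double half = width / 2.0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double d = seg_dist(x + 0.5, y + 0.5, x0, y0, x1, y1);
            double cov = half + 0.5 - d;   /* ~one-pixel-wide AA ramp */
            if (cov <= 0) continue;
            if (cov > 1) cov = 1;
            uint8_t a = (uint8_t)(cov * 255.0);
            if (a > fb[y * stride + x]) fb[y * stride + x] = a;
        }
    }
}
```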
> First, as long as you have a single video adapter, having a single thread to work with it is perfectly fine. Second, as long as Xorg uses less than 100% CPU, yes, single-threading is the best design, because it makes the X server faster.
X.org can use 100% of a CPU in graphics-intensive apps.
> Third, that's not a limitation of X11 protocol, you know?
It is. There is exactly ONE implementation of X protocol with XRender extension.
> And finally, Wayland/Weston is also single-threaded.
Which is totally fine, because Wayland/Weston do not care about client rendering (unlike X). Moreover, Wayland and Weston have a well-thought-out architecture, so clients can happily render the next frame while the compositor is processing the request to present the current one.
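This is the standard frame-callback pattern in the Wayland client API. A minimal sketch, assuming the surface and its buffers are already set up; draw_next_frame() is a hypothetical placeholder for the client's own renderer:

```c
/* Sketch of the wl_surface frame-callback pattern: the compositor's
 * "done" event tells the client a good time to draw the next frame,
 * so client rendering proceeds in parallel with compositing. */
#include <wayland-client.h>

extern void draw_next_frame(struct wl_surface *surface); /* hypothetical */

static void frame_done(void *data, struct wl_callback *cb, uint32_t time);

static const struct wl_callback_listener frame_listener = {
    .done = frame_done,
};

static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
    struct wl_surface *surface = data;

    wl_callback_destroy(cb);

    /* Ask to be notified again for the frame after this one. */
    struct wl_callback *next = wl_surface_frame(surface);
    wl_callback_add_listener(next, &frame_listener, surface);

    /* Render into a free buffer while the compositor presents the
     * previously committed one, then attach and commit. */
    draw_next_frame(surface);
    wl_surface_commit(surface);
}
```

To start the loop, the client requests the first callback with wl_surface_frame() and commits; from then on the compositor paces the client, and the client draws each new frame while the previous one is being composited.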
> The rest of your reasons were just childish excuses like:
If you haven't noticed, I've just used your logic for these reasons.
Posted Mar 17, 2013 13:24 UTC (Sun)
by Serge (guest, #84957)
[Link] (6 responses)
> Something that doesn't really exist. Modern video cards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders, but as far as I know, no driver in X.org does this.
Many (most?) Xorg drivers use shaders. And that's not a problem of the X11 protocol, not even an Xorg problem; it's a problem of driver implementation quality. Drivers are the best place to implement hardware-specific optimisations; that's much better than having every application carry all the possible hacks for all the video adapters in the world. Unfortunately, you don't have that option in Wayland.
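The division of labour being described looks roughly like this. A hypothetical sketch, in the spirit of X.org's EXA-style driver hooks but not their actual API: the device-independent server calls a driver hook when one exists and falls back to portable software rendering when it doesn't.

```c
/* Hypothetical sketch (not the real EXA/glamor API): hardware-specific
 * fast paths live in the driver; the device-independent server falls
 * back to software when a hook is missing or declines the operation. */
#include <stdbool.h>

struct pixmap;  /* opaque; the details don't matter for the sketch */

struct accel_hooks {
    /* Return false to decline; the server then uses software. */
    bool (*fill_rect)(struct pixmap *dst, int x, int y, int w, int h,
                      unsigned color);
    bool (*composite)(struct pixmap *dst, struct pixmap *src,
                      int sx, int sy, int dx, int dy, int w, int h);
};

void sw_fill_rect(struct pixmap *dst, int x, int y, int w, int h,
                  unsigned color);  /* portable software fallback, elided */

void server_fill_rect(const struct accel_hooks *hw, struct pixmap *dst,
                      int x, int y, int w, int h, unsigned color)
{
    if (hw && hw->fill_rect &&
        hw->fill_rect(dst, x, y, w, h, color))
        return;                            /* driver accelerated it */
    sw_fill_rect(dst, x, y, w, h, color);  /* software fallback */
}
```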
> X.org can use 100% of CPU in graphics-intensive apps.
Sure. If it were multithreaded it could use 1600% CPU. But in the real world it does not, so the single-threaded design is better here. "Do not add new functionality unless you know of some real application that will require it."
> It is. There is exactly ONE implementation of X protocol with XRender extension.
Really? XFree86, Xorg, Kdrive, Xwin32; there are even VNC servers supporting it.
> clients can happily render the next frame while Wayland is processing the request to render the current frame.
You can use Xorg this way too. The difference is that you CAN do that on Xorg, but you HAVE TO do that on Wayland.
> If you haven't noticed, I've just used your logic for these reasons.
I haven't. My logic is simple: Wayland adds more work for people, lacks lots of features, and fixes no X11 problems; nobody wins from Wayland.
Posted Mar 17, 2013 18:09 UTC (Sun)
by jrn (subscriber, #64214)
[Link] (5 responses)
Clearly its developers win, or they wouldn't be developing it. ;-) It looks like a neat project and I am happy to see it move forward.
Posted Mar 18, 2013 10:25 UTC (Mon)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted Mar 18, 2013 11:40 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (1 responses)
Possibly the people who are really knowledgeable about X11 and Wayland and the state and roadmaps of the respective projects have more important things to do with their time than to engage in pointless back-and-forth on random web sites. It's not as if Serge was a much better debater than Cyberax, either.
Posted Mar 18, 2013 21:43 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Mar 18, 2013 12:10 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (1 responses)
this coming from you is more than ironic ;). pot meet kettle!
Posted Mar 18, 2013 21:44 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Mar 18, 2013 10:24 UTC (Mon)
by nix (subscriber, #2304)
[Link] (1 responses)
> It is. There is exactly ONE implementation of X protocol with XRender extension.

TBH, while it would be nice for my BARTS Radeon card here to have XRender acceleration as good as my old Matrox had, it's also pointless: the card is so fast that, except in its lowest-power mode, a full-screen refresh of two 1680x1050 monitors filled with tiny text composited onto a bitmap backdrop is instantaneous (I clocked it at 35fps in medium-performance mode, 60fps in high). Visible latencies happen only if the glyph cache fills up -- which hardly ever happens these days; I notice it once every few months -- and that's a latency imposed by the sloth of moving data into the GPU, which can hardly be fixed by ditching XRender's glyph caching and moving even more data across the bus.

You appear to be saying that XRender should be killed with fire because it can't be accelerated, even though I would require a prosthetic visual cortex to see the results of any such acceleration, and even though the only thing you've recommended replacing it with would exacerbate the only thing left in XRender that ever causes visible delays in my experience. Thanks, but no thanks.
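For the record, the glyph-cache behaviour in question: with XRender, glyph images cross the bus once, in XRenderAddGlyphs(); every subsequent draw sends only glyph indices. A minimal sketch, assuming dpy and the src/dst Pictures are set up elsewhere:

```c
/* Sketch of XRender's server-side glyph cache: pixel data is uploaded
 * once; later draws reference cached glyph IDs instead of pixels. */
#include <X11/extensions/Xrender.h>

void cache_and_draw_glyph(Display *dpy, Picture src, Picture dst)
{
    GlyphSet gs = XRenderCreateGlyphSet(
        dpy, XRenderFindStandardFormat(dpy, PictStandardA8));

    /* One 8x8 alpha-only glyph image; real code would fill in actual
     * coverage bytes.  This upload happens once per glyph. */
    static const char image[8 * 8] = { 0 };
    Glyph gid = 1;
    XGlyphInfo info = {
        .width = 8, .height = 8, .x = 0, .y = 8, .xOff = 9, .yOff = 0,
    };
    XRenderAddGlyphs(dpy, gs, &gid, &info, 1, image, sizeof image);

    /* Drawing afterwards sends only glyph indices, not pixels. */
    char text[] = { 1 };
    XGlyphElt8 elt = {
        .glyphset = gs, .chars = text, .nchars = 1, .xOff = 50, .yOff = 50,
    };
    XRenderCompositeText8(dpy, PictOpOver, src, dst, NULL,
                          gs, 0, 0, 0, 0, &elt, 1);
}
```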
Posted Mar 18, 2013 22:06 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
No, but it's the exception that proves the rule: XRender isn't a good fit for how GPUs work.
> Something that doesn't really exist. Modern video cards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders, but as far as I know, no driver in X.org does this.
There's a reason for that. It's pointless. Almost no applications draw thick lines anymore, just as almost no applications use the ROPpery in the core protocol.
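For anyone who hasn't seen it, the core-protocol machinery in question looks like this; a minimal Xlib sketch, assuming an open Display and a mapped Window:

```c
/* The core-protocol "ROPpery" and thick lines in question: a GC with a
 * raster op (here GXxor) and a wide line, rasterized by the server.
 * Almost no modern toolkit does this. */
#include <X11/Xlib.h>

void core_rop_thick_line(Display *dpy, Window win)
{
    GC gc = XCreateGC(dpy, win, 0, NULL);

    XSetFunction(dpy, gc, GXxor);          /* raster op from 1987 */
    XSetForeground(dpy, gc, 0xffffff);
    XSetLineAttributes(dpy, gc, 10,        /* 10-pixel-thick line */
                       LineSolid, CapRound, JoinRound);

    /* The server rasterizes this; the core protocol has no AA. */
    XDrawLine(dpy, win, gc, 10, 10, 200, 120);
    XFreeGC(dpy, gc);
}
```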
> Third, that's not a limitation of X11 protocol, you know?
You clearly didn't understand Serge's point. The protocol does not mandate a lack of acceleration: indeed, in the past, at least one driver has implemented it. So the current ONE implementation does not make it impossible to accelerate XRender. (In fact, one of the problems with XRender is that it needs too much support code inside the drivers. If there really were just ONE implementation, that would actually be better than what we have now.)