LCA: The ways of Wayland
Posted Mar 14, 2013 5:37 UTC (Thu)
by Serge (guest, #84957)
In reply to: LCA: The ways of Wayland by Cyberax
Parent article: LCA: The ways of Wayland
Quite the opposite: XRender is good for complicated things, and as far as I remember its support in drivers is rather good too. XRender is only bad for drawing custom images pixel by pixel.
Posted Mar 14, 2013 17:24 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (29 responses)
Besides, XRender is not that great even at the best of times. Its support for gradients is weak, line drawing is mediocre, there is no support for text rendering, etc.
Posted Mar 14, 2013 18:56 UTC (Thu)
by dlang (guest, #313)
[Link] (1 responses)
People keep saying that, but then there are articles like this one:
2D Support Still Coming To NVIDIA's Open Tegra
http://www.phoronix.com/scan.php?page=news_item&px=MT...
Besides, if nothing else, isn't 2D acceleration a subset of 3D (with Z=0 in all cases)?
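To make the "Z=0" idea concrete, here is a minimal sketch in legacy OpenGL (window/context setup is assumed to exist elsewhere) that draws a flat rectangle through the 3D pipeline:

    /* Draw a "2D" filled rectangle with the 3D pipeline: orthographic
     * projection, every vertex at z = 0. Assumes a current GL context. */
    #include <GL/gl.h>

    void draw_2d_rect(int win_w, int win_h)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, win_w, win_h, 0, -1, 1);  /* y grows downward, as in 2D APIs */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glColor3f(0.2f, 0.4f, 0.8f);
        glBegin(GL_QUADS);                   /* one quad, all vertices at z = 0 */
        glVertex3f(10.0f, 10.0f, 0.0f);
        glVertex3f(110.0f, 10.0f, 0.0f);
        glVertex3f(110.0f, 60.0f, 0.0f);
        glVertex3f(10.0f, 60.0f, 0.0f);
        glEnd();
    }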
Posted Mar 14, 2013 20:19 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Posted Mar 15, 2013 7:14 UTC (Fri)
by Serge (guest, #84957)
[Link] (26 responses)
Well, it actually depends on the particular chip, but OK, let's say that they ALSO have 2D acceleration.
> Recent Radeons simply use OpenGL to do 2D drawing
Radeon is not the best example. Simple CPU-based software 2D rendering there is often faster than doing that through driver/hardware.
> (GLAMOUR layer).
Ehm. Glamour is an option, and it's not the best option. Its idea is to make driver support easier by making things simpler AND SLOWER, because you can't do many hardware-specific optimisations with it any more. That's why Intel's SNA acceleration rocks. :)
> And XRender is focused on trapezoids instead of triangles, which makes it non-trivial to implement. So it works quite poorly.
Hm... XRender does not stop you from drawing triangles, but... do you know that images are usually rectangular, not triangular? How do you expect to put a rectangular image on a triangle?
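For reference, a minimal libXrender sketch of the trapezoid path being discussed: an axis-aligned rectangle is a single trapezoid whose left and right edges are vertical lines. The Display and Pictures are assumed to be created elsewhere.

    /* Sketch: fill the rectangle (x, y, w, h) on `dst` with `src` using the
     * Render trapezoid request. Assumes dpy/src/dst exist already. */
    #include <X11/extensions/Xrender.h>

    void fill_rect_as_trapezoid(Display *dpy, Picture src, Picture dst,
                                int x, int y, int w, int h)
    {
        XRenderPictFormat *maskfmt =
            XRenderFindStandardFormat(dpy, PictStandardA8);
        XTrapezoid t;

        t.top    = XDoubleToFixed(y);
        t.bottom = XDoubleToFixed(y + h);
        t.left.p1.x = t.left.p2.x = XDoubleToFixed(x);      /* vertical edge */
        t.left.p1.y = XDoubleToFixed(y);
        t.left.p2.y = XDoubleToFixed(y + h);
        t.right.p1.x = t.right.p2.x = XDoubleToFixed(x + w); /* vertical edge */
        t.right.p1.y = XDoubleToFixed(y);
        t.right.p2.y = XDoubleToFixed(y + h);

        XRenderCompositeTrapezoids(dpy, PictOpOver, src, dst, maskfmt,
                                   0, 0, &t, 1);
    }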
> Besides, XRender is not that great even at the best of times.
XRender is a protocol. Or are you talking about its support in drivers?
> Its support for gradients is weak, line drawing is mediocre, there is no support for text rendering, etc.
XRender, as a protocol, has great support for gradients, line drawing and text rendering. As for its hardware acceleration in drivers, that depends on the driver, of course.
That's the point of XRender! You implement XRender hardware acceleration in a driver just once; each driver can have a different implementation, optimized for that particular piece of hardware. The software just uses XRender and does not care which hardware it runs on.
Without XRender you have to implement hardware-specific hacks in EVERY PROGRAM in the world, or actually in every toolkit. That's what Wayland wants you to do. Otherwise your program may be even slower than software rendering.
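As a concrete illustration of that protocol-level support, a minimal libXrender sketch that creates a linear-gradient Picture (the Display is assumed to exist; text goes through glyph sets and XRenderCompositeString8() in the same spirit):

    /* Sketch: a two-stop linear gradient as a Render Picture, usable as a
     * source for XRenderComposite() or trapezoid fills. Assumes `dpy` exists. */
    #include <X11/extensions/Xrender.h>

    Picture make_vertical_gradient(Display *dpy, int height)
    {
        XLinearGradient grad;
        XFixed stops[2];
        XRenderColor colors[2];

        grad.p1.x = grad.p1.y = XDoubleToFixed(0);
        grad.p2.x = XDoubleToFixed(0);
        grad.p2.y = XDoubleToFixed(height);

        stops[0] = XDoubleToFixed(0.0);   /* red at the top ...     */
        stops[1] = XDoubleToFixed(1.0);   /* ... blue at the bottom */
        colors[0] = (XRenderColor){ 0xffff, 0x0000, 0x0000, 0xffff };
        colors[1] = (XRenderColor){ 0x0000, 0x0000, 0xffff, 0xffff };

        return XRenderCreateLinearGradient(dpy, &grad, stops, colors, 2);
    }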
Posted Mar 15, 2013 16:21 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (25 responses)
Nope. Almost all of the embedded chips and most of desktop GPUs don't have any 2D XRender acceleration. At most, they have accelerated compositing.
> Radeon is not the best example. Simple CPU-based software 2D rendering there is often faster than doing that through driver/hardware.
Yup. Especially since the X server is single-threaded and complicated XRender scenes can easily swamp it.
> Ehm. Glamour is an option, and it's not the best option.
For new Radeons - it's the only one. They simply don't have any 2D acceleration.
> XRender, as a protocol, has great support for gradients, line drawing and text rendering.
No it doesn't.
> Without XRender you have to implement hardware-specific hacks in EVERY PROGRAM in the world, or actually in every toolkit. That's what Wayland wants you to do. Otherwise your program may be even slower than software rendering.
And with XRender you have to pray and make burnt offerings to videocard gods to make sure that your application even runs.
Applications can use software rendering, which is almost always faster than XRender, especially with multi-threaded rasterizers. And in the future, just stick to EGL or OpenGL 3: they provide guaranteed hardware acceleration.
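For reference, a minimal EGL bring-up sketch, using standard EGL 1.x calls with error handling mostly omitted:

    /* Sketch: the EGL entry point an application would target instead of
     * XRender. Standard EGL 1.x; error handling omitted for brevity. */
    #include <EGL/egl.h>

    int egl_bringup(void)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major, minor;

        if (eglInitialize(dpy, &major, &minor) == EGL_FALSE)
            return -1;

        /* Pick any GLES2-capable config; real code would be pickier. */
        static const EGLint attrs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n;
        eglChooseConfig(dpy, attrs, &cfg, 1, &n);

        static const EGLint ctx_attrs[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL_NONE
        };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
        return ctx == EGL_NO_CONTEXT ? -1 : 0;
    }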
Posted Mar 16, 2013 7:52 UTC (Sat)
by Serge (guest, #84957)
[Link] (13 responses)
They have 2D XRender acceleration if they can be used to accelerate XRender drawing compared to CPU-based software rendering.
...
Ok. I can see that you don't know the things you talk about. But I don't understand what you are really trying to say any more. This has nothing to do with the Mir protocol, the Wayland protocol, the X11 protocol, their problems or limitations. Then what's this all about? Are you trolling? Or do you just like talking to me? ;-)
Posted Mar 16, 2013 9:21 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (12 responses)
> They have 2D XRender acceleration if they can be used to accelerate XRender drawing compared to CPU-based software rendering.
No. They. Don't.
New Radeons do not even HAVE specialized 2D units. Ditto for ALL embedded cards (from Poulsbo to Mali and Reno) - they sometimes have accelerated overlays, but never 2D drawing acceleration.
Besides, you've missed the part about threading - XRender in X.org works in a single thread for ALL clients. Do you think it's a good design?
> Ok. I can see that you don't know the things you talk about. But I don't understand what you are really trying to say any more. This has nothing to do with the Mir protocol, the Wayland protocol, the X11 protocol, their problems or limitations. Then what's this all about? Are you trolling? Or do you just like talking to me? ;-)
I think that X should be killed with fire as soon as possible. It was THE greatest stumbling block for Linux users for a long, long time (XF86Config - ugh).
Posted Mar 17, 2013 9:51 UTC (Sun)
by Serge (guest, #84957)
[Link] (11 responses)
That was supposed to be a definition of "acceleration".
> No. They. Don't.
Obviously you have your own understanding of the word "acceleration" too. Can you explain what you would call "accelerated XRender", then?
> Besides, you've missed the part about threading - XRender in X.org works in a single thread for ALL clients. Do you think it's a good design?
First, as long as you have a single video adapter, having a single thread to work with it is perfectly fine. Second, as long as Xorg uses less than 100% CPU, yes, single-threading is the best design, because it makes the X server faster. Third, that's not a limitation of the X11 protocol, you know? And finally, Wayland/Weston is also single-threaded.
> I think that X should be killed with fire as soon as possible. It was THE greatest stumbling block for Linux users for a long, long time (XF86Config - ugh).
Yeah, I understand that, but I don't understand your reasons. If the problem was Xorg bugs, then the best option would be to fix Xorg or write another implementation of the X11 protocol (there are many of them). Since you prefer Wayland instead, there should be some fundamental limitation of the X11 protocol itself that can't be fixed by a different implementation.
You should be saying where Wayland is better; instead you're mainly saying what you don't like (or rather don't know) about X. Every time you said that Wayland has some feature while X has not, it turned out that X actually has it but you didn't know that. Or worse, that X has it while Wayland has not, or that it's useless there. The rest of your reasons were just childish excuses like: "Look at the Wayland code! 'd->gbm'. Do you think that calling a variable 'd' is a good design? Wayland should be killed with fire!"
If all your reasons were wrong, then why do you still hate X? Has one of the X devs stepped on your foot?
Posted Mar 17, 2013 10:15 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
> Can you explain what you would call "accelerated XRender", then?
Something that doesn't really exist. Modern videocards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders but as far as I know, no driver in X.org does this.
> First, as long as you have a single video adapter, having a single thread to work with it is perfectly fine. Second, as long as Xorg uses less than 100% CPU, yes, single-threading is the best design, because it makes the X server faster.
X.org can use 100% of CPU in graphics-intensive apps.
> Third, that's not a limitation of the X11 protocol, you know?
It is. There is exactly ONE implementation of the X protocol with the XRender extension.
> And finally, Wayland/Weston is also single-threaded.
Which is totally fine, because Wayland/Weston does not care about client rendering (unlike X). Moreover, Wayland and Weston have a thought-out architecture, so clients can happily render the next frame while Wayland is processing the request to render the current frame.
> The rest of your reasons were just childish excuses like:
If you haven't noticed, I've just used your logic for these reasons.
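That next-frame pattern is the standard wl_surface frame callback; a minimal wayland-client sketch, assuming the surface was created elsewhere:

    /* Sketch: the Wayland frame-callback loop. The compositor fires the
     * callback when it is a good time to draw the next frame, so the client
     * never busy-waits on the server. Assumes `surface` exists already. */
    #include <wayland-client.h>

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms);

    static const struct wl_callback_listener frame_listener = {
        .done = frame_done,
    };

    static void schedule_frame(struct wl_surface *surface)
    {
        struct wl_callback *cb = wl_surface_frame(surface);
        wl_callback_add_listener(cb, &frame_listener, surface);
        wl_surface_commit(surface);
    }

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
    {
        struct wl_surface *surface = data;

        wl_callback_destroy(cb);
        /* ... draw the next frame into a buffer, wl_surface_attach() it ... */
        schedule_frame(surface);          /* ask to be called again */
    }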
Posted Mar 17, 2013 13:24 UTC (Sun)
by Serge (guest, #84957)
[Link] (6 responses)
> Something that doesn't really exist. Modern videocards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders but as far as I know, no driver in X.org does this.
Many (most?) Xorg drivers use shaders. And that's not a problem of the X11 protocol, not even an Xorg problem; it's a matter of good driver implementation. Drivers are the best place to implement hardware-specific optimisations. That's much better than having every program carry all the possible hacks for all the video adapters in the world. Unfortunately, you don't have that option in Wayland.
> X.org can use 100% of CPU in graphics-intensive apps.
Sure. If it was multithreaded it could use 1600% CPU. But in the real world it does not. So the single-threaded design is better here. "Do not add new functionality unless you know of some real application that will require it."
> It is. There is exactly ONE implementation of the X protocol with the XRender extension.
Really? XFree86, Xorg, Kdrive, Xwin32; there are even VNC servers supporting it.
> clients can happily render the next frame while Wayland is processing the request to render the current frame.
You can use Xorg this way too. The difference is that you CAN do that on Xorg, but you HAVE TO do that on Wayland.
> If you haven't noticed, I've just used your logic for these reasons.
I haven't. My logic is simple: Wayland adds more work for people, lacks lots of features, fixes no X11 problems, and nobody wins from Wayland.
Posted Mar 17, 2013 18:09 UTC (Sun)
by jrn (subscriber, #64214)
[Link] (5 responses)
Clearly its developers win, or they wouldn't be developing it. ;-) It looks like a neat project and I am happy to see it move forward.
Posted Mar 18, 2013 10:25 UTC (Mon)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted Mar 18, 2013 11:40 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (1 responses)
Possibly the people who are really knowledgeable about X11 and Wayland and the state and roadmaps of the respective projects have more important things to do with their time than to engage in pointless back-and-forth on random web sites. It's not as if Serge was a much better debater than Cyberax, either.
Posted Mar 18, 2013 21:43 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Mar 18, 2013 12:10 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (1 responses)
This coming from you is more than ironic ;). Pot, meet kettle!
Posted Mar 18, 2013 21:44 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Mar 18, 2013 10:24 UTC (Mon)
by nix (subscriber, #2304)
[Link] (1 responses)
> Something that doesn't really exist. Modern videocards don't have any intrinsic way to accelerate drawing of thick lines (especially with antialiasing), so the X server has to do it in software. Of course, thick lines and everything else can be emulated in shaders but as far as I know, no driver in X.org does this.
There's a reason for that. It's pointless. Almost no applications draw thick lines anymore, just as almost no applications use the ROPpery in the core protocol.
TBH, while it would be nice for my BARTS Radeon card here to have XRender acceleration as good as my old Matrox had, it's also pointless: the card is so fast that, except in its lowest-power mode, a full-screen refresh of two 1680x1050 monitors filled with tiny text composited onto a bitmap backdrop is instantaneous (I clocked it at 35fps in medium-performance mode, 60fps in high). Visible latencies happen only if the glyph cache fills up -- which hardly ever happens these days; I notice it once every few months -- and that's a latency imposed by the sloth of moving data into the GPU, which can hardly be fixed by ditching XRender's glyph caching and moving even more data across the bus.
You appear to be saying that XRender should be killed with fire because it can't be accelerated, even though I would require a prosthetic visual cortex to see the results of any such acceleration, and even though the only thing you've recommended replacing it with would exacerbate the only thing left in XRender that ever causes visible delays in my experience. Thanks, but no thanks.
> Third, that's not a limitation of the X11 protocol, you know?
> It is. There is exactly ONE implementation of the X protocol with the XRender extension.
You clearly didn't understand Serge's point. The protocol does not mandate a lack of acceleration: indeed, in the past, at least one driver has implemented it. So the current ONE implementation does not make it impossible to accelerate XRender. (In fact, one of the problems with XRender is that it needs too much support code inside the drivers. If there was "ONE implementation", this would actually be better than what we have now.)
Posted Mar 18, 2013 22:06 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
No, but it's the exception that proves the rule: XRender isn't a good fit for how GPUs work.
Posted Mar 18, 2013 10:16 UTC (Mon)
by nix (subscriber, #2304)
[Link]
> First, as long as you have a single video adapter, having a single thread to work with it is perfectly fine. Second, as long as Xorg uses less than 100% CPU, yes, single-threading is the best design, because it makes the X server faster.
This is guaranteed true only if the GPU is also single-threaded and the X server does nothing but talk to it: but the GPU is massively parallel, and the X server does a lot of things other than talk to it.
However, in practice, locking overhead seems to dominate if you try to parallelize the existing server. (It's been done, in the 1990s. It didn't scale, even then, when the memory hierarchy was less hostile to such things than it is now. There's a reason that even the mouse driver works as it has always worked, via SIGIO in a single thread.)
Posted Mar 16, 2013 8:53 UTC (Sat)
by renox (guest, #23785)
[Link] (10 responses)
> No it doesn't.
This reply is at kindergarten level... What exactly do you think is missing from XRender to have good support for gradients, line drawing and text rendering?
> And in the future, just stick to EGL or OpenGL 3: they provide guaranteed hardware acceleration.
> And with XRender you have to pray and make burnt offerings to videocard gods to make sure that your application even runs.
If you've made the correct "burnt offerings" so that your videocard supports the correct fooGL well, how is that different from the XRender situation?
Posted Mar 16, 2013 9:17 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
Elliptical gradients, font compositing with gamma correction, seamless polygon joining, rotated fonts and images, pixel-perfect matching, etc.
That's a list from a guy who makes a Flash renderer, btw.
> If you've made the correct "burnt offerings" so that your videocard supports the correct fooGL well, how is that different from the XRender situation?
---><----
That's the amount of care videocard manufacturers now devote to XRender support. I.e. none at all - nobody needs this shit on phones or new computers (Windows doesn't care about 2D accel either).
However, good OpenGL/EGL compatibility is necessary, because basically all videocard users need it. So manufacturers have incentives to make something that at least works some of the time.
Have you noticed that developers (OEMs and reverse-engineering guys) don't even bother to create X 2D drivers for chips like Mali or Reno?
Posted Mar 17, 2013 10:31 UTC (Sun)
by Serge (guest, #84957)
[Link] (8 responses)
These are supported in XRender. But even if they were not, it's always possible to extend it. For example, older XRender versions (more than 10 years ago) did not support gradients; gradients were added later.
But that's not important, because Wayland has nothing like XRender, so in that part it's worse than Xorg anyway.
> However, good OpenGL/EGL compatibility is necessary, because basically all videocard users need it.
You mix up hardware and driver support. Do you know why SNA works faster than EXA on the same hardware?
XRender is better because it can be heavily optimized in the driver for the particular adapter. XRender is also better for users, because they don't have to manually check for and work around all the GL quirks. Do you think that OpenGL/ES is just either supported or not? Some feature that you are used to on your video adapter can be missing on another one. Check the "OpenGL extensions" section in `glxinfo` output. With XRender that can be done once, in the driver; without it, i.e. in Wayland, you have to do that in all your programs yourself.
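For illustration, the runtime equivalent of grepping `glxinfo` output: a sketch that checks for one extension by name with the classic extension-string query (a current GL context is assumed to exist):

    /* Sketch: runtime check for a single OpenGL extension, the programmatic
     * version of reading glxinfo output. Needs a current GL context. */
    #include <GL/gl.h>
    #include <string.h>

    int have_gl_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);

        if (!exts)
            return 0;
        /* Naive substring match; careful code would match whole tokens so
         * that one extension name doesn't match a prefix of another. */
        return strstr(exts, name) != NULL;
    }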
Wayland makes things easier for Wayland developers, while X makes things easier for the rest of the developers. Feel the difference. :)
Posted Mar 18, 2013 0:17 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Hah. It's funny that in one paragraph you talk about how great optional extensions are, and then in the next paragraph you blame OpenGL for optional extensions.
> But that's not important, because Wayland has nothing like XRender, so in that part it's worse than Xorg anyway.
Nope. Wayland simply doesn't need it.
> You mix up hardware and driver support.
No I don't. An uber-great architecture is useless if there's no hardware for it.
> Do you know why SNA works faster than EXA on the same hardware?
SNA does not work anywhere except on i965 hardware. So it's kinda irrelevant. Besides, it's only used to implement XRender.
> Do you think that OpenGL/ES is just either supported or not?
Now? Pretty much, yes.
> Some feature that you are used to on your video adapter can be missing on another one.
That used to be true 5 years ago. Now OpenGL standards mandate the required extension sets, so one can just check for the implemented OpenGL/EGL level and use functionality from that set.
BTW, how can I check that a certain feature of XRender is accelerated? For example, the vmware svga driver supports accelerated compositing but not trapezoid rendering.
Posted Mar 20, 2013 15:00 UTC (Wed)
by Serge (guest, #84957)
[Link] (6 responses)
I'm not blaming OpenGL, I'm just saying that it's not easier to use than XRender.
> SNA does not work anywhere except on i965 hardware. So it's kinda irrelevant. Besides, it's only used to implement XRender.
You're wrong here. Check http://www.x.org/wiki/IntelGraphicsDriver at least. But I wasn't asking where it works, I was asking why it works faster on the same hardware. Hint: because it uses hardware-specific optimisations for 2D rendering, impossible outside of the driver, impossible in Wayland.
> Nope. Wayland simply doesn't need it.
That universal excuse, "I don't need it"... Wayland doesn't need anything. It just makes an optimised 2D interface impossible, and it does not care that almost everything in a modern interface is 2D.
> That used to be true 5 years ago. Now OpenGL standards mandate the required extension sets
And new extensions appear... So the problem is still there.
> BTW, how can I check that a certain feature of XRender is accelerated?
You don't need to check that. That's the beauty of XRender. You only need to check that it's supported. And it has been supported for 10+ years.
Acceleration is up to the driver. To know whether it's really accelerated you have to benchmark it. That applies to both GL and XRender, because on some drivers/hardware some "accelerated" features actually work slower than software rendering.
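A sketch of the kind of micro-benchmark that implies: time a batch of XRenderComposite() calls, with XSync() so the server has actually finished the requests (dpy, src and dst are assumed to exist):

    /* Sketch: crude XRender micro-benchmark. XSync() forces a round trip so
     * the timing includes the server executing the queued requests.
     * Assumes dpy, src and dst were created elsewhere. */
    #include <X11/extensions/Xrender.h>
    #include <stdio.h>
    #include <time.h>

    void bench_composite(Display *dpy, Picture src, Picture dst, int w, int h)
    {
        struct timespec t0, t1;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < 1000; i++)
            XRenderComposite(dpy, PictOpOver, src, None, dst,
                             0, 0, 0, 0, 0, 0, w, h);
        XSync(dpy, False);            /* wait until the server is done */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%.2f ms per 1000 composites\n",
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
    }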
Posted Mar 20, 2013 15:07 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> I'm not blaming OpenGL, I'm just saying that it's not easier to use than XRender.
Is it? OpenGL now has very nice wrappers that can abstract the gritty details.
> You're wrong here. Check http://www.x.org/wiki/IntelGraphicsDriver at least. But I wasn't asking where it works, I was asking why it works faster on the same hardware. Hint: because it uses hardware-specific optimisations for 2D rendering, impossible outside of the driver, impossible in Wayland.
Nobody stops you from using driver-specific rendering methods.
I've actually checked the SNA source code, and it looks like it's using the regular GEM and command submission system of the kernel driver. So you definitely can use something like it for 2D rendering with Wayland. You probably can even use the current SNA with Wayland.
> That universal excuse, "I don't need it"... Wayland doesn't need anything. It just makes an optimised 2D interface impossible, and it does not care that almost everything in a modern interface is 2D.
That's incorrect. You can use whatever rendering API you want; the only requirement is that you use a surface that can be composited by Wayland.
> You don't need to check that. That's the beauty of XRender. You only need to check that it's supported. And it has been supported for 10+ years.
Incorrect on both counts.
Yeah, sure. That's why many XRender-based apps are about as fast as molasses.
Posted Mar 20, 2013 16:51 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
Actually, running SNA on _Wayland_ is not only possible, but it works RIGHT NOW.
Yes, you can run an X server with SNA acceleration _inside_ Wayland with full acceleration: http://wayland.freedesktop.org/xserver.html
Posted Mar 20, 2013 17:14 UTC (Wed)
by renox (guest, #23785)
[Link] (1 responses)
I wonder why Serge is so hell-bent on finding imaginary drawbacks of Wayland, even though it's not difficult to find real "drawbacks", aka trade-offs, in Wayland.
The latest one I've read on the Wayland mailing list: some weird screens have different subpixel ordering depending on the pixel position(!!), so since Wayland clients don't know where their window is going to be displayed, they cannot do subpixel-aware rendering on these screens.
I hope that these kinds of screens are rare, bleah.
Posted Mar 20, 2013 17:23 UTC (Wed)
by renox (guest, #23785)
[Link]
Posted Mar 20, 2013 18:39 UTC (Wed)
by chris.wilson (guest, #42619)
[Link] (1 responses)
Claiming that it will be the same speed is stretching the truth by about a factor of 2.
Posted Mar 20, 2013 18:48 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]