
Vetter: Why no 2D Userspace API in DRM?

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 22, 2018 19:35 UTC (Wed) by Tara_Li (guest, #26706)
Parent article: Vetter: Why no 2D Userspace API in DRM?

So, what do 3D engines do about those 2D faces that come up in their 3D polygons? Any 3D engine should of necessity contain a 2D engine, I would have thought.



Vetter: Why no 2D Userspace API in DRM?

Posted Aug 22, 2018 20:22 UTC (Wed) by daniels (subscriber, #16193) [Link] (9 responses)

One difference is that 2D engines operate on quads whereas 3D engines operate on triangles. 3D engines can certainly do flat 2D rendering, but it's not optimal. 2D engines typically give you better image quality, and what they certainly give you over 3D engines is much, much lower power usage.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 4:48 UTC (Thu) by luya (subscriber, #50741) [Link] (4 responses)

The only reason a 2D engine seems to give better image quality is mainly familiarity compared to the 3D engine. The key part is the way the engine renders objects. The tile-based method (or deferred rendering, if I remember right) is a popular approach on mobile devices (the early Apple iPhone series and the Sega Dreamcast are notable examples) due to its power efficiency and the fact that it only renders visible objects, avoiding work on objects that are not visible on screen unless done on purpose. So a 3D engine can surpass a dedicated 2D version when applied correctly.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 7:20 UTC (Thu) by daniels (subscriber, #16193) [Link] (3 responses)

Tiling is something different. Tiling means that you slice your square buffers up into smaller square regions, e.g. 32x32, and operate on those smaller regions one at a time. A lot of 3D GPUs do this: PowerVR (in the iPhone and Dreamcast as you noted), Arm Mali (in basically every non-Qualcomm mobile device), Qualcomm/freedreno, VC4 (Raspberry Pi), even NVIDIA does it.
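(In code terms, that slicing is just this — a toy sketch with a hypothetical 32x32 tile size, not any particular GPU's layout:

```python
def tile_regions(width, height, tile=32):
    """Split a width x height framebuffer into tile-sized rectangles.

    Returns (x, y, w, h) tuples. Edge tiles are clipped to the buffer,
    so dimensions that aren't a multiple of the tile size produce
    partial tiles along the right and bottom edges.
    """
    regions = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            regions.append((x, y, min(tile, width - x), min(tile, height - y)))
    return regions

# A 64x64 buffer splits cleanly into four 32x32 tiles; a tiled GPU
# would then rasterise into each of these regions one at a time.
print(len(tile_regions(64, 64)))  # 4
```

The point of working region by region is that one tile fits in fast on-chip memory, so the GPU only touches RAM when a finished tile is written out.)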

This doesn't change the choice of primitive though. Whether tiled/deferred or fully immediate, 3D GPUs still operate on triangles. On the iPhone, you still need to splice two triangles together to make a quad. The only 3D GPUs which operated on quads were the extremely early NVIDIA ones, and that approach didn't last much beyond the Sega Saturn.

Anyway, manipulating 2D images isn't what 3D GPUs are optimised for. It is, on the other hand, what 2D engines are optimised for, and they can make choices about image quality which would be unavailable to a 3D GPU, for reasons of architecture or performance.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 10:27 UTC (Thu) by excors (subscriber, #95769) [Link] (2 responses)

(If I remember correctly, NVIDIA's version is not really that similar to mobile GPUs.

Mobile GPUs can render many draw calls and state changes, possibly the entire frame, into one tile before moving onto the next tile. The driver has to be careful to insert pipeline flushes if a pixel depends on the output from an 'earlier' draw call in a different tile, and avoid all unnecessary flushes - ideally it can render the entire frame with zero reads of the framebuffer from RAM.

The NVIDIA behaviour described in that video is (when I tested it on a GTX 970) limited to working within a single draw call. That draw call's primitives are rasterised in a tile-based order (over large ~256x512 tiles, subdivided between the GPU's 4 raster engines), but it waits until that draw call is fully rasterised before starting the first tile of the next draw call. And if the draw call generates more than about 64KB of primitive data, the first 64KB gets fully rasterised before moving onto the next chunk of primitives. That means draw calls and other state changes remain fully sequential as before, which makes it simpler and hugely less effective than mobile GPUs at avoiding framebuffer-to/from-(V)RAM traffic, though still somewhat better than earlier NVIDIA GPUs.)

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 11:26 UTC (Thu) by daniels (subscriber, #16193) [Link] (1 responses)

Yes, Imagination helpfully conflated the two concepts into the 'tile-based deferred rendering' catchphrase. GPUs can be tiled or not, deferred or immediate. There's a huge mix of both around. The only reason I brought it up is because the original poster conflated the two, and to point out that tiled render targets are not the same thing as using quads rather than triangles as a primitive.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 12:43 UTC (Thu) by excors (subscriber, #95769) [Link]

As far as I can tell, what Imagination means by "deferred" in TBDR is that fragment shading is deferred until after visibility is completely determined, specifically using their patented hidden surface removal technique. That's what they claim makes them unique and the best.

Other mobile GPUs (which Imagination calls TBR, i.e. not deferred) are basically the same, they just use different (perhaps slightly less optimal) hidden surface removal techniques. They still defer rasterisation until after the entire frame has been submitted and vertex-processed and binned. So the TBR/TBDR distinction is not much more than a marketing tactic, and I think it's sensible to just use the term "tile-based" for all of those GPUs, because their whole pipeline is fundamentally designed around tiles.
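(To make "binned" concrete, here's a toy sketch of that first pass — hypothetical code, using a conservative axis-aligned bounding-box test rather than exact triangle coverage:

```python
def bin_triangles(triangles, width, height, tile=32):
    """Assign each triangle to every tile its bounding box overlaps.

    triangles: list of ((x0,y0),(x1,y1),(x2,y2)) in pixel coordinates.
    Returns {(tile_x, tile_y): [triangle indices]}. A tile-based GPU
    bins the whole frame like this first, then rasterises and shades
    one tile at a time. The bounding-box test is conservative: it may
    list a triangle in a tile it doesn't actually cover.
    """
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for ty in range(int(min(ys)) // tile, int(max(ys)) // tile + 1):
            for tx in range(int(min(xs)) // tile, int(max(xs)) // tile + 1):
                if 0 <= tx * tile < width and 0 <= ty * tile < height:
                    bins.setdefault((tx, ty), []).append(i)
    return bins
```

On real hardware this binning happens after vertex processing for the entire frame, which is exactly why fragment work can be deferred until visibility per tile is known.)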

That's very different from NVIDIA's Maxwell, which still processes draw calls sequentially, and just uses one extra level of tiles when rasterising compared to older GPUs (~256x512 split into 16x16 split into 4x8 split into 2x2). So I'm just quibbling about grouping that together with the other 'tile-based' GPUs because it's not the same at all, except in the trivial sense that every GPU uses some kinds of tiles somewhere.

Anyway, I don't mean to disagree with your point, I just think that NVIDIA video/article spread misinformation about Maxwell being much more similar to mobile GPUs than it really is, and I can't resist the urge to try to clarify that whenever it comes up :-)

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 9:08 UTC (Thu) by Sesse (subscriber, #53779) [Link] (3 responses)

I guess we need to talk about what “2D rendering” means first. Nobody really cares about accelerated pretty 2D lines and circles and such these days; it's all about getting an image from A to B, possibly including color conversion and rescaling (it depends a bit).

In this understanding of “2D rendering”, 3D engines can and do 2D rendering; your composited desktop (even without wobbly windows) is an example. Quads are rendered as two triangles; that's not a problem at all. But 3D engines aren't suitable for extremely embedded platforms, since they use more area, gates and power than a pure 2D engine would.
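(The triangle split really is trivial — a minimal sketch, with an illustrative vertex ordering rather than any particular API's convention:

```python
def quad_to_triangles(quad):
    """Split a quad, given as four corners [a, b, c, d] in winding
    order, into the two triangles a 3D GPU actually rasterises.
    Both triangles share the diagonal a-c."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A unit square becomes two triangles covering the same area.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(quad_to_triangles(square))
```

In practice you'd submit this as four vertices plus an index buffer like [0, 1, 2, 0, 2, 3], so the shared corners aren't duplicated.)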

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 14:17 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (2 responses)

Most people will assume 2D includes text. And text is definitely "lines and circles and curves".

Text is the 2D rendering deal-breaker, unless you restrict yourself to ASCII-only single-size bitmap fonts, which do not work on hidpi screens and which users loathe.

Even basic video players are increasingly expected to render nice non-pixelated i18n subtitles.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 15:28 UTC (Thu) by louai (guest, #58033) [Link]

Text rendering happens in two stages:

1) Rendering individual glyphs (usually cached)
2) Putting these glyphs on screen

Either one of those can be accelerated, but generally speaking you get a lot more bang for your buck by making the second step fast. And that second step is really just blitting to screen.
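(A minimal sketch of that two-stage split, assuming a hypothetical rasterize_glyph() for the slow first stage and a blit() for the second:

```python
class GlyphCache:
    """Stage 1 (slow glyph rasterisation) runs at most once per glyph;
    stage 2 is then just copying cached bitmaps to the screen, which
    is the step worth making fast."""

    def __init__(self, rasterize_glyph):
        self.rasterize_glyph = rasterize_glyph  # slow: outline -> bitmap
        self.cache = {}

    def get(self, char):
        if char not in self.cache:
            self.cache[char] = self.rasterize_glyph(char)
        return self.cache[char]

    def draw_text(self, blit, text, x, y):
        # Stage 2: each glyph is a cached bitmap blitted to the target,
        # advancing the pen position by the glyph's width.
        for char in text:
            bitmap = self.get(char)
            blit(bitmap, x, y)
            x += bitmap.width
```

Any hardware blitter (or a textured-quad path on a 3D GPU) slots in as the blit function; the glyph rasteriser stays on the CPU.)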

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 18:27 UTC (Thu) by Sesse (subscriber, #53779) [Link]

I doubt most of these devices accelerate glyph rendering. You just render them once and cache them.

They still need to be blitted to screen one by one, but then we're back to the “get a rectangle from A to B” land.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 22, 2018 20:35 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

If you have a 2D image and just want to show it on a screen then it's not a problem. You can just use it as a texture on a simple quad. It'll work acceptably well.

The next question is, how do you MAKE this 2D image in the first place?

And this is the core of the problem. It turns out to be not easy at all. Videocards can certainly render a lot of triangles into a texture very fast, but the quality will not be good for anti-aliased text.

Also, it's not quite true that there are no standard 2D APIs. There's Direct2D and DirectWrite in Microsoftland: https://docs.microsoft.com/en-us/windows/desktop/direct2d... - both are actually well designed and can render stunningly good text.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 6:31 UTC (Thu) by blackwood (guest, #44174) [Link] (3 responses)

I guess I wasn't entirely clear in the blog: The problem isn't that there are no standards at all, but that there's no universally adopted 2D rendering standard everyone could aim for (like we have with OpenGL/Vulkan on the 3D side). The list of 2D rendering standards, each with their own little niche, is extremely long. And a standard without good adoption is useless.

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 14:24 UTC (Thu) by jnareb (subscriber, #46500) [Link] (2 responses)

There is a 2D rendering standard by Khronos (the organization behind OpenGL and Vulkan), namely OpenVG, but I don't think it is popular (as opposed to OpenGL).

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 15:27 UTC (Thu) by ledow (guest, #11753) [Link] (1 responses)

Indeed... it looks like OpenVG is the standard we want.

But people aren't putting it in hardware, so it's like having a lovely OpenGL standard, but no graphics card on the market supporting it with acceleration.

(Don't be fooled by the conformant products page, only some of those appear under the OpenVG filter:
https://www.khronos.org/conformance/adopters/conformant-p...
)

The article dismisses it out of hand because of this (and because it was then removed from Mesa because nobody was using it), but it seems the solution is to make decent OpenVG acceleration drivers for whatever we need until it gains traction.

Surely those devices that already have open-source drivers should be quite easy to get onboard?

Vetter: Why no 2D Userspace API in DRM?

Posted Aug 23, 2018 16:15 UTC (Thu) by excors (subscriber, #95769) [Link]

I have the impression that OpenVG had some minor popularity for pre-Android mobile devices, when the only alternative for graphics was OpenGL ES 1.1 (fixed-function, no shaders), and mobile GPUs all added support for it in case some potential customers wanted it. But it never really took off, then OpenGL ES 2.0 came along and mobile CPUs/GPUs got fast enough to render UIs with software (like Skia on Android) combined with OpenGL and hardware compositing, and nobody cared about OpenVG any more. Mobile GPUs continued to support it just because it took very little effort to maintain the drivers, especially when nobody was going to use it and care about performance or bugs. Nowadays it's just a dead standard, unless you're trying to support one of those equally dead pre-GLES2 chips that doesn't give you any other options.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds