Vetter: Why no 2D Userspace API in DRM?
"3D has it easy: There's OpenGL and Vulkan and DirectX that require a certain feature set. And huge market forces that make sure if you use these features like a game would, rendering is fast. Aside: This means the 2D engine in a browser actually needs to work like a 3D action game, or the GPU will crawl. The impedance mismatch compared to traditional 2D rendering designs is huge. On the 2D side there's no such thing: Every blitter engine is its own bespoke thing, with its own features, limitations and performance characteristics. There are also no standard benchmarks that would drive common performance characteristics; today blitters are needed mostly in small systems, with very specific use cases. Anything big enough to run more generic workloads will have a 3D rendering block anyway. These systems still have blitters, but mostly just to help move data in and out of VRAM for the 3D engine to consume."
Posted Aug 22, 2018 19:35 UTC (Wed)
by Tara_Li (guest, #26706)
[Link] (15 responses)
Posted Aug 22, 2018 20:22 UTC (Wed)
by daniels (subscriber, #16193)
[Link] (9 responses)
One difference is that 2D engines operate on quads whereas 3D engines operate on triangles. 3D engines can certainly do flat 2D rendering, but it's not optimal. 2D engines typically give you better image quality, and what they certainly give you over 3D engines is much, much lower power usage.
Posted Aug 23, 2018 4:48 UTC (Thu)
by luya (subscriber, #50741)
[Link] (4 responses)
The only reason the 2D engine seems to give better image quality is mainly familiarity compared to the 3D engine. The key part is the way the engine renders objects. The tile-based method (or deferred rendering, if I remember right) is a popular approach on mobile devices (earlier Apple iPhone models and the Sega Dreamcast are notable examples) due to power efficiency and the fact that it only renders visible objects, avoiding work on objects that are invisible on screen unless done on purpose.

So a 3D engine can surpass a dedicated 2D version when applied correctly.
Posted Aug 23, 2018 7:20 UTC (Thu)
by daniels (subscriber, #16193)
[Link] (3 responses)
Tiling is something different. Tiling means that you slice your buffers up into smaller square regions, e.g. 32x32, and operate on those smaller regions one at a time. A lot of 3D GPUs do this: PowerVR (in the iPhone and Dreamcast, as you noted), Arm Mali (in basically every non-Qualcomm mobile device), Qualcomm/freedreno, VC4 (Raspberry Pi); even NVIDIA does it. This doesn't change the choice of primitive though. Whether tiled/deferred or fully immediate, 3D GPUs still operate on triangles. On the iPhone, you still need to splice two triangles together to make a quad. The only 3D hardware which operated on quads rather than triangles was the extremely early NVIDIA GPUs, and that approach didn't last further than the Sega Saturn era. Anyway, manipulating 2D images isn't what 3D GPUs are optimised for. It is, on the other hand, what 2D engines are optimised for, and they can make choices about image quality which would be unavailable to a 3D GPU, for reasons of architecture or performance.
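The tiled traversal described above is easy to sketch. This is purely illustrative and not tied to any particular GPU; the 32x32 tile size and the buffer dimensions are arbitrary choices:

```python
# Illustrative sketch of tiled traversal: walk a width x height buffer
# in 32x32 regions, clipping the tiles that hang off the edges.
TILE = 32

def tiles(width, height, tile=TILE):
    """Yield (x, y, w, h) rectangles covering the buffer, one tile at a time."""
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            yield (tx, ty, min(tile, width - tx), min(tile, height - ty))

# A 100x70 buffer is covered by 4 columns x 3 rows = 12 tiles; the
# bottom-right tile is clipped to 4x6.
rects = list(tiles(100, 70))
```

Working on one small region at a time is what lets tile-based GPUs keep the active working set in on-chip memory instead of repeatedly touching external RAM.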
Posted Aug 23, 2018 10:27 UTC (Thu)
by excors (subscriber, #95769)
[Link] (2 responses)
Mobile GPUs can render many draw calls and state changes, possibly the entire frame, into one tile before moving onto the next tile. The driver has to be careful to insert pipeline flushes if a pixel depends on the output from an 'earlier' draw call in a different tile, and avoid all unnecessary flushes - ideally it can render the entire frame with zero reads of the framebuffer from RAM.
The NVIDIA behaviour described in that video is (when I tested it on a GTX 970) limited to working within a single draw call. That draw call's primitives are rasterised in a tile-based order (over large ~256x512 tiles, subdivided between the GPU's 4 raster engines), but it waits until that draw call is fully rasterised before starting the first tile of the next draw call. And if the draw call generates more than about 64KB of primitive data, the first 64KB gets fully rasterised before moving onto the next chunk of primitives. That means draw calls and other state changes remain fully sequential as before, which makes it simpler and hugely less effective than mobile GPUs at avoiding framebuffer-to/from-(V)RAM traffic, though still somewhat better than earlier NVIDIA GPUs.
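For contrast, the mobile-GPU approach described above starts with a binning pass over the whole frame before any rasterisation happens. A rough sketch, with an illustrative 16-pixel tile and a conservative bounding-box test standing in for the tighter per-tile coverage tests real hardware uses:

```python
# Rough sketch of a binning pass: assign each triangle to every tile its
# bounding box overlaps. Real hardware uses tighter coverage tests; the
# bounding box is a deliberate, conservative simplification.
TILE = 16

def bin_triangle(tri, bins):
    """tri is three (x, y) integer vertices; bins maps (tile_x, tile_y) -> triangles."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    for ty in range(min(ys) // TILE, max(ys) // TILE + 1):
        for tx in range(min(xs) // TILE, max(xs) // TILE + 1):
            bins.setdefault((tx, ty), []).append(tri)

bins = {}
bin_triangle(((0, 0), (40, 0), (0, 40)), bins)
# The 40-pixel triangle's bounding box touches a 3x3 block of 16px tiles,
# so it lands in 9 bins.
```

Only after the whole frame has been binned does a tile-based GPU rasterise, tile by tile, from each tile's bin; that whole-frame deferral is exactly what the per-draw-call Maxwell scheme lacks.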
Posted Aug 23, 2018 11:26 UTC (Thu)
by daniels (subscriber, #16193)
[Link] (1 responses)
Posted Aug 23, 2018 12:43 UTC (Thu)
by excors (subscriber, #95769)
[Link]
Other mobile GPUs (which Imagination calls TBR, i.e. not deferred) are basically the same; they just use different (perhaps slightly less optimal) hidden surface removal techniques. They still defer rasterisation until after the entire frame has been submitted and vertex-processed and binned. So the TBR/TBDR distinction is not much more than a marketing tactic, and I think it's sensible to just use the term "tile-based" for all of those GPUs, because their whole pipeline is fundamentally designed around tiles.
That's very different from NVIDIA's Maxwell, which still processes draw calls sequentially, and just uses one extra level of tiles when rasterising compared to older GPUs (~256x512 split into 16x16 split into 4x8 split into 2x2). So I'm just quibbling about grouping that together with the other 'tile-based' GPUs because it's not the same at all, except in the trivial sense that every GPU uses some kinds of tiles somewhere.
Anyway, I don't mean to disagree with your point, I just think that NVIDIA video/article spread misinformation about Maxwell being much more similar to mobile GPUs than it really is, and I can't resist the urge to try to clarify that whenever it comes up :-)
Posted Aug 23, 2018 9:08 UTC (Thu)
by Sesse (subscriber, #53779)
[Link] (3 responses)
In this understanding of “2D rendering” 3D engines can and do 2D rendering; your composited desktop (even without wobbly windows) is an example. Quads are rendered as two triangles—that's not a problem at all. But they're not suitable for the smallest embedded platforms, since they use more area, gates and power than a pure 2D engine would.
Posted Aug 23, 2018 14:17 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
Text is the 2D rendering deal-breaker, unless you restrict yourself to ASCII-only, single-size bitmap fonts, which do not work on hidpi screens and which users loathe.
Even basic video players are increasingly expected to render nice, non-pixelated i18n subtitles.
Posted Aug 23, 2018 15:28 UTC (Thu)
by louai (guest, #58033)
[Link]
1) Rendering individual glyphs (usually cached)
2) Putting these glyphs on screen
Either one of those can be accelerated, but generally speaking you get a lot more bang for your buck by making the second step fast. And that second step is really just blitting to screen.
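That split can be made concrete with a toy glyph cache. Everything here is a stand-in: rasterise_glyph() fakes the expensive step that a real font rasteriser like FreeType would perform, and the "blit" is just returning the cached bitmap.

```python
def rasterise_glyph(char, size):
    # Stand-in for the expensive rasterisation step (step 1).
    return [[1] * size for _ in range(size)]

class GlyphCache:
    """Rasterise each glyph at most once; later draws reuse the bitmap."""
    def __init__(self, size):
        self.size = size
        self.bitmaps = {}

    def get(self, char):
        if char not in self.bitmaps:
            self.bitmaps[char] = rasterise_glyph(char, self.size)
        return self.bitmaps[char]

def draw_text(cache, text):
    # Step 2: per character, a cache lookup plus a blit to the screen.
    return [cache.get(c) for c in text]

cache = GlyphCache(8)
draw_text(cache, "hello")
# "hello" draws five glyphs but rasterises only four: h, e, l, o.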
Posted Aug 23, 2018 18:27 UTC (Thu)
by Sesse (subscriber, #53779)
[Link]
They still need to be blit to screen one by one, but then we're back to the “get a rectangle from A to B” land.
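That "get a rectangle from A to B" operation is, in software, just a row-copy loop. A simplified sketch over plain nested lists; real blitters also deal with strides, pixel formats and overlapping regions:

```python
def blit(dst, src, dx, dy, sx, sy, w, h):
    """Copy a w x h rectangle from src to dst (2D lists, row-major)."""
    for row in range(h):
        dst[dy + row][dx:dx + w] = src[sy + row][sx:sx + w]

# Copy a 4x4 block of ones into an 8x8 buffer of zeros at (2, 3).
src = [[1] * 4 for _ in range(4)]
dst = [[0] * 8 for _ in range(8)]
blit(dst, src, 2, 3, 0, 0, 4, 4)
```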
Posted Aug 22, 2018 20:35 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
The next question is, how do you MAKE this 2D image in the first place?
And this is the core of the problem. It turns out to be not easy at all. Video cards can certainly render a lot of triangles into a texture very fast, but the quality will not be good enough for anti-aliased text.
Also, it's not quite true that there are no standard 2D APIs. There's Direct2D and DirectWrite in Microsoftland: https://docs.microsoft.com/en-us/windows/desktop/direct2d... - both are actually well designed and can render stunningly good text.
Posted Aug 23, 2018 6:31 UTC (Thu)
by blackwood (guest, #44174)
[Link] (3 responses)
Posted Aug 23, 2018 14:24 UTC (Thu)
by jnareb (subscriber, #46500)
[Link] (2 responses)
Posted Aug 23, 2018 15:27 UTC (Thu)
by ledow (guest, #11753)
[Link] (1 responses)
But people aren't putting it in hardware, so it's like having a lovely OpenGL standard, but no graphics card on the market supporting it with acceleration.
(Don't be fooled by the conformant products page, only some of those appear under the OpenVG filter:
https://www.khronos.org/conformance/adopters/conformant-p...
)
The article dismisses it out of hand because of this (and because it was then removed from Mesa because nobody was using it), but it seems the solution is to make decent OpenVG acceleration drivers for whatever we need until it gains traction.
Surely those devices that already have open-source drivers should be quite easy to get onboard?
Posted Aug 23, 2018 16:15 UTC (Thu)
by excors (subscriber, #95769)
[Link]
The future of OpenVG?
Posted Aug 22, 2018 19:53 UTC (Wed)
by sam.ravnborg (guest, #183)
[Link] (2 responses)
Qt has recently posted the following blog post:
http://blog.qt.io/blog/2017/03/31/qt-quick-openvg/
With no support for OpenVG in Mesa, I wonder how Qt can utilize OpenVG on a Linux platform.
And maybe the simple answer is that they cannot, and OpenVG is only supported on simpler platforms.
Anyway, it does not look like Qt's OpenVG effort has created any push to get OpenVG into Mesa again.
Posted Sep 3, 2018 2:51 UTC (Mon)
by ssmith32 (subscriber, #72404)
[Link] (1 responses)
Maybe OpenVG was more relevant then?
Posted Sep 3, 2018 9:53 UTC (Mon)
by excors (subscriber, #95769)
[Link]
OpenVG was and is obsolete; but obsolete embedded platforms can easily hang around for a decade, and some people still want to write new software for them, so it's a niche that might be worth Qt filling (especially if the work on Qt was funded by some company that decided it was cheaper than updating their hardware). But outside that niche, I don't think it has any significant value.
Posted Aug 22, 2018 20:13 UTC (Wed)
by Beolach (guest, #77384)
[Link]
Going off on a funny tangent, reminds me of New ATI Card Pushes Limits of ASCII Gaming, linked from nethack.org.
Posted Aug 22, 2018 22:58 UTC (Wed)
by jreiser (subscriber, #11027)
[Link] (2 responses)
"The VT100 standard has been around for years, but there really hasn't been a video card company willing to make a card for it."
Posted Aug 22, 2018 23:01 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 23, 2018 10:00 UTC (Thu)
by kusma (subscriber, #125632)
[Link]
Posted Sep 9, 2018 21:35 UTC (Sun)
by xxiao (guest, #9631)
[Link]