
Linux and high dynamic range displays

By Jake Edge
October 5, 2016

X.Org Developers Conference

Andy Ritger began his talk at the 2016 X.Org Developers Conference (XDC) with a disclaimer of sorts: "I am very much not a color expert" and welcomed corrections from attendees. But his talk delved into chromaticity and other highly technical color characteristics in order to talk about the advent of high dynamic range (HDR) displays—and what is needed for Linux to support them.

Ritger works for NVIDIA, which has been getting requests for HDR display support from its customers. The company has implemented that support in its Windows and Android drivers, but has not yet done so for its discrete GPU Linux drivers. Ritger has been studying the subject to try to understand what is needed to support these displays for Linux; the talk was meant to largely be a report on what he has learned.

[Andy Ritger]

The trend these days is toward ultra-high definition (UHD) displays, he said. That means higher pixel resolution (4K and 8K) but also a wider color gamut to display a larger range of colors than today's displays. There are also efforts to expand the range of luminance values that can be displayed, which is what characterizes an HDR display.

The ITU-R BT.2020 [PDF] specification has recommendations for UHD parameters, including resolutions, refresh rates, chromaticity, formats, transfer functions, and so on. But when people refer to "BT.2020", they typically mean the color gamut that is specified. Few displays today even get close to the gamut described, he said.

Different color spaces represent different sets of colors, which is what makes up a color gamut. They are typically described using the CIE XYZ color space, in particular using a 2D projection of the color space. Other color spaces are described using that projection by specifying the x and y coordinates of the red, green, and blue primary colors as well as the coordinates of the "white point".
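To get a concrete sense of those coordinates, here is a small sketch (plain Python, not from the talk) comparing the xy-plane triangle spanned by the sRGB/BT.709 primaries with the one spanned by the BT.2020 primaries; the chromaticity values come from the respective specifications:

```python
# Chromaticity (x, y) coordinates of the red, green, and blue primaries,
# per the sRGB/BT.709 and BT.2020 specifications.
SRGB_PRIMARIES = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020_PRIMARIES = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def gamut_area(primaries):
    """Area of the triangle the primaries span in the CIE xy plane
    (shoelace formula) -- a rough proxy for gamut size."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(gamut_area(BT2020_PRIMARIES) / gamut_area(SRGB_PRIMARIES))
```

The BT.2020 triangle has roughly 1.9 times the area of the sRGB one, which helps explain why few of today's panels come close to covering it.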

It is important to recognize the difference between linear and non-linear color spaces, Ritger said. Linear color spaces behave in an intuitive way, where doubling a value doubles the intensity, for example. Graphics operations should always be done in a linear color space.

But human perception is not linear. Humans are more sensitive to darks than lights, so given a set of discrete steps like 0-255, a linear color space "is not great". There is insufficient granularity in the darks and wasted precision in the lights, so it is generally recommended to store color information in a non-linear color space. The most common one used is sRGB, which is what most pre-HDR monitors expect for input.
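The sRGB transfer function makes that trade-off concrete. A minimal sketch of the encode/decode pair, using the piecewise curve from the sRGB specification:

```python
def srgb_encode(linear):
    """Linear light -> non-linear sRGB value (both in [0, 1])."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Non-linear sRGB value -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 18% "middle grey" linear light encodes to roughly 46% of the code
# range, spending more of the 0-255 steps on the darks.
print(round(srgb_encode(0.18) * 255))  # -> 118
```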

High dynamic range

Informally, HDR is about making the brights brighter and the darks darker so that details are more perceptible in the dark and bright regions. HDR increases the range and granularity of the luminance information to make the highlights brighter, but not to make the entire image brighter. Luminance is measured in candelas per square meter, which are also known as "nits". Pre-HDR displays have a maximum of around 100 nits, while first generation HDR displays max out at around 1,000 nits. The maximum value defined for HDR, though, is 10,000 nits.

Many 3D applications already do HDR rendering, Ritger said, using FP16 (half-float) buffers. Those buffers are tone mapped to a lower-precision, lower-luminance representation. But now that there are more capable displays, there is a need to give the applications the information they need to tone map for the HDR display. There is also a need to be able to pass the application's higher-precision data through to the display.
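Tone mapping itself can be as simple as a single compression curve. As an illustration (the talk does not prescribe a particular operator), the extended Reinhard operator compresses unbounded scene luminance into the displayable range:

```python
def tone_map(l, white=4.0):
    """Extended Reinhard operator (an illustrative choice, not the
    talk's): compresses scene-referred luminance l >= 0 into [0, 1].
    l == white maps to exactly 1.0; anything brighter is clamped."""
    return min(1.0, l * (1 + l / white ** 2) / (1 + l))

# Dark values pass through nearly unchanged; highlights are compressed.
print(tone_map(0.01), tone_map(2.0), tone_map(16.0))
```

For an HDR display, an application would pick the `white` point from the monitor's reported peak luminance instead of hard-coding it.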

Ritger then outlined what the flow for 3D applications doing HDR rendering and display would look like. Applications would still render into FP16 buffers, but would use the scRGB color space, which makes it easier to composite HDR and standard dynamic range (SDR) content. The result would then be tone mapped for the target monitor's capabilities, and the tone-mapped image would be handed to the driver or compositor along with some metadata for the monitor.

The driver or compositor would composite the tone-mapped image with any SDR content. The driver or GPU would then take the scRGB FP16 composited result and perform an inverse "electro-optical transfer function" (EOTF) to encode the FP16 data into the display signal. That would be sent to the monitor along with an "HDR InfoFrame" containing the metadata. The monitor would then apply the EOTF to decode the digital signal into HDR content.

The scRGB color space is also known as the "canonical compositing color space". It was introduced by Microsoft in the Vista time frame and has the same chromaticity coordinates as sRGB, but is linear. That makes it a good color space for compositing SDR and HDR content. For the HDR metadata, there are several relevant standards that specify the information needed by the GPU for rendering as well as the information needed by the monitor to know how to interpret the data it is receiving.
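Because scRGB is linear, with SDR reference white at 1.0 and HDR highlights at values above it, compositing reduces to ordinary linear-space arithmetic. A sketch of blending an 8-bit sRGB pixel over HDR content (function names are illustrative, not any real compositor API):

```python
def srgb_to_scrgb(byte_value):
    """Decode one 8-bit sRGB channel to linear scRGB, where 1.0 is
    SDR reference white; HDR content may exceed 1.0."""
    c = byte_value / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def blend(sdr_byte, hdr_linear, alpha):
    """Alpha-blend an SDR channel over HDR content in linear scRGB --
    blending in a linear space avoids the artifacts of mixing
    gamma-encoded values directly."""
    return alpha * srgb_to_scrgb(sdr_byte) + (1 - alpha) * hdr_linear

# Full-intensity SDR white (255) composited over a 3x-white HDR
# highlight lands halfway between 1.0 and 3.0.
print(blend(255, 3.0, 0.5))  # -> 2.0
```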

The EOTF defines how the display should convert the non-linear digital signal to linear light values. It is optimized for bandwidth, so it compresses the signal into as few bits as possible by sacrificing precision where it won't be missed. The de facto EOTF for SDR is sRGB and there are two common EOTFs for HDR. In order to create the digital signal for the monitor, the GPU needs to do an inverse EOTF (or OETF).
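One of those two HDR EOTFs is the SMPTE ST 2084 "perceptual quantizer" (PQ); the other is hybrid log-gamma (HLG). A sketch of the PQ EOTF and its inverse, using the constants from the ST 2084 specification:

```python
# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal):
    """Display side: decode a non-linear PQ signal in [0, 1] into
    absolute luminance in nits (cd/m^2), up to 10,000."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits):
    """GPU side: encode absolute luminance (nits) into the PQ signal."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# A 100-nit SDR peak already occupies about half the code range,
# illustrating how PQ spends its precision where perception needs it.
print(round(pq_inverse_eotf(100), 3))  # -> 0.508
print(pq_inverse_eotf(10000))          # full range -> 1.0
```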

Missing pieces

There are still some missing pieces for Linux from that flow, however. Applications can already do the rendering, but they will need some API to get the HDR information from the display. It is available in the Extended Display Identification Data (EDID) that monitors provide, so maybe just parsing the information out of that would be sufficient. He was concerned, though, that drivers and possibly compositors might need to change some of the parameters.
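As a sketch of what "just parsing the information out" might look like, here is an unhardened scan for the HDR Static Metadata data block in a raw EDID; the byte offsets follow CTA-861.3, but treat this as an illustration, not a validated parser:

```python
def find_hdr_static_metadata(edid):
    """Scan a raw EDID (bytes) for the CTA-861.3 HDR Static Metadata
    data block and return its supported-EOTFs bitfield, or None.
    Sketch only: no checksum or bounds checking to speak of."""
    # CTA-861 extension blocks follow the 128-byte base EDID.
    for ext in range(128, len(edid), 128):
        if edid[ext] != 0x02:          # 0x02 = CTA extension tag
            continue
        dtd_offset = edid[ext + 2]     # data blocks end where DTDs begin
        i = ext + 4
        while i < ext + dtd_offset:
            tag, length = edid[i] >> 5, edid[i] & 0x1F
            # Tag 7 = "use extended tag"; extended tag 0x06 = HDR
            # Static Metadata data block (CTA-861.3).
            if tag == 7 and length >= 2 and edid[i + 1] == 0x06:
                return edid[i + 2]     # bit 2 = ST 2084, bit 3 = HLG
            i += 1 + length
    return None

# Synthetic example: empty base block plus a minimal CTA extension
# advertising ST 2084 support.
edid = bytearray(128) + bytearray([0x02, 0x03, 0x07, 0x00,
                                   0xE2, 0x06, 0x04]) + bytearray(121)
print(find_hdr_static_metadata(bytes(edid)))  # -> 4 (bit 2: ST 2084)
```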

In addition, the application needs a way to provide its HDR metadata to the monitor. That information might also need to be arbitrated, for example if two applications were rendering to windows using different HDR configurations. Currently, the NVIDIA Windows driver is full-screen only.
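The metadata in question is largely the SMPTE ST 2086 mastering-display description plus the CTA-861.3 content light levels. A sketch of the fields involved (the structure and its names are illustrative, not any existing API):

```python
from dataclasses import dataclass

@dataclass
class HdrMetadata:
    """Static HDR metadata an application would hand to the driver or
    compositor, roughly following SMPTE ST 2086 mastering-display
    metadata plus the CTA-861.3 content light levels."""
    # Chromaticity of the mastering display's primaries and white point.
    red_xy: tuple
    green_xy: tuple
    blue_xy: tuple
    white_xy: tuple
    # Mastering display luminance range, in nits.
    max_mastering_nits: float
    min_mastering_nits: float
    # Content light levels: brightest pixel (MaxCLL) and brightest
    # frame average (MaxFALL), also in nits.
    max_cll: float
    max_fall: float

# Example: content mastered on a BT.2020, 1,000-nit display.
meta = HdrMetadata((0.708, 0.292), (0.170, 0.797), (0.131, 0.046),
                   (0.3127, 0.3290), 1000.0, 0.005, 800.0, 400.0)
```

Arbitrating between two windows with different metadata of this sort is exactly the open question Ritger raised.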

There is also the need for a way to display the FP16 buffers. NVIDIA hardware does not have support for doing the inverse EOTF, so a shader is being used for now. He is unsure about whether other graphics hardware has support for that. It would be nice if SDR content could be composited with HDR in a single desktop, he said; scRGB should help make that possible.

Wayland compositors will need to be FP16-aware so that they can accept FP16 buffers from clients. For X, there are a lot of unanswered questions, Ritger said. The primary producers of HDR content will be the 3D APIs (OpenGL, Vulkan) and the video APIs (VDPAU, VAAPI). He wondered if there was reason to allow X rendering into an FP16 buffer; it isn't strictly needed, but it might be easier to just allow it. Also, should the root window be allowed to be FP16?

He concluded by noting that the talk [YouTube] was intended to give folks some context; there are lots of design decisions that still need to be made. NVIDIA is definitely interested in participating in that process. He would like to see some straw-man proposals being made in the coming months. He noted that his final few slides [PDF] had links to specifications and web resources of interest.

[I would like to thank the X.Org Foundation for sponsoring my travel to Helsinki for XDC.]


Linux and high dynamic range displays

Posted Oct 6, 2016 14:18 UTC (Thu) by alankila (guest, #47141)

Is the optical transfer function link really related? I expected to discover the HDR compression formulas, but instead the link seemed to just discuss the spatial blurring of optical systems, which is a fascinating topic in itself but hardly related to the subject matter.

I don't think it really matters from the point of view of applications, which will simply render the image in some wide-gamut, high-precision, linear color space; it will be the compositor's job to convert this to the best approximation possible given the display hardware, while making some user-configurable tradeoffs regarding out-of-gamut colors: dithering, chroma-preserving clamping, etc.

Linux and high dynamic range displays

Posted Oct 7, 2016 0:51 UTC (Fri) by flussence (guest, #85566) (2 responses)

Possibly relevant, what's the status of 30bpp display support in Linux?

I have a 5-year-old monitor that advertises it, but I can't get any information out of any part of the OS to help figure out whether it's being limited by hardware, software, or firmware. Poring through wikis and bits of xf86-video* code leaves me none the wiser, too.

Linux and high dynamic range displays

Posted Oct 7, 2016 6:53 UTC (Fri) by halla (subscriber, #14185) (1 response)

Same here... Well, the monitor is new, not five years old, but I cannot figure it out either. I know it is, or used to be, possible: Kai-Uwe from oyranos once had a working setup (http://www.oyranos.org/2014/05/image-editing-with-30-bit-...). And I know it needs an NVIDIA Quadro graphics card, which I have. But I fail at the next step.

Linux and high dynamic range displays

Posted Oct 8, 2016 10:08 UTC (Sat) by vivo (subscriber, #48315)

An NVIDIA Quadro _or_ DisplayPort with whatever recent NVIDIA card.
Then there are problems with software; for example, composited KDE 5 mostly does not react well.

Linux and high dynamic range displays

Posted Oct 8, 2016 10:17 UTC (Sat) by vivo (subscriber, #48315) (1 response)

sRGB has a very, very small volume compared to scRGB; I suspect that going from 8 bits per channel to 16 will substantially reduce the precision of the output.
Using a color space that covers only real colors, like Rec. 2020, would provide much better results.
Unless someone can achieve 32 bpp on 8K monitors at 50Hz; well, that would be awesome :)

Linux and high dynamic range displays

Posted Oct 13, 2016 2:15 UTC (Thu) by gwg (guest, #20811)

> sRGB has a very, very small volume compared to scSRG, I suspect that going to 8bit per channel to 16 will reduce substantially the precision of the output.

By definition sRGB is an 8 bit subset of 16 bit scRGB, so there is no change in precision.

scRGB is not an efficient HDR coding space though.

Linux and high dynamic range displays

Posted Oct 13, 2016 2:18 UTC (Thu) by gwg (guest, #20811)

> He wondered if there was reason to allow X rendering into an FP16 buffer; it isn't strictly needed, but it might be easier to just allow it. Also, should the root window be allowed to be FP16?

Supporting HDR in X11 would make it simpler to extend current Linux color calibration and profiling tools to handle HDR displays, rather than requiring a whole new OpenGL display back end.


Copyright © 2016, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds