
Toward a unified display driver framework

By Jonathan Corbet
September 20, 2011
In a separate article, LWN looked at the discussion around how display drivers should be managed in the X server; one of the things noted there was that the movement of much of the driver logic into the kernel has reduced the rate of change on the user-space side. At roughly the same time, the kernel community got into an extended discussion of how display drivers should be managed within the kernel. Here, the complexity of contemporary hardware is likely to drive both a consolidation of and some extensions to the kernel's interfaces.

It all started rather innocently with Tomi Valkeinen's description of the challenges posed by the display system found on OMAP processors. System-on-chip architectures like OMAP tend not to bother with the nice separation between devices found on desktop- and server-oriented architectures. So, instead of having a "video card," the OMAP has, on one side, an acceleration engine that can render pixels into main memory and, on the other, a "display subsystem" connecting that memory to the video display. That subsystem consists of a series of overlay processors, each of which can render a window from memory; the output of all the overlay processors is composited by the hardware and actually put on the display. Or, more specifically, the output from these processors is handed to the panel controller, which may be a complex bit of hardware in its own right.
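To make that pipeline a little more concrete, here is a rough, hypothetical sketch of how it might be modeled in C; the structure and field names below are illustrative assumptions, not the actual omapdss driver interfaces:

    /*
     * Illustrative-only model of an OMAP-style display pipeline; names
     * and layout are assumptions, not the real driver API.
     */
    struct panel_controller;        /* the (possibly complex) panel hardware */

    struct overlay {
        void *fb_mem;               /* window pixels rendered into main memory */
        unsigned int x, y;          /* position of the window on the display */
        unsigned int width, height;
        unsigned int zorder;        /* hardware compositing order */
        int enabled;
    };

    struct display_subsystem {
        struct overlay overlays[4];      /* the overlay processors */
        struct panel_controller *panel;  /* receives the composited output */
    };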

So OMAP graphics depends on a set of interconnected components. Filling video memory can be done via the framebuffer interface, via the direct rendering (DRM) interface, or, for video captured from a camera, via the Video4Linux2 overlay interface. Video memory must be managed for those interfaces, then handed to the display processors which, in turn, must communicate with the panel controller. All of this works, but, as Tomi noted, there seems to be a lot of duplication of code between these various interfaces and no generic way for the kernel to manage things at any of these levels. Wouldn't it be nicer, he asked, to create a low-level display framework to handle these tasks?
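By way of illustration, the simplest of those paths, the framebuffer interface, looks something like this from user space. This is a minimal sketch, not OMAP-specific code; it assumes a /dev/fb0 device and omits most error handling:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        int fd = open("/dev/fb0", O_RDWR);

        if (fd < 0)
            return 1;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0)
            return 1;

        /* Map the framebuffer memory and clear the visible screen to black. */
        uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;
        memset(fb, 0, (size_t)var.yres * fix.line_length);

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }

The DRM and Video4Linux2 paths end up filling the same kind of memory, but each through its own buffer-management interface.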

Tomi is not the first to ask such a question; the graphics developers have been working on this problem for some years, and much of the solution seems clear. The DRM code is where the bulk of the work has been done in the last few years; it is the only display subsystem that comes close to being able to describe and drive contemporary hardware. As the memory management issues associated with graphics become more complex, it becomes increasingly necessary to use a management framework like GEM, and that means using DRM. It also, as a result of its X-server heritage, embodies a couple of decades' worth of experience in dealing with the quirks of real-world video hardware. So most developers seem to believe that, over time, DRM should become the interface for mode setting and memory management, while the older framebuffer interface should become a compatibility layer over DRM until it fades away entirely.
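For a sense of what mode setting through DRM looks like from user space, here is a minimal sketch that uses libdrm to list the modes offered by connected outputs; it assumes a kernel modesetting driver behind /dev/dri/card0, skips error reporting, and needs to be linked against libdrm:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0)
            return 1;

        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return 1;

        /* Walk the connectors; print the modes of those with a display attached. */
        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED)
                for (int m = 0; m < conn->count_modes; m++)
                    printf("connector %u: %s @ %u Hz\n",
                           conn->connector_id, conn->modes[m].name,
                           conn->modes[m].vrefresh);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }

Actually setting one of those modes (drmModeSetCrtc()) and allocating a framebuffer to scan out are only a few more calls; the point is that mode setting and memory management sit behind a single device node and a single API.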

That said, Florian Tobias Schandinat, who recently took over the maintainership of the framebuffer code, has a different opinion. To Florian, the framebuffer layer is alive and well, it has more users than DRM does, and it will not be going away anytime soon. His biggest complaints with DRM appear to be that (1) it is significantly more complex, making the drivers more complex as well, and (2) exposing the acceleration capabilities of the graphics processor makes it easy for applications to crash the system. The fact that the framebuffer API does not provide any mechanism for acceleration is, in his view, an advantage.

Florian would appear to be in the minority here, though; most developers seem to feel that it will be increasingly hard to manage contemporary hardware without the capabilities that the DRM layer provides. The presence of bugs in DRM drivers that can crash the system - especially when acceleration is used - is not really denied by anybody, but it was pointed out that use of DRM does not require the use of acceleration. The hardware, too, is apparently getting better at letting the operating system regain control of the GPU when necessary. In any case, crashes and bugs are seen as something to fix and not as a reason to avoid DRM outright.

That leaves the question of how to handle the Video4Linux2 overlay feature. Overlay has been somewhat deprecated and unloved for some years, though it remains an official part of the interface; it was designed for an earlier, simpler era. When CPUs reached a point where they could easily manage a video stream from a camera device, the motivation for overlay faded - for a while. More recently, the resolution of video streams has increased notably and power consumption has become a much more important consideration. Even if the CPU can process a video stream in real time on a mobile device, the battery will last longer if the CPU sleeps and the stream goes straight to video memory. That means that the ability to overlay video streams onto the display in a zero-copy manner has become interesting again.
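As a reference point, the old interface being discussed looks roughly like this from user space; the device node, the window geometry, and overlay support in the driver (with the destination framebuffer already configured via VIDIOC_S_FBUF) are all assumptions here, and error checking is omitted:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0)
            return 1;

        /* Describe where on the display the video window should appear. */
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
        fmt.fmt.win.w.left = 0;
        fmt.fmt.win.w.top = 0;
        fmt.fmt.win.w.width = 640;
        fmt.fmt.win.w.height = 480;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* Turn the overlay on: frames now reach the display without CPU copies. */
        int on = 1;
        ioctl(fd, VIDIOC_OVERLAY, &on);

        sleep(10);    /* video flows while the CPU (mostly) sleeps */

        on = 0;
        ioctl(fd, VIDIOC_OVERLAY, &on);
        close(fd);
        return 0;
    }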

Given that the old overlay interface is seen as inadequate, there is a clear need for a new one. Jesse Barnes floated a proposal for a new overlay API back in April; the DMA buffer sharing proposal posted more recently is also aimed at this requirement. The word is that this topic was discussed at the X Developers Conference and that a new proposal is forthcoming soon.

As an indication of where things could be heading in the longer term, it is worth looking at this message from Laurent Pinchart, the author of the V4L2 media controller subsystem. The complexity of video acquisition devices has reached a point where treating them as a single device no longer works well; thus the media controller, which allows user space to query and change the connections between a pipeline of devices. The display problem, he said, is essentially the same; perhaps, he suggested, the media controller could be useful for controlling display pipelines as well. The idea did not immediately take the world by storm, but it may give an indication of where things will eventually need to go.
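For those who have not run into it, a minimal sketch of the media controller interface, enumerating the entities that make up a device's pipeline, looks something like this; /dev/media0 is an assumption, and the link query and setup ioctls (MEDIA_IOC_ENUM_LINKS and MEDIA_IOC_SETUP_LINK) are left out for brevity:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/media.h>

    int main(void)
    {
        int fd = open("/dev/media0", O_RDWR);
        if (fd < 0)
            return 1;

        struct media_entity_desc entity;
        memset(&entity, 0, sizeof(entity));
        entity.id = MEDIA_ENT_ID_FLAG_NEXT;    /* start with the first entity */

        /* Walk the entity list until the kernel reports no more entries. */
        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
            printf("entity %u: %s (%u pads, %u links)\n",
                   entity.id, entity.name,
                   (unsigned int)entity.pads, (unsigned int)entity.links);
            entity.id |= MEDIA_ENT_ID_FLAG_NEXT;    /* ask for the next one */
        }

        close(fd);
        return 0;
    }

Applying the same model to display hardware would presumably mean exposing overlays, compositing engines, and panel controllers as entities in a graph like this one.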

The last few years have seen the consolidation of a lot of display-oriented code into the kernel; that code is increasingly using common infrastructure like the GEM memory manager. It is not hard to imagine that this consolidation will continue to the point where the DRM subsystem becomes the supported way to control displays, with the other interfaces implemented as some sort of compatibility layer. The complexity of the DRM code is, in the end, driven by the complexity of the hardware it must support, and that hardware does not look like it will be getting any simpler anytime soon.



Toward a unified display driver framework

Posted Sep 22, 2011 9:34 UTC (Thu) by xav (guest, #18536)

Nice to see progress on this front.
And yet the 3D cores of those nice systems are still lacking proper OpenGL acceleration, apart from a binary blob.

Toward a unified display driver framework

Posted Sep 22, 2011 18:54 UTC (Thu) by linusw (subscriber, #40300)

The problem, if I have understood it correctly, is that there are just a few (3-5) vendors of mobile 3D cores, all of which are in fierce competition and pointing legal guns at one another. In that situation it is predominantly patent fears that make them afraid of releasing code for driving their hardware. The OpenGL ES rendering libraries seem to be especially troublesome for some reason.

Toward a unified display driver framework

Posted Sep 22, 2011 10:49 UTC (Thu) by ortalo (subscriber, #4654)

Ironically (and certainly slowly), the KGI project's design only started to advance in the kernel after the project itself had faded to a stop (around 2001).
I am happy anyway.

What would make me even happier would be the publication of technical data sheets for all these new graphics chipsets. Do you know of any?

Toward a unified display driver framework

Posted Sep 22, 2011 16:26 UTC (Thu) by daniels (subscriber, #16193)

Tomi was talking about the TI OMAP SoC, whose Technical Reference Manuals are freely available to all and sundry, and contain a description of (and programming information for) the entire display pipeline apart from the 3D accelerator.

Toward a unified display driver framework

Posted Oct 9, 2011 18:46 UTC (Sun) by oak (guest, #2786)

I was also a happy KGI user in the '90s. A real framebuffer provided a much nicer console (higher resolution and more text, higher refresh rate and less flicker, etc.), and the code was nicely structured into drivers for the different parts of graphics cards; if I remember right, it was also documented. I was *really* disappointed when it failed to get into the kernel despite several efforts.
