By Jonathan Corbet
September 20, 2011
In
a separate article, LWN looked at the
discussion around how display drivers should be managed in the X server;
one of the things that was noted there was that the movement of much of the
driver logic into the kernel has reduced the rate of change on the
user-space side. Almost simultaneously, the kernel community got into
an extended discussion of how display drivers should be managed within the
kernel. Here, the complexity of contemporary hardware is likely to drive
both a consolidation of and some extensions to the kernel's interfaces.
It all started rather innocently with Tomi Valkeinen's description of the
challenges posed by the display system found on OMAP processors.
System-on-chip architectures like OMAP tend not to bother with the nice
separation between devices found on desktop- and server-oriented
architectures. So, instead of having a "video card," the OMAP has, on one
side, an acceleration engine that can render pixels into main memory and,
on the other, a "display subsystem" connecting that memory to the video
display. That subsystem consists of a series of overlay processors, each
of which can render a window from memory; the output of all the overlay
processors is composited by the hardware and actually put on the display.
Or, more specifically, the output from these processors is handed to the
panel controller, which may be a complex bit of hardware in its own right.
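As a very rough illustration of how those pieces fit together, a pipeline of
this sort might be modeled with structures along the following lines; the
names and layout are hypothetical and are not taken from the actual OMAP
driver.

    /*
     * Hypothetical sketch of an OMAP-style display pipeline; all of the
     * names here are illustrative only and do not match the real driver.
     */
    struct panel_timings {
        unsigned int pixel_clock;        /* in kHz */
        unsigned int hactive, vactive;   /* visible resolution */
    };

    /* The panel controller at the end of the pipeline */
    struct display_panel {
        int (*enable)(struct display_panel *panel);
        int (*set_timings)(struct display_panel *panel,
                           const struct panel_timings *timings);
    };

    /* One overlay processor: scans a window of pixels out of main memory */
    struct display_overlay {
        void         *fb_mem;            /* pixels rendered by the CPU or GPU */
        unsigned int  width, height;     /* size of the window */
        unsigned int  pos_x, pos_y;      /* position on the final display */
        unsigned int  zorder;            /* compositing order */
    };

    /* The hardware composites the overlays and hands the result to the panel */
    struct display_manager {
        struct display_overlay *overlays[4];
        struct display_panel   *panel;
    };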
So OMAP graphics depends on a set of interconnected components. Filling
video memory can be done via the framebuffer interface, via the direct
rendering (DRM) interface, or, for video captured from a camera, via the
Video4Linux2 overlay interface. Video memory must be managed for those
interfaces, then handed to the display processors which, in turn, must
communicate with the panel controller. All of this works, but, as Tomi
noted, there seems to be a lot of duplication of code between these various
interfaces and no generic way for the kernel to manage things at any of
these levels. Wouldn't it be nicer, he asked, to create a low-level
display framework to handle these tasks?
He is not the first to ask such a question; the graphics developers have
been working on this problem for some years, and much of the solution seems
clear. The DRM code is where the bulk of the work has been done in the
last few years; it is the only display subsystem that comes close to
being able to describe and drive contemporary hardware. As the memory
management issues associated with graphics become more complex, it becomes
increasingly necessary to use a management framework like GEM, and that
means using DRM. It also, as a result of its X-server heritage, contains
a couple of decades' worth of experience in dealing with the quirks of
real-world video hardware. So most developers seem to believe that, over
time, DRM should become the interface for mode setting and memory
management, while the older framebuffer interface should become a
compatibility layer over DRM until it fades away entirely.
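For a sense of what that interface looks like from user space, the sketch
below uses libdrm to ask a DRM device which outputs are connected and which
modes they report; /dev/dri/card0 is assumed to be the right device node,
and the error handling a real program would need is abbreviated.

    /*
     * Minimal KMS sketch using libdrm: list the connected outputs and the
     * first mode each one reports.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        drmModeRes *res = drmModeGetResources(fd);

        for (int i = 0; res && i < res->count_connectors; i++) {
            drmModeConnector *conn =
                drmModeGetConnector(fd, res->connectors[i]);

            if (conn && conn->connection == DRM_MODE_CONNECTED &&
                conn->count_modes > 0)
                printf("connector %u: %s\n", conn->connector_id,
                       conn->modes[0].name);
            if (conn)
                drmModeFreeConnector(conn);
        }
        if (res)
            drmModeFreeResources(res);
        close(fd);
        return 0;
    }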
That said, Florian Tobias Schandinat, who recently took over the
maintainership of the framebuffer code, has a
different opinion. To Florian, the framebuffer layer is alive and
well; it has more users than DRM does and it will not be going away
anytime soon. His biggest complaints with
DRM appear to be that (1) it is significantly more complex, and that
complexity carries over into the drivers, and (2) exposing the
acceleration capabilities of the graphics processor makes it easy for
applications to crash the system. The fact that the framebuffer API does not provide any
mechanism for acceleration is, in his view, an advantage.
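The simplicity he is pointing to shows up in how little code it takes to put
pixels on the screen through the framebuffer interface; a rough sketch,
assuming a 32-bit-per-pixel /dev/fb0 and omitting error handling, looks like
this:

    /*
     * Rough fbdev sketch: map the framebuffer and fill it with a solid
     * color.  Assumes a 32-bit-per-pixel /dev/fb0; error handling is
     * omitted for brevity.
     */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        int fd = open("/dev/fb0", O_RDWR);

        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        uint32_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

        /* Paint every visible pixel; no acceleration is involved. */
        for (unsigned int y = 0; y < vinfo.yres; y++)
            for (unsigned int x = 0; x < vinfo.xres; x++)
                fb[y * (finfo.line_length / 4) + x] = 0x0000ff00;

        munmap(fb, finfo.smem_len);
        close(fd);
        return 0;
    }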
Florian would appear to be in the minority here, though; most developers seem to
feel that it will be increasingly hard to manage contemporary hardware
without the capabilities that the DRM layer provides. The presence of bugs
in DRM drivers
that can crash the system - especially when acceleration is used - is not
really denied by anybody, but it was
pointed out that use of DRM does not require the use of acceleration. The
hardware is also said to be improving in ways that make it easier for the
operating system to regain control of the GPU when necessary. In any case,
crashes and bugs are seen as something to fix and not as a reason to avoid
DRM outright.
That leaves the question of how to handle the Video4Linux2 overlay
feature. Overlay has been somewhat deprecated and unloved for some years,
though it remains an official part of the interface; it was designed for an
earlier, simpler era. When CPUs reached a point where they could easily
manage a video stream from a camera device, the motivation for overlay
faded - for a while. More recently, the resolution of video streams has
increased notably and power consumption has become a much more important
consideration. Even if the CPU can process a video stream in
real time on a mobile device, the battery will last longer if the CPU
sleeps and the stream goes straight to video memory. That means that the
ability to overlay video streams onto the display in a zero-copy manner has
become interesting again.
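For reference, the aging V4L2 overlay interface is driven from user space
with a couple of ioctl() calls; the sketch below assumes a capture device at
/dev/video0 that supports overlay, and leaves out the framebuffer setup
(VIDIOC_S_FBUF) and error handling a real application would need.

    /*
     * Sketch of the old V4L2 overlay interface: ask a capture device to
     * place its video stream directly onto the display.
     */
    #include <fcntl.h>
    #include <linux/videodev2.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct v4l2_format fmt;
        int fd = open("/dev/video0", O_RDWR);
        int on = 1;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
        fmt.fmt.win.w.left = 0;          /* where the video lands on screen */
        fmt.fmt.win.w.top = 0;
        fmt.fmt.win.w.width = 640;
        fmt.fmt.win.w.height = 480;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* Start the overlay; frames flow to the display without the CPU. */
        ioctl(fd, VIDIOC_OVERLAY, &on);
        sleep(10);                       /* let the stream run for a while */
        on = 0;
        ioctl(fd, VIDIOC_OVERLAY, &on);

        close(fd);
        return 0;
    }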
Given that the old overlay interface is seen as inadequate,
there is a clear need for a new one. Jesse Barnes floated a proposal for a new overlay API back in
April; the DMA buffer sharing proposal
posted more recently is also aimed at this requirement. The word is that this topic was discussed at the X
Developers Conference and that a new proposal is forthcoming.
As an indication of where things could be heading in the longer term, it is
worth looking at
this message from Laurent Pinchart, the
author of the V4L2 media controller
subsystem. The complexity of video acquisition devices has reached a
point where treating them as a single device no longer works well; thus the
media controller, which allows user space to query and change the
connections between a pipeline of devices. The display problem, he said,
is essentially the same; perhaps, he suggested, the media controller could
be useful for controlling display pipelines as well. The idea did not
immediately take the world by storm, but it may give an indication of where
things will eventually need to go.
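To make that suggestion a little more concrete: the media controller
interface lets an application walk the entities making up a pipeline (and
rewire the links between them) with a handful of ioctl() calls. A minimal
sketch, assuming a media controller device at /dev/media0 and abbreviated
error handling, might look like:

    /*
     * Minimal media controller sketch: enumerate the entities that make up
     * a pipeline.
     */
    #include <fcntl.h>
    #include <linux/media.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct media_entity_desc entity;
        int fd = open("/dev/media0", O_RDWR);

        memset(&entity, 0, sizeof(entity));
        entity.id = MEDIA_ENT_ID_FLAG_NEXT;      /* start with the first entity */

        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
            printf("entity %u: %s (%u pads, %u links)\n",
                   entity.id, entity.name, entity.pads, entity.links);
            entity.id |= MEDIA_ENT_ID_FLAG_NEXT; /* ask for the next one */
        }

        close(fd);
        return 0;
    }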
The last few years have seen the consolidation of a lot of display-oriented
code into the kernel; that code is increasingly using common
infrastructure like the GEM memory manager. It is not hard to imagine that
this consolidation will continue to the point where the DRM subsystem
becomes the supported way for controlling displays, with the other
interfaces implemented as some sort of compatibility layer. The complexity
of the DRM code is, in the end, driven by the complexity of the hardware it
must drive, and that hardware does not look like it will be getting any
simpler anytime soon.