By Jonathan Corbet
May 25, 2010
The 2.6.35 development cycle has been a busy one for the Video4Linux2
subsystem, with quite a bit of new infrastructure and some new drivers
merged. This article will provide an overview of some of the new
capabilities in V4L2.
Memory-to-memory devices
Video hardware often includes subsystems which are capable of processing
video streams in various ways. The VIA chipset that your editor has
recently been working with, for example, has a "high quality video" (HQV) engine
which can be used to change video formats, rotate frames, convert between
color spaces, perform deinterlacing, and more. It is not uncommon for
video drivers to make use of an engine like this to make a wider range of
formats and options available to applications. When used in this mode, the
processing engine is hidden from user space; it looks like a part of the
video input or output device.
But there can be value in making the video processing engine available as a
device in its own right; applications will then be able to use it to
accelerate operations on video data. The kernel has made other
data-transformation engines available through various interfaces, the
"dmaengine" API in particular. Simple DMA engines can move data around and
possibly perform a transformation - exclusive-or with a value, for
example. More complex engines can perform cryptographic transformations,
and, indeed, are used for this purpose within the kernel's cryptographic
code.
The dmaengine API has not been used for video processing engines, though.
Your editor has not been told the reasons for that decision, but there are
a couple of obvious guesses, starting with the fact that video engines
might, in fact, not do DMA.
For example, the VIA HQV engine requires that the relevant data be present in
framebuffer memory. Perhaps more telling, though, is the complexity of the
transformations which might be performed. Video data streams have an
appalling number of formats and parameters; it takes a fairly complex API
to allow applications to describe the sort of stream they want to deal
with. Such an API could certainly be created for a new video processing
facility, but that API already exists in the form of the V4L2
specification. It makes far more sense to reuse that API than to try to
create it anew. Reusing the API happens naturally if the video processing
engine looks like a V4L2 device in its own right.
So the new memory-to-memory (m2m) infrastructure is designed to enable the
creation of V4L2 devices which move video frames from one memory buffer
to another, performing some sort of transformation on the way. Frames are
fed to the device as if it were an ordinary video output device, with all
of the appropriate configuration done to describe the format of those
frames. The video input side is, instead, configured with the format the
application would like to have. The driver takes frames written to the
device by applications, runs them through the processing engine, then makes
them available for reading as if it were an ordinary video capture device.
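To make the flow concrete, here is a minimal sketch of what an application might do with such a device; the device node, resolution, and pixel formats are assumptions made for the example, and error checking is omitted:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int setup_m2m_device(const char *path)
    {
        struct v4l2_format out, cap;
        int fd = open(path, O_RDWR);

        /* Describe the frames the application will feed in. */
        memset(&out, 0, sizeof(out));
        out.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        out.fmt.pix.width = 640;
        out.fmt.pix.height = 480;
        out.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        ioctl(fd, VIDIOC_S_FMT, &out);

        /* Describe the processed frames it wants back. */
        memset(&cap, 0, sizeof(cap));
        cap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        cap.fmt.pix.width = 640;
        cap.fmt.pix.height = 480;
        cap.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
        ioctl(fd, VIDIOC_S_FMT, &cap);

        /* From here it is the usual REQBUFS/QBUF/STREAMON sequence on
           both queues; processed frames come back via DQBUF on the
           capture side. */
        return fd;
    }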
The m2m API will only be discussed in the most superficial way here; see
<media/v4l2-mem2mem.h> for more details. Drivers start by
defining a set of callbacks:
    struct v4l2_m2m_ops {
        void (*device_run)(void *priv);
        int (*job_ready)(void *priv);
        void (*job_abort)(void *priv);
    };
The device_run() callback will be called when there is work for
the engine to do; job_abort(), instead, is called when the engine
must be stopped as quickly as possible. The optional job_ready()
function should return a nonzero value if the driver could currently start
a job without sleeping.
The callbacks are registered with:
    struct v4l2_m2m_dev *v4l2_m2m_init(struct v4l2_m2m_ops *m2m_ops);
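As an illustration, a driver might define and register its operations roughly as follows; the "mydev" names are hypothetical, the callback bodies are stubs, and the assumption that v4l2_m2m_init() returns an ERR_PTR() value on failure follows the in-tree test device:

    #include <linux/err.h>
    #include <media/v4l2-mem2mem.h>

    /* Hypothetical per-file context; a real driver carries more state. */
    struct mydev_ctx {
        struct v4l2_m2m_ctx *m2m_ctx;
    };

    static struct v4l2_m2m_dev *m2m_dev;

    static void mydev_device_run(void *priv)
    {
        /* priv is the mydev_ctx handed to v4l2_m2m_ctx_init(); start
           the hardware on its next queued source buffer here.
           Completion is reported asynchronously. */
    }

    static void mydev_job_abort(void *priv)
    {
        /* Stop the engine as quickly as possible. */
    }

    static struct v4l2_m2m_ops mydev_m2m_ops = {
        .device_run = mydev_device_run,
        .job_abort  = mydev_job_abort,
        /* .job_ready is optional */
    };

    static int mydev_register_m2m(void)
    {
        m2m_dev = v4l2_m2m_init(&mydev_m2m_ops);
        return IS_ERR(m2m_dev) ? PTR_ERR(m2m_dev) : 0;
    }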
When the device is opened, the driver should make a call to:
    struct v4l2_m2m_ctx *v4l2_m2m_ctx_init(void *priv,
            struct v4l2_m2m_dev *m2m_dev,
            void (*vq_init)(void *priv, struct videobuf_queue *,
                            enum v4l2_buf_type));
The priv value will be passed through to the above-described
callbacks. Two calls to vq_init() will be made, one each for the
input and output buffer queues; vq_init() should, in turn, make a
call to the appropriate videobuf initialization
function.
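Continuing the sketch above, an open() method might look like this; it assumes (as the in-tree test device does) that v4l2_m2m_ctx_init() returns an ERR_PTR() value on failure:

    #include <linux/slab.h>

    static void mydev_vq_init(void *priv, struct videobuf_queue *vq,
                              enum v4l2_buf_type type)
    {
        /* Call the videobuf initialization function matching this
           device's memory model (vmalloc, sg, dma-contig, ...) here,
           passing "type" through so each queue is set up for its role. */
    }

    static int mydev_open(struct file *file)
    {
        struct mydev_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

        if (!ctx)
            return -ENOMEM;
        ctx->m2m_ctx = v4l2_m2m_ctx_init(ctx, m2m_dev, mydev_vq_init);
        if (IS_ERR(ctx->m2m_ctx)) {
            int err = PTR_ERR(ctx->m2m_ctx);

            kfree(ctx);
            return err;
        }
        file->private_data = ctx;
        return 0;
    }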
There is a whole set of helper functions meant to be used to implement many
of the V4L2 operations: v4l2_m2m_reqbufs(),
v4l2_m2m_qbuf(), v4l2_m2m_streamon(),
v4l2_m2m_mmap(), etc. They handle most of the heavy lifting of
implementing a memory-to-memory driver, calling the device_run()
callback when there is work to do and buffers are available. As the driver
fills buffers with processed frames and passes them back to the videobuf
system, they will, in turn, be handed back to the application.
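In practice that means the driver's ioctl callbacks can be thin wrappers; continuing the hypothetical sketch:

    static int mydev_reqbufs(struct file *file, void *priv,
                             struct v4l2_requestbuffers *reqbufs)
    {
        struct mydev_ctx *ctx = priv;

        return v4l2_m2m_reqbufs(file, ctx->m2m_ctx, reqbufs);
    }

    static int mydev_qbuf(struct file *file, void *priv,
                          struct v4l2_buffer *buf)
    {
        struct mydev_ctx *ctx = priv;

        return v4l2_m2m_qbuf(file, ctx->m2m_ctx, buf);
    }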
Once again, most of the details have been glossed over, but that's the core
of what this API does. As of this writing, there are no drivers for real
hardware using this API in the mainline, though some have been posted for
review. A sample user can be seen in
drivers/media/video/mem2mem_testdev.c.
Events
The V4L2 API is dominated by the description of video streams and the
actual movement of frames. There are other things of interest, though,
which may happen in an asynchronous manner. To support the passing of
asynchronous events to user space, a new set of event operations has been
added. The initial use of this code is to allow applications to request
notification of vertical sync events or the loss of the video stream.
At the user-space API level, there are a few additions to the seemingly
endless list of V4L2 ioctl() commands:
VIDIOC_SUBSCRIBE_EVENT, VIDIOC_UNSUBSCRIBE_EVENT, and
VIDIOC_DQEVENT. Once an application has subscribed to one or more
events, it can learn about them in the usual ways, including signals and
poll(); a VIDIOC_DQEVENT call allows the application to
see what actually happened.
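From the application's point of view, subscribing to and dequeuing an event might look like this sketch (error handling omitted; the wait itself is left to poll() or a signal handler):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static void watch_vsync(int fd)
    {
        struct v4l2_event_subscription sub;
        struct v4l2_event ev;

        memset(&sub, 0, sizeof(sub));
        sub.type = V4L2_EVENT_VSYNC;
        ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

        /* Wait for an event with poll() (or a signal), then: */
        if (ioctl(fd, VIDIOC_DQEVENT, &ev) == 0 &&
            ev.type == V4L2_EVENT_VSYNC) {
            /* ev.u.vsync.field identifies the field just completed. */
        }
    }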
Within the kernel, the event API starts with a new mechanism for the
management of file handles associated with a device. Each new open file
must be set up with:
    #include <media/v4l2-fh.h>

    int v4l2_fh_init(struct v4l2_fh *fh, struct video_device *vdev);
    void v4l2_fh_add(struct v4l2_fh *fh);
That sets up the connections which allow V4L2 drivers to associate things
(like events) with specific open files.
A driver which supports events should start with a call to:
    #include <media/v4l2-event.h>

    int v4l2_event_alloc(struct v4l2_fh *fh, unsigned int n);
This call will try to ensure that storage is available for at least
n events on this file handle.
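Putting the pieces together, the per-open setup in a driver might look like this sketch; the names are hypothetical and the queue depth of eight is arbitrary:

    #include <linux/slab.h>
    #include <media/v4l2-fh.h>
    #include <media/v4l2-event.h>

    static int mydrv_open(struct file *file)
    {
        struct v4l2_fh *fh = kzalloc(sizeof(*fh), GFP_KERNEL);
        int ret;

        if (!fh)
            return -ENOMEM;
        ret = v4l2_fh_init(fh, video_devdata(file));
        if (ret)
            goto free;
        ret = v4l2_event_alloc(fh, 8);  /* room for 8 pending events */
        if (ret)
            goto uninit;
        v4l2_fh_add(fh);
        file->private_data = fh;
        return 0;

    uninit:
        v4l2_fh_exit(fh);
    free:
        kfree(fh);
        return ret;
    }

Actual events are signaled with: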
    struct v4l2_event {
        __u32 type;
        union {
            struct v4l2_event_vsync vsync;
            __u8 data[64];
        } u;
        /* ... */
    };

    void v4l2_event_queue(struct video_device *vdev,
                          const struct v4l2_event *ev);
In addition, there is the usual set of helper functions
(v4l2_event_dequeue(), v4l2_event_subscribe(), ...) meant
to be called from the driver's V4L2 callbacks.
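For example, a driver which has just seen a vertical sync might queue an event from its interrupt handler roughly like this (the surrounding device structure is an assumption of the sketch):

    #include <linux/string.h>
    #include <media/v4l2-event.h>

    /* Hypothetical device structure holding the video_device pointer. */
    struct mydrv {
        struct video_device *vdev;
    };

    static void mydrv_vsync_interrupt(struct mydrv *dev)
    {
        struct v4l2_event ev;

        memset(&ev, 0, sizeof(ev));
        ev.type = V4L2_EVENT_VSYNC;
        ev.u.vsync.field = V4L2_FIELD_TOP;  /* the field just completed */
        v4l2_event_queue(dev->vdev, &ev);
    }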
Currently, events are only supported by the ivtv driver, so that is the
place to look for a usage example.
The infrared core
Back in December, LWN looked at
the state of infrared receiver drivers in the kernel - or, in the case
of the long out-of-tree LIRC subsystem, out of the kernel. That discussion
has long since died down, but the hacking did not stop. The result is
that, with 2.6.35, the V4L2 subsystem has a new framework for IR
receivers. There is support for a number of controllers at this point.
The IR core also includes an interface through which drivers (or user
space) can feed simple "mark" and "space" timing information, which is then
decoded in software; that should make it possible to hook a number of
user-space LIRC drivers into the system.
Documentation on the new IR core is sparse; basic kernel API information
can be found in include/media/ir-core.h and ir-common.h.
In conclusion
It has been a busy merge window for one of the most active subsystems in
the kernel. Over the last few years, the V4L2 subsystem has built up an
impressive amount of infrastructure and has reached the point where it
supports almost all of the available hardware. That said, there is still
quite a bit of work in progress; traffic on the mailing list is concerned
with multi-plane video buffer support, a new control framework, and more.
So future merge windows are likely to be busy in this part of the tree as
well.