The LibreOffice office suite is a large program with an active, growing
community; it can be hard to keep up with what is happening with it.
Fortunately, LibreOffice developer Michael Meeks showed up at LinuxCon
Europe to update the audience on what LibreOffice has been up to and what
can be expected in the near future. The picture that emerged showed a busy
community rapidly adding features and fixing bugs while trying to ease the
way for new developers to join the project.
Much of the work in LibreOffice, Michael said, continues to be oriented
toward making the code base easier to work with. The community wants to
grow, and making the code easy to work with seems like a good way to make
progress toward that goal. Recently we have seen the completion of an
effort to replace the
build system which, he said, was "traditionally appalling." All it took to
make things better was "three years of hard work."
Another attempt to make things friendly for new developers is the easy hacks
page. Experienced developers routinely deny themselves the pleasure of
making an easy improvement, choosing instead to add a description of the
problem to this page. That allows new developers to come in and find a
straightforward project that lets them immediately make things better for
everyone.
A unique development tool used in the LibreOffice community is the
"bibisect" tool. A lot of changes can go in during a six-month development
cycle; by the time somebody finds a bug, it can be difficult to determine
which change caused it. Such problems call for bisection, but asking users
to go through the process of building LibreOffice a dozen or more times to
isolate a buggy commit is a hard sell. So, instead, the bibisect system
automatically creates a binary build every ten commits or so, then checks
the resulting binary in as a git commit. Git, it seems, is good at
compressing the result, to the point that the repository holding binaries for
the full LibreOffice development history fits in a mere 3GB of space. Now users can
quickly bisect a problem by checking out the binaries and running them.
Releases, past and future
The LibreOffice 4.0 release came out in January 2013, with a lot of new
stuff. Some of
the headline features Michael mentioned include range comments (allowing a
comment to be attached to a range of text), import of files in the RTF
format, "ink annotation" support (drawing over a document, essentially),
support for the CMIS protocol (allowing interoperability with applications
like SharePoint, Alfresco, Nuxeo, and more), Microsoft Publisher import,
Visio import, support for Firefox themes, and an implementation of the Logo
programming language.
LibreOffice 4.1 followed in May. A significant new feature in this
release is font embedding, where the fonts used in a document can be
embedded within the document itself. That improves interoperability
between office suites, and even between systems; one need no longer fear
the mess that results from the replacement of one's fonts when opening
a document on a new
machine. There is also support for the easy creation of slide decks from
photo albums, 90° rotation of images in Writer (Draw has long had
arbitrary rotation), GStreamer 1.0 support, an experimental side bar taken
from Apache OpenOffice, and about 3,000 bug fixes, 400 of which came from
OpenOffice. With those fixes, there are "only" 5,000 open bug reports for
LibreOffice — many of those, he said, are likely to be duplicates.
The 4.2 release can be expected in January 2014. It should feature an
improved "start center," the screen that appears when the application first
starts up. There will be a new math panel for improved entry of
mathematical expressions, a
much improved Android remote for running presentations, an iOS remote
control app, and integration with Google Drive.
Another big change in this release is a "data structure drug bust" in the Calc
spreadsheet application. Calc, it seems, suffers from "rampant object
orientation," with the concept of a cell used as the base class for
everything. That results in cells being used for everything, including
undo, change tracking, and so on; everything has to know about cells. The
result is extreme memory use, slow computation, and difficult code. The
new code gets rid of the cell base class and, whenever possible, stores
data contiguously in arrays. That lets things run faster with a lot less
memory; it also opens up possibilities for using the graphics processor
(GPU) for computations in the future.
4.2 will have a much faster XML parser. It is fashionable to be
against XML, Michael said; XML is seen as being slow to parse, but it
need not be. The LibreOffice developers have sped things up; they
have also split XML parsing into a separate thread that can execute in
parallel with ordinary processing. There will be better layout
integration, with many of the dialogs created with Glade, even though
LibreOffice is not based on GTK. The result should be more consistent
dialogs and, someday, the ability to use native toolkit dialogs.
This release will also have a fast Firebird embedded database. It replaces
HSQLDB, which, he said, is living proof that Java code can be difficult to
integrate with a C++ application. The configuration code will be cleaned
up. Even people who like to tweak their applications heavily might think
that the 25,000 knobs provided by LibreOffice are a bit much.
Configuration options are not being removed — Michael was clear that the
project doesn't believe in that — but many of them will be hidden in an
"expert" menu. StarBasic will support code completion. And there will be
better support for the display of character borders.
Further in the future, we should see a much-improved version of LibreOffice
running on Android. There is also a "CloudOffice" prototype HTML5
application, but there is not much additional work being done in that area yet.
Collaborative editing is seeing rather more work, with a new mechanism
(based on undo tracking) to help keep documents synchronized. Work is
progressing on "hybrid PDF files" where the OpenDocument file is embedded
within a PDF generated from it. That is a path toward the creation of
editable PDF files. Finally, "flattened" files run the normal file format
through a pretty printer, allowing them to be inspected (or changed) in an
ordinary text editor.
In summary, Michael said, the LibreOffice community is thriving and
growing. There are not only a lot of contributors; there are also a number
of companies participating; he suggested that projects supported primarily
by one company are somewhat risky to be a part of. But, he said, more
contributors are always welcome; the audience was invited to jump in and
start sending patches.
[Your editor would like to thank the Linux Foundation for travel support to
Edinburgh.]
One of the most popular sessions at any LinuxCon is the kernel panel,
Jim Zemlin, executive director of the Linux Foundation, said in his
introduction. Another lively panel made its appearance on October 22 in
Edinburgh, Scotland as part of LinuxCon
Europe. The panel was ostensibly focused on "core and embedded", but
predictably ranged further than that.
The panel was moderated by LWN executive editor Jonathan Corbet and had
four panelists who are developers from all over the kernel. Each panelist
introduced himself, with Greg Kroah-Hartman starting things off. He, of
course, works for the Linux Foundation on a number of different kernel
areas including USB and the driver core as well as both the stable and
staging trees, which he called "the two extremes in the kernel". Will
Deacon works for ARM on very low-level architecture-specific code like
memory barriers and TLB handling—"stuff that most people hate". He also
co-maintains the 64-bit ARM (aarch64) support in the kernel.
Sebastian Hesselbarth was invited as a "hobbyist" attendee to the Kernel
Summit. He started working on the mainline kernel about a year ago. Peter
Zijlstra from Red Hat rounded things out. He rattled off a number of
things that he co-maintains with Ingo Molnar including the scheduler, perf,
and lockdep. He also works on the memory management subsystem, IRQs,
timers, and various low-level architectures "when I have to". "I get
around a bit", he said.
Corbet noted that there is a huge amount of code coming in now from the
mobile and embedded world, mostly in the ARM tree, but that the contributions to the
core kernel (the scheduler, kernel, and mm directories) come from companies like
Red Hat and IBM. He asked "are we able to work together and make one
kernel that works for everybody?" Zijlstra said that it was a "fun
exercise" to make
one kernel that scaled from tiny little machines to the largest ones out
there.
That led Corbet to direct a question to Zijlstra about some frustrations he
had heard from the ARM community about getting scheduler patches reviewed.
Zijlstra said that he wants to avoid having a different scheduler for every
architecture. Power-awareness is important to ARM, but it will be (or
already is) important for others as well. There are several subsystems that currently
work together to handle power management (CPU idle, CPU frequency, and the
scheduler), but he wants to see a "coherent framework"
proposed that will help solve all the problems and not just "random tweaks
here and there". There is a mini-summit
for power-aware scheduling later in the week, he said, where he hoped that
some of the issues could be resolved.
But, as Deacon pointed out, the problem is not just for ARM and mobile;
the server community is interested in better power performance as well.
Zijlstra agreed but noted that it is "difficult to get a straight answer"
from that community about its needs. The hardware vendors really don't
want to talk to each other, he said. Deacon pointed out that it might just
be that they aren't used to talking to each other and that they see it as
easier to get things done when they don't have to. Kroah-Hartman said that the
reason hardware vendors start out with
drivers is because they are self-contained and don't require coordination
with other vendors. It is harder to change the core kernel, he said, as it
should be, but companies eventually get there. Neither Intel nor IBM
worked on the core a while back, but now that's a large part of what they
do. Moving into the core is simply a migration that many companies go
through.
The ARM big.LITTLE architecture is different than anything we have seen
before, moving us from symmetric multiprocessing (SMP) to asymmetric,
heterogeneous multiprocessing, Corbet said. He asked how that would impact
the scheduler. Zijlstra was upbeat about supporting that model, saying
that the scheduler already deals with SMP
systems that have asymmetric loads due to realtime or interrupt processing.
Big.LITTLE is different than that, certainly, but there are enough
similarities that some of the existing code can be adapted; the scheduler
already tries to balance "fairness" by tracking the realtime and interrupt
usage on the CPUs, and that accounting could map to more and less
powerful processors.
Kroah-Hartman isn't sure that the idea behind big.LITTLE will really pan
out. There are other system components (memory, buses, etc.) that take
more power and still need
to stay powered up even when only a "little" CPU is running, so it's not
clear that just turning off
power-expensive processors is enough of a win. It may be an "experiment
that is failing", he said. Deacon had not heard that, he said. Companies
are putting big.LITTLE into products, so they believe it will bring power
savings. We should see proof one way or another before too long, he said.
Large, small, and tiny
There have long been complaints that the kernel is enterprise-focused,
though they have died out some over the years; does that make it hard for the
embedded side, Corbet asked. Deacon said that there is a history of that
in the kernel, and that many of the maintainers came about during that era,
but that he, at least, didn't see a real problem with that. As the kernel
evolves, so do the maintainers and the focus; it all balances out.
Kroah-Hartman was quick to note that the enterprise systems of five or ten
years ago are now in many people's pockets, so those differences flatten out
over time. In addition, the choices rarely come down to making one side better at the
expense of the other, Deacon said, so people air their concerns and a
compromise is found.
Going the other direction, Corbet noted that Tim Bird has been concerned
that Linux is leaving tiny systems behind. Most embedded devices these
days are fairly capable systems with fewer of the memory and other constraints
of years past, but ignoring the really small systems may leave Linux out
of the "internet of things". He asked if the panelists were concerned about
that.
Zijlstra didn't think it was a real problem as there are patches to shrink
the kernel that are still accepted. In addition, a recent suggestion to
remove uniprocessor (UP) support—essentially turning it into an SMP system with
one processor, though with all of the extra SMP data structure overhead—was
rejected as there are still plenty of single-core processors out there and Intel
recently introduced more with the Quark UP line. Deacon noted that support for a
tiny ARM core ("M class") was recently merged as well. As long as people
are still running tiny systems and are willing to put in the work, support
for the super low-end should be fairly easy to maintain, but once the use
of UP systems goes away, for example, "that code will bitrot really fast",
Zijlstra said. "I can't wait for that day, but I'm afraid I will have to
wait a long time."
Somewhat controversially, this year's Kernel Summit committee set aside
slots for hobbyists, Corbet said, so he
wanted to explore that a bit. He asked the audience how many had contributed
a patch to the kernel and somewhere around one-third raised their hands;
perhaps half of those indicated they had done it on their own, not as part
of their job. On stage, Hesselbarth served as a representative from the
hobbyist community; Corbet started by asking him why he contributed to the
kernel on his own when so many around him were being paid to do so.
Hesselbarth said that he had a personal interest in tiny devices, but that
he probably was not a prototypical Linux hobbyist. He is a hardware
engineer with an interest in
building systems-on-chip (SoCs) for devices. He writes Linux drivers as
part of his job, which led him to work on the ARM SoC code on the side.
Corbet asked: "Is our community friendly?" He wondered if Hesselbarth was
able to get the answers he needed on mailing lists, for example.
Hesselbarth said that it depends on how you approach the mailing lists. If
you start out "kindly", most on the list will accept you, will look at your
code, and will correct you if you are wrong, even if they have done that
many times for others in the past. Zijlstra had some advice for anyone
looking for assistance: ask specific questions, and always on the mailing
list. Specific questions are easier to answer and emailing him privately
puts a message on "a very short path to /dev/null".
Corbet brought up the steady decline in the percentage of hobbyist
developer patches over the
years. Zijlstra and Deacon both thought that perhaps the absolute numbers
weren't really declining that much; it was just that the number of paid
contributors has risen so much. Kroah-Hartman said that people who show
any ability to get patches into the kernel immediately get job offers. He
knows of at least five people who were doing style patches to staging
drivers in their spare time, got offers, and now do kernel development full
time. It is quite difficult to find kernel developers, so companies watch
the lists and ask other kernel developers for leads on new employees, he said.
When everyone gets hired, though, that causes a problem as sometimes they
can no longer maintain the code they were working on. Or, as Zijlstra
pointed out, they get reassigned to a different project and maintenance
falters. Corbet mentioned that he sat in on a Matthew Garrett talk earlier
in the conference where Garrett
talked about some buggy code
he had merged before getting reassigned, leaving it all behind.
Kroah-Hartman recently backed out a change when the
email to the developer bounced. It is a problem, Kroah-Hartman said,
because a maintainer needs to trust that someone contributing will be around
to fix the code, "or I have to". He noted a "big, hairy" network change
that went in a few years ago, where, literally, email to the developer
started bouncing the day after it was merged. It took many months to
unwind those changes, which is part of what makes it difficult to get large
changes into the network subsystem today.
Device tree woes
Non-discoverable hardware and the ARM device tree solution for it were
next up. Once upon a time, the devices attached to a system could be
determined at run time, but that is often not the case anymore, so some kind of
external description of those devices is required. For ARM, device trees
are being used to do that, but there are some problems with the consistency
in how those
device trees are specified (i.e. the bindings). A bigger issue is whether
device tree bindings constitute an unbreakable kernel ABI, which means that
even more care is required before merging any device tree support. Corbet
asked: "Is device tree the right answer, what have we done wrong, and how
can we fix that?"
Deacon said that a full day of the ARM mini-summit (concurrent with the
conference) is devoted to trying to work out those problems. Device tree
has been an improvement over the older mechanisms, and it has allowed a lot
more code and drivers to go into the kernel. The ABI question is a
"religious" one and there is a fundamental disagreement between those who
think it is an unstable interface that shouldn't be used in any products
and those who think it is an unbreakable ABI that is fixed in stone. He
was hopeful that some of that could be ironed out in the mini-summit.
Hesselbarth found that device tree makes things harder for hobbyists
because they have to consider all of the possible ways that someone might hook
up a particular IP block when designing the device tree for it. There is
something of a trend to cut the static configuration data right out of the
drivers and essentially paste it into a device tree, Deacon said, which is
not the right way to approach creating a device tree entry. In addition,
Kroah-Hartman said that it is hard for driver maintainers to decide
whether to merge a driver because they are unsure if the device tree
support is "correct"—or even what "correct" looks like.
Kernel ARM maintainer Russell King recently put out a detailed critique of
device tree that Corbet said he hadn't had a chance to digest yet, but
which clearly concerned the ability of device tree to describe the complexities
of today's devices. They are no longer just a single device, often, but a
collection of devices connected by buses. "Is there a fundamental flaw
there?", he asked.
Kroah-Hartman likened the problem to that which the Video4Linux (V4L)
developers have been grappling with for years. There needs to be a way to
describe the devices and how they interconnect, which is a complex problem, but it
has to be captured somewhere. "What's
wrong with making it all discoverable?", Zijlstra asked. That's what
kernel developers want, but it's difficult to convince the hardware makers,
Kroah-Hartman said. Zijlstra's suggestion to "take them out back" and make
them "do what we want" was greeted with laughter, but lots of nodding heads
as well.
Corbet noted that there was a trend toward getting the knowledge from user
space, both in the V4L world with the media
controller interface and in the ION memory
allocator; are we getting to the point where we just can't solve those
problems in the kernel, he asked. Deacon said that he doesn't think it is
unsolvable, but it isn't fun to think about how to solve and it is easier
to think it is someone else's problem. He said that not only is hardware
configuration being pushed into user space, it is also being pushed into
firmware.
The hardware today is "plug and play" modules, Kroah-Hartman said, where
hardware makers buy IP blocks from multiple vendors and hook them all up in
different ways. Linux has drivers for each of the pieces, but not all the
different ways they might be hooked up and communicate with each other. The
hardware folks have solved the problem, but it needs to be handled on the
kernel side. He returned to the discoverable hardware idea, noting that
simple, static tables that could be read by the kernel would help solve the
problem. Zijlstra suggested that perhaps the tools used by the hardware
designers could be changed to make it easier to provide the kernel what it
needed. Hesselbarth seemed a bit skeptical that Synopsys and other tool
vendors would be all that interested in helping out.
The conversation turned to security and whether the kernel developers were
doing enough to deliver a secure kernel. Kroah-Hartman said that the
community can "always do better", but that the bugs reported to the
kernel security email address are fixed "as soon as we possibly can". He
noted that various
static code analysis tools are being run on the kernel and are finding lots
of bugs that get fixed right away. Some of the creators of those tools, Julia Lawall (Coccinelle) and Dan
Carpenter (Smatch) for example, have fixed more security bugs than anyone, he said. It
is an area that is ripe for more research and the community is always open
to better ways.
Some 4,000 patches were made to the 3.0 kernel over the two years it was
maintained as a stable kernel (it reached end of life with 3.0.101, which was released on the day of the
session). Corbet asked if that was reasonable. Deacon said that you have
to trust that kernel developers are not adding bugs on purpose, so when
they are found, they need to be fixed; "what's the alternative?", he asked.
Kroah-Hartman noted that many of those patches were for things like new
device IDs, but that the kernel developers are learning over time.
Mistakes were made—and fixed.
Corbet asked if we could expect fewer fixes for 3.10 (which will also be
supported for two years), but Kroah-Hartman said that there would likely be
more. He now maintains the stable kernel as part of his job, so he has
more time to find patches that need to go into the tree. Beyond that, the
code base has grown. But the number of fixes going into the kernel starts
to tail off significantly after a year, he said. For the stable updates
released on that day, 3.0 had eight patches, while 3.10 had nearly 100.
The world changes; processors speed up (which leads to new timing issues),
for example. "If we stop our rate of change, then we are dead", he said; we
have to keep up with the changes going on in the rest of the world.
Referring to Deacon's assertion that bugs are not being introduced on
purpose, Corbet asked the panel how it is we know that. The whole
maintenance system relies on trust, Deacon said; if that's missing, the
whole thing breaks down. Kroah-Hartman said that if you look at the known
exploits, they are all attacking some "stupid" mistake that someone
(including him) has made,
not some kind of introduced backdoor. People who research flaws per line
of code find Linux to have lower rates than anything else, he said, so we
are doing something right. Corbet pointed out that the
kernel developers introduce enough bugs on their own, so there is no real
need for anyone else to do so—to some chuckles across the stage.
While OpenBSD has fixed its "2038 problem"
(when 32-bit timestamps will wrap), Linux still has not, Corbet said, and
wondered if Linux would be ready for that event. He also asked whether
there will still be 32-bit processors around to be affected in 2038. Deacon noted that
32-bit ARM processors are shipping today, so it is hard to believe they
will all be gone in 25 years. Kroah-Hartman said that Intel came out with
a 486-based processor recently as well. He suggested that they could all come
out of retirement to fix the problems.
Corbet said that OpenBSD broke its ABI to
handle the change—something it can do because it ships its user space with the
kernel—but that is not something that Linux can do. Something
clever will be required to fix the problem, which suggests we should be
thinking about it now.
Deacon indicated that he thought an ABI break will eventually have to
happen. The real problem will be for devices that are being deployed now
that will still be running in 2038.
That led to the last question, which was how to handle things like control
groups that were added to the kernel, but were "wrong" both internally and
externally. The internal problem can be fixed relatively easily, but how
can we continue on without carrying tons of baggage from early mistakes at
the ABI level, he asked. Zijlstra said that even "simple things" need to
be written at least three times before you get them "sort of right". Other
systems have a way to deprecate things, and some of those have been tried
in Linux without success, Kroah-Hartman said.
The Linux method is to write
something new and wait for the old one to die, then try to sneak the code
out of the kernel and wait to see if anyone screams, Zijlstra said.
On the other hand, Deacon said, if you wait for a perfect solution, you'll
never get anywhere. Kroah-Hartman said that there is no model for Linux;
everything it is doing is new; it is at the forefront and we are learning how to deal with these kinds of
problems. "We do it all in public", unlike companies that struggle with
the same things, he said. There are lots of hard, and fun, problems to be
solved going forward, he said. It is a high-quality problem to have,
Corbet said to general agreement.
With that, the time was up and Zemlin retook the stage. Mostly, he wanted
to clear up some misinformation that he heard during the session: Kroah-Hartman would not be
retired by 2038, Zemlin said—to laughter from the assembled attendees.
[Thanks to the Linux Foundation for travel assistance to Edinburgh for
LinuxCon Europe.]
The GStreamer multimedia framework is around twelve years old, but it just made its official 1.0 release last year. The 1.0 milestone was, among other things, a statement of the project's commitment to keeping the API and ABI stable throughout the duration of the 1.x cycle. As several talks at the 2013 GStreamer Conference demonstrate, however, such stability does not mean that there is nothing left to be done in order to make GStreamer an appealing framework for developers. Instead, several ancillary projects have taken on the task and are building higher-level APIs meant to attract further development.
One of the higher-level API projects is GStreamer Editing Services (GES), which is a framework designed to support nonlinear video editing. There have been several GStreamer-based nonlinear editors (NLEs) in the past—most notably PiTiVi—but even fans of those applications would have to admit that they have not experienced the same level of (relative) popularity as seen by GStreamer-based media players.
Within the GStreamer community, the reason for this disparity is generally accepted as the fact that GStreamer itself is optimized for tasks like playback, capture, and transcoding. NLEs, while they obviously need to make use of such functionality, have their own set of primitives—for example, video effects and transitions between two (or more) tracks.
GES is a framework that implements NLE functions. Mathieu Duponchelle and Thibault Saunier spoke about it on the first day of the conference. While GES is now used as the NLE framework for PiTiVi, the two said, it is intended to serve as a general-purpose framework that developers can use to add video editing functionality to their own applications or to build other editors.
In fact, they explained, GES itself is a wrapper around another intermediary framework, GNonLin. The goal is for GES to offer just the higher-level NLE APIs, while GNonLin serves as the glue between GES and GStreamer itself. As such, GNonLin implements things like the GnlObject base class, which adds properties not found in base GStreamer elements like duration, start and stop positions, and an adjustable output rate (for speeding up or slowing down playback without altering the underlying file). Similarly, the GnlOperation class encapsulates a set of transformations on GnlObjects, as is needed to define filters and effects.
GES, in turn, defines the objects used to build an editing application. The most important, they said, is the idea of the GESTimeline. A timeline is the basic video editing unit; it contains a set of layers (GESLayers in this case) stacked in a particular priority. GESLayers contain audio or video clips, and the timeline can be used to re-arrange them, to change the layer stacking order, and to composite layers together. But ultimately a GESTimeline is just a GStreamer element, the speakers said, so it can be used like any other element: its output can be plugged into a pipeline, which makes it easy for any NLE to output video in a supported GStreamer format or to send it to another application.
GES also defines several features of interest to NLE users, they said. First, it has a high-level Effects API, which is a wrapper around GStreamer filter elements. The Effects API exposes features necessary for using video effects in an editor, such as keyframes. Keyframes are control points in a media track, where the user can set a property of interest (for example, audio volume). GES will automatically interpolate the property's value between keyframes, allowing smooth changes. But GES also implements some of the most common transition effects, like cross-fading and screen swipes, making those effects trivial to use. The previous version of GES was not nearly as nice, they said; it required the user to manually create and delete even simple transition effects.
GES's other editing APIs include an advanced timeline editing API, which implements trimming a clip, "rippling" a timeline (which shifts all of the clips further down the timeline whenever a change is made to a clip earlier in the timeline), and "rolls" (switching instantly between two clips on different tracks). GES attempts to implement the most-often-used features by default, so for instance it automatically rescales imported clips to be the same size, but this behavior can be manually switched off when not needed. There is also a titling API, which overlays a text layer on top of a video track.
GES is currently at version 1.1.90, and should reach 1.2.0 shortly—which will be compatible with GStreamer 1.2 (which was released September 25). It represents nearly two years of work, they said, and although they are doing a lot of testing and debugging, GES naturally needs real-world testing on real-world media clips in order to really uncover all of its bugs. They have an integrated test suite that tests a lot of media formats (input and output) and the various effects, but real-life scenarios are often quite different.
PiTiVi is meant to be a general-purpose NLE, they said, but there are several different editing scenarios they hope GES will be useful for, such as an online video editor (perhaps a collaborative one) and a live-editing NLE for use with broadcasting. GES should also be useful for any GStreamer application that needs to mix video tracks; even if you just have two tracks, they said, mixing them in GES will be easier than doing it in lower-level GStreamer pipelines.
The work on GES is not finished, they said. Things still on the to-do list include playback speed control (the first implementation of which is being worked on by a Google Summer of Code student), automatic hardware acceleration (which is scheduled for GStreamer itself), nesting timelines (for combining multiple scenes that are edited separately into a longer finished product), and proxy editing (where a low-resolution version of a video is used in the editor but the high-resolution version is used for the final output). The latter two features are important for high-end video work.
Playback made simple
In contrast to GES, which has been developed in the open for several years, the other new GStreamer API layer discussed at the conference was Fluendo's new media player API, FluMediaPlayer, which is not open source ... yet. As Julien Moutte explained it in his session, the goal of the player is to fill in a missing piece that keeps GStreamer from being used in more third-party projects.
Ultimately, Fluendo wants world domination for GStreamer, Moutte said, so when the company sees a recurring problem, it wants to fix it. Consequently, Fluendo has been spending time at developer events for Android, Windows, and OS X: the platforms where GStreamer is available but not dominant. One of the most common problems encountered by people incorporating GStreamer into their products is that they want to use GStreamer's powerful playback functionality to put a video on screen, but they do not want to take a course or spend a lot of time learning GStreamer internals to do so. In other words, GStreamer needs to improve its "ease of integration" offerings with simple, high-level APIs.
Of course, there is already an abstraction intended to provide drop-it-in-and-run playback: playbin. But using playbin still requires developers to learn about GStreamer events, scheduling, and language-specific bindings, he said. There are other good options, such as the Bacon Video Widget, but that widget is GTK+-specific and GPL-licensed, which means many third-party developers will not use it.
Fluendo's solution is FluMediaPlayer, a new player library that implements the same feature set as playbin2 (the current incarnation of playbin), and is built on top of the GStreamer SDK. The SDK is a bit of a controversial topic on its own; it was created by Fluendo and Collabora specifically to target third-party developers, many of whom are on proprietary operating systems. It is also not up-to-date with the latest GStreamer release (relying instead on GStreamer 0.10), but Moutte said the company intends to handle the transition from 0.10 to 1.x transparently with FluMediaPlayer. The player also adds some new features, such as platform-specific bindings and the ability to connect playback to a DRM module.
FluMediaPlayer uses a simple C API; there are no dependencies to worry about, Moutte said: "just use the header." The player object takes a URI as input, creates the output media stream, and listens for a simple set of events (_play, _stop, _close, etc.). Media streams are created by the player on demand, sparing the developer from setting up parameters by hand; newer streaming protocols like DASH are supported; and multiple players can run simultaneously, each with its own controls and its own error events.
Moutte then said that his aim for the talk was to get feedback from the GStreamer community: was this a good approach, for example, and if Fluendo were to open the source code up, would others be interested in participating? The company was already moving forward with the product, Moutte said, but he hoped to take the temperature of the GStreamer community and make a case to management for releasing the source. By a show of hands, it seemed like most people liked the approach, but opinion was more divided about participating. One audience member observed that several of the features Moutte had described should be landing in upcoming GStreamer releases, which makes the player seem less appealing. Another commented that the audience present might not be the best group to ask—after all, it is a self-selected group of people who are quite comfortable digging into GStreamer itself. Ideally, such a player would draw in new developers not already working with GStreamer.
Of course, the FluMediaPlayer product can certainly coexist with other GStreamer initiatives. Furthermore, if GStreamer itself implements several of the higher-level features built into FluMediaPlayer, that will not reduce the product's appeal to outside developers, but it will simplify Fluendo's maintenance. There does seem to be general agreement that GStreamer itself is technically sound at this point in its history, and that the next big hurdle is building a layer of services on top of it: services that, up until now, many GStreamer users have had to re-implement on their own.
[The author would like to thank the Linux Foundation for travel assistance to Edinburgh for GStreamer Conference 2013.]
Page editor: Jonathan Corbet