At the end of April, Lennart Poettering announced
initial availability of systemd, a new system initialization and session
management daemon. This announcement caused a bit of surprise and concern
for those who didn't know it was coming. Lennart's work with PulseAudio
remains a bit of a difficult memory for some users (though it seems to be
working well for most people now), and some people had thought that the
problem was solved with the growing adoption of upstart. Systemd is a
different approach, though, which may yet prove sufficiently compelling to
motivate another big change.
There are many new features in systemd, but the core change is a concept
stolen from the MacOS launchd daemon - and from others that came before it.
There are (at least) two ways to ensure that a service is available when it
is needed: (1) try to keep track of all other services which may need
it and be sure to start things in the right order, or (2) just wait
until somebody tries to connect to the service and start it on demand.
Traditional Linux init systems - and upstart too - use the first approach.
Systemd, instead, goes for the second. Rather than concern itself with
dependencies, it simply creates the sockets that system daemons will use to
communicate with their clients. When a connection request arrives on a
specific socket, the associated daemon will be started.
This approach simplifies the system configuration process because there is
no longer any need to worry about dependencies between services. It holds
out the promise of a faster bootstrap process because nothing is started
before it is actually needed (plus a fair amount of other work has been
done to improve boot time). The systemd approach to managing daemons
allows a fair amount of boilerplate code to be removed, at least under the
(possibly difficult) assumption that the daemon
no longer needs to work with other initialization systems. Lennart clearly
thinks that it is a better way to manage system processes, and a number of
others seem to agree.
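The boilerplate reduction comes from the socket-passing convention: a daemon adapted for socket activation checks whether the init system has already handed it a listening socket and only creates its own as a fallback. The real C API for this is sd_listen_fds() in libsystemd; the sketch below mimics the convention (the LISTEN_PID/LISTEN_FDS environment variables, passed file descriptors starting at fd 3) in Python, with the function name invented for illustration.

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first passed fd, per systemd's convention

def get_listen_socket(port):
    # Sketch of the fd-passing convention; not systemd's actual code.
    # If the init system handed us sockets, adopt fd 3; otherwise fall
    # back to creating and binding our own, as a traditional daemon must.
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", port))
    sock.listen(5)
    return sock
```

A daemon written this way works both under a socket-activating init and when started by hand, which is exactly the dual-mode support the article says must otherwise be patched in.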
That said, there are some obstacles to the widespread adoption of systemd
by distributors. To begin with, a number of them are just now beginning to
use upstart in its native mode; the idea of jumping into another transition
is not necessarily all that appealing. Daemons must be patched to work
optimally with systemd; otherwise the socket-based activation scheme is not
available. The patching is a relatively simple task, but it must be done
with a number of daemons and the result accepted back upstream. There are
also concerns about how well some types of services (CUPS was mentioned)
will work under systemd, but Lennart seems to think those concerns can be
addressed.
Another area of concern, strangely enough, is the use of control groups
(cgroups) by systemd. Cgroups are a Linux-specific feature initially
created for use with containers; they allow the grouping of processes under
the control of one or more modules which can restrict their behavior.
Systemd uses cgroups to track daemon processes that it has created; they
allow these processes to be monitored even if they use the familiar daemon
tricks for detaching themselves from their parents. So if systemd is told
to shut down Apache, it can do a thorough job of it - even to the point of
cleaning up leftovers of rogue CGI scripts and such.
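The tracking works because a process cannot escape its control group the way it can escape its parent via the double-fork trick; a supervisor can always read a process's group membership back out of /proc/&lt;pid&gt;/cgroup. The parser below is a sketch of reading that file's hierarchy-ID:controllers:path format, not systemd's implementation.

```python
def parse_cgroup(text):
    # Each line of /proc/<pid>/cgroup has the form
    #   hierarchy-ID:controller-list:cgroup-path
    # A supervisor can read this file to learn which group a process
    # belongs to, even after the process has detached from its parent.
    # This parser is an illustrative sketch, not systemd code.
    groups = {}
    for line in text.strip().splitlines():
        _hier, controllers, path = line.split(":", 2)
        groups[controllers] = path
    return groups
```

Given such a mapping, shutting down a service cleanly reduces to signaling every process whose cgroup path falls under the service's group.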
Cgroups would also make it easy for system administrators to set up
specialized sandboxes for daemons to run in. The problem there is that
there is no easy way for systemd to pick up a cgroup setup already created
by somebody else; there is no transparent inheritance for cgroups now. So
Lennart is asking for that type of
inheritance to be added.
Beyond that, though, some people have concerns about the use of cgroups in
the first place. Peter Zijlstra worries
about adding yet another feature which must be built into the kernel for
the system to even boot. The Debian community does not like systemd's use
of the "debug" cgroup subsystem, which is not currently configured into its
kernels. Systemd
may eventually get a more appropriately-named cgroup subsystem for its use,
but it is not going to work without the cgroup feature at all. So people
wanting to boot systems with systemd will need to have cgroups built in.
Lennart has this message for people who
don't like that:
Next time something is added to the kernel please mark it as "Hey,
please don't use it, this is only here so that you don't use
it. Thanks!" Maybe then dumb-ass folks like me will notice and
refrain from using it.
There are also claims that work on systemd
is primarily motivated by antipathy toward Ubuntu and, especially, its
copyright assignment policies. There can be, at most, a bit of truth in
that; hearing early talk about the work which became systemd is part of
what inspired this article on
assignment policies back in October. That said, Lennart insists that the motivations behind systemd
are technical, and he asks that it be judged on its technical merits.
So where do things stand with regard to adoption of systemd?
- There is an
intent to package bug filed for Debian; the packager plans
to make it easy to switch between sysvinit and systemd at boot time.
- Lennart plans to have a systemd
package ready for Fedora 14, saying "whether we can have it
as default is to be seen". Given that the Fedora 14 cycle
has already begun, even thinking about making a change that
fundamental the default seems ambitious. So it may be a hard sell,
but Lennart would like to see it: "It would certainly be a shame
though if other distros would ship systemd by default before we do".
- Gentoo has an
experimental systemd package available, but it has not found its
way into the main distribution yet.
- openSUSE is apparently (according to Lennart's original
announcement) discussing it internally, but, as is often the
case with openSUSE, there is no public indication that it is being
considered.
- Ubuntu seems unlikely to consider a change anytime soon.
So it is not clear that any distribution will make the jump to systemd.
But, then, even the above is a fair amount of attention for a project which
has been public for less than one month. This program has reopened the
discussion on how our systems should initialize themselves, and things may
go on from there: there is talk of using systemd to take over the tasks of
processes like cron and gnome-session. Regardless of who
ends up running systemd, the ideas it expresses are likely to influence
development for some time.
Although many proponents of free software and an open web don't like
Flash, the multimedia platform has become so ubiquitous that it is
difficult to imagine the web without it. However, Flash support has always
been a challenge for Linux distributions. Adobe has had a proprietary Linux release of its Flash player software for years now, but only for the x86 processor architecture. Meanwhile, open source projects trying to recreate Flash functionality are lagging behind and struggling with lack of manpower. Luckily, there are also some interesting new technical developments in the open source Flash world. One that sparked our interest recently is Lightspark, which was written from scratch based on the SWF documentation Adobe published in June 2009 as part of the Open Screen Project.
The official Flash player
For years, Adobe treated Linux as a second-class citizen. As recently as 2007, Linux users had to wait six months after the Windows release for their version of Adobe Flash 9. At the end of 2008, that changed: with the release of Flash Player 10, Adobe released versions for Windows, MacOS X and Linux on the same day. However, that's not to say there are no problems with the proprietary Flash player. 64-bit support is still a sore point: although it's possible to use the Adobe Flash player on an x86_64 Linux system using a 32-bit emulation layer such as nspluginwrapper, native 64-bit support is only available as an alpha version that was first released in November 2008.
In hindsight, it's ironic that, as late as it may come to the party,
Linux is the first platform that gets a peek at a 64-bit Adobe Flash
player. In its FAQ
for the 64-bit prerelease, Adobe writes:
We chose Linux as our initial
platform in response to numerous requests in our public Flash Player bug
and issue management system and the fact that Linux distributions do not
ship with a 32-bit browser or a comprehensive 32-bit emulation layer by
default. Until this prerelease, use of 32-bit Flash Player on Linux has
required the use of a plugin wrapper, which prevents full compatibility
with 64-bit browsers. With this prelease [sic], Flash Player 10 is now a
full native participant on 64-bit Linux distributions.
Open source approaches to Flash
But x86 and preliminary x86_64 support for Flash obviously isn't enough
in the open source world. Granted, Adobe is or has been working with some
mobile phone manufacturers to offer a version for ARM (for example on MeeGo
or Android), but people running a Linux desktop system on a non-Intel
processor are left in the cold. Until last year, your author was in exactly
this position, running Debian on a PowerMac G5. If non-Intel users want to
run the official Flash player, they have to use ugly solutions such as
running Flash in an x86 emulator.
Luckily there are some open source programs recreating Flash
functionality, of which the most well-known is Gnash ("GNU Flash"), which
also runs on PowerPC, ARM and MIPS processors. It's not even limited to
Linux: Gnash also supports FreeBSD, NetBSD, and OpenBSD, so it pleases a
lot of people who don't want to run proprietary software on their open
source operating system but need to be able to view Flash content. In March
we looked at the current state of
affairs of Gnash when project lead Rob Savoye talked about the project at SCALE 8x.
Although Gnash has been progressing well, the nature of the project
means that it will always be chasing Adobe. Moreover, Gnash is facing some
manpower challenges. The Open Media
Now! foundation was started in 2008 to fund Gnash development, but,
because of the economic crisis, the four full-time developers were cut back
to zero, Gnash developer Bastiaan Jacques said last
year. Recently, another issue appeared: a growing disagreement between
the two top
contributors Benjamin Wolsey and Sandro Santilli on the one hand, and
Rob Savoye on the other hand.
Different development styles
It all started with a message by Benjamin Wolsey to the Gnash-dev mailing list on Friday 21 May:
Recently there have been several commits to Gnash that break the testsuite, make Gnash unstable, and have serious issues with code quality.
Unfortunately this means that I have to spend considerable time reverting
faulty changes, reimplementing things properly, and fixing the testsuite
just to stop the damage spreading.
At the end of his message, Benjamin announced that he would start his
own stable branch of Gnash if another commit of this sort appeared,
implicitly threatening a fork. Benjamin's accusations seemed to be
primarily aimed at Rob, who answered
that the usual policy of free software projects is that frequent checkins
are good. However, Sandro
Santilli added that this policy would only work if the checkins are
small and do not break the test suite. Then the discussion became somewhat nasty with general accusations thrown back and forth, but Rob soon pinpointed the central point of disagreement:
We have very different coding styles. I prefer to work very publically, checking code in frequently, and then fixing it over the next few checkins. This is the way most all free software projects I've been involved in for 20 years operate.
Rob also defended himself against the accusation that he doesn't consider testing important: "Remember, I wrote the majority of our testsuite, so I think it's fair to say I consider testing important." But he also wants to focus on new features and he has the impression that this doesn't work when the "stable branch" has to remain stable all the time:
Instead I get endless rewrites of existing code, all aimed towards "code quality". This does not advance Gnash at all, which is why our funding evaporated.
John Gilmore tried to get the two parties back together behind their common cause ("We need each other, guys"), and Sandro suggested to use an experimental branch for code that breaks things.
However, because Benjamin reverted one of Rob's commits and threatened
to do it again in the future, Rob removed
Benjamin's commit rights to the Savannah repository for Gnash, because
he doesn't want to "allow a power-hungry developer to continue to
reverting my changes." In the meantime, Sandro worked on some
improvements and asked where
he should commit the code: to the Gnash trunk where Benjamin couldn't
review it and Rob maybe wouldn't accept the changes, or to a fork, which
would make the project diverge? Sandro obviously still cares for the Gnash
project and rightly fears that a fork would not be good for the common
cause.
After a few nights of sleep, Benjamin, Sandro, and Rob seem to acknowledge
that they have different development, project management, and communication
styles, that they all made mistakes, and that they were at times too rude
in their responses. At the time of this writing, they were still on speaking
terms on the #gnash irc channel on Freenode and were actively trying to
reach a consensus and drafting some new commit rules (including
"Commits shall not be reverted except as a last resort." and
"No code shall be committed that causes failures in existing
tests."), so this whole crisis may well result in a better development process for the project.
The death of Swfdec
Another Flash decoder, Swfdec, quietly ceased
development a while ago. The last stable release, 0.8.4, is from December
2008, and the last
commits are from December 2009. Swfdec has been primarily run by one
person, Benjamin Otte, but he seems to have lost interest, although he is
still occasionally answering questions on the Swfdec mailing
list. In response to a question by Puppy Linux developer Barry Kauler in January of this year, Benjamin announced the death of his project in one sentence:
That said, active Swfdec development has pretty much stopped, so you'll likely not see any new features in the near future anyway.
The fact that Benjamin started a new job in
Red Hat's desktop team in January of this year is surely no
coincidence: it should remind us that a project with just one core
developer always has a fragile future because big changes in the developer's life can result in less time to work on the project.
A new open source Flash player
Development of Gnash and Swfdec was done using reverse engineering because Adobe only offered the SWF specification with a license that forbids the use of the specification to create programs that play Flash files. In June 2009, Adobe launched the Open Screen Project which made the SWF specification available without these restrictions. This made it possible for Alessandro Pignotti to work on a new open source Flash player, entirely based on this official SWF documentation. A part of this project is based on his bachelor's thesis at the university of Pisa, called An efficient ActionScript 3.0 Just-In-Time compiler implementation [PDF].
The result is Lightspark, which includes
OpenGL-based rendering, a mostly complete implementation of Adobe's
ActionScript 3.0 using the LLVM compiler, and a Mozilla-compatible browser plug-in. Because Lightspark has been designed and written from scratch based on Adobe's documentation, it promises a clean code base optimized for modern hardware.
By using OpenGL instead of XVideo, Lightspark allows for
hardware-accelerated rendering using OpenGL shaders. Moreover, this opens
the path for supporting blur and other effects that are implemented by efficient shaders. Another possibility is using OpenGL textures to display video frames, which is less efficient than XVideo but more flexible. For example, it makes it possible to implement the overlay and transformation effects that Flash supports.
For ActionScript 3.0 (introduced in Flash 9), Lightspark has both an interpreter and a JIT compiler that uses LLVM to compile ActionScript to native x86 code. However, because the previous ActionScript versions run on a completely different virtual machine, Alessandro has decided to not support them. This means that currently it's not really possible to compare the performance of Lightspark with that of Gnash: while Lightspark only supports ActionScript 3.0, Gnash only supports previous versions of the scripting language.
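The interpreter-plus-JIT split described above can be illustrated with a toy. This is emphatically not Lightspark code: Lightspark feeds ActionScript 3.0 bytecode to LLVM for native x86 code generation, while this sketch merely "compiles" a bytecode sequence into a Python closure once it has been seen often enough. All names here are invented.

```python
HOT_THRESHOLD = 2  # runs before a code sequence is considered "hot"

def interpret(code, x):
    # Straightforward dispatch loop over a tiny two-op bytecode.
    for op, arg in code:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

def compile_trace(code):
    # "JIT" stand-in: wrap the sequence in a single callable.  A real
    # JIT (like Lightspark's LLVM backend) would emit machine code.
    def compiled(x):
        for op, arg in code:
            x = x + arg if op == "add" else x * arg
        return x
    return compiled

class VM:
    def __init__(self):
        self.counts = {}
        self.compiled = {}

    def run(self, code, x):
        key = tuple(code)
        if key in self.compiled:            # hot path: run compiled form
            return self.compiled[key](x)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] >= HOT_THRESHOLD:
            self.compiled[key] = compile_trace(code)
        return interpret(code, x)           # cold path: interpret
```

The design point is the same as in real VMs: the interpreter gives correct results immediately, and compilation effort is spent only on code that runs repeatedly.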
For people that want to try Lightspark in their browser, Alessandro has
released a Mozilla-compatible plug-in. When encountering an unsupported
Flash file, the plug-in should fail gracefully. For now, there's only a PPA
(Personal Package Archive) for Ubuntu
users, but packages are being created for Arch Linux and Debian. In
this alpha phase of development, the current release is more of a
technological demo. Alessandro is currently the only developer, although
some external contributions have started trickling in.
After the first wave of testing, Alessandro published some information
on the plan for the next releases. A stability release with no new features is planned for the first week of June, while release 0.5.0 will be focused on YouTube support. He also clarified that his current implementation only works on x86/x86_64 because of some assembly code, but he welcomes ports to other architectures:
The code is build using standard technologies, such as pthreads and STL and
should be quite portable, but some critical code paths has been written in
assembly to guarantee atomicity or improve performance. I've very little
experience with anything beside x86/x86-64, so I prefer not to port such
critical code. However I will gladly accept any contributions for other
platforms, such as PPC and ARM. The good news is that a contributor managed
to compile lightspark on FreeBSD/x86 with minimal changes to the build
system and a windows port is also planned.
The Gnash developers have been talking with Alessandro about joining their efforts, but he decided to work on Lightspark because it was very difficult to include an optimizing JIT compiler into the existing Gnash architecture. That said, code sharing or even a closer collaboration between the two projects certainly seems possible. Alessandro has already said that Lightspark's code could be integrated with Gnash in time when it's good enough, and Rob would like to add support for using Lightspark in Gnash to handle AVM2, the ActionScript virtual machine that Adobe introduced in Flash 9. If this idea is implemented, Gnash could essentially hand off all ActionScript 3 functionality to Lightspark.
Although most free and open source proponents agree that Flash is a bad
thing and that it should be replaced by open web technologies such as HTML
5, the transition to an open web will happen slowly as all evolutions in
the computer world do. Moreover, we are stuck with a lot of existing Flash
content that should remain accessible. Therefore, open source Flash
projects like Gnash and Lightspark will remain important for many Linux
users for years. There is hope that the Gnash developers will reach a
consensus on their development model and hopefully Lightspark will
not face the same fate as Swfdec.
For something as critical as Flash is to many users, more developers for
both projects could certainly help.
On May 19, Google unveiled something that many in the open source community had
been expecting (and which the Free Software Foundation asked
for in March): it made the VP8 video codec available to the public
under a royalty-free, open source BSD-style license. Simultaneously, it
introduced WebM, an
HTML5-targeted open source audio-and-video delivery system using VP8, and
announced a slew of corporate and open source WebM partners supporting the format, including web browsers and video sites such as its own YouTube property.
Dueling assessments, interested parties
The move was not unexpected. Google began trying to acquire VP8's creator, the codec shop On2, months ago, and speculation began even before the acquisition was final. The public reaction to the WebM launch was not unexpected, either. MPEG-LA, the commercial licensor of the competing H.264 codec, suggested that anyone who used VP8 would get sued for patent infringement. An independent H.264 hacker quickly attacked VP8 as inferior on all technical counts, and surely in violation of multiple H.264 patents as well. H.264 proponents and general news sites began circulating that blog post, more so when Apple's Steve Jobs allegedly forwarded a link to it in response to an email asking his opinion on VP8.
Responses from the open source community itself have come in two flavors. The first was a long line of multimedia projects and companies announcing support for VP8 and WebM; some (like Mozilla and Collabora) were in the know before the deal was made public and working on their code, while others just reacted swiftly following the unveiling.
The second took on the opposition, rebutting both the MPEG-LA's public
statements and the attacks of the H.264 hacker, Jason Garrett-Glaser. Many
pointed out Garrett-Glaser's vested interest in H.264 being regarded as the
technically best codec, given that he develops the x264 encoder project,
and suggested that he was prejudiced against VP8 before even examining the
release. StreamingMedia.com compared
the codecs side-by-side, encoding the same source media at the same audio
and video data rates, which Garrett-Glaser did not do, and concluded that
there was no noticeable difference for most applications. Theora hacker Gregory Maxwell addressed the technical issues in an email to the Wikitech list, arguing that the initial release of Google's VP8 encoder represents a starting point ripe for optimization.
Other naysayers dismissed VP8 on the grounds that H.264 is already widely supported in hardware devices. That may be true, but most of this hardware support is in the form of embedded digital signal processor (DSP) code, and DSP ports of Theora were already in the works. Considering that Google has already funded ARM optimizations of Theora, there are grounds to believe it will push DSP playback of VP8 as well, and the company's Android platform is a likely place for it to make an appearance.
Patents and ambiguity
More important than the current (or even the potential-future) technical
performance of VP8 is the question of whether it can legally be used under
the terms spelled out in the WebM license and patent grant. It is clearly
a technical improvement over Theora, but if the competition proved a genuine instance of patent infringement, the codec would need to be changed before it could be safely used.
On this point, again, there are two main threads of discussion. The
first boils down to debate over the belief that VP8 must surely
infringe on patents used in H.264 because the codecs share such a similar
structure. Garrett-Glaser takes this stance, pointing out similarities in
the algorithms. Xiph.org's Christopher "Monty" Montgomery dismissed that
assessment as "serious hyperbole," and others in web article comment
threads pointed out that all discrete cosine transform (DCT)-based codecs utilize the same basic steps; those steps are not what video codec patents cover.
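The shared-basic-steps point is easy to see concretely. Every DCT-based codec begins by transforming blocks of samples into frequency coefficients; the format-specific (and patent-laden) differences lie in the later prediction, quantization, and entropy-coding stages. Below is a naive one-dimensional DCT-II, the textbook transform itself, included purely as an illustration of that common first step.

```python
import math

def dct_1d(block):
    # Naive DCT-II over one row of samples: the transform stage that
    # all DCT-based codecs (H.264, VP8, Theora, JPEG, ...) share.
    # What the codecs do with the coefficients afterward is where
    # they, and their patents, diverge.
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
        out.append(s)
    return out
```

A constant block yields only a DC coefficient, with all higher-frequency coefficients zero; it is this energy compaction, common to every DCT codec, that makes the subsequent quantization stage effective.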
Maxwell rebuffs the similarity argument as well, saying that
Garrett-Glaser "has no particular expertise with patents, and even
fairly little knowledge of the specific H.264 patents" due to the
fact that x264 ignores them when implementing H.264 itself. He continued:
Codec patents are, in general, excruciatingly specific — it makes
passing the examination much easier and doesn't at all reduce the patent's
ability to cover the intended format because the format mandates the exact
behavior. This usually makes them easy to avoid.
The second discussion thread amounts to divining whether H.264's patent licensor MPEG-LA will actually sue over a patent infringement charge against VP8. Here again, the public debate is dominated by assumptions: surely Google did a patent search that completely exonerated VP8; surely On2's patent lawyers knew what they were doing as they developed VP8 — and, alternatively, surely VP8 infringes somewhere, because there are just so many patents in H.264; surely VP8 infringes somewhere, because H.264 was created by the best codec authors using the best technologies.
To get out of the "surely" mire, consider the actual possibilities case
by case. It is logical to suggest that if MPEG-LA has a genuine case, it
will sue. If it does not have a genuine case, the question is whether the
consortium will sue anyway to cause market confusion and buy time to
continue selling H.264 patent licenses. But either way, the risks in filing a lawsuit are extraordinarily high — because Google could easily counter-sue.
Despite MPEG-LA's promotional material suggesting that blanket
rights to use H.264 come with a license, the actual guarantee
of the patent pool is quite weak:
No assurance is or can be made that the License includes every essential
patent. The purpose of the License is to offer a convenient licensing
alternative to everyone on the same terms and to include as much essential
intellectual property as possible for their convenience. Participation in
the License is voluntary on the part of essential patent holders, however.
In other words, submarine patents and patent trolls can threaten H.264 — and in theory, On2 and Google may hold such patents. So what will MPEG-LA do? CEO Larry Horn already suggested, without directly claiming, that it believes it has a genuine case against VP8. Whether it does or doesn't, actually filing an infringement lawsuit could gamble away the H.264 cash cow. The far safer route is to make noise in public, pursue licensing deals with software and hardware vendors as long as possible, and work on the next codec licensing bundle. For its part, Google has done little in public other than express its confidence that there is no patent issue.
That sounds unsatisfying to the left-brained software developer, who
would prefer a clear, bright line to be drawn with VP8 either on the "safe"
or "unsafe" side. Unfortunately, the modern patent game does not work that
way. In practice, patents are hidden weapons that can be used to sue (and
threaten to sue) opponents. All commercial players hold them,
and due to the vast number of patents granted — as well as the
unknown reach of those patents — many are effectively hidden
away until used in an attack.
Still, some have already suggested that Google can and should provide
some level of increased clarity by publicly and transparently
documenting the patents it now owns on VP8, and the patent search process
it used to determine that nothing in VP8 infringed on a competitor's
patents. Florian Mueller of FOSS Patents commented:
At the very least I think Google should look at the patents held by the
MPEG LA pool as well as patents held by some well-known 'trolls' and
explain why those aren't infringed. Programmers have a right to get that
information so they can make an informed decision for themselves whether
to take that risk or not. It's not unreasonable to ask Google to perform a
well-documented patent clearance because they certainly have the resources
in place while most open source developers don't.
Rob Glidden, formerly of Sun,
contrasted Google's one-shot announcement of VP8 with the process Sun used
when working on the now-shuttered Open Media Stack video
codec project, which "based their work on identifiable IPR
[intellectual property rights]
foundations, documented their patent strategy, and [was] willing to work
with bona-fide standards groups to address and resolve IPR issues."
By choosing to "go on their own," he added, Google actually
undermines the open standards process the web relies on.
On the other hand, Google might
consider it to be to its own advantage to keep the company's VP8 patent
research secret, in order to force potential attackers to do
more work looking for an infringement. No one expects (nor should they)
MPEG-LA to act with the clarity being asked of Google. At times MPEG-LA
likes to present itself as if it is a standards body — one that
produces technical work reflecting the consensus of industry, and ratifying
the best possible ideas into global specifications. But that simply is not
true. MPEG-LA is a for-profit business, selling its products and marketing
them on behalf of its members and against all competitors.
Since its product is protection from a lawsuit by MPEG-LA itself, it gains nothing by drawing clear, bright lines. Even Horn's comment about creating a VP8 patent pool is couched in qualifiers and vague language: "there have been expressions of interest" and "we are looking into the prospects of doing so."
Of course, this is all really about HTML5 ... and money
Behind this entire fight is the availability of a free-to-implement
video codec for HTML5. MPEG-LA and its pool members fought against Theora,
and they will now do the same against VP8. Do not expect MPEG-LA to change
its tune and support a completely free codec, ever; if it did, the
organization would have no reason to exist. MPEG-LA wants H.264 to win, not because it is technically better, but because it is MPEG-LA's product.
Open source software is in a weird position in relation to MPEG-LA's
licensing model. Even though it is the end users who infringe on the
patents by watching H.264 content, the MPEG-LA requires anyone
distributing codecs, like browser vendors, to pay for a license.
That's just not possible for free software.
MPEG-LA has pushed back the date at which it will start charging royalty
fees for streaming H.264 on the Internet until 2015, and even then there is
a chance that they will push it back again. It does not explicitly care
about the open source browser market itself; it has simply set up a fee
structure that puts free software in an awkward position. The real money comes from video production and editing suites, and from large video hosting sites that transcode millions of videos.
Consequently, the real battle for VP8 adoption may
be there as well. Google put out a long list of WebM-supporting
partners when it unveiled the project, including several important
proprietary software companies like creative-application-juggernaut Adobe
and Quicktime's former star Sorenson. While MPEG-LA has more to lose than
to gain by suing Google over VP8 today, that could change if these video
production pipeline players start to shift over to WebM in a big way. If
that happens, it might be the final straw that causes MPEG-LA to resort to
a lawsuit.
Page editor: Jonathan Corbet