Open Invention Network (OIN) CEO Keith Bergelt came to the Linux Foundation
Collaboration Summit to update attendees on OIN's recent expansion of the
"Linux System Definition" as
well as its plans for the future. The system definition is important
because the packages listed there are eligible for patent protection via
OIN and its members' patents. The update was significant, but there are
other interesting things going on at OIN as well.
OIN was formed nearly eight years ago, "in the wake of SCO", Bergelt said.
SCO "went away", but that case served as a wakeup call to some of the
companies that were making big commitments to Linux. They recognized that
patents could end up being a real problem for Linux. Three companies in
particular, IBM, Red Hat, and Novell, got together to try to combat that
problem; they were later joined by NEC, Philips, and Sony.
Those companies invested "hundreds of millions" of dollars in a standing
fund to acquire patents.
In other businesses, there are a fixed number of competitors that one can
build up patent cross-licensing agreements with. But Linux has "neither a
head or a tail", Bergelt said, so it can't really handle patents in the
standard way; the companies realized that such a model would not deal well
with patent attacks. The idea behind OIN was that those companies would
share their patents on a "fully paid-up basis" (i.e. without requiring
royalties) among themselves, but also with others who joined the
organization.
The basic idea, Bergelt said, was to allow freedom of choice for those who
might be looking to adopt Linux by removing some of the patent
uncertainty. If Linux is not the best choice, then "shame on the
developers", but in most cases it is the best choice when it can compete
fairly. Open source does not operate in a vacuum, but is faced with a
world where litigation is a vehicle for companies that cannot compete in
the marketplace, he said.
OIN has hundreds of patents in "dozens of portfolios", he said, some of
which are fundamental or essential in "several key areas". It files 45-50
patent applications each year, as well as multiple defensive
publications. The defensive publications are meant to combat aggressive
patenting by companies outside of the Linux ecosystem, which often results
in them "patenting things that open source had developed long ago". These
efforts are meant to deal with the excessive number of patents that have
been granted in the last 20 years.
The patents that read on the packages that make up the Linux System
Definition are available to OIN licensees on a
royalty-free basis. That includes the patents that OIN owns as well as any
that are owned by the licensees. In addition, licensees must "forbear
Linux-related litigation" using their patents, he said.
OIN is also involved in defensive activities of various sorts. It gives
assistance to companies that are undergoing patent attacks. That includes
finding people who can help the company fend off the attack, or having OIN
help directly. Giving companies ideas of
where they might be able to acquire useful patents to head off an attack,
consulting on how to create a formidable defense, and offering up
information about prior art are the kinds of things that OIN
will do. Typically the attack is really not against just the one company,
but is part of a larger strategy from the patent aggressor, which means
that the information gathered and shared may be reusable at some later point.
There is also the Linux Defenders
project which OIN helped to start with the Software Freedom Law Center and
the Linux Foundation four years ago. It seeks to work with the community
to identify prior art for
patent applications that could cause them to be rejected. In addition,
it tries to invalidate patents that have already been granted by finding
prior art. OIN is considering becoming "much more aggressive" in those
endeavors, Bergelt said, and may be hiring more people to work on that.
Part of that effort would be to work with projects and community members to
help "defuse the time bomb" that patents held by foes represent.
There are currently over 440 OIN licensees, he said. The organization
holds 400+ US patents and 600+ when foreign patents are added in. It has
also created more than 250 defensive publications.
Those defensive publications generally aren't created by project members,
Bergelt said, in answer to a question from the audience. OIN does not want
community members to have to become experts in defensive publications and,
instead, will do the "heavy lifting" to create them. Anything that a
developer thinks is "new or novel" has the potential to be written up as a
defensive publication. The bar is "not very high", he said, as the patents
that are granted will attest. The bar is even lower for defensive
publications. Community members do not have to codify their ideas; if they
can just verbalize them, OIN can turn that into a defensive
publication. Those publications can then work as an "anti-toxin"
of sorts against bad patents and applications. OIN has been trying to get
graduate students involved, as well, to work on the publications. It is
"inventing in a way that prevents others from getting bad patents". One
question that has come up in conversations with Bradley Kuhn, he said, is
if we eliminate all of the bad patents, are we left with really good
patents? The answer is that what's left after getting rid of the bad
patents is "very limited".
Expanding the Linux System Definition
In its effort to "demilitarize the patent landscape", OIN expanded
the Linux System Definition in March. The number of packages covered
rose from 1100 to 1800 and will grow further over time, he said. OIN is
looking at doing yearly updates to the definition, but it may do a mobile
Linux update this (northern hemisphere) summer. There are already some
packages that have been nominated for inclusion and he encouraged everyone
to look at the definition
and lists of packages in order to nominate those they thought should be included.
From the audience, Kuhn asked about whether the decisions on adding or not
adding packages could be made transparent, so that the community could get
feedback on its nominees. Bergelt said that OIN will explain its
decisions, and will be putting up a web page for nominations sometime
soon. He said that Simon Phipps had already recommended LibreOffice, for
example, as many Linux distributions have moved to that office suite.
One of the main efforts that OIN is now making is to reach out to the
community. He said that while he wears a suit, many of the people that
work for him do not and are members of the community. The organization is
attending more events, and trying to involve the community in its efforts.
In addition, it is doing more workshops, rather than talks, to try to get
the community up to speed on the defensive publication anti-toxin.
Attending various events has "made me realize what Linux is about", he
said, which is "people sharing ideas". Those ideas can get "codified as
either swords or shields".
OIN has expanded its staff recently as well. Deb Nicholson was named as the
community outreach director for OIN. In addition, Linux Defenders has added
Andrea Casillas as its director and Armijn Hemel as its European coordinator.
Unique economic benefits
OIN is also leading an effort to highlight the "unique economic benefits
from open source" to the International Trade Commission (ITC). Patent
trolls and others are increasingly bringing complaints of patent
infringement to the ITC, but the only remedy it has available is to order
an injunction against the import of the product. Those injunctions can
"put you out of business" because it can take $8-12 million to defend
against an ITC injunction. That process takes a year, he said, which
sounds good in comparison to patent suits which often stretch much longer,
but it means that a company needs "lots of fast money" to defend
themselves, so most are willing to settle for a much higher license fee
than the market would normally bear.
Those higher license fees are artificially raising the total cost of
ownership (TCO) for the Linux platform. It is a "purposeful process" to
take away choice by raising the cost of running the Linux platform. OIN is
attempting to have the US Congress recognize this and instruct the ITC to
allow a parallel District Court process to hear the case, as that court can
make decisions other than just injunctions. Paying some license fee is "not a
death knell", Bergelt said, while unfairly high costs are. He wants a
"yield sign if not a stop sign" for ITC-ordered injunctions.
Another initiative that OIN is working on with Google and IBM is something
that Bergelt is (temporarily) calling "Code Search". It will be a way to
search pre-existing code and "unstructured project data" for prior art.
It will make existing code searchable and could be used by the patent examiners as well as the
community and OIN. There will be an announcement "soon" about that project.
Bergelt then turned to the current patent wars, and what they mean for
Linux, especially in the mobile arena. Microsoft has been using 25 patents
in its litigation over mobile devices. It is not just selling licenses to smartphone
vendors, but is going after the contract manufacturers as well. Both are paying
license fees in many cases, so Microsoft is now turning to the mobile
carriers as well.
There is a strategic agenda at play, he said. In difficult times,
companies that appear to be strange bedfellows will come together to attack
a rival. The attacks are tightly coordinated and multi-tiered to
artificially raise the TCO. Established companies are comfortable with
reduced innovation because that means there are fewer threats to their
established positions.
There are lots of speculators out there who have spent "billions to
acquire patents" and expect a return on that investment. But there are
also operating companies that are threatened by Linux and open source. He
is wondering what will happen with HP's Palm patent portfolio, for
example. Patent aggregators are being approached to sell their portfolios
to operating companies; those companies may want them for defensive or
offensive purposes. In addition, we have seen things like Microsoft's
investment that ensured Nokia's relationship with MeeGo ended. Mobile
Linux is being attacked from many different directions at this point.
But, efforts like Tizen, webOS, and MeeGo (which is still being used by KDE
and others) add resilience to the mobile Linux patent situation. It is
much easier for foes of Linux to fight a one-front war against Android,
rather than have to deal with Tizen, webOS, and MeeGo, he said. He is
"very encouraged" to see that there are multiple options for mobile Linux.
There are lots of antagonists lined up against Linux. Some companies are
funding patent trolls and providing them with patents to attack other
companies. But, so far, we haven't
seen funding for direct attacks
by patent trolls against Linux, though it could happen. The key is to watch
where trolls are using patents from known Linux antagonists; those kinds of
attacks could turn to Linux next. There is a lot of litigation activity
right now, but he sees things moving in a direction where eventually choice
in the market will win out over attempts to artificially raise prices
through patent attacks.
Bergelt painted a picture of a complex and active patent attack landscape,
particularly against mobile devices. But he also described lots of things
that OIN and others are doing to combat those attacks. Reform at the ITC
level could effect some major changes to the tactics that are currently
being employed, though it is unclear how likely that reform actually
is. Until there is some kind of major patent overhaul in the US (and
elsewhere, really), the OIN projects and efforts will clearly be needed.
Whether they will be enough, eventually, remains to be seen.
The core library that sits between user space and the kernel, the GNU C
library (or GLIBC), has recently undergone some changes in its governance,
at least in part to make it a more inclusive project. On the last day of
the Linux Foundation
Collaboration Summit, Carlos O'Donell gave an update on the project, the
way it will be governed moving forward, and its
plans for the future. GLIBC founder Roland McGrath was on hand to
contribute his thoughts as well.
Though he wears several hats,
O'Donell introduced himself as an "upstream GLIBC community member", rather
than as a maintainer, because the GLIBC developers have recently been trying to
change the idea of what it means to be involved in the project. He works for
Mentor Graphics—by way of its acquisition of CodeSourcery—on
open-source-based C/C++ development tools. Those tools are targeted at
Linux developers and Mentor is committed to working upstream on things like
GCC and GDB, he said. He personally got involved in the GLIBC project ten
years ago to support the PA-RISC (hppa) architecture; he now works on GLIBC both as
part of his job and as a volunteer.
Recent changes in the GLIBC community
The changes for GLIBC had been coming for a long
time, O'Donell said. The idea is to transition the project from one that
has a reputation for being a small community that is hard to work with to
one that will work well with the kernel developers as well as other
projects in the
free software world.
As part of that effort, the moderation of the main
GLIBC mailing list (libc-alpha) was removed after four years. The goal of
that moderation had been to steer new contributors to the libc-help mailing
list so that they could learn about the open source (and GLIBC) development
process before they were exposed to the harsher libc-alpha environment. The mentoring
process that was done on libc-help has continued; it is a place for
"random questions" about GLIBC (both for users and new contributors), while
libc-alpha is for more focused
patches once developers have a firm understanding of the process and the
community.
There has also been a lot of "wiki gardening" to make more internal
documentation of GLIBC available, he said.
The most visible recent change was the dissolution of the steering committee in
March. The project is moving to a "self-governed community of developers" that is
consensus driven, he said. There is a wiki page that
describes what the project means by "consensus". Trivial or typo patches can just
be checked into the repository, without waiting for approval. The GLIBC
community is willing to accept all sorts of patches now, he said, which
is a "change from where we were five years ago". All of the changes in the
community have come about as a gradual process over the last four or five
years; there was no "overnight change", he said.
There are around 25-30 committers for GLIBC, O'Donell said in response to a
question from the audience, and they are listed on the
wiki. Ted Ts'o then asked about getting new features into GLIBC, noting
that in the past there was an assumption that trying to do so was not worth
the effort. He pointed out that BSD union mounts got help from the BSD libc,
but that couldn't be done for Linux in GLIBC, partly because it was not in
the POSIX standard. What is the philosophy that
is evolving on things like that, he asked.
O'Donell said that it comes down to a question of "relevance"; if there are
features that users want, the project may be willing to accept things that
are not in POSIX. GLIBC is the layer between programs and the kernel, so
if there are things missing in that interface it may make sense to add
them. If GLIBC fails to provide pieces that are needed, it will eventually
not be relevant for its users. For example, he said, there is a lot of
work going on in tracing these days, but GLIBC has not been approached to
expose the internals of its mutexes so that users are better able to debug
problems in multi-threaded programs; things like that might make good
additions, though "we are conservative", he said.
Ts'o then mentioned problems that had occurred in the past in trying to
expose the kernel's thread ID to user space. There has been a huge amount
of work done to get that information, which bypassed GLIBC because of the
assumption that GLIBC would not accept patches to do so. People are
working around GLIBC rather than working with it, he said.
There is no overriding philosophy about what changes would be acceptable,
McGrath said. Much like with the kernel, features will be evaluated on a
case-by-case basis. There is a need to balance adding something to every
process that runs all over the world and adding interfaces that will need
to be supported forever against the needs and wishes of users. Things that
have "bounced off" GLIBC in the past should be brought up again to "start
the conversations afresh". But don't assume that it will be easy to get
your pet feature into GLIBC, he said.
With 25-30 committers for the project, how will competing philosophies
among those people be handled, Steven Rostedt asked. That problem has not
been solved yet, O'Donell said. At this point, they are trying to
"bootstrap a community that was a little dysfunctional" and will see how it
works out. If problems crop up, they will be resolved then. McGrath said
that things will be governed by consensus and that there won't be "I do it,
you revert it, over and over" kinds of battles. In addition, O'Donell
said that in a Git-based world reverts won't happen in that way because
new features will happen on branches.
The most static part of GLIBC is the portion that implements standards,
O'Donell said, moving on to the next part of his talk. Standards support is
important because it allows people and code to move between different
architectures and platforms. The "new-ish" standards support that the
GLIBC community is working on now is the C11 support, which he guesses will
be available in GLIBC 2.16 or 2.17. One of the more interesting features in
C11 is the C-level atomic operations, he said. Some of the optional annexes
to C11 have not been fully implemented.
Ulrich Drepper is also working on conformance testing for POSIX 2008 and any
problems that are found with that will need to be addressed, O'Donell said.
There are no plans to add the C11 string bounds-checking interfaces from
one of the annexes as there are questions about their usefulness even within
the standards groups. That doesn't mean that those interfaces couldn't end
up in the libc_ports tree, which provides a place for optional add-ons that
aren't enabled by default. That would allow distributions or others to
build those functions into their version of GLIBC.
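The Annex K interfaces in question, such as `strcpy_s()`, take an explicit destination size and report a runtime-constraint violation rather than overflowing the buffer. Since GLIBC does not provide them, the sketch below uses a hypothetical stand-in, `my_strcpy_s()`, just to illustrate the basic semantics:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* my_strcpy_s: hypothetical stand-in mimicking the core behavior of
 * Annex K's strcpy_s(). The caller passes the destination buffer's
 * size; a source string that does not fit is reported as an error
 * instead of silently overflowing the buffer. */
static int my_strcpy_s(char *dst, size_t dstsz, const char *src)
{
    if (dst == NULL || dstsz == 0)
        return EINVAL;
    if (src == NULL) {
        dst[0] = '\0';
        return EINVAL;
    }
    size_t len = strlen(src);
    if (len >= dstsz) {
        dst[0] = '\0';      /* null the destination on failure */
        return ERANGE;
    }
    memcpy(dst, src, len + 1);
    return 0;
}
```

A caller would check the return value instead of assuming the copy succeeded; part of the skepticism about these interfaces is that callers often have no better recovery strategy than the truncation-based functions already provide.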
The math library, libm, is considered "almost
complete" for C11 support, though there are a "handful" of macros for
imaginary numbers that are missing, but Joseph Myers is working on them.
All of the libm bugs that have been reported have been reviewed by Myers;
he and Andreas Jaeger are working on fixing them, O'Donell said. Some
functions are not rounding correctly, but sometimes fixing a function to
make it right
makes it too slow. Every user's requirements are different in terms of
accuracy vs. speed, so something
needs to be done, but it is not clear what that is.
Bugs filed in
bugzilla are being worked on, though, so he asked that users file or
reopen bugs that need to be addressed.
O'Donell then moved on to the short-term issues for the project, which he
called "beer soluble" problems because they can be fixed over a weekend
or by someone offering a case of beer to get them solved; "the kind of thing
you can get done quickly". First up is to grow the community by attracting
more developers, reviewers, and testers. The project would also like to
get more involvement from distributions and, to that end, has identified
a contact person for each distribution.
Part of building a larger community is to document various parts of the
development process. So there is information on the wiki about what
constitutes a trivial change, what to do when a patch breaks the build, and
so on. The idea is that the tree can be built reliably so that regression
testing can be run frequently, he said.
The release process has also changed. For a while, the project was not
releasing tarballs, but it has gone back to doing so. It is also making
release branches early on in the process, he said.
GLIBC 2.15 was released on March 21 using the new process. There will be
a 2.15.1 update at the end of April and the bugs that are targeted for
that release are tagged with "glibc_2.15". In addition, they have been
tagging bugs for the 2.16 release and they are shooting for twice-a-year
releases that are synchronized with Fedora releases.
Spinning out the
transport-independent remote procedure call (TIRPC aka Sun RPC) functions
into a separate library is an example of the kinds of coordination and
cooperation that the GLIBC project will need to do with others, he said.
Cooperation with the
distributions and the TIRPC project is needed in order to smooth that process.
There have been some "teething problems" with the TIRPC transition, like
some header file overlaps in the installed files. Those
problems underscore the need to coordinate better with other projects.
It's "just work", he said, but cooperating on
who is going to distribute which header and configuration files needs to
happen to make these kinds of changes go more smoothly.
The medium-term problems for the project were called "statistically
significant" by O'Donell because the only way to solve them is to gather a
bunch of the right people together to work on them. A good example is the
merger of EGLIBC and GLIBC. The fork of GLIBC targeted at the embedded
space has followed all of the FSF copyright policies, so any of that code
can be merged into GLIBC. He is "not going to say that all of EGLIBC" will
be merged into GLIBC, but there are parts that should be. In particular,
the cross-building and cross-testing support are likely to be merged.
Another area that might be useful are the POSIX profiles that would allow
building the library with only certain subsets of its functionality, which
would reduce the size of GLIBC by removing unneeded pieces.
In answer to a question from Jon Masters, O'Donell said that new
architecture ports should target GLIBC, rather than EGLIBC. Though if
there is a need for some of the EGLIBC patches, that might be the right
starting point, he said.
The GLIBC testing methodology needs to be enhanced. For one thing, it is
difficult to compare the performance of the library over a long period of
time. The project gets patches to fix the performance of various parts,
but without test cases or benchmarks that could be used down the road to
evaluate new patches. Much of the recent work that has gone into GLIBC is
to increase performance, so it is important to be able to have some
baselines to compare against.
The testing framework also needs work. It is currently just a test skeleton C file,
though there have been suggestions to use DejaGNU or QMTest. The test
infrastructure in GLIBC is not the "most mature" part of the project. It
should be, though, because if the project is claiming that it is
conservative, it needs tests to ensure that things are not breaking, he said.
More independent testing is needed, perhaps using the Linux
Test Project or the Open POSIX test suite. Right now Fedora
is used to do full user-space rebuild testing, but it would be good to do
that with other distributions as well. Build problems are easy to find
that way, but runtime problems are not.
In the next section of the talk, O'Donell looked at ideas for
things that might be coming up to five years out. No one can really predict
what will happen in that kind of time frame, he said, which is why he dubbed
those problems "science". One area that is likely to need attention is tracing
support. Exposing the internal state of GLIBC for user-space tracing
(via SystemTap, LTTng, or other tools) will be needed.
Another idea is
to auto-generate the libm math library from a C code description of "how
libm should work". There is disappointment in the user community because
the libm functions span a wide spectrum between "fast and inaccurate"
and "slow and accurate". Auto-generating the code would allow users
to specify where on that spectrum their math library would reside.
One final idea that he "threw in for fun" is something that some researchers have
been talking to the project about: "exception-less system calls".
The idea is to avoid the user-to-kernel-space transition in GLIBC by having
it talk to "some kind of user-space
service API" that would provide an asynchronous kernel interface, rather
than doing a trap into the kernel directly.
To close out his talk, O'Donell stressed that the project is very welcoming
to new contributors. He suggested that if you had a GLIBC bug closed or
submitted a patch and never heard back, then you should get involved with
the project as it will be more open to working with you than it may have been
in the past. If you have GLIBC wishlist items, please put them on the
wiki; or if you have read a piece of code in GLIBC and "know what it
does", please submit a comment patch, he said.
Questions and answers
With that, he moved on to audience questions, many of which revolved around
the difference between the GLIBC core and ports trees. The first was a
question about whether it made sense to merge ports into the core.
O'Donell said that the two pieces have remained close over the years and
essentially live in the same repository, though they are split into two Git
trees. There is no real need to merge them, he said, but if it was deemed
necessary, it could be done with a purely mechanical merge. Ports is meant
as an experimental playground of sorts, that also allows users to pick add-ons
that they need.
That "experimental" designation would come back to haunt O'Donell a bit.
An audience member noted that the Sparc version of GLIBC lives in the core,
while the ARM version (and others) live in ports. McGrath said that was really an
accident of history. Ports helps ensure that the infrastructure for
add-ons doesn't bitrot, he said. "ARM is by no means a second-class
citizen" in GLIBC, O'Donell added. The ports mechanism allows vendors to
add things on top of GLIBC so keeping it working is worthwhile.
But the audience reminded O'Donell of his statement about ports being
experimental, and that it might give the wrong impression about ARM
support. "I'm completely at fault", he responded, noting
that he shouldn't have used "experimental" for ports. With a bit of a
chuckle, McGrath said: "That's the kind of statement GLIBC maintainers now make".
At the time of the core/ports split, all of the architectures that didn't
have a maintainer were put into ports, McGrath said. Now it is something
of an "artificial distinction" for architectures, O'Donell said.
Ts'o suggested that perhaps all of the architectures should be in
ports, while the core becomes architecture-independent to combat the
perception problem. O'Donell seemed amenable to that approach, as did
McGrath, who said that it really depends on people showing up to do the
work needed to make things like that happen.
Another question was about the "friction" that led to the creation of
EGLIBC; has that all been resolved now? O'Donell said that the issues
haven't been resolved exactly, but that there are people stepping up in the
GLIBC community to address the problems that led to the split. There may
still be some friction as things move forward, but they will be resolved by
technical arguments. If a feature makes sense technically, it will get
merged into GLIBC, he said.
The last question was about whether there are plans to move to the LGPLv3
for GLIBC. McGrath said that there is a problem doing so because of the
complexity of linking with GPLv2-only code. The FSF would like to move the
library to LGPLv3, but it is committed to "not breaking the world". There
have been some discussions on ways to do so, but most GLIBC developers are
"just fine" with things staying the way they are.
The talk clearly showed a project in transition, with high hopes of a
larger community via a shift to a more-inclusive project. GLIBC is an
extremely important part of the Linux ecosystem, and one that has long
suffered from a small, exclusive community. That looks to be changing, and
it will be interesting to see how a larger GLIBC community fares—and
what new features will emerge from these changes.
[A video of this talk
has been posted by the Linux Foundation.]
GPLv3-licensed Flash player Lightspark has released its
latest update, version 0.5.6. The new release includes
expanded media support, experimental support for the Google Maps "Street View" feature, and a browser plug-in compatible with Firefox 10. Despite the new features, however, the prospect of an open source player for all Flash content does not appear to be getting any closer.
Flash 9 introduced a new virtual machine, AVM2, to run ActionScript 3 code; it is this newer format that Lightspark implements. But despite introducing the new virtual machine, Adobe did not remove support for AVM1 code from Flash 9 (or subsequent releases), for fear of breaking existing Flash applications. Lightspark version 0.4.3 overcame this limitation by introducing a fallback mechanism that would call on the stand-alone version of Gnash, the GNU Flash player supporting AVM1, whenever it encountered AVM1 files. Lightspark 0.5.0 introduced a host of new features, but by and large new releases in recent years have incorporated fixes designed to support specific Flash-based web sites or applications.
What's new in Lightspark
Such is the case with the 0.5.6 release, which boasts fixes for YouTube and "experimental" support for Google Maps Street View, plus new features implemented to support Flash-based games like FarmVille. Lightspark includes two front-ends, a stand-alone GTK+ player and an NPAPI web browser plug-in. The new release is compatible with Firefox 10 (which itself landed in January 2012), and is the first release with support for using PNG files. For now, source code is the only downloadable incarnation of the release, but the project maintains a personal package archive (PPA), so Debian packages should arrive soon enough.
Of course, the fact that a Flash game is the motivator for a particular feature implementation in no way lessens its importance. In this case, the new features include support for Flash's NetConnection, which is a two-way client-server RPC channel, and support for custom serialization and deserialization of data, which is likely to prove helpful for other client-server Flash applications relying on JSON or XML, too.
We last looked at
Lightspark in 2010; the intervening releases have added much more of note
than the 0.5.6 bump does on its own. Many more sites are supported,
including Vimeo and Flowplayer (which is a web video playback product, not
a site itself), and the project now uses the Tamarin test suite to run tests for correctness. While lead developer Alessandro Pignotti took a break, Canonical's Jani Monoses served as release manager and improved ARM support for embedded devices. Of particular interest to embedded users is support for hardware acceleration through an EGL or OpenGL ES rendering back-end. In addition, the project added support for building the stand-alone player and the browser plug-in on Windows.
Gnash and AVM1
On the other hand, there was also an effort undertaken to add AVM1 support to Lightspark, which would have ultimately enabled the project to play all generations of Flash content. The 0.5.1 release, however, dropped Lightspark's attempt to write its own AVM1 interpreter. That effectively makes Gnash a dependency for any Flash content that uses features from Flash 8 or older.
Similarly, for a while Gnash attempted to add AVM2 support to its own codebase, but it, too, eventually abandoned the effort (at the time of the 0.8.8 release). All open source projects are perpetually starved for developers, but perhaps reverse-engineering two incompatible virtual machine implementations is proving to be beyond even the usual level of overload.
The trouble is that this leaves users without an all-in-one open source Flash implementation, a gap which may feel more acute once Adobe releases Flash Player 11.3, the first version that drops the NPAPI version of the plug-in for Linux.
Lightspark can still be installed as a browser plug-in, and call out to
Gnash whenever it encounters AVM1 objects — but Gnash's own future is
uncertain at the moment. Lead developer Rob Savoye told
the gnash-dev mailing list in December that the 0.8.10 release would likely be the last to incorporate new features, as he has been unable to find funding to support further development. Gnash development had been supported by Lulu.com, who funded four full-time developers, but the company withdrew its support in 2010. Since then, Savoye has maintained the code as best he could, but is limited by financial reality to bug-fixes.
In an email, he said "I'd love to continue working on Gnash full time, but I need an income pretty bad at this point, and can only work for free for so long... So I'm looking for contract work at this point, Gnash related or not. I plan to continue hacking on Gnash, it's been a labor of love for 7 years now." Savoye added that he has brought the idea of funding development to multiple open source companies and foundations — including major Linux distributions — but that none were interested.
Gnash is an FSF high priority project, but that status does not include any financial contribution. For its part, Mozilla is on the record as not being interested in contributing or underwriting code for a Flash plug-in, on the grounds that the format is a vendor-controlled proprietary product, and Mozilla's resources are better used developing open solutions.
Perhaps that is true, but the argument generally goes along the lines of
"users are better off without Flash, and everything that Flash does now
will soon be replaced by HTML5." Unfortunately, open source advocates have
been saying that for years and Flash is still pervasive. It would be
betting against history for Linux companies to assume that Flash support
will soon be an issue of the past using any reasonable definition of "soon." In the meantime, one can count on Lightspark to fill in the gaps for many modern games and video sites — and hope that Gnash will find new sources of support on its own.