Once upon a time, IBM was seen as the dark force in the computing industry
- Darth Vader in a Charlie Chaplin mask. More recently, though, the
company has come across as a strong friend of Linux and free software. It
contributes a lot of code and has made a point of defending itself against
SCO in ways which protected Linux as a whole. But IBM still makes people nervous,
a feeling which is not helped by the company's massive patent portfolio and
support for software patents in Europe. So, when the word got out that IBM
was asserting its patents against an open-source company, it's not
surprising that the discussion quickly got heated. But perhaps it's time
to calm down a bit and look at what is really going on.
The story starts with the Hercules
emulator, which lets PC-type systems pretend to be IBM's System/370 and
ESA/390 mainframe architectures. Hercules is good enough to run systems
like z/OS or z/VM, and, according to the project's FAQ, it has seen
production use at times, even if that is not its stated purpose. The
project is licensed under the OSI-certified Q Public License.
Enter TurboHercules SAS, which
seeks to commercialize the Hercules system. The company offers supported
versions of Hercules - optionally bundled with hardware - aimed at
the disaster recovery market. Keeping a backup mainframe around is an
expensive proposition; keeping a few commodity systems running Hercules is
rather cheaper. It's not hard to imagine why companies which are stuck
with software which must run on a mainframe might be tempted by this
product - as a backup plan or as a way to migrate off the mainframes.
The problem is that systems like z/OS and z/VM are proprietary software,
subject to the usual obnoxiousness. In particular, IBM's licensing does
not allow these systems to be run on anything but IBM's hardware. So when
TurboHercules tried to get IBM to license its operating system to run on
Hercules-based boxes, IBM refused. TurboHercules responded by filing
a complaint with the European Commission alleging antitrust
violations. According to TurboHercules, IBM's licensing restrictions
amount to an illegal tying of products.
One need not agree with IBM's position to understand it. IBM understands
well the power of commoditizing its competitors' proprietary technology -
that's what its support for Linux is all about, in the end. Emulated
mainframes running on generic Linux or Windows boxes can only look like an
attempt to commoditize one of IBM's cash cows. The fact that this product
requires running IBM's proprietary software gives the company a lever with
which to fight back. Whether one feels that refusing to license that
software in this situation is a proper action or not, one should agree that
it's unsurprising that IBM exercised that option.
TurboHercules evidently sent IBM a letter questioning whether IBM actually
owned any useful intellectual property in this area. IBM responded with a
letter listing 175 patents owned or applied for, all of which are said
to apply to IBM's mainframe architectures. Two of these patents, it turns
out, are on the list of patents which IBM explicitly pledged not to assert
against the free software community.
To many, this looked like the dark side of IBM coming through at last.
Florian Mueller wrote:
This proves that IBM's love for free and open source software ends
where its business interests begin. In market segments where IBM
has nothing to lose, open source comes in handy and the developer
community is courted and cherished. In an area in which IBM
generates massive revenues (an estimated $25 billion annually just
on mainframe software sales!), any weapon will be brought into
position against open source. Even patents, which represent to open
source what nuclear arms are in the physical world.
Those are strong words, and they strike a chord with anybody in the
community who is concerned about the software patent threat. But it is
also worth considering IBM's response:
In response to a query from eWEEK, IBM issued the following
statement: 'IBM sent TurboHercules a non-exhaustive list of patents
that pertain to our mainframe technology. We did not make any
explicit assertions or claims that TurboHercules had violated
them. We were merely responding to TurboHercules' surprise that IBM
had intellectual property rights on a platform we've been
developing for more than 40 years. We stand behind the pledge we
made in 2005, and also our rights to protect our significant
investments in mainframe technology.'
There are a couple of ways in which one could interpret this statement.
Perhaps somebody within IBM has realized that this whole affair does not
look very good and has started furiously backpedaling. Or, perhaps, it
should be accepted on its face; there is, indeed, no assertion of
infringement in any publicly-available communication from IBM - though the
March 11 letter, listing patents "that would, therefore, be
infringed" comes close. Either way, perhaps
this whole thing has been blown just a little bit out of proportion.
At this time, what we have is an argument over proprietary software
licensing and European antitrust law. IBM has engaged in the sort of
unpleasant behavior which is common to proprietary software; it is a
classic example of why many of us try to avoid dealing with that world
whenever possible. In response, a formal complaint has been brought
against IBM, which has responded with some intemperate
rhetoric claiming that TurboHercules is a Microsoft-funded "cheap
knock-off" of its mainframe products. And while the waving around of
patents is disconcerting, no direct assertion of patent infringement has been
made. If IBM were to make such an assertion against Hercules, its
credibility and trust within the free software community would suffer considerably.
Until that happens, though, it might be best to avoid jumping to
conclusions and encourage these companies to work out their proprietary
software squabble on their own.
Video codecs attract most of the attention in the multimedia format wars, from Theora adoption in HTML5 to debates about the subjective quality and objective technical demands of Dirac versus H.264. But the oft-overlooked container format is just as important: it adds overhead, it determines seekability, subtitle support, and other important features, and it can introduce patent-licensing issues for open source projects. Xiph.org's Ogg container format is the best-known in open source, though, as recent events show, it has its critics and its competition.
Ogg has been under development since the beginnings of Xiph.org in 1994, and was originally designed for use with the Vorbis audio codec. As the Xiph project undertook additional codecs, Ogg continued to evolve to support them. FFmpeg developer Mans Rullgard posted a lengthy criticism of Ogg on his personal blog in March, accusing the format of falling short in six areas: poor generality, excessive overhead, high end-to-end latency, lack of random-access seeking, ill-defined timestamps, and unnecessary complexity. Rullgard cites examples from the specification and several "typical usage" numbers to support his claims.
As the blog post was picked up, debate about the merits of the complaints quickly erupted in the comments and on web discussion forums. Several commenters chided Rullgard for claiming that Ogg's latency and seek times were unsuitably "bad" without citing any numbers from other formats for comparison, and for overstating the size of the problem (such as the overhead created by Ogg's headers at close to 1%) or its relative importance.
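The roughly-1% figure is easy to reproduce from the Ogg framing rules: a page header is a fixed 27 bytes plus one "lacing value" byte for every 255 bytes (or fraction thereof) of payload. A back-of-the-envelope sketch (ours, not from any participant in the discussion):

```python
import math

def ogg_page_overhead(payload_bytes: int) -> float:
    """Fraction of an Ogg page consumed by its header.

    Per the Ogg framing spec, a page header is 27 fixed bytes plus a
    segment table holding one lacing-value byte per 255 bytes (or
    fraction thereof) of payload.
    """
    lacing_values = math.ceil(payload_bytes / 255)
    header = 27 + lacing_values
    return header / (header + payload_bytes)

# A typical 4KB page: 27 + 17 = 44 header bytes out of 4140 total,
# or just over 1% - consistent with the figure cited in the debate.
print(f"{ogg_page_overhead(4096):.4f}")
```

The overhead shrinks for larger pages and grows for the small pages that low-latency streaming favors, which is part of why the two sides could read the same numbers so differently.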
On some points, there was more of a genuine disagreement on principle,
however. Rullgard said that Ogg wastes space by using a full byte for the
"version" field, where a single flag bit would suffice. Xiph's Greg
Maxwell explained in the Reddit discussion of the article that a byte is used for the field in order to keep the header structure byte-aligned for simplicity. Maxwell also disagreed with Rullgard's assertion that Ogg's 32-bit checksum was a waste of space, noting that Ogg also uses a 32-bit capture pattern at the beginning of each page, as opposed to the 64-bit capture pattern in FFmpeg's NUT format — thus using the same number of bits, but providing error-detection "for free."
Rullgard also argued that Ogg's ability to concatenate different streams
one-after-another creates undue complexity for the decoder, without
providing any practical benefit. But one blog commenter claimed to take advantage of this feature when ripping CDs with seamless track transitions.
Xiph's Christopher "Monty" Montgomery replied at length in the Slashdot discussion of the critique, admitting that Ogg has its flaws, and conceding that several design decisions made years ago would be made differently today, but attributing more of Rullgard's complaints to long-standing bad blood between the projects. Moreover, he said, even with its flaws, Ogg remains the best free option for important cases like streaming video. Neither the popular MP4 nor Matroska container formats are well-suited for streaming (particularly live streaming), and MP4 is also patent-encumbered. Additionally, he said, making changes to the Ogg format as suggested by Rullgard might improve performance at high-bitrate video, but would be detrimental to low-bitrate and audio payloads where Ogg excels.
Montgomery said that after the Rullgard blog post gained attention, Xiph decided that part of the problem with its reception was poor documentation on the Ogg format. He subsequently began rewriting and expanding the documentation, some of which is already available online. There are changes that Xiph would like to make, he added, as well as ongoing work in the metadata layer. "One of the legitimately weird things about Ogg is that we knew metadata was going to be a source of constant flux, so we moved as much as we possibly could out of the container itself. The Ogg container only does framing and delivery. [ ...] Most folks are used to this being part of the container, and so consider it 'part of Ogg' which it isn't really."
The Ogg Skeleton project is the primary focus in this area. Skeleton is essentially a "metadata track" that can hold information like MIME-types, protocol messages, and timestamps to allow the decoder to easily seek within the media. A Skeleton track could then be multiplexed or interleaved within an Ogg container file, alongside video and audio tracks.
Skeleton's timestamping capabilities are documented at the Ogg Index page, and are introduced in Skeleton 3.3. A sample indexer called OggIndex is available, and both the ffmpeg2theora converter and development builds of Firefox support it.
Montgomery concludes his Slashdot comments by noting that breaking compatibility with the existing hardware and software Ogg decoders (most of which see only Vorbis and Theora content) is probably not going to happen until the next major new codec release from Xiph.org.
Regardless of whether one finds any or all of Rullgard's criticisms
valid, there are other container format options out there for content
providers. The most popular on the Internet writ large is the .MP4 file,
which is properly known as MPEG-4 Part 14, and was approved by ISO as ISO/IEC 14496-14:2003. A part of the larger MPEG-4 specification, MPEG-4 Part 14 is a revision of two earlier standards, MPEG-4 Part 1 and MPEG-4 Part 12. Part 14 is based on the QuickTime container format created by Apple and recognized by the .MOV file extension. It can hold content in any codec (including free codecs like Vorbis and Theora), although there is a "registered" codec list maintained by the MPEG.
There is a degree of uncertainty regarding the ability to write MPEG-4 Part 14 decoders, however. The rest of the MPEG-4 specification, like all MPEG standards, is patented, and implementations must adhere to the license terms made available by the MPEG-LA licensing authority. Part 14 was once available as part of the MPEG-4 Systems patent pool, which has subsequently been withdrawn. Many people on mailing lists and discussion forums assume that the format is free to implement since it is not explicitly mentioned in the remaining MPEG-4 patent pools, "MPEG-4 Visual" and "AVC/H.264," but this is not officially stated. MPEG-LA makes it difficult to find information about the specific patents covering its technologies, preferring to steer all customers toward its "patent pool" products. The ISO specification, which should document specific patent claims, is only available to paying customers. When asked, MPEG-LA representatives said that they did not know the specific status of Part 14 in the current patent pools.
The Matroska format, like Ogg, was created to serve as an open, patent-unencumbered container. The two formats do differ in emphasis, a fact that both projects readily acknowledge. Whereas Ogg was designed alongside Vorbis with streaming audio as its primary use case, Matroska was designed to support as many codecs as possible. Xiph.org says that Matroska has better support for seeking, editing files, and using menus and chapter markers, while Matroska says that Ogg is superior at streaming media delivery (for example, Matroska only recently added support for interleaving frames from different tracks).
Matroska was launched in 2002 as a derivative of the older Multimedia Container Format (MCF). The copyright on the specification and the trademark of the name Matroska are both held by CoreCodec, Inc., but the specification is available free-of-charge. A reference library is available for download under the LGPL, and a "core parser" is offered upon request under the BSD license. The format is generally seen with the .MKV file extension for video content, although .MKA for audio is also valid.
The NUT file format Maxwell mentioned on Reddit was created by developers on the FFmpeg and MPlayer teams, but appears to be supported only in those projects. The NUT project site is sparse, with a broken link to the actual specification, but there is a mailing list that indicates that development is still underway, albeit slowly. Montgomery described it as very Ogg-like in its design, incorporating some design choices that would improve Ogg, such as a simpler way of encoding the packet length in each header (which was one of Rullgard's complaints).
Container formats are far less exciting than multimedia codecs, but the choice of containers has a very real impact on what a content provider can do. Quickly and accurately seeking within a file — while important — is just one example; another active topic right now is support for subtitle tracks. As multimedia content on the Internet grows, having subtitles accessible in their own track (or tracks, with multiple languages supported) has implications for accessibility, internationalization, and subtitle-based searching. For the record, Ogg, MP4, Matroska, and NUT all support subtitles.
As usual, the right choice depends on the usage. For some, non-free
formats like MP4 are to be avoided at all costs, even if MPEG-LA is not
likely to request licensing fees. If streaming, audio-only, or
low-bitrate performance are important, Ogg remains the simplest and
probably best option. For seeking, video editing, menu/chapter support,
or combining a wide array of codecs, Matroska offers functionality Ogg
cannot. Moving forward, the relative weight of those factors
may shift as either the codecs or the container formats evolve —
but until then, choice is good.
Recently, we have seen two projects come under considerable criticism for
the development directions that they have taken. Clearly, the development
space that a project chooses to explore says a lot about what its
developers' interests are and where they see their opportunities in the
future. These decisions also
have considerable impact on users. But, your editor would contend, it's
time to give these projects a break. There is both room and need for
different approaches to free software development.
The Subversion project recently posted a "vision and roadmap proposal"
describing where this popular source code management system can be expected
to go in the future. The Subversion developers have made some clear
decisions; these include a choice not to compete in the distributed
version control space, along with a reworked storage layer, rename
tracking, better conflict handling, and more. The mission of the Subversion project is not to
chase after the flashier distributed systems which are displacing it in a
number of contexts; instead, Subversion will exist to serve the needs of
users who feel the need for a simple tool with a centralized repository.
This announcement drew well over 100 comments on LWN, and similar numbers
elsewhere. Quite a few of them were of the "here's a nickel, get a real
SCM" variety; it seems that many see Subversion as old, unfashionable, and
past its expiration date.
Others were critical of Subversion's users, claiming that there's no reason
why they couldn't move to a proper distributed system like all of the cool
people have. Quite a few people, it seems, would be happy to see
Subversion curl up and die; others think that the decision not to pursue
distributed features will cause that to happen.
But there are plenty of
distributed version control systems out there, a few of which have
accumulated substantial user and developer communities. The Subversion
developers are right to believe that they would be hard put to create a
credible offering in that "market" at this point; they would have to create
something which is demonstrably better than the existing systems, bearing
in mind that those systems are improving quickly. Beyond that, there truly are
large numbers of Subversion users who are mostly happy with what they
have. Those users may have "look into distributed version control" on
their long-term to-do lists, but, meanwhile, they have projects to
manage. They are best served by a plan which calls for improvements
in the Subversion they are using now.
Subversion is mature software. There will certainly be no shortage of
things which can be improved in it, but its period of rapid development may
be well behind it. There is nothing wrong with the developers saying so;
in fact, there is much to commend there. Developers looking for
fast-moving, distributed systems have a variety of offerings to choose
from. Subversion, instead, will focus on what it does best: better serving
the users that it has now. It seems entirely likely that there will be
quite a few of them for some time yet.
A very different discussion has surrounded the minor user interface changes
planned for the upcoming Ubuntu 10.04 release. Here, instead, we're told
that users like the way things are now, and that trying to make changes is a
mistake. It's tempting to throw all of the complaints into the "bike shed"
category, but this is a shed that Ubuntu users will be staring at all day
long. These changes risk creating gratuitous differences between
distributions and causing confusion in users who are used to finding their
window buttons in a different place. Might it not be better to leave well
enough alone?
Note the difference, though: while there is probably limited scope for
innovations in the problem space that Subversion has chosen for itself,
anybody who tries to argue that our desktop system usability problems have
been solved will face a skeptical audience indeed. We have come a long
way, but "usability" as a problem in general is far from solved. It makes
sense to be conducting experiments in this area, especially for a
distribution like Ubuntu, which has always had a focus on desktop
usability. Other Ubuntu experiments - less intrusive desktop
notifications, for example - have found their way into other distributions.
This line of reasoning can be taken farther: we desperately need more
experimentation with usability in the free software space. We have spent
years trying to catch up to proprietary alternatives; this work, for the
most part, is done. At this point, we can content ourselves with trying to match
usability changes made by others, or we can try to come up with interesting
new stuff of our own. Your editor clearly prefers the latter.
Given the scale of the problem, the biggest complaint with moving window
buttons to the left might well be: why spend so much time and energy on
such little things? We're not at the stage where we work for months to
yield a 1% improvement; it's time to be a bit more bold. Projects like
MeeGo seem much more interesting in this regard; those developers are
seriously trying to rethink how specific groups of users will use their
computers in the future. Android, too, has done some interesting work
toward the creation of finger-friendly interfaces. And so on. That is the
kind of experimentation we need to have.
Two other criticisms have been aimed at the Ubuntu changes: that user
interface changes require the participation of human-computer interaction
experts, and that the top-down decision mechanism is not particularly
community-oriented. On the first charge your editor - who made
human-computer interaction the focus of his Master's degree work - has a
bit of sympathy. But that claim also sounds vaguely reminiscent of the SCO
Group's assertion that the Linux community could never have come up with an
enterprise-class server operating system on its own; one should never
underestimate what our community can do. In the end, the real
key to usability is to pay attention to the users. Free software
developers have a high degree of access to their users; those who take
advantage of that access will have a higher chance of creating successful
interfaces. Beyond that, we do have usability experts in our community.
On the second charge: undoubtedly Mark Shuttleworth's ability to direct
Ubuntu by decree will be irksome to some. The "behind closed doors" nature
of some Ubuntu development is also annoying and detrimental to the creation
of a true developer community. On the other hand, it's a rare distribution
which makes these decisions in a democratic way; even Debian doesn't hold
general resolutions on window button placement. There comes a time when
it's best to make the decision and move on; individual users can always fix
things they don't like.
In summary: Ubuntu could certainly be more open about the changes it is
trying to make and, perhaps, more open-minded about accepting input from
its user community. But Ubuntu's work toward improving usability is
desperately needed, and any interaction changes are certain to upset some
users. Even if the specific change in question is not necessarily the
best, experimenting with this kind of change needs to be done, regardless
of the inevitable complaints.
More generally, every project has to have some idea of the problem it is
trying to solve. In some ways, that's a far more important part of a
project than any specific body of code or any specific developer. One of
the best things about free software is that it's alive; it will evolve and,
with any luck, be better tomorrow. A project's goals say a lot about how
it can be expected to evolve. In your editor's opinion, both Subversion
and Ubuntu have set worthwhile goals, and both seem to be trying to work
toward those goals. These are good things; our community is richer for the
existence of both.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Enabling Intel TXT in Fedora; New vulnerabilities in firefox, gnome-screensaver, java, mahara,...
- Kernel: CONFIG_NO_BOOTMEM; Memory management for virtualization; Receive flow steering; The padata parallel execution mechanism.
- Distributions: The role of the Debian ftpmasters; new releases from Mandriva and Puredyne; Shuttleworth on window buttons and Maverick Meerkat; What is Unity Linux?; Android vs Maemo review.
- Development: Visualizing open source projects and communities; Grease, Notmuch, Subversion, X server, ...
- Announcements: Fedora: A Case Study of Design in a FLOSS Community; IBM and TurboHercules; iPad alternatives; interviews with Dan Frye and Serguei Beloussov...