At LinuxCon
North America 2013 in New Orleans, Gabe Newell of video game maker Valve delivered a
keynote talk explaining the company's interest in Linux. Although
Valve does not expect Linux games to outsell those on proprietary operating
systems in the near term, it sees several valuable side effects
emerging from its Linux support efforts, and it predicts that in the
long term, open systems built on Linux represent the future of gaming.
Linux Foundation executive director Jim Zemlin introduced Newell by
observing that for many years critics had said that the thing holding
back Linux adoption was the lack of good games. Now, however,
they can no longer say so. Valve is one of the gaming industry's top
players, and it has offered its Steam game-delivery platform for Linux
since February 2013.
Newell opened his remarks by pointing out that Valve has produced
games for a variety of device platforms: Windows, Mac OS X, the Xbox,
the PlayStation, and so on. The Steam service is best known as an online
content-delivery mechanism (to buy, download, and update games), but it also incorporates a suite of tools
for game developers and users to create content. Valve has always
recognized that as a company it will need to adapt to structural
changes in the technology marketplace, he said, including the
declining costs of computing and networking. The drop in these costs
has led to a number of changes in the gaming marketplace, increasing
the relative value of game design and decreasing the value of
marketing and distribution.
Those changes have made digital distribution systems like Steam the
new norm, but they have had unpredictable effects as well, such as the
emergence of "free to play" games. At some point, he said, the
marginal cost of adding a new player to the game falls below the
marginal benefit that the new player brings to the game community, so
the game maker no longer needs to charge for access. Another
example is the "electronic sports" phenomenon, where sites like
Twitch.tv have arisen, operating as a game-driven economy that does
not rely on game sales. While it would be nice to assume that this
evolving marketplace will stop now that Valve is doing well
financially, he said, the company realizes that things will just
continue to change, which is why it feels Linux is important in the
long run. Eventually, games themselves will become nodes in a
connected economy where digital goods and content are created by
individuals in the community.
Valve has had a long history with Linux, Newell continued. It
first deployed Linux game servers (handling the back-end of multiplayer
online games) in 1999, and Linux now accounts for
the majority of all game servers. Internally, the company uses Linux
in its development infrastructure, where it manages 20 terabytes of game data
in version control. Twenty terabytes sounds like a lot, he added, but the
company moves around one exabyte of game data every year (not counting game
servers), which accounts for two to three percent of global IP
traffic.
Nevertheless, he said, Linux game players still account for less than
one percent of Steam's users, and are insignificant by any metric.
But the company has seen multiple knock-on effects since it first
started working on Linux support in Steam. Working on Linux has
improved graphics driver quality and has increased developer interest in
Steam. But the openness of Linux is the factor that Valve considers the most
important.
Several years ago, Newell said, the company became concerned about
the direction that the PC industry was moving. Platform vendors
rolled out systems that could be locked down, so that they could
exert control over customers' machines. "If you didn't like Google,
you could keep Google from installing on your platform." That line of
thinking was seductive to platform vendors, but the result was a significant
drop in year-over-year PC sales.
On the other hand, while PC sales have dropped, PC game sales have
risen steadily. Ironically, this success in the gaming industry has
been driven by how much more open the PC platform is than the console
industry—at least, on the hardware side. Proprietary hardware
used to dominate game consoles, but commodity PC hardware based on
open standards has evolved much faster and has displaced it. It is at
the point now where gaming consoles re-use graphics hardware from PCs
on the inside. PC gaming is where the real innovation happens, from
social gaming to free-to-play to massively-multiplayer online games
(MMOs)—and the rate of change is increasing.
The most significant innovation from Valve's perspective is the
democratization of the gaming ecosystem. The "power" has shifted from
console makers to game developers and now to end users. For example,
he said, the Team Fortress community creates ten times the amount of
code that Valve's paid developers do. "We're pretty cocky about how
well we could compete with Bungie or other game makers," he said, "but
the one group we can't compete with are our own users."
The community of users can already outproduce the company by an order of magnitude,
he said, but if that is the trend, then proprietary systems are a
problem because they create way too much friction in the process of
creating and releasing content. For example, it can take six months
to get Apple to approve an update to an existing iOS game; that is at
odds with innovation. The company has concluded that closed systems
are not the future of gaming: Linux is.
Of course, if Linux is the future of gaming, as Valve has concluded,
the next logical question is what the company should do about that fact. It
decided it had to put its efforts into making Linux a good solution
both for gamers and for game developers. Initially, that effort was
distressing, since there was so much work to be done. So the company
decided to tackle it in stages, planning each stage with partners and
customers.
The first step was getting its first game (Left 4 Dead 2) running on Linux, which
Newell described as a "sweater thread of issues": there were problems
with the NVIDIA driver, which revealed problems that the distributions
needed to solve, which led to user experience problems. "'Just compile
it yourself'," Newell said, "does not count as a solution for users."
But the company persevered, and eventually it got Left 4 Dead 2
running on Linux, and running faster than it did on Windows. It
discovered that many of the solutions it crafted for that game
solved problems for its other games, too.
When Valve shipped its Steam client for Linux in February, Newell
said, as much as anything, the action was a signal to its
partners that the company was serious about Linux. It has since added
to its stable of Linux games (totaling 198 as of now), but it has
increased its Linux operations in other areas as well. It has
committed engineers to Simple DirectMedia Layer (SDL) development and
to the Khronos Group (which manages OpenGL and related standards), and
it has started work on a Linux debugger—independent of the LLVM
debugger effort, in which Valve also participates.
Newell closed by remarking that something the world has learned
from the recent explosion of cloud computing is that once you abstract
away certain problems, you recognize that the same abstraction should
serve you everywhere. That is as true of gaming as it is of other
forms of computing, he said. Nobody thinks they should
have to buy separate copies of games for the living room TV set and
for their PC. Likewise, game developers do not think they should have
to write separate input stacks for the different controllers found on
the PC, the mobile device, and the living room game console. Valve thinks that
Linux—and not proprietary platforms—can solve that
problem. He promised to talk more about it "next week."
Game-industry watchers saw that final remark as a hint that Valve
will be announcing a Linux-based console shortly. Whatever form such
an announcement takes, Newell made it quite clear that the company
sees Linux not just as a community of game-buyers to target, but as a
technological platform on which it can develop products—and
develop them faster and better than it can on the alternatives.
[The author would like to thank the Linux Foundation for
assistance with travel to New Orleans.]
Planetary Resources is a
company with a sky-high (some might claim "pie in the sky") goal: to find and
mine asteroids for useful minerals and other compounds. It is also a
company that uses Linux and lots of free software. So two of the
engineers from Planetary Resources, Ray Ramadorai and Marc Allen, gave a
presentation at LinuxCon
North America to describe how and why the company uses FOSS—along with
a bit about what it is trying to do overall.
Ramadorai, who is a Principal Avionics Engineer with the company, began by
relating how he joined: he noticed that the company was forming in Bellevue,
Washington in 2012 and phoned the CEO. He then posed a question: what does
asteroid mining have to do with Linux? It turns
out, he said, that as the team looked at the requirements for the spacecraft
and compared them with those of a data center, there was a "lot of overlap"
between the two. A spacecraft is a distributed system that requires high
availability. In addition, power efficiency is important as the spacecraft are powered by solar panels. Using free software was an opportunity
to use what's already been done in those areas, he said.
By way of some context, Ramadorai explained "why asteroids" and "why now".
Technology has reached a point where it is viable to build small spacecraft capable of prospecting and eventually mining near-Earth objects
(NEOs). Part of the reasoning is idealistic as many of the employees are
"space fans", but there is also a significant opportunity for financial
returns, he said.
There is more awareness of asteroids these days. The Russian meteor
earlier this year is one example, but NASA in the US is also talking about
capturing a NEO and placing it in orbit around the Moon. There are a lot more
launch opportunities these days due to the private space companies
(e.g. SpaceX, Orbital Sciences). That means companies can get things to
orbit for less than hundreds of millions of dollars. There has been a
steady growth of the small satellite industry because of that. It's not so
much that the price is coming down, but that there is much more capacity
available for launches, he said.
Hardware has also gotten cheaper and more powerful. MIPS per watt has
been increasing, at least in standard (not space-rated) parts. There
has been a lot of resistance within the aerospace industry to using
off-the-shelf parts, but the cost and performance difference is huge.
What's needed is a system that can handle some failures caused by space
radiation.
It has gotten to the point where a small company can actually build and
launch a spacecraft. FOSS has played a large role in that. The Planetary
Resources software team is small, and Ramadorai estimates his team will only
write 5-10% of the code that runs on the craft—the rest will come from existing
free software.
He emphasized that this was a long-term endeavor for the company. Actually
mining asteroids is a long way off. The first steps are to validate the
technology by starting to prospect and visit NEOs. There are some 1.5
million asteroids larger than 1km in the solar system, with nearly 1000 of
those being near Earth. If you look at smaller asteroids, those 100m or
less, there are around 20,000 of them near Earth. Seventeen percent of those NEOs are
"energetically closer" (i.e. require less energy
to reach) than the Moon.
He showed some images of various NEOs that had been visited by probes, then
showed one of the smallest on that slide (Itokawa) to scale
with the Space Needle—it is wider than that building is tall (184m). The
point is that these are immense objects. They can also contain a great deal
of interesting material. A 75m C-type asteroid has enough H2
and O2 to have launched all 135 Space Shuttle missions, while a
500m LL-Chondrite asteroid can contain more platinum than has been mined in
human history.
Unfortunately, the US International Traffic in Arms
Regulations (ITAR) restrict the kind of information Planetary
Resources can share. Spacecraft are classified as munitions, which means
that the company can't work with free software communities the way it would
prefer to. The company strives to contribute as it can, while working
within ITAR. It is "annoying" and in Ramadorai's opinion, "spacecraft
should not be classified as munitions". He suggested that those interested
"write Congress" about the problem.
The first step is the Arkyd 100 spacecraft that will be tested in
low-Earth orbit. After that is the Arkyd 200 that will travel to
Earth-crossing asteroids, and the Arkyd 300 that will actually land on
asteroids. These are small craft; the Arkyd 100 can be held relatively easily
by a human (it is roughly the size of three shoe boxes).
Part of how they can be that small is by dual-purposing everything that can
be. For example, the telescope that is used for prospecting and imaging
asteroids is also used for laser communications with Earth. When a spacecraft
is out at 1-2 astronomical units (AU), directional communication is a must for
a low-power device. But at 2 AU, the round-trip time is 32 minutes, so autonomy
in the craft is essential.
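That latency figure is easy to sanity-check with a back-of-the-envelope
calculation (a sketch, not something from the talk): divide the round-trip
distance by the speed of light.

    AU_KM = 149_597_870.7    # one astronomical unit, in kilometers
    C_KM_S = 299_792.458     # speed of light, in kilometers per second

    distance_au = 2          # spacecraft two astronomical units from Earth
    round_trip_min = 2 * distance_au * AU_KM / C_KM_S / 60
    print(f"round trip: {round_trip_min:.1f} minutes")

At exactly 2 AU that works out to a bit over 33 minutes, in the same ballpark
as the figure Ramadorai cited; either way, driving the craft interactively
from the ground is out of the question.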
The "state of the art" space-rated processor is the Rad750, a 32-bit
PowerPC running at 133MHz. It uses 10 watts of power and costs $200,000.
He compared that with an Intel Atom processor running at 1.6GHz, consuming
2 watts, and available for less than $1000. That is why the team is
planning to use off-the-shelf parts and to deal with faults that will
happen because the processor is not space rated.
Linux is important because they can run the same operating system on the
craft, in the ground station systems, on their desktops, and in the cloud.
The cloud is useful for doing simulations of the system code while
injecting faults. It is common to spin up 10,000 instances in the cloud to
do Monte
Carlo simulations while injecting faults for testing purposes, he said.
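The shape of such a campaign is straightforward even if the scale is not. The
sketch below (hypothetical code, not Planetary Resources' own) runs a stand-in
computation thousands of times, randomly injecting a fault into some runs, and
reports how often the output is wrong:

    import random

    def run_flight_software(inputs, fault=False):
        # Stand-in for one simulated run of the flight software; a real
        # campaign would boot the actual flight image in a cloud instance.
        result = sum(inputs)
        if fault:
            # Model a radiation upset as a single bit flip in the result.
            result ^= 1 << random.randrange(16)
        return result

    def monte_carlo(trials=10_000, fault_rate=0.05):
        inputs = [1, 2, 3, 4]
        expected = sum(inputs)
        failures = sum(
            run_flight_software(inputs, fault=(random.random() < fault_rate)) != expected
            for _ in range(trials))
        return failures / trials

    print(f"observed failure rate: {monte_carlo():.2%}")

In practice the interesting measurement is how often the fault-handling logic
masks an injected fault, but the counting machinery looks much the same.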
Ramadorai then turned the floor over to Allen, whom he described as one of
the Jet Propulsion Laboratory (JPL) refugees at Planetary Resources. While at
JPL, he worked on the backup landing software for the Curiosity Mars rover.
He was, Ramadorai said, one of the few software people who was quite happy
that his code never actually ran. Allen noted that he worked on flight software at JPL
for five years, which gave him a different perspective than some others at
the company; there is a "mix of both worlds" on the team.
Traditional deep space missions are expensive and take a long time to
design and launch. There is a tendency to pick some technology (like the
Rad750 processor) and stick with it. There are at most 2-3 vehicles built
per project, but Planetary Resources has a different philosophy and set of
motivations. It needs to look at "lots of asteroids" to find ones of interest.
That means using cheap, commodity hardware which can be upgraded as needed
throughout the life of the project. Because the company is a low-cost
spacecraft developer, it wants to use Linux and FOSS everywhere it can.
Traditionally, each component was its own silo, so the flight, ground
station, and operations software were all completely separate.
There is so much free software available that it is easy to find code to
reuse and repurpose for the company's needs, Allen said. The challenge is how to stitch
all of the
disparate pieces together into a high-availability system. But a proprietary
system would have far fewer contributors and wouldn't get new features as
quickly, he said.
For example, inter-process communication (IPC) has traditionally been
created from scratch for each project, with custom messaging formats,
state machines, and serialization mechanisms. Instead of doing that,
the Planetary Resources team specified a
state machine model and message model in XML and fed it to some Python code that
auto-generated the state machine and IPC code. It uses protobuf and Nanopb for
serialization and ZeroMQ for message
passing (among other FOSS components). "Why reinvent the wheel?", he asked.
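As a rough illustration of the message-passing half of that stack, here is a
minimal pyzmq sketch that moves one topic-tagged telemetry message between
two sockets in a single process. In the real system each side would be a
separate process, and the payload would be a protobuf/Nanopb message produced
by the generated code rather than a plain byte string; the socket name and
topic here are made up for the example.

    import zmq

    ctx = zmq.Context()

    # "Flight software" side: pushes telemetry messages onto the bus.
    sender = ctx.socket(zmq.PUSH)
    sender.bind("inproc://telemetry")

    # "Operations" side: pulls them off.
    receiver = ctx.socket(zmq.PULL)
    receiver.connect("inproc://telemetry")

    # First frame is a topic, second the serialized payload.
    sender.send_multipart([b"imu.attitude", b"q=[0.01, 0.02, 0.98, 0.05]"])
    topic, payload = receiver.recv_multipart()
    print(topic.decode(), payload.decode())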
That could not have been done in the past, because processors like the
Rad750 would not support it. By using commodity hardware and handling
faults when they occur, it opens up more possibilities. For example, a
hypervisor is used to simulate redundant hardware in order to support
triple modular redundancy. Three separate versions of the flight software
can be run in virtual machines, voting on the outcome to eliminate a
problem caused by space radiation in one of the programs. It isn't a
perfect solution, but "we're not NASA, we don't have to have 100%
reliability".
The team is considering putting a SQL database in the spacecraft itself.
The communication availability, bandwidth, and reliability are such that
there needs to be an easy and compact way for operations to request data
sets to be retrieved. "Fundamentally, we are a data company", Allen said, but
much of the data will never reach Earth.
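What that could look like in practice (a sketch using Python's built-in
sqlite3 module; the actual schema and query interface were not described in
the talk):

    import sqlite3

    # In-memory stand-in for an on-board image/telemetry store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, target TEXT, size_kb INTEGER)")
    db.executemany("INSERT INTO images (target, size_kb) VALUES (?, ?)",
                   [("Itokawa", 1200), ("Itokawa", 45), ("2012 DA14", 900)])

    # Operations uplinks a short, compact query rather than asking for
    # everything; only the small matching rows come back down the link.
    rows = db.execute("SELECT id, size_kb FROM images WHERE target = ? AND size_kb < ?",
                      ("Itokawa", 100)).fetchall()
    print(rows)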
There are a number of challenges with maintaining database integrity in
space. But there is only a need to make it "reliable enough" to be useful;
corruption will happen, and the software must be designed with faults in
mind. ECC memory and RAID storage are two
techniques that can be explored because of the flexibility that commodity
hardware and FOSS provide.
Perhaps surprisingly, the cloud has been instrumental in developing the
software. Simulating faults, stressing the software, and so on have all
used the cloud extensively, he said. ITAR plays a role there too, as the
company must use Amazon's GovCloud, which is ITAR-compliant.
Ramadorai wrapped things up with a quick pitch for anyone looking for
employment. The company needs both hardware and software engineers;
judging from the discussions going on after the talk, there were some
interested folks in the audience.
The plan is to launch a prototype sometime next year, Ramadorai said. It
is difficult to be more specific because launch dates (and carriers)
frequently change. Beyond that, it will depend on how the early tests
go. Mining asteroids is, alas, still quite a ways off, it seems.
[ I would like to thank LWN subscribers for travel assistance to New
Orleans for LinuxCon North America. ]
The WebKit web rendering engine is used by free software projects
of just about every flavor, from embedded Linux products to desktop
browsers. Until recently, it was also the rendering engine that
powered Google's Chrome and Android browsers. At LinuxCon
North America 2013 in New Orleans, Juan Sanchez from Igalia
presented an inside look at the history of WebKit, and shared some
thoughts on how the project will be affected by Google's decision to
fork WebKit and create its own Blink engine.
Considering all the talk about HTML5 as the future of application
development, Sanchez said, people talk surprisingly little about the
actual components that go into delivering that future. WebKit is a
key player, he explained. It is essentially the only practical choice
for free software developers who want to embed web rendering into
their code; Mozilla's Gecko is a highly capable rendering engine as
well, but it is tied to the internals of Firefox in ways that make it
less flexible for other uses. Igalia is one of the top contributors
to WebKit; the company maintains the WebKitGTK+ "port" (a term with a
specific meaning in the WebKit community) and it does a lot of
consulting work for other companies which often includes embedding
the WebKit engine.
The bird's eye kit
Sanchez reviewed the history of the project, and how the
architecture of the code has evolved along with the community. WebKit
started off as a fork of the KHTML renderer and KJS JavaScript
engine from KDE. Apple began WebKit development in 2001, then opened the
project up (with its contributions under a permissive BSD license) in 2005. WebKit's scope
is limited: it just provides an engine that can understand and
render HTML, CSS, JavaScript, and the Document Object Model (DOM),
plus ways to interact with those web page contents. It
does not attempt to be a browser on its own, and project members are
not interested in turning WebKit into one. The project's focus is
on compatibility: adhering to standards and providing components that
are rigorously tested for compliance. As such, the project is
pragmatic about its decision making—or as Sanchez put it, "WebKit is
an engineering project, not a science project."
The entirety of what is commonly called WebKit consists of a few
discrete parts, he said. There is the component called "WebKit,"
which simply provides a thin API layer for applications above it in
the stack. Underneath this layer is "WebCore," which contains the
rendering engine, page layout engine, and network access—the
bulk of the functionality. In addition, there is a JavaScript engine
(although which one is used varies from one WebKit-based application
to another), and a platform-specific compatibility layer that hooks
into general-purpose system libraries.
A "port" in WebKit is a version of WebKit adapted to specific
target platform. The main ports maintained by the project are iOS,
OS X, GTK+, Qt, EFL (for Tizen), and Google's Chromium. One new
port is being discussed: WebKitNIX, which would be a lightweight port
for OpenGL. But there are many smaller ports of WebKit as well, such
as those for Symbian and BlackBerry. To further illustrate how ports
are designed, Sanchez explained how the WebKitGTK+ port is structured:
The WebKit API layer becomes webkit/gtk, the JavaScript engine is
JavaScriptCore, and the platform-specific compatibility layer calls
out to GNOME components like libsoup, GStreamer, Cairo, and the
Accessibility Toolkit (ATK).
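To give a sense of what that layering buys an application developer, here is
a minimal sketch of embedding the engine through the GTK+ port, using the
WebKit2-based GObject-introspection bindings as shipped by current
distributions (more on WebKit2 below; the exact version strings vary by
distribution, so treat this as illustrative rather than definitive):

    import gi
    gi.require_version("Gtk", "3.0")
    gi.require_version("WebKit2", "4.0")
    from gi.repository import Gtk, WebKit2

    window = Gtk.Window(title="Embedded WebKit")
    window.set_default_size(800, 600)
    window.connect("destroy", Gtk.main_quit)

    # The entire rendering engine is exposed to the application as one widget.
    view = WebKit2.WebView()
    view.load_uri("https://lwn.net/")
    window.add(view)

    window.show_all()
    Gtk.main()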
Recent history and changes
In addition to the architecture described above, there is also
another, known as WebKit2, which is currently in development. As
Sanchez explained, WebKit2 is intended to replace the "WebKit" API
layer, primarily to support a split-process model. In the new model,
the user interface runs in one process, the main parsing and rendering
of web content runs in a second process, and browser plugins run in a
third. The goals are increased stability and performance, plus better
security brought about by isolating the processes. Apple initiated
the WebKit2 effort, and the code is currently working in the GTK+ and Qt
ports as well. It is more-or-less complete, so that WebKit1 will move
into maintenance mode shortly.
A major issue, however, is that WebKit2 is unrelated to Google's
own work adapting Chrome to a split-process design. Chrome draws the
"split process" barrier lower, so that each Chrome instance can have
several rendering processes. This difference of opinion between Apple
and Google goes back quite a ways, but it certainly contributed to
Google's recent decision to fork WebKit and create Blink.
Another possible factor in the split was project
governance. The project has long had just two officially defined
roles: committer and reviewer. A committer must have sent in 20 to 30
patches and demonstrated a commitment to the project's rules: once
designated a committer, a person can submit changes, although their
changes are always reviewed. Becoming a designated reviewer is more
difficult: around 300 patches are required, and three existing
reviewers must vouch for the candidate. But once designated, a
reviewer can commit changes directly, without review. More recently, though, Apple
introduced the new top-level role of "owner," and declared that only owners
would be allowed to commit changes to WebKit2.
Google abruptly left WebKit in April, via a surprise announcement
that Sanchez said came as a shock to the WebKit community. Over the
preceding years, as it had contributed to WebKit for Chrome, Google
had risen to the point of equaling Apple in terms of the number of
commits—each company contributed about one third of the changes
to the code, with the remaining third coming from everyone else
(Igalia included). In addition to the number of commits, Google
actually had more engineers working on WebKit than did any other
company (including Apple, who had fewer developers even though those
developers produced as many patches as Google's).
WebKit from here on out
The divergence of Chrome and WebKit's multi-process model and
Apple's desire to add a top-tier level of committer were both factors
in Google's decision to start Blink, but there were other factors,
too. Sanchez said that Google expressed its desire to simplify
Blink's architecture, eliminating the concept of ports altogether and
building the rendering engine specifically to suit Chrome's needs.
The Blink code has already started to diverge, he said, and so far the
Blink governance model is far simpler: there is no official code
review, just people submitting changes and looking at others' changes
informally. The majority of the Blink developers are Google
employees, he said, but outside contributors have begun to appear, too.
Google's departure will certainly have an impact on WebKit, Sanchez
said, although how much remains to be seen. The company obviously
contributed a lot of code, so the developer-power will be missed, as
will Google's ability to balance out Apple. With Apple as the largest
remaining contributor to the project (and by a large margin), it will likely be more difficult for
other companies or volunteers to have a say in steering the project.
The WebKit community is already seeing the impact of Google's
departure in terms of orphaned modules that now need new maintainers.
But there are also several instances of downstream projects dropping
WebKit in favor of Blink. Sanchez noted that Opera had announced it
would switch to WebKit just a few days before the Blink announcement;
the company then issued a follow-up saying that it would switch to
Blink instead. More recently, of course, Qt
announced that it would adopt Blink as its HTML rendering engine, too.
On the other hand, Sanchez said, there have been positive outcomes from
Google's departure. Apple now seems likely to drop its proposal for a
formal "owner" contributor role. There are also several hacks put
into WebKit specifically for Chrome support that have already been
removed. But that also means that the two rendering engines have
already begun to diverge, and Sanchez said he was not sure how long
code would remain portable between the two. It is good that free
software developers now have two engines to choose from when creating
an embedded web application, after so many years of WebKit being the
only viable option. But the divergence could ultimately prove bad for
the community, at least in some ways. Only time will tell.
[The author would like to thank the Linux Foundation for
assistance with travel to New Orleans.]