At LinuxCon
North America 2013 in New Orleans, Gabe Newell from video game–maker Valve delivered a
keynote talk explaining the company's interest in Linux. Although
Valve does not expect Linux games to outsell those on proprietary operating
systems in the near term, it sees several valuable side effects
emerging from its Linux support efforts, and it predicts that in the
long term, open systems built on Linux represent the future of gaming.
Linux Foundation executive director Jim Zemlin introduced Newell by
observing that for many years critics had said that the thing holding
back Linux adoption was the lack of good games. Now, however,
they can no longer say so. Valve is one of the gaming industry's top
players, and it has offered its Steam game-delivery platform for Linux
since February 2013.
Newell opened his remarks by pointing out that Valve has produced
games for a variety of device platforms: Windows, Mac OS X, the Xbox,
the PlayStation, and so on. The Steam service is best known as an online
content-delivery mechanism (to buy, download, and update games), but it also incorporates a suite of tools
for game developers and users to create content. Valve has always
recognized that as a company it will need to adapt to structural
changes in the technology marketplace, he said, including the
declining costs of computing and networking. The drop in these costs
has led to a number of changes in the gaming marketplace, increasing
the relative value of game design and decreasing the value of
marketing and distribution.
Those changes have made digital distribution systems like Steam the
new norm, but they have had unpredictable effects as well, such as the
emergence of "free to play" games. At some point, he said, the
marginal cost of adding a new player to a game falls below the
marginal benefit that the player brings to the game community, so the
developer no longer needs to charge for access. Another
example is the "electronic sports" phenomenon, where sites like
Twitch.tv have arisen, operating as a game-driven economy that does
not rely on game sales. While it would be nice to assume that this
evolving marketplace will stop now that Valve is doing well
financially, he said, the company realizes that things will just
continue to change, which is why it feels Linux is important in the
long run. Eventually, games themselves will become nodes in a
connected economy where digital goods and content are created by
individuals in the community.
Valve has had a long history with Linux, Newell continued. It
first deployed Linux game servers (handling the back-end of multiplayer
online games) in 1999, and Linux now accounts for
the majority of all game servers. Internally, the company uses Linux
in its development infrastructure, where it manages 20 terabytes of game data
in version control. Twenty terabytes sounds like a lot, he added, but the
company moves around one exabyte of game data every year (not counting game
servers), which accounts for two to three percent of global IP
traffic.
Nevertheless, he said, Linux game players still account for less than
one percent of Steam's users, and are insignificant by any metric.
But the company has seen multiple knock-on effects since it first
started working on Linux support in Steam. Working on Linux has
improved graphics driver quality and has increased developer interest in
Steam. But the openness of Linux is the factor that Valve considers the most
important.
Several years ago, Newell said, the company became concerned about
the direction that the PC industry was moving. Platform vendors
rolled out systems that could be locked down, so that vendors could
exert control over customers' machines. "If you didn't like Google,
you could keep Google from installing on your platform." That line of
thinking was seductive to platform vendors, but the result was a significant
drop in year-over-year PC sales.
On the other hand, while PC sales have dropped, PC game sales have
risen steadily. Ironically, this success has been driven by the
openness of the PC platform relative to the console industry, at
least on the hardware side. Proprietary hardware
used to dominate game consoles, but commodity PC hardware based on
open standards has evolved much faster and has displaced it. It is at
the point now where gaming consoles re-use graphics hardware from PCs
on the inside. PC gaming is where the real innovation happens, from
social gaming to free-to-play to massively-multiplayer online games
(MMOs)—and the rate of change is increasing.
The most significant innovation from Valve's perspective is the
democratization of the gaming ecosystem. The "power" has shifted from
console makers to game developers and now to end users. For example,
he said, the Team Fortress community creates ten times the amount of
code that Valve's paid developers do. "We're pretty cocky about how
well we could compete with Bungie or other game makers," he said, "but
the one group we can't compete with are our own users."
The community of users can already outproduce the company by an order of magnitude,
he said, but if that is the trend, then proprietary systems are a
problem because they create way too much friction in the process of
creating and releasing content. For example, it can take six months
to get Apple to approve an update to an existing iOS game; that is at
odds with innovation. The company has concluded that closed systems
are not the future of gaming: Linux is.
Of course, if Linux is the future of gaming, as Valve has concluded,
the next logical question is what the company should do about that fact. It
decided it had to put its efforts into making Linux a good solution
both for gamers and for game developers. Initially, that effort was
distressing, since there was so much work to be done. So the company
decided to tackle it in stages, planning each stage with partners and
customers.
The first step was getting its first game (Left 4 Dead 2) running on Linux, which
Newell described as a "sweater thread of issues": there were problems
with the NVIDIA driver, which revealed problems that the distributions
needed to solve, which led to user experience problems. "'Just compile
it yourself'," Newell said, "does not count as a solution for users."
But the company persevered, and eventually it got Left 4 Dead 2
running on Linux, and running faster than it did on Windows. It
discovered that many of the solutions it crafted for that game
solved problems for its other games, too.
When Valve shipped its Steam client for Linux in February, Newell
said, as much as anything, the action was a signal to its
partners that the company was serious about Linux. It has since added
to its stable of Linux games (totaling 198 as of now), but it has
increased its Linux operations in other areas as well. It has
committed engineers to Simple DirectMedia Layer (SDL) development and
to the Khronos Group (which manages OpenGL and related standards), and
it has started work on a Linux debugger—independent of the LLVM
debugger effort, in which Valve also participates.
Newell closed by remarking that something the world has learned
from the recent explosion of cloud computing is that once you abstract
away certain problems, you recognize that the same
abstraction should serve you everywhere. That is as true of gaming as
it is of other forms of computing, he said. Nobody thinks they should
have to buy separate copies of games for the living room TV set and
for their PC. Likewise, game developers do not think they should have
to write separate input stacks for the different controllers found on
the PC, the mobile device, and the living room game console. Valve thinks that
Linux—and not proprietary platforms—can solve that
problem. He promised to talk more about it "next week."
Game-industry watchers saw that final remark as a hint that Valve
will be announcing a Linux-based console shortly. Whatever form such
an announcement takes, Newell made it quite clear that the company
sees Linux not just as a community of game-buyers to target, but as a
technological platform on which it can develop products—and
develop them faster and better than it can on the alternatives.
[The author would like to thank the Linux Foundation for
assistance with travel to New Orleans.]
Comments (80 posted)
Planetary Resources is a
company with a sky-high (some might claim "pie in the sky") goal: to find and
mine asteroids for useful minerals and other compounds. It is also a
company that uses Linux and lots of free software. So two of the
engineers from Planetary Resources, Ray Ramadorai and Marc Allen, gave a
presentation at LinuxCon
North America to describe how and why the company uses FOSS—along with
a bit about what it is trying to do overall.
Ramadorai, who is a Principal Avionics Engineer with the company, related
that he joined after noticing in 2012 that the company was forming in
Bellevue, Washington and phoning the CEO. He began the talk
with a question: what does asteroid mining have to do with Linux? It turns
out, he said, that as they looked at the requirements for the spacecraft
and compared them with those of a data center, there was a "lot of overlap"
between the two. A spacecraft is a distributed system that requires high
availability. In addition, power efficiency is important as the spacecraft are powered by solar panels. Using free software was an opportunity
to use what's already been done in those areas, he said.
By way of some context, Ramadorai explained "why asteroids" and "why now".
Technology has reached a point where it is viable to build small spacecraft capable of prospecting and eventually mining near-Earth objects
(NEOs). Part of the reasoning is idealistic as many of the employees are
"space fans", but there is also a significant opportunity for financial
returns, he said.
There is more awareness of asteroids these days. The Russian meteor
earlier this year is one example, but NASA in the US is also talking about
capturing a NEO and orbiting it around the moon. There are a lot more
launch opportunities these days due to the private space companies
(e.g. SpaceX, Orbital Sciences). That means companies can get things to
orbit for less than hundreds of millions of dollars. There has been a
steady growth of the small satellite industry because of that. It's not so
much that the price is coming down, but that there is much more capacity
available for launches, he said.
Hardware has also gotten cheaper and more powerful. MIPS per watt
has been increasing, at least in standard (not space-rated) parts. There
has been a lot of resistance within the aerospace industry to using
off-the-shelf parts, but the cost and performance difference is huge.
What's needed is a system that can handle some failures caused by space
radiation.
It has gotten to the point where a small company can actually build and
launch a spacecraft. FOSS has played a large role in that. The Planetary
Resources software team is small, and Ramadorai estimates his team will only
write 5-10% of the code that runs on the craft—the rest will come from existing
free software.
He emphasized that this was a long-term endeavor for the company. Actually
mining asteroids is a long way off. The first steps are to validate the
technology by starting to prospect and visit NEOs. There are some 1.5
million asteroids larger than 1km in the solar system, with nearly 1000 of
those being near Earth. If you look at smaller asteroids, those 100m or
less, there are around 20,000 of them near Earth. Seventeen percent of those NEOs are
"energetically closer" (i.e. require less energy
to reach) than the Moon.
He showed some images of various NEOs that had been visited by probes, then
showed one of the smallest on that slide (Itokawa) to scale
with the Space Needle—it is wider than that building is tall (184m). The
point is that even small NEOs are immense objects. They can also contain a great deal
of interesting material. A 75m C-type asteroid has enough H2
and O2 to have launched all 135 Space Shuttle missions, while a
500m LL-Chondrite asteroid can contain more platinum than has been mined in
human history.
Unfortunately, the US International Traffic in Arms
Regulations (ITAR) restrict the kind of information Planetary
Resources can share. Spacecraft are classified as munitions, which means
that the company can't work with free software communities the way it would
prefer to. The company strives to contribute as it can, while working
within ITAR. It is "annoying" and in Ramadorai's opinion, "spacecraft
should not be classified as munitions". He suggested that those interested
"write Congress" about the problem.
The first step is the Arkyd 100 spacecraft that will be tested in
low-Earth orbit. After that is the Arkyd 200 that will travel to
Earth-crossing asteroids, and the Arkyd 300 that will actually land on
asteroids. These are small craft; the Arkyd 100 is small enough to be held by a
person (roughly the size of three shoe boxes).
Part of how they can be that small is by dual-purposing everything that can
be. For example, the telescope that is used for prospecting and imaging
asteroids is also used for laser communications with Earth. When a spacecraft is 1-2 astronomical units (AU) out, directional
communication is a must for a low-power device. But at 2 AU, the
round-trip communication delay is around 32 minutes, so autonomy in the craft is essential.
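That delay is easy to sanity-check from the definition of the astronomical unit; the quick calculation below (rounded constants, not figures from the talk) lands within about a minute of the number Ramadorai quoted.

    # Rough light-travel delay for a probe 2 AU away (rounded constants).
    AU_M = 1.496e11       # metres in one astronomical unit
    C_M_S = 2.998e8       # speed of light in metres per second

    one_way_min = 2 * AU_M / C_M_S / 60
    print(f"one way:    {one_way_min:.1f} minutes")      # ~16.6
    print(f"round trip: {2 * one_way_min:.1f} minutes")  # ~33.3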
The "state of the art" space-rated processor is the Rad750, a 32-bit
PowerPC running at 133MHz. It uses 10 watts of power and costs $200,000.
He compared that with an Intel Atom processor running at 1.6GHz, consuming
2 watts, and available for less than $1000. That is why the team is
planning to use off-the-shelf parts and to deal with faults that will
happen because the processor is not space rated.
Linux is important because they can run the same operating system on the
craft, in the ground station systems, on their desktops, and in the cloud.
The cloud is useful for doing simulations of the system code while
injecting faults. It is common to spin up 10,000 instances in the cloud to
do Monte
Carlo simulations while injecting faults for testing purposes, he said.
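As an illustration of the idea (not Planetary Resources' actual test harness; the subsystem, fault model, and names below are made up), a fault-injection Monte Carlo run boils down to a loop like this:

    # Toy fault-injection Monte Carlo run (illustrative only).
    import random

    def run_subsystem(reading, fault=False):
        # Stand-in flight-software routine: scale a 16-bit sensor reading.
        # A "fault" flips one bit of the input, mimicking a radiation upset.
        if fault:
            reading ^= 1 << random.randrange(16)
        return reading * 2

    def monte_carlo(trials=10_000, fault_rate=0.01):
        failures = 0
        for _ in range(trials):
            reading = random.randrange(1 << 15)
            result = run_subsystem(reading, fault=random.random() < fault_rate)
            if result != reading * 2:
                failures += 1
        return failures / trials

    print("observed failure rate:", monte_carlo())

In the cloud, the same loop is simply sharded across thousands of instances.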
Ramadorai then turned the floor over to Allen, whom he
described as one of the Jet Propulsion Laboratory
(JPL) refugees at Planetary
Resources. While at JPL, Allen worked on the backup landing software for the
Curiosity Mars rover. He was, Ramadorai said, one of the few software
people who was quite happy that his code
never actually ran. Allen noted that he worked on flight software at JPL
for five years, which gave him a different perspective than some others at
the company; there is a "mix of both worlds" on the team.
Traditional deep space missions are expensive and take a long time to
design and launch. There is a tendency to pick some technology (like the
Rad750 processor) and stick with it. There are at most 2-3 vehicles built
per project, but Planetary Resources has a different philosophy and set of
motivations. It needs to look at "lots of asteroids" to find ones of interest.
That means using cheap, commodity hardware which can be upgraded as needed
throughout the life of the project. Because the company is a low-cost
spacecraft developer, it wants to use Linux and FOSS everywhere it can.
Traditionally, each separate component was its own silo, so the software
for flight, ground station, and operations was completely separate.
There is so much free software available that it is easy to find code to reuse and
repurpose for their needs, Allen said. The challenge is how to stitch
all of the
disparate pieces together into a high-availability system. But a proprietary
system would have far fewer contributors and wouldn't get new features as
quickly, he said.
For example, inter-process communication (IPC) has traditionally been
built from scratch for each project, with custom messaging formats, state
machines, and serialization mechanisms. Instead of doing that,
the Planetary Resources team specified a
state machine model and message model in XML and fed it to some Python code that
auto-generated the state machine and IPC code. It uses protobuf and Nanopb for
serialization and ZeroMQ for message
passing (among other FOSS components). "Why reinvent the wheel?", he asked.
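For a rough idea of the shape of such a system (the message layout and names here are invented, and the standard struct module stands in for the protobuf/Nanopb serialization the team actually uses), a ZeroMQ exchange between two subsystems can be as small as:

    # Sketch of subsystem-to-subsystem messaging over ZeroMQ; the fixed
    # struct layout is a stand-in for generated protobuf/Nanopb messages.
    import struct
    import zmq

    TELEMETRY_FMT = "<HIf"   # message id (u16), sequence (u32), value (f32)

    def encode(msg_id, seq, value):
        return struct.pack(TELEMETRY_FMT, msg_id, seq, value)

    def decode(payload):
        return struct.unpack(TELEMETRY_FMT, payload)

    ctx = zmq.Context.instance()

    flight_computer = ctx.socket(zmq.PULL)   # consumer of telemetry
    flight_computer.bind("inproc://telemetry")

    sensor = ctx.socket(zmq.PUSH)            # producer of telemetry
    sensor.connect("inproc://telemetry")

    sensor.send(encode(7, 1, 21.5))
    print(decode(flight_computer.recv()))    # (7, 1, 21.5)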
That could not have been done in the past, because processors like the
Rad750 would not support it. Using commodity hardware and handling
faults when they occur opens up more possibilities. For example, a
hypervisor is used to simulate redundant hardware in order to support
triple modular redundancy. Three separate versions of the flight software
can be run in virtual machines, voting on the outcome to eliminate a
problem caused by space radiation in one of the programs. It isn't a
perfect solution, but "we're not NASA, we don't have to have 100%
reliability".
The team is considering putting a SQL database in the spacecraft itself.
The communication availability, bandwidth, and reliability are such that
there needs to be an easy and compact way for operations to request data
sets to be retrieved. "Fundamentally, we are a data company", Allen said, but
much of the data will never reach Earth.
There are a number of challenges with maintaining database integrity in
space. But it only needs to be made "reliable enough" to be useful;
corruption will happen, and the software must be designed with faults in
mind. Features like ECC memory and RAID storage are two
techniques that can be explored because of the flexibility that commodity
hardware and FOSS provide.
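As a sketch of the "reliable enough" approach (hypothetical schema; the talk only said a SQL database is under consideration), SQLite's built-in self-check plus per-row checksums already goes a long way:

    # Sketch of an on-board telemetry store that expects corruption.
    import sqlite3
    import zlib

    db = sqlite3.connect("telemetry.db")
    db.execute("""CREATE TABLE IF NOT EXISTS telemetry (
                      seq    INTEGER PRIMARY KEY,
                      sensor TEXT NOT NULL,
                      value  REAL NOT NULL,
                      crc    INTEGER NOT NULL)""")

    def store(seq, sensor, value):
        # A per-row checksum lets corrupted rows be detected and discarded
        # individually instead of trusting the whole file.
        crc = zlib.crc32(f"{seq}:{sensor}:{value}".encode())
        with db:
            db.execute("INSERT OR REPLACE INTO telemetry VALUES (?, ?, ?, ?)",
                       (seq, sensor, value, crc))

    store(1, "imager_temp_c", 21.5)
    print(db.execute("PRAGMA integrity_check").fetchone()[0])   # "ok"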
Perhaps surprisingly, the cloud has been instrumental in developing the
software. Simulating faults, stressing the software, and so on have all
used the cloud extensively, he said. ITAR plays a role there too, as the
company must use Amazon's GovCloud, which is ITAR compliant.
Ramadorai wrapped things up with a quick pitch for anyone looking for
employment. The company needs both hardware and software engineers;
judging from the discussions going on after the talk, there were some
interested folks in the audience.
The plan is to launch a prototype sometime next year, Ramadorai said. It
is difficult to be more specific because launch dates (and carriers)
frequently change. Beyond that, it would depend on how the early tests
go. Mining asteroids, alas, still seems to be quite a ways off.
[ I would like to thank LWN subscribers for travel assistance to New
Orleans for LinuxCon North America. ]
Comments (21 posted)
The WebKit web rendering engine is used by free software projects
of just about every flavor, from embedded Linux products to desktop
browsers. Until recently, it was also the rendering engine that
powered Google's Android browser and Chrome. At LinuxCon
North America 2013 in New Orleans, Juan Sanchez from Igalia
presented an inside look at the history of WebKit, and shared some
thoughts on how the project will be affected by Google's decision to
fork WebKit and create its own Blink engine.
Considering all the talk about HTML5 as the future of application
development, Sanchez said, people talk surprisingly little about the
actual components that go into delivering that future. WebKit is a
key player, he explained. It is essentially the only practical choice
for free software developers who want to embed web rendering into
their code; Mozilla's Gecko is a highly capable rendering engine as
well, but it is tied to the internals of Firefox in ways that make it
less flexible for other uses. Igalia is one of the top contributors
to WebKit; the company maintains the WebKitGTK+ "port" (a term with a
specific meaning in the WebKit community) and it does a lot of
consulting work for other companies which often includes embedding
the WebKit engine.
The bird's eye kit
Sanchez reviewed the history of the project, and how the
architecture of the code has evolved along with the community. WebKit
started off as a fork of the KHTML renderer and KJS JavaScript
engine from KDE. Apple began WebKit development in 2001, then opened the
project up (with its contributions under a permissive BSD license) in 2005. WebKit's scope
is limited: it just provides an engine that can understand and
render HTML, CSS, JavaScript, and the Document Object Model (DOM),
plus ways to interact with those web page contents. It
does not attempt to be a browser on its own, and project members are
not interested in turning WebKit into one. The project's focus is
on compatibility: adhering to standards and providing components that
are rigorously tested for compliance. As such, the project is
pragmatic about its decision making—or as Sanchez put it, "WebKit is
an engineering project, not a science project."
The entirety of what is commonly called WebKit consists of a few
discrete parts, he said. There is the component called "WebKit,"
which simply provides a thin API layer for applications above it in
the stack. Underneath this layer is "WebCore," which contains the
rendering engine, page layout engine, and network access—the
bulk of the functionality. In addition, there is a JavaScript engine
(although which one is used varies from one WebKit-based application
to another), and a platform-specific compatibility layer that hooks
into general-purpose system libraries.
A "port" in WebKit is a version of WebKit adapted to specific
target platform. The main ports maintained by the project are iOS,
OS X, GTK+, Qt, EFL (for Tizen), and Google's Chromium. One new
port is being discussed: WebKitNIX, which would be a lightweight port
for OpenGL. But there are many smaller ports of WebKit as well, such
as those for Symbian and BlackBerry. To further illustrate how ports
are designed, Sanchez explained how the WebKitGTK+ port is structured:
The WebKit API layer becomes webkit/gtk, the JavaScript engine is
JavaScriptCore, and the platform-specific compatibility layer calls
out to GNOME components like libsoup, GStreamer, Cairo, and the
Accessibility Toolkit (ATK).
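As a minimal illustration of what that stack gives an application developer (assuming the WebKitGTK+ 1.x GObject-introspection bindings, e.g. the gir1.2-webkit-3.0 package, are installed), embedding a full web view takes only a few lines:

    # Sketch of embedding the WebKitGTK+ 1.x engine (the WebKit API layer
    # on top of WebCore and JavaScriptCore) in a GTK+ 3 window.
    import gi
    gi.require_version("Gtk", "3.0")
    gi.require_version("WebKit", "3.0")
    from gi.repository import Gtk, WebKit

    window = Gtk.Window(title="WebKitGTK+ demo")
    window.set_default_size(800, 600)
    window.connect("destroy", Gtk.main_quit)

    view = WebKit.WebView()          # the rendering engine itself
    view.load_uri("https://lwn.net/")

    scroller = Gtk.ScrolledWindow()  # the web view expects a scrollable parent
    scroller.add(view)
    window.add(scroller)

    window.show_all()
    Gtk.main()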
Recent history and changes
In addition to the architecture described above, there is also
another, known as WebKit2, which is currently in development. As
Sanchez explained, WebKit2 is intended to replace the "WebKit" API
layer, primarily to support a split-process model. In the new model,
the user interface runs in one process, the main parsing and rendering
of web content runs in a second process, and browser plugins run in a
third. The goals are increased stability and performance, plus better
security brought about by isolating the processes. Apple initiated
the WebKit2 effort, and the code is currently working in the GTK+ and Qt
ports as well. It is more-or-less complete, so that WebKit1 will move
into maintenance mode shortly.
A major issue, however, is that WebKit2 is unrelated to Google's
own work adapting Chrome to a split-process design. Chrome draws the
process boundary at a lower level, so that each Chrome instance can have
several rendering processes. This difference of opinion between Apple
and Google goes back quite a ways, but it certainly contributed to
Google's recent decision to fork WebKit and create Blink.
Another possible factor in the split was project
governance. The project has long had just two officially-defined
roles: committer and reviewer. A committer must have sent in 20 to 30
patches and demonstrated a commitment to the project's rules. Once
designated, a committer can submit changes, although those
changes are always reviewed. Becoming a designated reviewer is more
difficult: around 300 patches are required, and three existing
reviewers must vouch for the candidate. But once designated, a
reviewer can commit changes directly, without review. More recently, though, Apple
introduced the new top-level role of "owner," and declared that only owners
would be allowed to commit changes to WebKit2.
Google abruptly left WebKit in April, via a surprise announcement
that Sanchez said came as a shock to the WebKit community. Over the
preceding years, as it had contributed to WebKit for Chrome, Google
had risen to the point of equaling Apple in terms of the number of
commits—each company contributed about one third of the changes
to the code, with the remaining third coming from everyone else
(Igalia included). In addition to the number of commits, Google
actually had more engineers working on WebKit than did any other
company (including Apple, who had fewer developers even though those
developers produced as many patches as Google's).
WebKit from here on out
The divergence between Chrome's and WebKit2's multi-process models and
Apple's desire to add a top-tier level of committer were both factors
in Google's decision to start Blink, but there were other factors,
too. Sanchez said that Google expressed its desire to simplify
Blink's architecture, eliminating the concept of ports altogether and
building the rendering engine specifically to suit Chrome's needs.
The Blink code has already started to diverge, he said, and so far the
Blink governance model is far simpler: there is no official code
review, just people submitting changes and looking at others' changes
informally. The majority of the Blink developers are Google
employees, he said, but outside contributors have begun to appear, too.
Google's departure will certainly have an impact on WebKit, Sanchez
said, although how much remains to be seen. The company obviously
contributed a lot of code, so its developer power will be missed, as
will Google's ability to balance out Apple. With Apple as the largest
remaining contributor to the project (and by a large margin), it will likely be more difficult for
other companies or volunteers to have a say in steering the project.
The WebKit community is already seeing the impact of Google's
departure in terms of orphaned modules that now need new maintainers.
But there are also several instances of downstream projects dropping
WebKit in favor of Blink. Sanchez noted that Opera had announced it
would switch to WebKit just a few days before the Blink announcement;
the company then issued a follow-up saying that it
would switch to Blink instead. More recently, of course, Qt
announced that it would adopt Blink as its HTML rendering engine, too.
On the other hand, Sanchez said, there were positive outcomes to
Google's departure. Apple now seems likely to drop its proposal for a
formal "owner" contributor role. There are also several hacks put
into WebKit specifically for Chrome support that have already been
removed. But that also means that the two rendering engines have
already begun to diverge, and Sanchez said he was not sure how long
code would remain portable between the two. It is good that free
software developers now have two engines to choose from when creating
an embedded web application, after so many years of WebKit being the
only viable option. But the divergence could ultimately prove bad for
the community, at least in some ways. Only time will tell.
[The author would like to thank the Linux Foundation for
assistance with travel to New Orleans.]
Comments (8 posted)
Page editor: Jonathan Corbet
Security
As the founder of the ownCloud project, Frank Karlitschek has spent a fair amount of time
considering the issues surrounding internet privacy. The recent
revelations of widespread internet surveillance embodied in the PRISM program (and other related efforts largely revealed by
Edward Snowden) have, essentially, broken the internet, he said.
Karlitschek came to LinuxCon
North America in New Orleans to talk about that
serious threat to the internet—one that he believes the free and
open source software communities have a responsibility to help fix.
A longtime open source developer, Karlitschek has worked with KDE and
opendesktop.org, as well as the KDE-Look
and GNOME-Look sites. After starting
the ownCloud project, he also helped found the ownCloud company in 2012. OwnCloud is
"both a company and a community", he said.
But Karlitschek wasn't there to talk about ownCloud. Instead, he turned to
the news to highlight the problem facing the internet, noting a few
headlines from the last few
months on
surveillance-related topics: the NSA circumventing internet encryption, "full
take" (storing all data gathered), and
XKeyscore. The latter
is a program that collects "nearly everything a user
does on the internet", and because of the "full take" strategy used, the
data all gets stored. The NSA doesn't have the capacity to analyze all that
data now, so it stores it for later analysis—whenever it somehow becomes
"interesting". It turns out that if the
budget is high enough, one can essentially "store the internet", he said.
While XKeyscore only gathers metadata, that metadata is still quite
privacy invasive. It can include things like the locations of people, who is
"friends" with whom, what search terms people use, what they buy, and so on.
If an agency puts it all together in the context of a single person, it can
lead to surprisingly revealing conclusions.
In other news, Karlitschek noted that man-in-the-middle attacks are
increasing, at least partly due to the brokenness of the SSL certificate
authority scheme. He also pointed to the shutdowns of Lavabit and Groklaw
as recent events of note. And, "news from yesterday" that he had seen in
the European press (and not really in the US press, at least yet) indicated
that much of the worldwide credit card
transaction data had been compromised and collected by secret services.
The surveillance is not just a problem for one country, he said, as there
are secret
services all over the world that are reading our data. It is not just a
problem of the
NSA or the US—everyone who uses the internet anywhere is affected. These
agencies are
not just reading the data either, as man-in-the-middle attacks can also
be used to change the data that is being sent if that is of interest. It
is important
to realize that this surveillance covers all of the communication on
the internet, which increasingly is data that is coming from our devices.
The data collected by those devices is sometimes surprising, including
phones that never turn off their microphones—or, sometimes, their cameras.
He asked the audience to raise their hands if they used various internet
services (banking, search, ...) and got majorities for them all, until he
came to the last question. "Who thinks private data on the internet is
still private?", he asked—to zero raised hands.
"The internet is under attack", Karlitschek said. This network and
infrastructure that "we all love" and have used for years is being
threatened. This is a huge problem, he said, because the internet is "not just a fun
tool"; it is "one of the most important inventions" ever
created. It enables a free flow of knowledge, which makes it the best
communication tool invented so far. It is an "awesome
collaboration tool" that enables projects like, for example, Linux.
Without the internet, there would be no Linux today, he said. Many
companies have been able to build businesses on top of the internet, but
all of that is now threatened.
There are various possible responses to this threat. One could decide to
no longer transmit or store private information on the internet, but there
is a problem with that approach. More and more things are tied to the
internet every day, so it is more than just the web browser. Smartphones,
gaming consoles, and regular phone conversations all use the internet even
without the user directly accessing it through the browser. "Not using the
internet for private data is not really an option these days", Karlitschek
said.
Another response would be to use ssh, rsync, GPG, and "super awesome
encrypted Linux tools". There are a few problems with that idea. For one
thing, we don't know that ssh and others are safe as there are "new
problems popping up
every day". In addition, the transmission may be encrypted successfully,
but the endpoints are still vulnerable; either the client or server end
could be compromised. Another problem is that regular users can't
really use those tools, because they are not designed for people who are not
technically savvy.
One could also just decide not to care about the surveillance that is going
on, but privacy is very important. He is from Germany, which has some
experience with both right- and left-wing secret services that were
unconstrained, he said—it leads to "bad things".
Who invented and built the internet, he asked. The answer is that "we
invented it". There would be no internet in its current form without
Linux, he said. If users had to buy a Sun system to run a web server, it
would have greatly changed things. Beyond Linux itself, we created
languages like Java, PHP, and JavaScript; and free databases, open
protocols, and many
applications. Because we built it, "we also have to fix it".
There are political aspects to the problem that the politicians are,
supposedly, working on, but Karlitschek doesn't hold out much hope for that
kind of solution. Technologists have to work on it so that the internet "works
like it is supposed to". To try to define how the internet should
work, he and others have come up with a list of eight user rights that are
meant to help define "how a good internet works".
Those rights range from "own the data" (taking a photo and
uploading it to some service shouldn't change its ownership; the same goes
for texts, emails, and so on) to "control access" (the user, not the
service, decides when and with whom data is shared). The other rights are in the
same vein;
the idea is to put users firmly in control of their data and the access to it.
Karlitschek then looked at four areas of internet use (email/messaging, the web, social
networking, and file sync/share/collaboration) to see how they stack up on
a few different "open data" criteria. Email and the web have
similar scores. Both are decentralized, people can host their own or
fairly easily migrate to a new service, they
use open protocols, and have open source implementations available. All of
that is very good, but both fail in the encryption area. Email has
encryption using GPG,
but regular users don't use it (and many technical people don't either),
while SSL encryption is largely broken because of a certificate model that
places too much trust in large governments and organizations.
Social networking is "very bad" on these criteria, he said. It is
centralized (there is just one Facebook or G+ provider), it can't be
self-hosted, migration is nearly impossible (and friends may not migrate
even if the data does), open protocols aren't used, open source
implementations don't really exist (Diaspora didn't really solve that
problem as was hoped), and so on.
Things are a bit better in the file
sharing realm, but that is still mostly centralized without open protocols
(there are APIs, but that isn't enough) and with no encryption (or it is done on
the server side, which is hopeless from a surveillance-avoidance
perspective). On the plus side, migration is relatively easy (just moving
files), and there
are some open source implementations (including ownCloud).
Overall, that paints a fairly bleak picture, so what can we do about it, he asked.
For regular users, starting to use GPG encryption and hoping that it is safe
is one step. Stopping reliance on SSL for internet traffic encryption and using a
VPN instead is another, he said. VPNs are hard for regular users to set
up, however.
Using Linux and open source as much as possible is important because "open
source is very good protection against back doors". He noted that there
were two occasions when someone tried to insert a back door into KDE and
that both were noticed immediately during code review. He strongly
recommends on-premises file-sharing facilities rather than relying on
internet services. Beyond that, users need to understand the risks and costs;
security is never really black or white, it is "all gray".
Developers "have a responsibility here", he said. They need to build
security into the core of all software, and to put encryption into
everything. Looking at SSL and the certificate system should be a
priority. Another area of focus should be to make secure software that is
usable for
consumers—it needs to be so easy to use that everyone does so. He showed
two examples of how not to do it: a Windows GPG dialog for key management
with many buttons, choices, and cryptic options, and the first
page of the rsync man page, which is just a mass of options. Those are not solutions
for consumers, he said.
He would like to have an internet that is "safe and secure", one that can
be used to transfer private data. Two groups have the power to make that
happen, but one, politicians, is unlikely to be of help as they are
beholden to the secret services and their budgets. So it is up to us, "we
have to fix the internet".
Two audience questions touched on the efficacy of current cryptographic
algorithms. Karlitschek said that he was no expert in the area, but was
concerned that the NSA and others are putting several thousand people to
work on breaking today's crypto. It is tough to battle against so many
experts, he said. It is also difficult to figure out what to fix when we
don't know
what is broken. That makes it important to support efforts like that of the
Electronic Frontier Foundation to find out what the NSA and others are
actually doing, so that we can figure out where to focus our efforts.
Outside of Karlitschek's talk,
there is some debate over how the "broken internet" will ever get fixed—if,
indeed, it does. Technical solutions to the problem seem quite attractive,
and Karlitschek is not the only one advocating that route. Whether well-funded
privacy foes, such as governments and their secret services, can ultimately
overwhelm those technical solutions remains to be seen. Outlawing
encryption might be seen as a stunningly good solution by some, but the
unintended side effects of that would be equally stunning. E-commerce without
encryption seems likely to fail miserably, for example. Hopefully saner
heads will prevail, but those who prey on fear, while spreading uncertainty
and doubt along the way, are legion.
[ I would like to thank LWN subscribers for travel assistance to New
Orleans for LinuxCon North America. ]
Comments (25 posted)
Brief items
At the end of the day, there is no real replacement for a real HWRNG
[Hardware Random Number Generator].
And I've never had any illusions that the random driver could be a
replacement for a real HWRNG. The problem is though is that most
HWRNG can't be audited, because they are not open, and most users
aren't going to be able to grab a wirewrap gun and make their own ---
and even if they did, it's likely they will screw up in some
embarrassing way. Really, the best you can do is [hopefully] have
multiple sources of entropy. RDRAND, plus the random number generator
in the TPM, etc. and hope that mixing all of this plus some OS-level
entropy, that this is enough to frustrate the attacker enough that
it's no longer the easiest way to compromise your security.
—
Ted Ts'o
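A toy illustration of that mixing idea (hashing several sources together; the device paths are examples that may not exist on a given system, and the kernel's random driver does this with far more care) might look like:

    # Toy mix of several entropy sources into one seed (illustrative only).
    import hashlib
    import os

    def read_optional(path, nbytes=32):
        # Best-effort read from an optional hardware source (e.g. a HWRNG
        # character device); return nothing if it is unavailable.
        try:
            with open(path, "rb") as f:
                return f.read(nbytes)
        except OSError:
            return b""

    pool = hashlib.sha256()
    pool.update(os.urandom(32))               # OS-level entropy
    pool.update(read_optional("/dev/hwrng"))  # hardware RNG, if present
    # A TPM's RNG could be mixed in the same way through its own interface.

    print(pool.hexdigest())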
The NSA's belief that more data is always good, and that it's worth doing anything in order to collect it, is wrong. There are diminishing returns, and the NSA almost certainly passed that point long ago. But the idea of trade-offs does not seem to be part of its thinking.
The NSA missed the Boston Marathon bombers, even though the suspects left a really sloppy Internet trail and the older brother was on the terrorist watch list. With all the NSA is doing eavesdropping on the world, you would think the least it could manage would be keeping track of people on the terrorist watch list. Apparently not.
I don't know how the CIA measures its success, but it failed to predict the end of the Cold War.
More data does not necessarily mean better information. It's much easier to look backward than to predict. Information does not necessarily enable the government to act. Even when we know something, protecting the methods of collection can be more valuable than the possibility of taking action based on gathered information. But there's not a lot of value to intelligence that can't be used for action. These are the paradoxes of intelligence, and it's time we started remembering them.
—
Bruce
Schneier
Comments (12 posted)
This
Ars Technica article predicts some nasty security problems for
Java 6 users. "
The most visible sign of deterioration are
in-the-wild attacks exploiting unpatched vulnerabilities in Java version 6,
Christopher Budd, threat communications manager at antivirus provider Trend
Micro, wrote in a blog post published Tuesday. The version, which Oracle
stopped supporting in February, is still used by about half of the Java
user base, he said. Malware developers have responded by reverse
engineering security patches issued for Java 7, and using the insights to
craft exploits for the older version. Because Java 6 is no longer
supported ... those same flaws will never be fixed."
See
the
original blog post for more information.
Comments (58 posted)
New vulnerabilities
graphite-web: unspecified vulnerability
Package(s): graphite-web
CVE #(s): CVE-2013-5093
Created: September 18, 2013
Updated: September 18, 2013
Description:
From the Fedora advisory:
Version 0.9.12 fixes an unspecified vulnerability.
Comments (none posted)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-2888, CVE-2013-2889, CVE-2013-2891, CVE-2013-2892, CVE-2013-2893, CVE-2013-2894, CVE-2013-2895, CVE-2013-2896, CVE-2013-2897, CVE-2013-2899, CVE-2013-0343
Created: September 13, 2013
Updated: September 26, 2013
Description:
From the CVE entries:
Linux kernel built with the Human Interface Device bus (CONFIG_HID) support
is vulnerable to a memory corruption flaw. It could occur if an HID device
sends a malicious HID report with a Report_ID greater than 255. A local user with physical access to the system could use this flaw to crash
the system, resulting in DoS, or, potentially, to escalate their privileges on the system. (CVE-2013-2888)
Linux kernel built with the Human Interface Device(HID) Bus support(CONFIG_HID)
along with the Zeroplus based game controller support(CONFIG_HID_ZEROPLUS) is
vulnerable to a heap overflow flaw. It could occur when an HID device sends
malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash the
kernel resulting in DoS or potential privilege escalation to gain root access via
arbitrary code execution. (CVE-2013-2889)
Linux kernel built with the Human Interface Device Bus support(CONFIG_HID)
along with a driver for the Steelseries SRW-S1 steering wheel
(CONFIG_HID_STEELSERIES) is vulnerable to a heap overflow flaw. It could occur
when an HID device sends malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2891)
Linux kernel built with the Human Interface Device(CONFIG_HID) bus support
along with the Pantherlord/GreenAsia game controller(CONFIG_HID_PANTHERLORD)
driver, is vulnerable to a heap overflow flaw. It could occur when an HID
device sends malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2892)
Linux kernel built with the Human Interface Device(CONFIG_HID) support along
with the Logitech force feedback drivers is vulnerable to a heap overflow flaw.
- CONFIG_LOGITECH_FF
- CONFIG_LOGIG940_FF
- CONFIG_LOGIWHEELS_FF
- CONFIG_LOGIRUMBLEPAD2_FF
It could occur when the HID device sends malicious output report to the kernel
drivers.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2893)
Linux kernel built with the Human Interface Device support(CONFIG_HID), along
with the Lenovo ThinkPad USB Keyboard with TrackPoint(CONFIG_HID_LENOVO_TPKBD)
driver is vulnerable to a heap overflow flaw. It could occur when an HID device
sends malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2894)
Linux kernel built with the Human Interface Device(CONFIG_HID) support along
with the Logitech Unifying receivers(CONFIG_HID_LOGITECH_DJ) driver is
vulnerable to a heap overflow flaw. It could occur when the HID device sends
malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2895)
Linux kernel built with the Human Interface Device bus(CONFIG_HID) along with
the N-Trig touch screen driver(CONFIG_HID_NTRIG) support is vulnerable to a
NULL pointer dereference flaw. It could occur when an HID device sends
malicious output report to the ntrig kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2896)
Linux kernel built with the Human Interface Device bus(CONFIG_HID) along with
the generic support for HID multitouch panels (CONFIG_HID_MULTITOUCH)
driver is vulnerable to a heap overflow flaw. It could occur when an HID device
sends a malicious feature report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2897)
Linux kernel built with the Human Interface Device(CONFIG_HID) support along
with the Minibox PicoLCD devices(CONFIG_HID_PICOLCD) driver is vulnerable to
a NULL pointer dereference flaw. It could occur when the HID device sends
malicious output report to the kernel driver.
A local user with physical access to the system could use this flaw to crash
the kernel resulting in DoS or potential privilege escalation to gain root
access via arbitrary code execution. (CVE-2013-2899)
Comments (none posted)
kernel: denial of service
Package(s): kernel-rt
CVE #(s): CVE-2013-2058
Created: September 18, 2013
Updated: September 18, 2013
Description:
From the Red Hat advisory:
A flaw was found in the Linux kernel's Chipidea USB driver. A local,
unprivileged user could use this flaw to cause a denial of service.
Comments (none posted)
libvirt: group updating error
Package(s): libvirt
CVE #(s): CVE-2013-4291
Created: September 12, 2013
Updated: October 2, 2013
Description:
From the Red Hat bug:
Upstream commit 29fe5d7 (released in 1.1.1) introduced a latent problem for any caller of virSecurityManagerSetProcessLabel where the domain already had a uid:gid label to be parsed. Such a setup would collect the list of supplementary groups during virSecurityManagerPreFork, but then ignores that information, and thus fails to call setgroups() to adjust the supplementary groups of the process.
Comments (none posted)
libzypp: key verification bypass
Package(s): libzypp
CVE #(s): CVE-2013-3704
Created: September 12, 2013
Updated: September 18, 2013
Description:
From the openSUSE advisory:
libzypp was adjusted to enhance its RPM GPG key import and handling to avoid a problem with multiple key blobs. Attackers able to supply a repository could make the package manager show one key's fingerprint while a second key was actually used to sign the repository (CVE-2013-3704).
Comments (none posted)
lightdm: information leak
Package(s): lightdm
CVE #(s): CVE-2013-4331
Created: September 13, 2013
Updated: September 19, 2013
Description:
From the Ubuntu advisory:
It was discovered that Light Display Manager created .Xauthority files with incorrect permissions. A local attacker could use this flaw to bypass access restrictions.
Comments (none posted)
mediawiki: information leak
Package(s): mediawiki
CVE #(s): CVE-2013-4302
Created: September 13, 2013
Updated: September 23, 2013
Description:
From the Debian advisory:
It was discovered that in MediaWiki, a wiki engine, several API modules allowed anti-CSRF tokens to be accessed via JSONP. These tokens protect against cross-site request forgeries and are confidential.
Comments (none posted)
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2013-4301, CVE-2013-4303
Created: September 16, 2013
Updated: September 23, 2013
Description:
From the Mandriva advisory:
Full path disclosure in MediaWiki before 1.20.7, when an invalid
language is specified in ResourceLoader (CVE-2013-4301).
An issue with the MediaWiki API in MediaWiki before 1.20.7 where an
invalid property name could be used for XSS with older versions of
Internet Explorer (CVE-2013-4303).
Comments (none posted)
mozilla: code execution
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-1719
Created: September 18, 2013
Updated: September 27, 2013
Description:
From the CVE entry:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors.
Comments (none posted)
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-1718, CVE-2013-1722, CVE-2013-1725, CVE-2013-1730, CVE-2013-1732, CVE-2013-1735, CVE-2013-1736, CVE-2013-1737
Created: September 18, 2013
Updated: September 30, 2013
Description:
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-1718)
Use-after-free vulnerability in the nsAnimationManager::BuildAnimations function in the Animation Manager in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via vectors involving stylesheet cloning. (CVE-2013-1722)
Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 do not ensure that initialization occurs for JavaScript objects with compartments, which allows remote attackers to execute arbitrary code by leveraging incorrect scope handling. (CVE-2013-1725)
Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 do not properly handle movement of XBL-backed nodes between documents, which allows remote attackers to execute arbitrary code or cause a denial of service (JavaScript compartment mismatch, or assertion failure and application exit) via a crafted web site. (CVE-2013-1730)
Buffer overflow in the nsFloatManager::GetFlowArea function in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code via crafted use of lists and floats within a multi-column layout. (CVE-2013-1732)
Use-after-free vulnerability in the mozilla::layout::ScrollbarActivity function in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code via vectors related to image-document scrolling. (CVE-2013-1735)
The nsGfxScrollFrameInner::IsLTR function in Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption) via vectors related to improperly establishing parent-child relationships of range-request nodes. (CVE-2013-1736)
Mozilla Firefox before 24.0, Firefox ESR 17.x before 17.0.9, Thunderbird before 24.0, Thunderbird ESR 17.x before 17.0.9, and SeaMonkey before 2.21 do not properly identify the "this" object during use of user-defined getter methods on DOM proxies, which might allow remote attackers to bypass intended access restrictions via vectors involving an expando object. (CVE-2013-1737) |
Comments (none posted)
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-1720, CVE-2013-1721, CVE-2013-1724, CVE-2013-1728, CVE-2013-1738
Created: September 18, 2013
Updated: September 27, 2013
Description:
From the CVE entries:
The nsHtml5TreeBuilder::resetTheInsertionMode function in the HTML5 Tree Builder in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21 does not properly maintain the state of the insertion-mode stack for template elements, which allows remote attackers to execute arbitrary code or cause a denial of service (heap-based buffer over-read) by triggering use of this stack in its empty state. (CVE-2013-1720)
Integer overflow in the drawLineLoop function in the libGLESv2 library in Almost Native Graphics Layer Engine (ANGLE), as used in Mozilla Firefox before 24.0 and SeaMonkey before 2.21, allows remote attackers to execute arbitrary code via a crafted web site. (CVE-2013-1721)
Use-after-free vulnerability in the mozilla::dom::HTMLFormElement::IsDefaultSubmitElement function in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via vectors involving a destroyed SELECT element. (CVE-2013-1724)
The IonMonkey JavaScript engine in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21, when Valgrind mode is used, does not properly initialize memory, which makes it easier for remote attackers to obtain sensitive information via unspecified vectors. (CVE-2013-1728)
Use-after-free vulnerability in the JS_GetGlobalForScopeChain function in Mozilla Firefox before 24.0, Thunderbird before 24.0, and SeaMonkey before 2.21 allows remote attackers to execute arbitrary code by leveraging incorrect garbage collection in situations involving default compartments and frame-chain restoration. (CVE-2013-1738) |
Comments (none posted)
perl-Crypt-DSA: improperly secure randomness
Package(s): perl-Crypt-DSA
CVE #(s): CVE-2011-3599
Created: September 13, 2013
Updated: September 26, 2013
Description:
From the Fedora advisory:
As taught by the '09 Debian PGP disaster relating to DSA, the randomness source is extremely important. On systems without /dev/random, Crypt::DSA falls back to using Data::Random. Data::Random uses rand(), about which the perldoc says "rand() is not cryptographically secure. You should not rely on it in security-sensitive situations." In the case of DSA, this is even worse. Using improperly secure randomness sources can compromise the signing key upon signature of a message.
See: http://rdist.root.org/2010/11/19/dsa-requirements-for-random-k-value/
It might seem that this would not affect Linux since /dev/random is always available and so the fall back to Data::Random would never happen. However, if an application is confined using a MAC system such as SELinux then access to /dev/random could be denied by policy and the fall back would be triggered. |
Comments (1 posted)
pip: code execution
Package(s): pip
CVE #(s): CVE-2013-1629
Created: September 13, 2013
Updated: September 18, 2013
Description:
From the CVE entry:
pip before 1.3 uses HTTP to retrieve packages from the PyPI repository, and does not perform integrity checks on package contents, which allows man-in-the-middle attackers to execute arbitrary code via a crafted response to a "pip install" operation.
Comments (none posted)
python-django: denial of service
Package(s): python-django
CVE #(s): CVE-2013-1443
Created: September 18, 2013
Updated: September 27, 2013
Description: From the Debian advisory:
It was discovered that python-django, a high-level Python web
development framework, is prone to a denial of service vulnerability
via large passwords.
Comments (none posted)
python-OpenSSL: certificate spoofing
Package(s): python-OpenSSL
CVE #(s): CVE-2013-4314
Created: September 13, 2013
Updated: September 25, 2013
Description: From the Mandriva advisory:
The string formatting of subjectAltName X509Extension instances in pyOpenSSL before 0.13.1 incorrectly truncated fields of the name when encountering a null byte, possibly allowing man-in-the-middle attacks through certificate spoofing (CVE-2013-4314).
Comments (none posted)
python-pyrad: predictable password hashing
Package(s): python-pyrad
CVE #(s): CVE-2013-0294
Created: September 16, 2013
Updated: September 18, 2013
Description: From the Red Hat bugzilla:
Nathaniel McCallum reported that pyrad was using Python's random module in a number of places to generate pseudo-random data. In the case of the authenticator data, it was being used to secure a password sent over the wire. Because Python's random module is not really suited for this purpose (not random enough), it could lead to password hashing that may be predictable.
Comments (none posted)
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2013-5718 CVE-2013-5720 CVE-2013-5722
Created: September 16, 2013
Updated: September 19, 2013
Description: From the CVE entries:
The dissect_nbap_T_dCH_ID function in epan/dissectors/packet-nbap.c in the NBAP dissector in Wireshark 1.8.x before 1.8.10 and 1.10.x before 1.10.2 does not restrict the dch_id value, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-5718)
Buffer overflow in the RTPS dissector in Wireshark 1.8.x before 1.8.10 and 1.10.x before 1.10.2 allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-5720)
Unspecified vulnerability in the LDAP dissector in Wireshark 1.8.x before 1.8.10 and 1.10.x before 1.10.2 allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-5722)
Comments (none posted)
wordpress: multiple vulnerabilities
Package(s): wordpress
CVE #(s): CVE-2013-4338 CVE-2013-4339 CVE-2013-4340 CVE-2013-5738 CVE-2013-5739
Created: September 16, 2013
Updated: September 27, 2013
Description: From the CVE entries:
wp-includes/functions.php in WordPress before 3.6.1 does not properly determine whether data has been serialized, which allows remote attackers to execute arbitrary code by triggering erroneous PHP unserialize operations. (CVE-2013-4338)
WordPress before 3.6.1 does not properly validate URLs before use in an HTTP redirect, which allows remote attackers to bypass intended redirection restrictions via a crafted string. (CVE-2013-4339)
wp-admin/includes/post.php in WordPress before 3.6.1 allows remote authenticated users to spoof the authorship of a post by leveraging the Author role and providing a modified user_ID parameter. (CVE-2013-4340)
The get_allowed_mime_types function in wp-includes/functions.php in WordPress before 3.6.1 does not require the unfiltered_html capability for uploads of .htm and .html files, which might make it easier for remote authenticated users to conduct cross-site scripting (XSS) attacks via a crafted file. (CVE-2013-5738)
The default configuration of WordPress before 3.6.1 does not prevent uploads of .swf and .exe files, which might make it easier for remote authenticated users to conduct cross-site scripting (XSS) attacks via a crafted file, related to the get_allowed_mime_types function in wp-includes/functions.php. (CVE-2013-5739)
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.12-rc1,
released on September 16. Linus said:
"
I personally particularly like the scalability improvements that got
merged this time around. The tty layer locking got cleaned up and in the
process a lot of locking became per-tty, which actually shows up on some
(admittedly odd) loads. And the dentry refcount scalability work means that
the filename caches now scale very well indeed, even for the case where you
look up the same directory or file (which could historically result in
contention on the per-dentry d_lock)."
Stable updates:
3.0.96,
3.4.62,
3.10.12, and
3.11.1 were all released on
September 14.
Comments (none posted)
Yo Dawg, I heard you like kernel compiles, so I put a kernel
compile in your kernel compile so that you can compile the kernel
while you compile the kernel.
—
Linus Torvalds (Thanks to Josh Triplett)
Comments (1 posted)
The Linux Foundation has
announced
the release of its roughly annual report on the kernel development
community; this report is written by Greg Kroah-Hartman, Amanda McPherson,
and LWN editor Jonathan Corbet. There won't be much new there for those
who follow the development statistics on LWN, but it does take a somewhat
longer-term perspective.
Comments (20 posted)
The OpenZFS project has
announced its
existence. "
ZFS is the world's most advanced filesystem, in
active development for over a decade. Recent development has continued in
the open, and OpenZFS is the new formal name for this open community of
developers, users, and companies improving, using, and building on
ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos
communities, including Matt Ahrens, one of the two original authors of ZFS,
the OpenZFS community brings together over a hundred software developers
from these platforms."
Comments (35 posted)
Kernel development news
By Jonathan Corbet
September 17, 2013
Despite
toying with the idea of closing the
merge window rather earlier than expected, Linus did, in the end, keep it
open until September 16. He repeated past grumbles about
maintainers who send their pull requests at the very end of the merge
window, though; increasingly, it seems that wise maintainers should behave
as if the merge window were a single week in length. Pull requests that
are sent too late run a high risk of being deferred until the next
development cycle.
In the end, 9,479 non-merge changesets were pulled into the mainline
repository for the 3.12 merge window; about 1,000 of those came in after
the writing of last week's summary.
Few of the changes merged in the final days of the merge window were hugely
exciting, but there have been a number of new features and improvements.
Some of the more significant, user-visible changes include:
- Unlike its predecessor, the 3.12 kernel will not be known as "Linux
for Workgroups." Instead, for reasons that are not entirely clear,
the new code name was "Suicidal Squirrel" for a few days; it then was
changed to "One giant leap for frogkind."
- It is now possible to provide block device partition tables on the
kernel command line; see Documentation/block/cmdline-partition.txt
for details.
- The memory management subsystem has gained the ability to migrate huge
pages between NUMA nodes.
- The Btrfs filesystem has the beginning of support for offline
deduplication of data blocks. A new ioctl() command
(BTRFS_IOC_FILE_EXTENT_SAME) can be used by a user-space
program to inform the kernel of extents in two different files that
contain the same data. The kernel will, after checking that the
data is indeed the same, cause the two files to share a single copy of
that data (a rough user-space sketch of the new command appears after
this list of changes).
- The HFS+ filesystem now supports POSIX access control lists.
- The reliable out-of-memory killer
patches have been merged. This work should make OOM handling more
robust, but it could possibly confuse user-space applications by
returning "out of memory" errors in situations where such errors were
not seen before.
- The evdev input layer has gained a new EVIOCREVOKE
ioctl() command that revokes all access to a given file
descriptor. It can be used to ensure that no evil processes lurk on
an input device across sessions. See this
patch for an example of how this functionality can be used.
- New hardware support includes:
- Miscellaneous:
MOXA ART real-time clocks,
Freescale i.MX SoC temperature sensors,
Allwinner A10/A13 watchdog devices,
Freescale PAMU I/O memory management units,
TI LP8501 LED controllers,
Cavium OCTEON GPIO controllers, and
Mediatek/Ralink RT3883 PCI controllers.
- Networking:
Intel i40e Ethernet interfaces.
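As promised above, here is a rough user-space sketch of how a deduplication tool might invoke BTRFS_IOC_FILE_EXTENT_SAME. The structure and field names below are assumptions drawn from the patch set, so check <linux/btrfs.h> in a 3.12 kernel before relying on them.
        /* Hypothetical sketch: ask Btrfs to share len bytes starting at
         * offset 0 of the source file with offset 0 of the destination
         * file, once the kernel has verified that the contents match.
         * Structure and field names are assumptions based on the 3.12
         * patches. */
        #include <sys/ioctl.h>
        #include <stdlib.h>
        #include <linux/btrfs.h>

        static int dedupe_range(int src_fd, int dst_fd, __u64 len)
        {
                struct btrfs_ioctl_same_args *args;
                int ret;

                /* One variable-sized argument block: header plus one
                 * destination extent entry. */
                args = calloc(1, sizeof(*args) + sizeof(args->info[0]));
                if (!args)
                        return -1;

                args->logical_offset = 0;   /* offset in the source file */
                args->length = len;         /* bytes to deduplicate */
                args->dest_count = 1;       /* a single destination extent */
                args->info[0].fd = dst_fd;
                args->info[0].logical_offset = 0;

                ret = ioctl(src_fd, BTRFS_IOC_FILE_EXTENT_SAME, args);
                free(args);
                return ret;
        }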
Changes visible to kernel developers include:
- The seqlock locking primitive has gained a new "locking reader" type.
Normally, seqlocks allow for data structures to be changed while being
accessed by readers; the readers are supposed to detect the change (by
checking the sequence number) and retry if need be. Some types of
readers cannot tolerate changes to the structure, though; in current
kernels, they take an expensive write lock instead. The "locking
reader" lock will block writers and other locking readers, but allow
normal readers through. Note that locking readers could share
access with each other;
the fact that this sharing does not happen now is an implementation
limitation. The functions for working with this type of
lock are:
void read_seqlock_excl(seqlock_t *sl);
void read_sequnlock_excl(seqlock_t *sl);
There are also the usual variants for blocking hardware and software
interrupts; the full set can be found in
<linux/seqlock.h>. A brief usage sketch appears just after this list.
- The new shrinker API has been merged.
Most code using this API needed to be changed; the result should be
better performance and a better-defined, more robust API. The new
"LRU list" mechanism that was a part of that patch set has also been
merged.
- The per-CPU IDA ID allocator patch set
has been merged.
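Returning to the seqlock change above: as a minimal sketch (the lock and the protected structure here are hypothetical), a locking reader looks much like a writer from the caller's point of view, but it does not exclude the ordinary retrying readers:
        /* Sketch only: "my_lock" and the traversal are hypothetical.  A
         * locking reader blocks writers and other locking readers for the
         * duration, while ordinary retrying readers proceed unaffected. */
        #include <linux/seqlock.h>

        static DEFINE_SEQLOCK(my_lock);

        static void walk_protected_structure(void)
        {
                read_seqlock_excl(&my_lock);
                /* Traverse a structure that must not change underneath us;
                 * no sequence-number retry loop is needed here. */
                read_sequnlock_excl(&my_lock);
        }
An ordinary seqlock reader, by contrast, must loop with read_seqbegin() and read_seqretry() and be prepared to see inconsistent data partway through.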
Now begins the stabilization phase for the 3.12 kernel. If the usual
pattern holds, the final release can be expected on or shortly after
Halloween; whether it turns out to be a "trick" or a "treat" depends on how
well the testing goes between now and then.
Comments (18 posted)
By Jonathan Corbet
September 18, 2013
The ongoing disclosures of governmental attempts to weaken communications
security have caused a great deal of concern. Thus far, the evidence would
seem to suggest that the core principles behind cryptography remain sound,
and that properly encrypted communications can be secure. But the
"properly encrypted" part is a place where many things can go wrong. One
of those things is the generation of random numbers; without true
randomness (or, at least, unpredictability), encryption algorithms can be far
easier to break than their users might believe. For this reason,
quite a bit of attention has been paid to the integrity of random number
generation mechanisms, including the random number generator (RNG) in the
kernel.
Random number generation in Linux seems to have been fairly well
thought out with no obvious mistakes. But that does not mean that all is
perfect, or that improvements are not possible. The kernel's random number
generator has been the subject of a few different conversations recently,
some of which will be summarized here.
Hardware random number generators
A program running on a computer is a deterministic state machine that
cannot, on its own, generate truly random numbers. In the absence of a
source of randomness from the outside world, the kernel is reduced to the
use of a pseudo-random number generator (PRNG) algorithm that, in theory,
will produce numbers that could be guessed by an attacker. In practice,
guessing the results of the kernel's PRNG will not be an easy task, but
those who concern themselves with these issues still believe that it
is better to incorporate outside entropy (randomness) whenever it is
possible.
One obvious source of such randomness would be a random number generator
built into the hardware. By sampling quantum noise, such hardware could
create truly random data. So it is not surprising that some processors
come with RNGs built in; the RDRAND instruction provided by some Intel
processors is one example. The problem with hardware RNGs is that they
are almost entirely impossible to audit; should some country's spy agency
manage to compromise a hardware RNG, this tampering would be nearly
impossible to detect. As a result, people who are concerned about
randomness tend to look at the output of hardware RNGs with a certain level
of distrust.
Some recently
posted research [PDF] can only reinforce that distrust. The
researchers (Georg T. Becker, Francesco Regazzoni, Christof Paar, and Wayne
P. Burleson) have documented a way to corrupt a hardware RNG by changing the
dopant polarity in just a few transistors on a chip. The resulting numbers
still pass tests of randomness, and, more importantly, the hardware still
looks the
same at almost every level, regardless of whether one looks at the masks
used or whether one looks at the chip directly with an electron microscope.
This type of hardware compromise is thus
nearly impossible to detect; it is also relatively easy to carry out. The
clear conclusion is that hostile hardware is a real possibility; the
corruption of a relatively simple and low-level component like an RNG is
especially so. Thus, distrust of hardware RNGs would appear to be a
healthy tendency.
The kernel's use of data from hardware RNGs has been somewhat controversial
from the beginning, with some developers wanting to avoid such sources of
entropy altogether. The kernel's approach, though, is that using all
available sources of entropy is a good thing, as long as it is properly
done. In the case of a hardware RNG, the random data is carefully mixed
into the buffer known as the "entropy pool" before being used to generate
kernel-level random numbers. In theory, even if the data from the hardware
RNG is entirely hostile, it cannot cause the state of the entropy pool to
become known and, thus, it cannot cause the kernel's random numbers to be
predictable.
Given the importance of this mixing algorithm, it was a little surprising
to see, earlier this month, a patch that
would allow the user to request that the hardware RNG be used exclusively
by the kernel. The argument for the patch was based on performance:
depending entirely on RDRAND is faster than running the kernel's full mixing
algorithm. But the RNG is rarely a performance bottleneck in the kernel,
and the perceived risk of relying entirely on the hardware RNG was seen as
being far too high. So the patch was not received warmly and had no real
chance of being merged; sometimes it is simply better not to tempt users to
compromise their security in the name of performance.
H. Peter Anvin raised a related question:
what about hardware RNGs found in other components, and, in particular, in
trusted platform module (TPM) chips? Some TPMs may have true RNGs in them;
others are known to use a PRNG and, thus, are fully deterministic. What
should the kernel's
policy be with regard to these devices, which, for the most part, are
ignored currently? The consensus seemed to be that no particular trust
should be put into TPM RNGs, but that using some data from the TPM to seed
the kernel's entropy pool at boot could be beneficial. Many systems have
almost no entropy to offer at boot time, so even suspect random data from
the TPM would be helpful early in the system's lifetime.
Overestimated entropy
As noted above, the kernel attempts to pick up entropy from the outside
world whenever possible. One source of entropy is the timing of device
interrupts; that random data is obtained by (among other things) reading
the time stamp counter (TSC) with a call to get_cycles() and using
the least significant bits. In this way, each interrupt adds a little
entropy to the pool. There is just one little problem, pointed out by Stephan Mueller: on a number of
architectures, the TSC does not exist and get_cycles() returns
zero. The amount of entropy found in a constant stream of zeroes is rather
less than one might wish for; the natural consequence is that the kernel's
entropy pool may contain less entropy than had been thought.
The most heavily used architectures do not suffer from this problem; on the
list of those that do, the most significant may be MIPS, which is used in a
wide range of home network routers and other embedded products. As it
turned out, Ted Ts'o had already been working
with the MIPS maintainers to find a solution to this problem. He
didn't like Stephan's proposed solution — reading a hardware clock if
get_cycles() is not available — due to the expense; hardware
clocks can take a surprisingly long time to read. Instead, he is hoping
that each
architecture can, somehow, provide some sort of rapidly increasing counter
that can be used to contribute entropy to the pool. In the case of MIPS,
there is a small counter that is incremented each clock cycle; it doesn't
hold enough bits to work as a TSC, but it's sufficient for entropy
generation.
In the end, a full solution to this issue will take a while, but, Ted said, that is not necessarily a problem:
If we believed that /dev/random was actually returning numbers
which are exploitable, because of this, I might agree with the "we
must do SOMETHING" attitude. But I don't believe this to be the
case. Also note that we're talking about embedded platforms, where
upgrade cycles are measured in years --- if you're lucky. There
are probably home routers still stuck on 2.6 [...]
So, he said, it is better to take some time and solve the problem properly.
Meanwhile, Peter came to another conclusion about the entropy pool: when
the kernel writes to that pool, it doesn't account for the fact that it
will be overwriting some of the entropy that already exists there. Thus,
he said, the kernel's estimate for the amount of entropy in the pool is
almost certainly too high. He put together a
patch set to deal with this problem, but got little response. Perhaps
that's because, as Ted noted in a different
conversation, estimating the amount of entropy in the pool is a hard
problem that cannot be solved without knowing a lot about where the
incoming entropy comes from.
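For those who want to watch that estimate on a running system, the kernel exports it through /proc/sys/kernel/random/entropy_avail; a minimal sketch for reading it (it reports only the kernel's own accounting, which, as noted, may be generous) looks like:
        /* Minimal sketch: print the kernel's current entropy estimate, in
         * bits, as read from /proc/sys/kernel/random/entropy_avail. */
        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
                int bits;

                if (!f || fscanf(f, "%d", &bits) != 1) {
                        perror("entropy_avail");
                        return 1;
                }
                printf("estimated entropy in the pool: %d bits\n", bits);
                fclose(f);
                return 0;
        }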
The kernel tries to deal with this problem by being conservative in its
accounting for entropy. Quite a few sources of unpredictable data are
mixed into the pool with no entropy credit at all. So, with luck, the
kernel will have a vague handle on the amount of entropy in the pool, and
its mixing techniques and PRNG should help to make its random numbers
thoroughly
unpredictable. The end result should be that anybody wanting to attack the
communications security of Linux users will not see poor random numbers as
the easiest approach; in this world, one cannot do a whole lot better than
that.
Comments (24 posted)
By Jonathan Corbet
September 18, 2013
One of the most common things to do on a computer is to copy a file, but
operating systems have traditionally offered little in the way of
mechanisms to accelerate that task. The
cp program can replicate
a filesystem hierarchy using links — most useful for somebody wanting to
work with multiple kernel trees — but that trick speeds things up by not
actually making copies of the data; the linked files cannot be modified
independently of each other. When it is necessary to make an
independent copy of a file, there is little alternative to reading the
whole thing through the page cache and writing it back out. It often seems
like there should be a better way, and indeed, there might just be.
Contemporary systems often have storage mechanisms that could speed copy
operations. Consider a filesystem mounted over the network using a
protocol like NFS, for example; if a file is to be copied to another
location on the same server, doing the copy on the server would avoid a lot
of work on the client and a fair amount of network traffic as well.
Storage arrays often operate at the file
level and can offload copy operations in a similar way. Filesystems like
Btrfs can "copy" a file by sharing a single copy of the data between the
original and the copy; since that sharing is done in a copy-on-write mode,
there is no way for user space to know that the two files are not
completely independent. In each of these cases, all that is needed is a
way for the kernel to support this kind of accelerated copy operation.
Zach Brown has recently posted a patch
showing how such a mechanism could be added to the splice() system
call. This system call looks like:
ssize_t splice(int fd_in, loff_t *off_in, int fd_out, loff_t *off_out,
size_t len, unsigned int flags);
Its job is to copy len bytes from the open file represented by
fd_in to
fd_out, starting at the given offsets for each. One of the key
restrictions, though, is that one of the two file descriptors must be a
pipe. Thus, splice() works for feeding data into a pipe or for
capturing piped data to a file, but it does not perform the simple task of
copying one file to another.
As it happens, the machinery that implements splice() does not
force that limitation; instead, the "one side must be a pipe" rule comes
from the history of how the splice() system call came about.
Indeed, it already does file-to-file copies when it is invoked behind the
scenes from the sendfile() system call. So there should be no
real reason why splice() would be unable to do accelerated
file-to-file copies. And that is exactly what Zach's patch causes it to
do.
That patch set comes in three parts. The first of those adds a new flag
(SPLICE_F_DIRECT) allowing users to request a direct file-to-file
copy. When this flag is present, it is legal to provide values for both
off_in and off_out (normally, the offset corresponding to
a pipe must be NULL); when an offset is provided, the file will be
positioned to that offset before the copying begins. After this patch, the
file copy will happen without the need to copy any data in memory and
without filling up the page cache, but it will not be optimized in any
other way.
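To make the semantics concrete, here is a rough sketch of what a user-space copy could look like under this proposal. Note that SPLICE_F_DIRECT is not defined in any released kernel or C-library header, so the value used below is a placeholder rather than the real flag.
        /* Illustrative only: SPLICE_F_DIRECT is a proposed flag; the value
         * defined here is a placeholder for the sake of compilation. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sys/types.h>
        #include <unistd.h>

        #ifndef SPLICE_F_DIRECT
        #define SPLICE_F_DIRECT 0x10    /* placeholder, not the real value */
        #endif

        /* Copy len bytes from the start of src_fd to the start of dst_fd
         * without passing the data through user space. */
        static int direct_copy(int src_fd, int dst_fd, size_t len)
        {
                loff_t off_in = 0, off_out = 0;

                while (len > 0) {
                        ssize_t n = splice(src_fd, &off_in, dst_fd, &off_out,
                                           len, SPLICE_F_DIRECT);
                        if (n <= 0)
                                return -1;
                        len -= n;
                }
                return 0;
        }
On kernels without the patch, a splice() call with no pipe involved simply fails, so this is purely a sketch of the proposed interface.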
The second patch adds a new entry to the ever-expanding
file_operations structure:
ssize_t (*splice_direct)(struct file *in, loff_t off_in, struct file *out,
loff_t off_out, size_t len, unsigned int flags);
This optional method can be implemented by filesystems to provide an
optimized implementation of SPLICE_F_DIRECT. It is allowed to
fail, in which case the splice() code will fall back to copying
within the kernel in the usual manner.
Here, Zach worries a
bit in the comments about how the SPLICE_F_DIRECT flag works: it
is used to request both
direct file-to-file copying and filesystem-level optimization. He suggests
that the two requests should be separated, though it is hard to imagine a
situation where a developer who went to the effort to use splice()
for a file-copy operation would not want it to be optimized. A
better question, perhaps, is why SPLICE_F_DIRECT is required at
all; a call to splice() with two regular files as arguments would
already appear to be an unambiguous request for a file-to-file copy.
The last patch in the series adds support for optimized copying to the
Btrfs filesystem. In truth, that support already exists in the form of the
BTRFS_IOC_CLONE ioctl() command; Zach's patch simply
extends that support to splice(), allowing it to be used in a
filesystem-independent manner. No other filesystems are supported at this
point; that work can be done once the interfaces have been nailed down and
the core work accepted as the right way forward.
Relatively few comments on this work have been posted as of this writing;
whether that means that nobody objects or nobody cares about this
functionality is not entirely clear. But there is an ongoing level of
interest in the idea of optimized copy operations in general; see the lengthy discussion of the proposed
reflink() system call for an example from past years. So,
sooner or later, one of these mechanisms needs to make it into the
mainline. splice() seems like it could be a natural home for this
type of functionality.
Comments (4 posted)
Patches and updates
Kernel trees
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jonathan Corbet
September 18, 2013
New distributions come along rather frequently. It is somewhat less often
that we see an entirely new operating system. A new system that is touted
as "probably the best OS for cloud workloads," but which provides no
separation between the kernel and user space and no multitasking is a rare
thing indeed. But we have just such a thing in the newly announced
OSv system. Needless to say, it does
not look like a typical Linux distribution.
OSv is the result of a focused effort by a company called Cloudius
Systems. Many of the people working on it will be familiar to people in
the Linux community; they include Glauber Costa, Pekka Enberg, Avi Kivity,
and Christoph Hellwig. Together, they have taken the view that the
operating system stack used for contemporary applications "congealed into
existence" and contains a lot of unneeded cruft that only serves to add
complexity and slow things down. So they set out to start over and
reimplement the operating system with contemporary deployment scenarios in
mind.
What that means, in particular, is that they have designed a system that is
intended to be run in a virtualized mode under a hypervisor. The
fundamental thought appears to be that the host operating system is already
handling a lot of the details, including memory management, multitasking,
dealing with the hardware, and more. Running a full operating system in
the guest duplicates a lot of that work. If that duplication can be cut
out of the picture, things should go a lot faster.
OSv is thus designed from the beginning to run under KVM (ports to other
hypervisors are in the works), so it does not have to drag along a large
set of device drivers. It is designed to run a single application, so a
lot of the mechanisms found in a Unix-like system have been deemed
unnecessary and tossed out. At the top of the list of casualties is the
separation between the kernel and user space. By running everything within
a single address space, OSv is able to cut out a lot of the overhead
associated with context switches; there is no need for TLB flushes, for
example, or to switch between page tables. Eliminating that overhead helps
the OSv developers to claim far lower latency than Linux offers.
What about security in this kind of environment? Much of the
responsibility for security appears to have been passed to the host, which
will run any given virtual machine in the context of a specific user
account and limit accesses accordingly. Since OSv only runs a single
application, it need not worry about isolation between processes or between
users; there are no other processes or users. For the rest, the
system seems to target Java applications in particular, so the Java virtual
machine (JVM) can also play a part in keeping, for example, a compromised
application from running too far out of control.
Speaking of the JVM, the single-address-space design allows the JVM to be
integrated into the operating system kernel itself. There are certain
synergies that result from this combination; for example, the JVM is able
to use the page tables to track memory use and minimize the amount of work
that must be done at garbage collection time. Java threads can be managed
directly by the core scheduler, so that switching between them is a fast
operation. And so on.
The code is BSD licensed and available on GitHub.
Quite a bit of it appears to have been written from scratch in C++, but much of
the core kernel (including the network stack) is taken from FreeBSD. A
fresh start means that a lot of features need to be reimplemented, but it
also makes it relatively easy for the system to use modern hardware
features (such as huge pages) from the outset. The filesystem of choice
would appear to be ZFS, but the
presentation slides from CloudOpen suggest that the developers are
looking forward to widespread availability of nonvolatile RAM storage
systems, which, they say, will reduce the role of the filesystem in an
application's management of data.
The cynical among us might be tempted to say that, with all this work, the
OSv developers have managed to reimplement MS-DOS. But what they really
appear to have is the ultimate expression of the "just enough operating
system" concept that allows an application to run on a virtual machine
anywhere in whichever cloud may be of interest at the moment. For anybody
who is just looking to have a system run on somebody's cloud network, OSv
may well look far more appealing than a typical Linux distribution: it does
away with the configuration hassles, and claims far better performance as
well.
So, in a sense, OSv might indeed be (or become) the best
operating system for cloud-based applications.
But it is not really a replacement for Linux; instead, it could be thought
of as an enhancement that allows Linux-based virtual machines to run more
efficiently and with less effort. Anybody implementing a host will still
need Linux around to manage separation between users, resource control,
hardware, and more. But those who are running as guests might just be
convinced to leave Linux and its complexity behind in favor of a minimal
system like OSv that can run their applications and no more.
Comments (19 posted)
Brief items
Being entirely honest, the reason I stood for the board was that I
was concerned about the potential for a refocusing of the project's
direction in a way that didn't interest me. It's great that people chose
to elect me, but right now I have no other involvement in Fedora's
governance. My contributions to Fedora are fairly minimal - I work on a
few tiny corners of some important packages, but otherwise I just talk a
lot.
--
Matthew Garrett
Although I can see that it will work, and work with dgit too. Hmmm.
Maybe my initial reaction is overblown and I should be thinking "neat
hack".
--
Ian Jackson
You wake up inside an OS installer in Timbuktu, and it's six months in the
future. But, there are bugs. Bugs everywhere.
--
Ankur Sinha took this screenshot during an update to Fedora 20 Alpha RC3
Comments (none posted)
From the September 18 entry in the Slackware
changelog:
"
Hey folks, I'm calling this a beta! Really, it's been better than beta
quality for a while. There will probably still be a few more updates
here and there (and certainly updates to the docs). Enjoy, and please test."
Comments (24 posted)
Distribution News
Debian GNU/Linux
There will be a Bug Squashing Party (BSP) for Debian and Ubuntu in Oslo,
Norway, October 12-13, 2013.
Full Story (comments: none)
Fedora
Jaroslav Reznik reports that the Fedora 20 Alpha release will be delayed by
one week. All other milestones will also be delayed as a result.
Full Story (comments: 8)
Newsletters and articles of interest
Steven Ovadia
interviews
the developers of Manjaro Linux. "
Building upon Arch is a bold move, given that it’s a philosophy as much as it’s a distribution. Arch is deliberately complex in order to give users the most control over their system. Manjaro’s goal of simplifying Arch can be seen as compromising that philosophy. But given Manjaro’s popularity, it’s filling a need for users who want a simpler Arch implementation — even at the cost of control over their system."
Comments (none posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
September 18, 2013
EdX is an online courseware site best known for hosting free
college-level courses from a variety of well-known educational
institutions like MIT, Rice University, and Kyoto University. While
edX's original emphasis was on providing the
courses themselves, the project is now engaged in building a user and
developer community around its underlying free software platform. The project aims to roll
out a do-it-yourself course hosting service in early 2014, an effort
which recently picked up the support of Google.
In the lingo of educational software circles, the edX platform is a
Learning Management System (LMS), although that term also encompasses
several other distinct application categories. EdX itself is focused
on massive
open online courses (MOOCs). MOOCs are generally open to all
participants, are accessed via the web (often solely via the
web), and the course materials are typically accessible to the public.
This distinguishes them from LMSes built for use within an educational
institution, such as Moodle, where managing student records, enrollment,
billing, and other functionality is necessary, and where it may be
important to integrate course materials with a classroom experience.
MOOCs often allow users to sign up at will and many do not track
attendance or issue grades.
EdX is run by a non-profit organization which goes by the
utterly-not-confusing name of The xConsortium. Historically, the
chief backers of the service have been MIT and Harvard (although many
other universities offer courses on edx.org), but in April 2013 the
organization was joined by another major partner: Stanford
University. Stanford's support was significant not just for the institution's
clout, but because its team brought with it the expertise from Stanford's
own open source MOOC service, Class2Go. EdX and
Stanford announced their intention to merge the functionality of the
Class2Go project into Open edX, the
codebase on which the edx.org service is built.
Stanford's input is no doubt a big win for edX, but an arguably
more important new partner arrived in September. On September 10, Google joined
the Open edX project, too. Like Stanford, Google had had its own open
source LMS project. Google's was called Course Builder
and, naturally, it differed quite a bit from the LMS services run by
universities. Although there were course contributions from
educational institutions, Google released a number of
technology-related courses of its own, and there were offerings
for the general public on a range of non-academic subjects, from starting a business to planning a fantasy football team.
As is the case at Stanford, Google's developers are said to be contributing to
Open edX based on the lessons they learned in their own LMS
development experience. But the Google announcement also specified
that the project will be launching another web service, this one called
MOOC.org. Whereas edx.org is a university-centric course platform,
MOOC.org is described as an open site which "will allow any
academic institution, business and individual to create and host
online courses." Or, as The Chronicle of Higher Education called
it, "YouTube for courses." The Chronicle article also notes that edX has not yet decided on a
content-policing policy, but that it has established that the data on
who takes and completes courses on MOOC.org will belong to edX.
On the software side, the current-generation Open edX product is edx-platform, which
includes the LMS system and a separate course-content authoring tool
named Studio. Studio is more of a traditional content-management
system; instructors prepare lesson materials, quizzes, and tests in
Studio, which are then delivered to the students through the LMS front
end. This edx-platform code is what can be seen running on the
public edx.org site.
But Google's expertise (and perhaps Stanford's as well) is expected
instead to go into the next-generation replacement platform, which is
built around a completely different architecture. The replacement
architecture starts with the concept of XBlocks, which are small
web applications written in Python but which can be interpreted by one
or more renderers. So, for example, an XBlock-based quiz component would produce the
interactive quiz itself when run through the test-giving renderer, but
the grading renderer would actually be a different view on the same
data store, rather than an unrelated module. Each XBlock has its own private storage and can offer different handlers to different renderers, so each can be more or less a self-contained entity.
One of the benefits touted about this approach is that it allows
administrators to implement more than one grading renderer—as
the documentation explains it, the same block could be graded by the
instructor or peer-graded by other students, depending on the
renderer. If that sounds like a minor distinction, however, one of
Open edX's other new initiatives is an AI-based grading system. The
Enhanced AI Scoring Engine
(EASE) library and Discern API wrapper use
machine learning to classify free-form input text (such as student
essays). The theory is that the instructor can hand-grade a subset of
the student responses, and the API can use those scores as a metric to
classify the remaining responses in the set. That grading method
might not seem fair to tuition-paying students working toward a
degree, but for a free online course with thousands of concurrent
students, it is perhaps more understandable.
The Open edX platform also includes a discussion forum component, a
queueing system for processing large volumes of assignments, an
analytics package, and a notification system (the goal of which is
eventually to provide real-time notices for messages and assignment results
via SMS and other communication methods). For the moment, the new
XBlock-based platform is still in the early stages of development.
No template XBlocks have been released, although the documentation
discusses a few possibilities (lectures, videos, and quizzes, for
example).
Online courses are clearly here to stay, although whether MOOCs
will steal significant business away from universities (as some fear)
remains to be seen. Educational institutions have already adjusted to
the necessity of providing online courseware to supplement their
in-person classes, and many do considerable business every school year
offering online-only courses and degrees. But there is still a step up
from self-hosted LMSes to the mega-scale MOOC programs envisioned by
edX. It will certainly be a challenge for the edX team to
simplify MOOC deployment and management to the point where offering a
public class online is as simple as setting up a WordPress blog. The
Open edX–powered MOOC.org is slated to launch in early 2014,
which is still plenty of time for the project to iterate on the
process until its software makes the grade.
Comments (none posted)
Brief items
If your son can't write his own mouse driver, then he does not
deserve a mouse.
— An unnamed customer support representative to Eben Upton's father,
when he inquired about the lack of software for his newly-purchased
BBC Micro mouse (as related by Upton in his LinuxCon North America keynote).
We've conquered the game-show market with an unblemished record.
— IBM's Brad McCredie, explaining why Watson was moving on to new
challenges, at LinuxCon North America.
Comments (none posted)
At the Digia blog, Lars Knoll announces that Qt has decided to migrate its web rendering engine from WebKit to Chromium. First among the reasons listed is that "Chromium has a cross-platform focus, with the browser being available on all major desktop platforms and Android. The same is no longer true of WebKit, and we would have had to support all the OS’es on our own in that project." Knoll also cites Chromium's better support for recent HTML5 features, and says that "we are seeing that Chromium is currently by far the most dynamic and fastest moving browser available. Basing our next-generation Web engine on Chromium is a strategic and long-term decision. We strongly believe that the above facts will lead to a much better Web engine for Qt than what we can offer with Qt WebKit right now. "
Comments (182 posted)
Mozilla has
released
Firefox 24. See the
release
notes for details.
Comments (6 posted)
OpenSSH version 6.3 has been released. Although primarily designated a bugfix release, this version adds support for encrypted hostkeys (i.e., on smartcards), optional time-based rekeying via the RekeyLimit option to sshd, and standardizes logging during user authentication.
Full Story (comments: none)
Version 1.4 of intel-gpu-tools has been released. Most of the changes come via the testcases, but a change in release policy is also noteworthy. "The plan now is to release intel-gpu-tools quarterly in sync or in
time for validation of our Intel Linux Graphics Stack."
Full Story (comments: none)
Newsletters and articles
At his blog, Daniel Pocock assesses the state of free software support for calendar and contact data on smartphones. Historically, he notes, a number of approaches were tried and failed to pick up significant traction, such as LDAP and SyncML. "The good news is, CardDAV and CalDAV are gaining traction, in no small part due to support from Apple and the highly proprietary iPhone. Some free software enthusiasts may find that surprising. It appears to be a strategic move from Apple, using open standards to compete with the dominance of Microsoft Exchange in large corporate networks." Despite the openness of the standards, however, Pocock goes on to detail a number of challenges to working with them in free software on a mobile phone.
Comments (18 posted)
Page editor: Nathan Willis
Announcements
Brief items
CyanogenMod founder Steve Kondik has
disclosed that he
and sixteen others are now doing their CyanogenMod work as part of a
company founded for that purpose — and that they have been doing so since
April. "
You have probably seen the pace of development pick up
drastically over the past few months. More devices supported, bigger
projects such as CM Account, Privacy Guard, Voice+, a new version of
Superuser, and secure messaging. We vastly improved our
infrastructure. We’re doing more bug fixes, creating more features, and
improving our communication. We think that the time has come for your
mobile device to truly be yours again, and we want to bring that idea to
everybody." The first new thing will be an easier installer, but
there is very little information on what the business model will be.
Comments (50 posted)
IBM has announced plans to invest $1 billion over the next five years in
new Linux and open source technologies for IBM's Power Systems servers. "
Two immediate initiatives announced, a new client center in Europe and a Linux on Power development cloud, focus on rapidly expanding IBM's growing ecosystem supporting Linux on Power Systems which today represents thousands of independent software vendor and open source applications worldwide."
Full Story (comments: 12)
The Document Foundation (TDF) has announced that CloudOn is now a member of its Advisory Board. "
CloudOn is a productivity platform that allows users to create, edit and share documents in real time across devices."
Full Story (comments: none)
Calls for Presentations
CFP Deadlines: September 19, 2013 to November 18, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
The GNU 30th anniversary celebration and hackathon will take place
September 28-29 in Cambridge, MA. "
GNU lovers from all over the
world will converge on MIT in Cambridge, MA for cake, coding, and a special
address from Richard Stallman. If you want to attend the event in person,
there is still time to register! If you'll be watching online, be sure to
tune in to the event page
on Saturday the 28th at 17:00 EDT."
Full Story (comments: none)
The next CentOS Dojo will take place September 30 in New Orleans, LA. It
will be colocated with the Cpanel conference. "
Our focus for this
Dojo is onpremise and hosting specific talks."
Full Story (comments: none)
Events: September 19, 2013 to November 18, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| September 18-September 20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19-September 20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19-September 20 | Open Source Software for Business | Prato, Italy |
| September 19-September 20 | Linux Security Summit | New Orleans, LA, USA |
| September 20-September 22 | PyCon UK 2013 | Coventry, UK |
| September 23-September 25 | X Developer's Conference | Portland, OR, USA |
| September 23-September 27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24-September 25 | Kernel Recipes 2013 | Paris, France |
| September 24-September 26 | OpenNebula Conf | Berlin, Germany |
| September 25-September 27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26-September 29 | EuroBSDcon | St Julian's area, Malta |
| September 27-September 29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3-October 4 | PyConZA 2013 | Cape Town, South Africa |
| October 4-October 5 | Open Source Developers Conference France | Paris, France |
| October 7-October 9 | Qt Developer Days | Berlin, Germany |
| October 12-October 13 | PyCon Ireland | Dublin, Ireland |
| October 14-October 19 | PyCon.DE 2013 | Cologne, Germany |
| October 17-October 20 | PyCon PL | Szczyrk, Poland |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21-October 23 | Open Source Developers Conference | Auckland, New Zealand |
| October 21-October 23 | KVM Forum | Edinburgh, UK |
| October 21-October 23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 22-October 23 | GStreamer Conference | Edinburgh, UK |
| October 22-October 24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23-October 24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 23-October 25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 24-October 25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24-October 25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24-October 25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25-October 27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 25-October 27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 26-October 27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 26-October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| October 28-October 31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| October 28-November 1 | Linaro Connect USA 2013 | Santa Clara, CA, USA |
| October 29-November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| November 3-November 8 | 27th Large Installation System Administration Conference | Washington DC, USA |
| November 5-November 8 | OpenStack Summit | Hong Kong, Hong Kong |
| November 6-November 7 | 2013 LLVM Developers' Meeting | San Francisco, CA, USA |
| November 8 | CentOS Dojo and Community Day | Madrid, Spain |
| November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| November 8-November 10 | FSCONS 2013 | Göteborg, Sweden |
| November 13-November 14 | Korea Linux Forum | Seoul, South Korea |
| November 14-November 17 | Mini-DebConf UK | Cambridge, UK |
| November 15-November 16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol