By Jonathan Corbet
March 20, 2013
At first blush, the case of
Kirtsaeng v. John Wiley & Sons looks
like an obscure battle over the marketing of textbooks in the US with
little relevance to the free software community. But one need not look
deeply to realize that the US Supreme Court's recent
ruling
[PDF] has some interesting implications. For years, it appeared that
there was no resistance to increased use of copyright law to protect
threatened business models. With the ruling in this case, the power of
copyright holders has been pushed back slightly, and an important right has
been reaffirmed.
The case in question starts with Supap Kirtsaeng, who figured out that he
could buy textbooks in Thailand for resale in the US.
Those books are sold much more cheaply in Thailand, offering a classic
opportunity for arbitrage and a quick profit. The publisher of those
books, John Wiley & Sons, sued, claiming that importing those books
into the US was a violation of its copyright, despite the fact that the
books had been legitimately published and sold in Thailand with Wiley's
permission.
Kirtsaeng responded that the books, like most copyrighted materials, were
covered by the first sale doctrine; once Wiley had sold the books, it had
exhausted its right to control their fate.
Wiley's interesting claim in this case was that first sale does not apply to
items that
are manufactured outside of the US. Appeals courts in the US agreed with
this position, but the Supreme Court did not. Its conclusion (by a 6-3
ruling) was that the place of manufacture and sale was not relevant to
copyright law and that the import and resale of the books was a legal
activity. So, for now, the first sale doctrine lives and cannot be
eliminated simply by manufacturing an object abroad.
This ruling matters for a couple of reasons. One is that software, too, is
covered by copyright law, and it is often included in products manufactured
all over the world. Copyright law is often used in an attempt to control
what can be done with a larger product; the implications of eliminating
first-sale rights on products with important copyrightable components
could open the door
to no end of possible horrors. Consider, for example, the following text from
the decision:
Technology companies tell us that “automobiles, microwaves,
calculators, mobile phones, tablets, and personal computers”
contain copyrightable software programs or packaging. Many of
these items are made abroad with the American copyright holder’s
permission and then sold and imported (with that permission) to the
United States. A geographical interpretation would prevent the
resale of, say, a car, without the permission of the holder of each
copyright on each piece of copyrighted automobile software. Yet
there is no reason to believe that foreign auto manufacturers
regularly obtain this kind of permission from their software
component suppliers, and Wiley did not indicate to the contrary
when asked. Without that permission a foreign car owner could not
sell his or her used car.
The logic that applies to a car also applies to just about any sort of
electronic gadget that one can imagine — contemporary cars, after all, can
be thought of as rather heavy electronic entertainment systems with
self-propulsion capabilities and a problematic carbon footprint. It is a
rare device indeed that doesn't
contain copyrightable pieces imported from somewhere; the thought that all
of those devices remain under the control of the copyright holder is
discouraging at best. This ruling does not eliminate that threat (see
below), but it mitigates it somewhat.
Copyright law is often employed for the protection of business models.
Over 100 years ago, music publishers claimed that player pianos were a
threat to their existence and a violation of their copyrights; the attempts
to use copyright to keep business models alive have continued ever since. So it
is refreshing to see the Supreme Court state that there is no inherent
right to protection for a specific business model:
Wiley and the dissent claim that a nongeographical interpretation
will make it difficult, perhaps impossible, for publishers (and
other copyright holders) to divide foreign and domestic markets. We
concede that is so. A publisher may find it more difficult to
charge different prices for the same book in different geographic
markets. But we do not see how these facts help Wiley, for we can
find no basic principle of copyright law that suggests that
publishers are especially entitled to such rights.
We still live in a world where publishers feel entitled to exactly such
rights: the use of the CSS encryption scheme (and associated legal battles)
to divide the DVD market is an obvious example. Perhaps it is optimistic
to hope that a statement from the highest court in the US that such rights
do not inherently adhere to a specific business model will improve the
situation. But, then, your editor tends toward optimism.
That said, there is plenty of space for pessimism as well; the upholding of
first sale does not make our copyright-related problems magically vanish.
Much of the industry appears to be headed in directions where first sale
does not seem to apply — electronic books being an obvious
example. The use of DRM schemes to restrict first-sale rights continues,
and other aspects of copyright law (such as the DMCA in the US) support
that use. The DMCA also remains useful for companies trying to restrict
what the "owner" of a device can do with it; the debate over
jailbreaking is one example. Online or "cloud-based" resources are
subject
to no end of restrictions of their own.
And so on.
But, then, nobody ever said that the fight for freedom would be easy. One
Supreme Court victory is not going to change that situation. But it is an
important affirmation that copyright is meant to be a limited right
and not a means for absolute control by copyright holders. Those of us who
are users of copyrighted materials (i.e. all of us) have some rights too.
Comments (35 posted)
By Jake Edge
March 20, 2013
Raspberry Pi Foundation executive director Eben Upton started his PyCon 2013 talk with a complaint that
he had just been upstaged. He normally asks
the audience "who has a Raspberry Pi?", but conference organizer Jesse
Noller had "ruined that" by announcing that all
attendees
would be getting one of the tiny ARM-based computers. The Python
Software Foundation, which puts on PyCon, had arranged for
Raspberry Pi computers to be handed out to all 2500+ attendees. It also
set aside lab
space on the second floor of the Santa Clara (California)
Convention Center
where attendees could "play" with their new computers—complete with
keyboards, monitors, breadboards, LEDs, and other peripherals.
Genesis
The Raspberry Pi, which is a
"small computer for children", came about due
to observations that Upton and his colleagues
made about the computer skills of high school students applying to study
computer science at the
University of Cambridge. In his time, anyone who had an interest in
computers could probably get their hands on one that would beep when it was
turned on and boot directly into a programming language (typically
BASIC). Everyone knew how to write the canonical program:
10 PRINT "MYNAME IS GREAT!!!!"
20 GOTO 10
Anyone visiting a computer store would type that program (or something
"filthier") in on each machine; it was a "simpler time" in the 1980s,
"we used to make our own entertainment", Upton said.
The availability of those kinds of machines allowed potential students to
have a basic level of programming knowledge. But, when interviewing
applicants in 2005, they noticed that many lacked that "built-in hacker
knowledge". In addition, the 80-90 available spots were only being
contested by around 200-250 applicants, rather than the 500 or so in the 1980s
and 1990s.
The problem, it seems, is that the 8-bit machines that were readily
available in his time no longer exist. Game consoles now serve a similar
niche, but are not programmable and are in fact programmer-hostile because
of the business models of the console makers. In addition, those 8-bit
hacker computers
have been "eaten from the top" by the PC. The PC is, obviously,
programmable, but users have to choose to install programming tools. This
"tiny little energy barrier" is enough to reduce the number of applicants
with the
requisite skills, he said.
So, there is a niche available to be filled. In order for a device to do
so, it has to be interesting to children, Upton said, which means that it
needs to be able to play games and
have audio/video capabilities. It also needs to be robust, so that it
could be "shoved" into a school bag many times without breaking. It needs to
be cheap, "like a textbook", which only "shows that we didn't know what
textbooks cost".
The target price was $25, so the team spent a lot of time trying to figure out
what could be fit into a device at that price. They started with an
Arduino-like microcontroller system, but that "didn't meet the 'interesting
to children' test". After university, Upton went to work for Broadcom,
where he is still employed, though he mostly does Raspberry Pi work these
days.
Working at Broadcom led him to a chip with a proprietary RISC core, which
the team was able to get to boot to Python. It would also do 720p video and
could hit the $25 price point. At that point, they decided to set up a
foundation. The "Pi" in Raspberry Pi is Python misspelled, Upton said,
which was done because he thought the symbol for pi (π) would make a
"fantastic logo". But it turns out that the pi symbol has never been used by the
foundation and he regularly has to explain that he does know how "Python"
is spelled.
Switching to Linux
As the project progressed, he realized that the team would have to write
all its own drivers for devices like network interfaces or SD card readers,
which was time consuming. About that time, Broadcom released another
version of the chip with an ARM 11 core. "There are advantages to
being on the chip design team", Upton said with a chuckle, suggesting that
the ARM was added for "unspecified business reasons". The ARM core meant
that the Raspberry Pi could benefit from the "enormous investment" that
the community has made in Linux.
The BBC Micro was one
of the original 8-bit computers that shaped many enthusiasts' early
computer experience, so the foundation wanted its computer to be called the
"BBC Nano". It approached the British Broadcasting Corporation (BBC) about
using that name several times, but was always turned down for "complicated
legal reasons", Upton said.
As part of the effort to convince the BBC,
though, a 45-second video pitch was created. Once that video got to
YouTube, it had 600,000 views in a single day, a day which Upton spent "not
working for Broadcom" and instead
"pressing F5" (to refresh the page). That night, he sat down with his wife
and realized that they had "promised 600,000 people" that "we would build
them a $25 computer", but had "no idea how to do it".
The CPU fit within the $25 budget, but there are lots of other components
that go into a computer board. Those can cost a few cents or even
more, which all adds up. It took a while, but the team finally fit the
design into the budget, or nearly. The Model A is $25, but the
more-popular Model B, which has ethernet, more
memory, and an additional USB connector, came in at $35.
Upton had just gotten an MBA degree, so he "knew all about business
models", he said with a laugh. The foundation had raised $250,000, which
could be used to build 10,000 of the devices, so the plan was to build
those, sell them, and take that money to build another 10,000. But they
started seeing danger signs almost immediately, he said. When a "buggy
beta version" of the SD card image that could only run in QEMU was
released, it was downloaded 50,000
times. That many people downloading software for hardware that didn't
exist and might not for quite some time led to the realization that the
"interest was high". Given that the lead time for more systems was three
months or so, and there was now a worry the devices would sell out in a
week, something needed to change.
Luckily, he said, they started working with two UK companies that put up
the capital
to build all of the Raspberry Pi computers that were needed. The
foundation licenses the name and "intellectual property" (IP) to those
companies who "do the heavy lifting". In the end, there were 100,000
orders on the first day, and the
millionth Raspberry Pi was sold "sometime last month".
It has been a truly amazing year, Upton said. One of the interesting
transitions that he has noted is that the content on the web site has
shifted away from mostly being about what the team (and "other adults")
were doing to get the devices
out the door. Over the last six months or so, the site has covered what
"children
are doing with the Pi".
Examples
Saying that he wanted to "inflict" some pictures of those activities on the
audience, Upton shifted gears to show and describe what has come about
since the release of the Pi. As a "graphics hacker", he expected that much
of the interesting work would be graphics-related, but that turned out not
to be true. There are few graphics demos, though he encourages people to
write more.
The first stop on the quick tour was a "Moog-like"
synthesizer program that is available for free. The second stop
involved beer. It turns out that there
is an "enormous
overlap" between people who like programming and people who like beer, he
said to big audience cheers, which led to a number of different
projects. The computers are being
used to run both home and commercial brewing equipment using BrewPi, for example.
There is a project to assist with focus stacking
using the
Pi, which can replace $1000 worth of photography gear for getting
better focused images when doing macro (extreme close-up)
photography. There is also a huge retro-gaming community for the
Pi. The hardware is powerful enough to emulate not only the consoles that
he played with, but also the following generation of gaming consoles
that he "complained about" because they "destroyed the era of computers
that I grew up with", he said with a grin. Art installations are
another fairly common use for the Pi, and Upton showed some lighted paper boats at
Canary Wharf on the Thames river.
"Dr. Who and space and the Raspberry Pi all in one" is Upton's description
of his favorite hack. A weather balloon with a Tardis
as its payload has been used to take pictures from 40km up. That means
that a "space program is
within the budgetary reach of every
primary school in the world".
The Raspberry Pi community has been very inventive with extras. Upton
noted The MagPi magazine,
which has type-in listings of programs, hearkening back to the 1980s.
Typing a program in has its advantages, including "learning opportunities"
from mistyping.
There is also a Haynes
manual for the device.
The simplest cases for the
device are PDF files that you print on the "thickest paper you can get through
the printer" and fold up into a case. While the Pi is described as "credit
card sized",
it is actually about 1mil off in both dimensions, he said, but in a
"fluke", both the X and Y dimensions turn out to be a multiple of the
Lego basis unit size. That led an 11-year-old girl to create a Lego case
design for which she now gets royalties. Since she is 11 years old, she
takes
her royalties in Lego, so she "now has more Lego than me", Upton said.
There is evidence coming in that kids are using the Pi to learn to
program, he said. He showed one child who is learning with MIT Scratch and noted that the
foundation is spending some money right now to get better performance for
that language on the Pi. Though he set out to try to help solve a problem
for Cambridge University, it "turns out that kids all over the world want
to learn to program". He showed a photo of some kids from Uttar Pradesh in
India using the Raspberry Pi. Those kinds of pictures give him some hope
that they are actually accomplishing something with "this tiny computer".
He noted that there "needs to be a hook" to get the Pi into a kid's life
and "apparently a lot of children like to play Minecraft". Mojang, the
company behind Minecraft, has done a port of the pocket edition to the Pi:
Minecraft Pi Edition.
That version has a socket that can be used to talk to the game world from
any programming language, which "gives kids a reason to program".
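That socket interface can be driven with just a few lines of Python. What follows is a small illustrative sketch only; the port number (4711) and the "chat.post()" command syntax are assumptions based on the commonly documented Minecraft Pi protocol rather than details given in the talk:

    # Illustrative only: send a command to a running Minecraft Pi Edition
    # over its plain-text socket API. The port number and command syntax
    # are assumptions, not taken from the talk.
    import socket

    def mc_send(command, host='localhost', port=4711):
        # Each command is a single newline-terminated line of text.
        s = socket.create_connection((host, port))
        try:
            s.sendall((command + '\n').encode('ascii'))
        finally:
            s.close()

    # Put a message into the game's chat window.
    mc_send('chat.post(Hello from Python)')

The same kind of one-liner works from any language that can open a TCP socket, which is presumably the point.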
Upton put up a "Try to take over the world" map showing where the computers
have been shipped to. Taking over the world seems to be progressing well
based on that map. North America has become the continent with the largest
"run rate" (i.e. purchase rate)
in the last three months, he said, and became the largest install base as
of last month. They would like to sell "a hell of a lot more" in South
America and India, he said, but "we'll get there".
Another interesting geographical note was a change in the manufacturing
location. In the beginning, the boards were built in China,
unsurprisingly. Sony contacted the foundation and said that it could build
the boards at its factory in South Wales for the same price point. Since
September, Raspberry Pi boards have been built there, which is a "big deal"
for Upton, who comes from a place about 10 miles from the factory. The fact that the
"lowest-priced general-purpose computer" can be built at a factory in the
developed
world is "good news" for anyone concerned that there would be no
manufacturing in regions like Wales.
Python
There are several connections between Raspberry Pi and Python, starting
with the name. The chip was designed using Python, Upton said. He was on
the GPU team for the Pi's Broadcom 2835 chip, which used Python to "design the
whole damn thing". All of the multimedia IP was verified using Python
because it is "100 times quicker" to do it that way. Python and its tools
are much easier to use (and faster) than Verilog tools.
The Raspberry Pi benefits from the large body of existing Python (and other
interpreted languages) code. Python brings a whole set of applications and
tools to the ARM Linux environment. Finally, Python also provides one of
the teaching languages for the device. The device supports Logo and
Scratch for the youngest children, and will always support C and C++ for
people who "want to get close to the metal", but Python has a special place
as a learning language. Upton said that Python allows educators to tell
children "learn this language" and "you will be on a smooth curve" leading
to a language that lots of companies program in. There are no
discontinuities in that curve, he said, which is important because it is at
those steps that students get lost along the way.
Wrapping up, Upton had three more topics on his mind. At some point
"soon", the Pi team will need to decide between Python 2.7 and 3.3. It is
already a bit confusing as some packages (e.g. PyGame) are only available
for one of the two. He is also looking forward to PyPy as a way to get better performance out of
the fairly modest Pi processor. Beyond that, the "boot to Python" idea is
still floating around, though it is not yet clear what the best teaching
environment for the Pi will be.
In closing, he hoped that all of the new users in the room
would visit raspberrypi.org and
report back on what they did with Raspberry Pi.
Many of the examples Upton gave are not particularly Raspberry-Pi-specific,
in that they could run on any Linux system. But the Pi provides a
convenient package, with compact size, low weight, and lots of connectivity
options, that makes it a nice target. While the GPU drivers leave a lot to be desired and the USB
driver is a mess, it is still a rather interesting
device—particularly at its price point. Could something better have
been made? Perhaps, but it would take a dedicated group of folks to get
together to do so. Upton and the Raspberry Pi Foundation have made their
mark; some friendly competition would make things even more interesting.
Comments (28 posted)
By Jonathan Corbet
March 19, 2013
Many pixels have been expended in the discussion of contributor agreements
that transfer copyright from developers to a company or
foundation. But, for developers in many projects, the discussion is moot,
in that the requirement for an agreement exists and the papers must be
signed before
contributions to the project can be made. But, even then, there are some
interesting details that merit attention. A recent discussion regarding
one developer's contributions to the Emacs Org mode project shows how
expansive and poorly understood such agreements can be in some cases.
Some context
Org mode is a major mode for the Emacs
editor that can be used for the maintenance of task lists, project plans,
documents, and
more; it is a general-purpose tool that is held in high regard by a large
community of users. Org mode is run as an independent project but its
releases are incorporated into Emacs releases; for that to happen, Org mode
must be developed under the Free Software Foundation's copyright assignment
rules. So it is not (usually) possible to contribute changes to Org mode
without having signed a contributor agreement that assigns copyright to the
FSF.
Jambunathan K is a contributor to Org mode and an active participant on the
project's mailing list; his credits include a module to export to files in
the OpenDocument
(ODT) format and a rewrite of the HTML export module. It is fair to
characterize Jambunathan's relationship with other Org mode developers as
"difficult." His mailing list postings are seen as contentious and disruptive
by many;
he has, at times, been asked to leave the
project's mailing list. In February he made a half-hearted attempt to take over maintainership of Org
mode; his bid gained little support from other developers in the project.
More recently, he has requested removal of ox-odt.el and ox-html.el from the Org mode repository;
again, this idea was received with little enthusiasm. So his next step was
to take his case to the main Emacs list,
saying:
I have some disagreements with current Orgmode maintainer and the
community in general.
I would like to withdraw my pleasure in having these files
distributed as part of Org distribution. I would like to register
my displeasure in a substantial way.
More specifically, I would like to know how copyright assignment
works for files that are not yet part of Emacs. Is there is a way
I can withdraw my assignment (for a substantial period - say 3-6
months) big enough to create a minor discomfort for the Org
community.
In such a situation, it would be a natural response to drop the work in
question and refuse any further dealings with this developer. Experience
has shown that a single difficult developer can create all kinds of
problems for a community when given the chance. Somebody who sets out to
deliberately create "a minor discomfort" for the Org mode
community is showing signs of being just such a developer; his code may
well not be worth the trouble.
But, in this case, it appears that the request will be refused. The files
in question have already been merged into the Org mode repository, so the
community appears to feel that (1) it has invested enough of its own
energy into that work to have a legitimate interest in it, and (2) the
FSF, as the owner of the copyright for that work, has every right to retain
and distribute it. It is the second point that Jambunathan would like to
dispute; since the files have not yet been distributed with Emacs, he says,
the Emacs copyright assignment agreement should not apply to them. One could
argue that he is almost certainly wrong and should be dismissed as
an obvious troll, but there is still an interesting point raised by this
discussion.
When copyright transfer happens
There are numerous contributor agreements out there that include either
copyright assignment or the grant of a broad license to the work in
question. The agreement used by the
Apache Software Foundation, for example, includes a license grant for any
software that is "intentionally submitted" to the Foundation, where
"submitted" is carefully defined:
For the purposes of this definition, "submitted" means any form of
electronic, verbal, or written communication sent to the Foundation
or its representatives, including but not limited to communication
on electronic mailing lists, source code control systems, and issue
tracking systems that are managed by, or on behalf of, the
Foundation for the purpose of discussing and improving the Work...
Once the work is submitted, the grant applies. The Harmony Agreements, which can
involve either copyright assignment or licensing, have a very similar
definition. The Python
agreement requires a specific annotation in the source indicating that
the agreement applies. The agreement for Emacs is not publicly posted, and
a request to the GNU Project for a copy went unanswered as of this
writing. Numerous copies can be found on the net, though, including
this one, which
mirrors the language found in a
set of template files shipped with the gnulib project:
For good and valuable consideration, receipt of which I
acknowledge, I, NAME OF PERSON, hereby transfer to the Free
Software Foundation, Inc. (the "Foundation") my entire right,
title, and interest (including all rights under copyright) in my
changes and enhancements to the program NAME OF PROGRAM, subject to
the conditions below. These changes and enhancements are herein
called the "Work." The work hereby assigned shall also include any
future revisions of these changes and enhancements hereafter made
by me.
Unlike the other agreements listed above, the FSF agreement has no
requirement that the work actually be submitted to the project; it simply
has to be a "change or enhancement" to the program in question. So it
could easily apply to changes that were never intended to be contributed
back to the original project. In the discussion started by Jambunathan,
Richard Stallman has made it clear that
this expansive interpretation is intentional:
Our normal future assignment contract covers all changes to Emacs.
Whether it is considered a "contribution" or a "fork" is not a
criterion.
Or, going further:
A diff for Emacs is always a change to Emacs.
I will think about the questions raised by a separate Lisp file.
It is worth noting that Jambunathan's work would be considered a submission
under the language used by most projects requiring contributor agreements:
he posted the code to the project's mailing list with the clear intent of
contributing it. The fact that the Org mode project had not yet gotten
around to including it in an official release (version 8 is due soon)
and feeding it into the Emacs repository is immaterial. So the broad
scope of the FSF agreement is not relevant to that particular dispute.
But anybody who has signed such an agreement might want to be aware that the
FSF thinks it owns their changes, regardless of whether they have been
publicly posted or explicitly submitted for inclusion. One could argue
that entirely private changes made by a signatory to that agreement are,
despite being seen by nobody else, owned by the FSF. Even an entirely
separate function written in Emacs Lisp — something which is not necessarily a
derived work based on Emacs and which thus might not be required to be
distributed under the GPL — might be subject to a claim of ownership by the
FSF, at least until Richard has a chance to "think about" the situation.
That may be a bit more than some signatories thought they were
agreeing to.
For the record, one should immediately point out that the FSF has
absolutely no known history of ever abusing this power or claiming
ownership of code that was not clearly submitted to the project. But
organizations can change over time and Richard, who just celebrated his
60th birthday, will not be in charge of the FSF forever. A future FSF
might choose to exploit its perceived rights more aggressively, possibly
resulting in regret among some of those who have signed contributor
agreements (which, incidentally, have no termination
provision) with it.
In truth, even the FSF appears not to know what is covered by its
contributor agreement; Richard had to respond to some
questions from Jambunathan with a simple "I will study these
questions." Whatever the outcome of his study might be, it seems
reasonable to suggest that the FSF's contributor agreement may be due for a
review. Even if the FSF still feels it cannot live without such an
agreement, it would be good to have one that clearly defines which code is
covered — and when.
Comments (49 posted)
Page editor: Jonathan Corbet
Security
By Jake Edge
March 20, 2013
At first blush, PyCon doesn't seem
like quite the right venue for a talk on
Mozilla's Persona web
authentication and identity system. Persona is not Python-specific at all, but
given the number of web application and framework developers at the
conference, it starts to become clear why Mozilla's Dan Callahan was there.
Python
also gave him the ability to do a live demo of adding Persona support to a
Flask-based web site during the
well-attended talk.
Kill the password
In a nutshell, Persona is Mozilla's attempt to "kill the password",
Callahan said to applause. It is a simple, open system that is federated
and works cross-browser. Beyond that set of buzzwords, though, the idea
for Persona is that it "works everywhere for everyone".
For an example of using Persona, Callahan visited ting.com—a mobile phone service site
from Tucows—that has a login page supporting Persona.
Clicking the
"Sign in with Persona" button popped up a window with two of his email
addresses and
a sign-in button. Since he had already used the site before, to log in he
just needed
to choose one of his email addresses (if he is using a different address
from the last
time he visited the site) and click "Sign in". It's "dead simple", he said.
Persona ties identities to email addresses. That has several advantages,
he said. Everyone already has an email address and sites often already
track them. For many web sites, adding Persona support requires no change
to the database schema. That also helps prevent lock-in, as sites that decide
not to continue with Persona are not stuck with it.
Some in the audience might be saying "I can already log in with two
clicks" using a password manager, Callahan said. That's true, but Persona
is not managing passwords. There is no shared secret between the site and
the user.
That means a database breach at the site would not disclose
any information that would be useful for an attacker to authenticate to the
service as the user. While site owners will need to alert their users to a
breach, they won't have to ask them to change passwords. Better still,
they won't have to recommend that the users change their identical passwords at
other sites.
If there are no shared secrets, many of the existing account registration
questions can simply be skipped. The Persona sign-in process provides an
email address, so there
is no reason to prompt for that (twice in many cases), nor for a password
(twice almost always). For example, with sloblog.io and an existing Persona, he can
set up a blog with two clicks.
To prove a point, he was doing his demos from the Opera web browser.
Persona works the same in all major browsers (Firefox, Chrome, Safari,
IE). It uses existing technology and standards and "works everywhere the
web works", he said.
The story behind Persona comes right out of the Mozilla Manifesto,
Callahan said. That manifesto was "written at the height of the browser
wars" and lists ten points that are "crucial to the open web". Principle #2,
"The Internet is a global public resource that must remain open and
accessible", is particularly threatened today, while principle #5,
"Individuals must have the ability to shape their own experiences on
the Internet" speaks directly to the Persona ideal. Nothing is more
important in shaping one's internet experience than the choice of
identity, he said.
"Single" sign-on
There has been a movement toward single sign-on (SSO) in recent years, but
"single" is a misnomer at this point. Many sites allow people to sign in
with their Facebook or Twitter (or Google or Yahoo or MSN or ...) account.
His slide had an example login with a bunch of login icons for those
services, ending with a "Good luck with OpenID" button.
The problem with that approach is that it is like Tribbles (with a requisite
Kirk and Tribbles slide); there are more and more of these service-based
login mechanisms appearing. How does a site pick the right one (or, more
likely, ones)? How does a user remember which of the choices they
used so they can use it on a subsequent visit?
He gave another example: the 500px
login screen. It splits the screen in half, into two sets of choices,
either logging in
via a social network (Facebook, Twitter, or Klout) on one side, or with a
username and password on the other. If a user wants to use a Google or
Microsoft login, they are out of luck. They must create a username and
trust that 500px will do the right thing with their password. He was also
amused to note that he hadn't heard of Klout, so he visited the site to see
what it was, only to find that Klout wanted him to log in using either
Facebook or Twitter.
There are also some implications of using the login network of certain
services. Google and Facebook have real-name policies that can sometimes
lead to account suspension when a violation is suspected. That suspension
then trickles out to any other services that use those login mechanisms.
Facebook policies disallow multiple accounts (e.g. personal and business)
as well. Basically, services using Facebook logins are outsourcing their
account policies to Facebook.
It is worth a lot of money for the social networks to get their buttons
onto sites, Callahan
said. So "any solution has to come from someone outside who is not trying
to make a buck off every login". Since Mozilla is on the outside, it is
well positioned to help solve the problem.
The earlier Persona demonstrations were for email addresses that had
already been set up, but Callahan also wanted to show what happens for
users who are not yet signed up. In that case, the user must type in an
email address in the Persona pop-up. Persona checks with the email
provider to see if it supports Persona; if so, the email provider authenticates
the user via its normal mechanisms (e.g. a web-based login) that the user has
seen plenty of times before. If the user successfully authenticates, the email
provider indicates
that to the site.
Using Persona team members as props, Callahan showed the process. The
user claims a particular email address and the site contacts the email
provider for verification. The email provider asks the user to authenticate
(using a password, two-factor authentication, facial recognition, ...) and
if that is successful, the provider signs the email address and hands it
back to the
site (along with some anti-replay-attack data). The site then verifies the
signature, at which point it knows that the user has that email identity.
Implementing Persona
As can be seen, the description of the protocol and cryptography used was
rather high-level. Callahan's clear intent was to try to convince web
application and framework programmers to get on board with Persona. There
is more information about the underlying details at developer.mozilla.org/persona,
he said.
For the moment, few email providers support Persona, so as an "optional
temporary" measure, sites can ask Mozilla to vouch for the email address.
For example, Gmail does not support Persona (yet), but Mozilla can vouch
for Gmail users by way of a challenge email. Authenticating the email
address to Mozilla need only be done once. But that puts Mozilla in the
middle of each initial authentication right now; eventually the user's email providers will be serving that role.
The documentation lists four things that a site owner needs to do to use
Persona. There is a JavaScript library to include in the login
page, the login/logout buttons need "onClick" attributes added, and the
library needs to be configured. The final piece of the puzzle is to add
verification of the identity assertions (signed email addresses from the
email provider or Mozilla). That verification needs to be done in the
server-side code.
In the future, the hope is that browsers will natively support Persona, but
for now the JavaScript is needed. On the client side, it is 30 or so lines
of JavaScript called from the login and logout paths. The server side is a
little
more complicated, as assertions are cryptographically signed, but that
verification can be handed off to a service that Mozilla runs. The back
end just posts some JSON to the Mozilla service and reads its response.
Those changes take less than 40 lines to implement.
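As a rough sketch of what that server side can look like with Flask and requests (this is not Callahan's demo code; the verifier URL and the "status" and "email" fields in its JSON response are assumptions based on Mozilla's published documentation):

    # Hypothetical sketch of server-side Persona verification with Flask;
    # the verifier endpoint and response fields are assumptions.
    from flask import Flask, request, session, abort
    import requests

    app = Flask(__name__)
    app.secret_key = 'change me'                 # needed for session support
    VERIFIER = 'https://verifier.login.persona.org/verify'

    @app.route('/auth/login', methods=['POST'])
    def login():
        # The client-side JavaScript posts the assertion it got from Persona.
        resp = requests.post(VERIFIER, data={
            'assertion': request.form['assertion'],
            'audience': 'https://example.com',   # must match the site's origin
        })                                       # requests checks SSL certificates
        data = resp.json()
        if data.get('status') == 'okay':
            session['email'] = data['email']
            return data['email']
        abort(403)

    @app.route('/auth/logout', methods=['POST'])
    def logout():
        session.pop('email', None)
        return 'logged out'

The matching client side is the 30 or so lines of JavaScript mentioned above, wired to the site's login and logout buttons.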
Using the code directly from his slides, Callahan changed both client and
server sides of a demo application. That added the "great user experience"
of Persona logins. It also showed an "amazing developer experience" in how
easy it is to add Persona. Once the demo was done, and the applause died
down, Callahan said "I am so glad that worked" with a relieved grin.
Callahan had three tips for site developers adding Persona support. The
first was to make a library specific to the framework being used that can
be reused in multiple applications. Second, his example used the Mozilla
verifier,
but that is not a good long-term solution for privacy reasons. But, he
cautioned, make sure to use the Python "requests" library when doing
verification as the standard library does not check SSL certificates
properly. Lastly, he wanted to make it clear that using Persona did not
mean that a site had to get rid of the other login buttons, "just that
maybe you should", he said. Persona can peacefully coexist with these
other login mechanisms.
In conclusion, Callahan said he had a request: "spend one hour with Persona
this week". You could add it to your site in an hour, he said, but if not,
just try it out on some site.
Persona is still in beta, so it is "able to
be shaped by your feedback". Also, he requested, please ask one site that
you use to
support Persona, "that's how we are going to change the future of the
web". Persona will allow everyone—not just the few who understand
OpenID or password managers—to have a safer, more secure web.
[ In keeping with Callahan's request, we will be looking into Persona
support for LWN. ]
Comments (36 posted)
Brief items
Nationalism is rife on the Internet, and it's getting worse. We need to
damp down the rhetoric and — more importantly — stop believing the propaganda
from those who profit from this Internet nationalism. Those who are beating
the drums of cyberwar don't have the best interests of society, or the
Internet, at heart.
-- Bruce Schneier
Comments (3 posted)
New vulnerabilities
apt: altered package installation
Package(s): apt
CVE #(s): CVE-2013-1051
Created: March 15, 2013
Updated: March 20, 2013
Description:
From the Ubuntu advisory:
Ansgar Burchardt discovered that APT incorrectly handled repositories that
use InRelease files. The default Ubuntu repositories do not use InRelease
files, so this issue only affected third-party repositories. If a remote
attacker were able to perform a man-in-the-middle attack, this flaw could
potentially be used to install altered packages.
Comments (none posted)
bugzilla: cross-site scripting
Package(s): bugzilla
CVE #(s): CVE-2013-0785
CVE-2013-0786
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the Bugzilla advisory:
* When viewing a bug report, a bug ID containing random code is not
correctly sanitized in the HTML page if the specified page format
is invalid. This can lead to XSS.
* When running a query in debug mode, it is possible to determine if
a given confidential field value (such as a product name) exists.
Bugzilla 4.1 and newer are not affected by this issue.
Comments (none posted)
chromium: multiple vulnerabilities
Package(s): chromium
CVE #(s): CVE-2013-0879
CVE-2013-0880
CVE-2013-0881
CVE-2013-0882
CVE-2013-0883
CVE-2013-0884
CVE-2013-0885
CVE-2013-0886
CVE-2013-0887
CVE-2013-0888
CVE-2013-0889
CVE-2013-0890
CVE-2013-0891
CVE-2013-0892
CVE-2013-0893
CVE-2013-0894
CVE-2013-0895
CVE-2013-0896
CVE-2013-0897
CVE-2013-0898
CVE-2013-0899
CVE-2013-0900
Created: March 14, 2013
Updated: March 20, 2013
Description:
From the openSUSE advisory:
Chromium was updated to version 27.0.1425 having both stability and security fixes:
- High CVE-2013-0879: Memory corruption with web audio
node
- High CVE-2013-0880: Use-after-free in database
handling
- Medium CVE-2013-0881: Bad read in Matroska handling
- High CVE-2013-0882: Bad memory access with excessive
SVG parameters.
- Medium CVE-2013-0883: Bad read in Skia.
- Low CVE-2013-0884: Inappropriate load of NaCl.
- Medium CVE-2013-0885: Too many API permissions
granted to web store
- Medium CVE-2013-0886: Incorrect NaCl signal handling.
- Low CVE-2013-0887: Developer tools process has too
many permissions and places too much trust in the
connected server
- Medium CVE-2013-0888: Out-of-bounds read in Skia
- Low CVE-2013-0889: Tighten user gesture check for
dangerous file downloads.
- High CVE-2013-0890: Memory safety issues across the
IPC layer.
- High CVE-2013-0891: Integer overflow in blob handling.
- Medium CVE-2013-0892: Lower severity issues across
the IPC layer
- Medium CVE-2013-0893: Race condition in media
handling.
- High CVE-2013-0894: Buffer overflow in vorbis
decoding.
- High CVE-2013-0895: Incorrect path handling in file
copying.
- High CVE-2013-0896: Memory management issues in
plug-in message handling
- Low CVE-2013-0897: Off-by-one read in PDF
- High CVE-2013-0898: Use-after-free in URL handling
- Low CVE-2013-0899: Integer overflow in Opus handling
- Medium CVE-2013-0900: Race condition in ICU
Comments (none posted)
clamav: unspecified vulnerabilities
Package(s): clamav
CVE #(s):
Created: March 20, 2013
Updated: March 28, 2013
Description:
From the Mandriva advisory:
ClamAV 0.97.7 addresses several reported potential security
bugs. Thanks to
Felix Groebert, Mateusz Jurczyk and Gynvael Coldwind of the Google Security
Team for finding and reporting these issues.
Comments (none posted)
firebird: multiple vulnerabilities
Package(s): firebird
CVE #(s): CVE-2013-2492
CVE-2012-5529
Created: March 18, 2013
Updated: April 3, 2013
Description:
From the CVE entries:
Stack-based buffer overflow in Firebird 2.1.3 through 2.1.5 before 18514, and 2.5.1 through 2.5.3 before 26623, on Windows allows remote attackers to execute arbitrary code via a crafted packet to TCP port 3050, related to a missing size check during extraction of a group number from CNCT information. (CVE-2013-2492)
TraceManager in Firebird 2.5.0 and 2.5.1, when trace is enabled, allows remote authenticated users to cause a denial of service (NULL pointer dereference and crash) by preparing an empty dynamic SQL query. (CVE-2012-5529)
Comments (none posted)
glance: information disclosure
Package(s): glance
CVE #(s): CVE-2013-1840
Created: March 15, 2013
Updated: March 20, 2013
Description:
From the Ubuntu advisory:
Stuart McLaren discovered an issue with Glance v1 API requests. An
authenticated attacker could exploit this to expose the Glance operator's
Swift and/or S3 credentials via the response headers when requesting a
cached image.
Comments (none posted)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-0913
CVE-2013-0914
Created: March 18, 2013
Updated: March 22, 2013
Description:
From the Red Hat bugzilla [1, 2]:
[1] Linux kernel built with Direct Rendering Manager(DRM) i915 driver for the
the Direct Rendering Infrastructure(DRI) introduced by XFree86 4.0, is
vulnerable to a heap overflow flaw.
An user/program with access to the DRM driver could use this flaw to crash
the kernel, resulting in DoS or possibly escalate privileges.
[2] Linux kernel is vulnerable to an information leakage flaw. This occurs when a process calls routine - sigaction() - to access - sa_restorer - parameter. This parameter points to an address that belongs to its parent process' address space.
A user could use this flaw to infer address layout of a process.
Comments (none posted)
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2013-1860
Created: March 20, 2013
Updated: March 22, 2013
Description:
From the Red Hat bugzilla:
Linux kernel built with USB CDC WDM driver is vulnerable to heap buffer overflow flaw.
An unprivileged local user could use this flaw to crash the kernel or, potentially, elevate their privileges.
Please note that a physical access to the system or plugging in random USB device is needed in order to exploit this bug.
Comments (none posted)
krb5: denial of service
Package(s): krb5
CVE #(s): CVE-2012-1016
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the CVE entry:
The pkinit_server_return_padata function in plugins/preauth/pkinit/pkinit_srv.c in the PKINIT implementation in the Key Distribution Center (KDC) in MIT Kerberos 5 (aka krb5) before 1.10.4 attempts to find an agility KDF identifier in inappropriate circumstances, which allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via a crafted Draft 9 request.
Comments (none posted)
libvirt-bin: unintended write access
Package(s): libvirt-bin
CVE #(s): CVE-2013-1766
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the Debian advisory:
Bastian Blank discovered that libvirtd, a daemon for management of virtual
machines, network and storage, would change ownership of devices files so they
would be owned by user `libvirt-qemu` and group `kvm`, which is a general
purpose group not specific to libvirt, allowing unintended write access to
those devices and files for the kvm group members.
Comments (none posted)
lighttpd: symlink attack
Package(s): lighttpd
CVE #(s): CVE-2013-1427
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the Debian advisory:
Stefan Bühler discovered that the Debian specific configuration file for
lighttpd webserver FastCGI PHP support used a fixed socket name in the
world-writable /tmp directory. A symlink attack or a race condition could be
exploited by a malicious user on the same machine to take over the PHP control
socket and for example force the webserver to use a different PHP version.
Comments (none posted)
pam-xdg-support: privilege escalation
Package(s): pam-xdg-support
CVE #(s): CVE-2013-1052
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the Ubuntu advisory:
Zbigniew Tenerowicz and Sebastian Krzyszkowiak discovered that
pam-xdg-support incorrectly handled the PATH environment variable. A local
attacker could use this issue in combination with sudo to possibly escalate
privileges.
Comments (none posted)
poppler: multiple vulnerabilities
Package(s): poppler
CVE #(s): CVE-2013-1788
CVE-2013-1790
Created: March 14, 2013
Updated: April 2, 2013
Description:
From the Red Hat bugzilla:
CVE-2013-1788: A number of invalid memory access flaws were reported in poppler (fixed in version 0.22.1):
- Fix invalid memory access in 1150.pdf.asan.8.69 [1].
- Fix invalid memory access in 2030.pdf.asan.69.463 [2].
- Fix another invalid memory access in 1091.pdf.asan.72.42 [3].
- Fix invalid memory accesses in 1091.pdf.asan.72.42 [4].
- Fix invalid memory accesses in 1036.pdf.asan.23.17 [5].
CVE-2013-1790: An uninitialized memory read flaw was reported in poppler (fixed in version 0.22.1):
Initialize refLine totally
Fixes uninitialized memory read in 1004.pdf.asan.7.3
Comments (none posted)
sssd: privilege violation
Package(s): sssd
CVE #(s): CVE-2013-0287
Created: March 20, 2013
Updated: April 1, 2013
Description:
From the Red Hat advisory:
When SSSD was configured as a Microsoft Active Directory client by using
the new Active Directory provider (introduced in RHSA-2013:0508), the
Simple Access Provider ("access_provider = simple" in
"/etc/sssd/sssd.conf") did not handle access control correctly. If any
groups were specified with the "simple_deny_groups" option (in sssd.conf),
all users were permitted access.
Comments (none posted)
stunnel: code execution
Package(s): stunnel
CVE #(s): CVE-2013-1762
Created: March 18, 2013
Updated: March 20, 2013
Description:
From the Mageia advisory:
stunnel 4.21 through 4.54, when CONNECT protocol negotiation and NTLM
authentication are enabled, does not correctly perform integer conversion,
which allows remote proxy servers to execute arbitrary code via a
crafted request that triggers a buffer overflow.
Comments (none posted)
telepathy-gabble: denial of service
Package(s): telepathy-gabble
CVE #(s): CVE-2013-1769
Created: March 14, 2013
Updated: March 22, 2013
Description:
From the Red Hat bugzilla:
So we have a remotely-triggered DoS: send Gabble a <presence> with a caps hash;
include a form with an anonymous fixed field in the reply; boom. Since anyone
can send presence to anyone else, and Gabble always looks up any caps it sees
in any presences it receives. (Note that this is a presence leak, too; another
bug, I think.)
Comments (none posted)
typo3-src: multiple vulnerabilities
Package(s): typo3-src
CVE #(s): CVE-2013-1842
CVE-2013-1843
Created: March 18, 2013
Updated: March 21, 2013
Description:
From the Debian advisory:
CVE-2013-1842:
Helmut Hummel and Markus Opahle discovered that the Extbase database layer was not correctly sanitizing user input when using the Query object model. This can lead to SQL injection by a malicious user inputing crafted
relation values.
CVE-2013-1843:
Missing user input validation in the access tracking mechanism could lead
to arbitrary URL redirection.
See the upstream advisory for additional information.
Comments (none posted)
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2013-2478
CVE-2013-2480
CVE-2013-2481
CVE-2013-2483
CVE-2013-2484
CVE-2013-2488
Created: March 15, 2013
Updated: March 20, 2013
Description:
From the Mageia advisory:
- The sFlow dissector could go into an infinite loop (CVE-2012-6054).
- The SCTP dissector could go into an infinite loop (CVE-2012-6056).
- The MS-MMS dissector could crash (CVE-2013-2478).
- The RTPS and RTPS2 dissectors could crash (CVE-2013-2480).
- The Mount dissector could crash (CVE-2013-2481).
- The AMPQ dissector could go into an infinite loop (CVE-2013-2482).
- The ACN dissector could attempt to divide by zero (CVE-2013-2483).
- The CIMD dissector could crash (CVE-2013-2484).
- The FCSP dissector could go into an infinite loop (CVE-2013-2485).
- The DTLS dissector could crash (CVE-2013-2488).
Comments (none posted)
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2013-2475
CVE-2013-2476
CVE-2013-2477
CVE-2013-2479
CVE-2013-2482
CVE-2013-2485
CVE-2013-2486
CVE-2013-2487
Created: March 20, 2013
Updated: March 20, 2013
Description:
From the CVE entries:
The TCP dissector in Wireshark 1.8.x before 1.8.6 allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-2475)
The dissect_hartip function in epan/dissectors/packet-hartip.c in the HART/IP dissector in Wireshark 1.8.x before 1.8.6 allows remote attackers to cause a denial of service (infinite loop) via a packet with a header that is too short. (CVE-2013-2476)
The CSN.1 dissector in Wireshark 1.8.x before 1.8.6 does not properly manage function pointers, which allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-2477)
The dissect_mpls_echo_tlv_dd_map function in epan/dissectors/packet-mpls-echo.c in the MPLS Echo dissector in Wireshark 1.8.x before 1.8.6 allows remote attackers to cause a denial of service (infinite loop) via invalid Sub-tlv data. (CVE-2013-2479)
The AMPQ dissector in Wireshark 1.6.x before 1.6.14 and 1.8.x before 1.8.6 allows remote attackers to cause a denial of service (infinite loop) via a malformed packet. (CVE-2013-2482)
The FCSP dissector in Wireshark 1.6.x before 1.6.14 and 1.8.x before 1.8.6 allows remote attackers to cause a denial of service (infinite loop) via a malformed packet. (CVE-2013-2485)
The dissect_diagnosticrequest function in epan/dissectors/packet-reload.c in the REsource LOcation And Discovery (aka RELOAD) dissector in Wireshark 1.8.x before 1.8.6 uses an incorrect integer data type, which allows remote attackers to cause a denial of service (infinite loop) via crafted integer values in a packet. (CVE-2013-2486)
epan/dissectors/packet-reload.c in the REsource LOcation And Discovery (aka RELOAD) dissector in Wireshark 1.8.x before 1.8.6 uses incorrect integer data types, which allows remote attackers to cause a denial of service (infinite loop) via crafted integer values in a packet, related to the (1) dissect_icecandidates, (2) dissect_kinddata, (3) dissect_nodeid_list, (4) dissect_storeans, (5) dissect_storereq, (6) dissect_storeddataspecifier, (7) dissect_fetchreq, (8) dissect_findans, (9) dissect_diagnosticinfo, (10) dissect_diagnosticresponse, (11) dissect_reload_messagecontents, and (12) dissect_reload_message functions, a different vulnerability than CVE-2013-2486. (CVE-2013-2487)
Comments (none posted)
zoneminder: multiple vulnerabilities
Package(s): zoneminder
CVE #(s): CVE-2013-0232
CVE-2013-0332
Created: March 15, 2013
Updated: April 3, 2013
Description:
From the Debian advisory:
Multiple vulnerabilities were discovered in zoneminder, a Linux video
camera security and surveillance solution. The Common Vulnerabilities
and Exposures project identifies the following problems:
CVE-2013-0232:
Brendan Coles discovered that zoneminder is prone to an arbitrary
command execution vulnerability. Remote (authenticated) attackers
could execute arbitrary commands as the web server user.
CVE-2013-0332:
zoneminder is prone to a local file inclusion vulnerability. Remote
attackers could examine files on the system running zoneminder.
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.9-rc3,
released on March 17. Linus says:
"Not as small as -rc2, but that one really was unusually calm. So
there was clearly some pending stuff that came in for -rc3, with network
drivers and USB leading the charge. But there's other misc drivers, arch
updates, btrfs fixes, etc etc too."
Stable updates:
3.8.3, 3.4.36, and 3.0.69 were released on March 14,
and
3.8.4, 3.4.37,
3.2.41, and 3.0.70 came out on March 20.
Comments (none posted)
Dave Jones has announced the creation of a mailing list for development of
the "Trinity" fuzz testing tool. It is hosted on vger, so the usual
majordomo subscription routine applies.
Full Story (comments: none)
Kernel development news
By Jonathan Corbet
March 20, 2013
Almost any I/O device worth its electrons will support direct memory access
(DMA) transactions; to do otherwise is to be relegated to the world of
low-bandwidth, high-overhead I/O. But "DMA-capable" devices are not all
equally so; many of them have limitations restricting the range of memory
that can be directly accessed. The 24-bit limitation that afflicted ISA
devices in the early days of the personal computer is a classic example,
but contemporary hardware also has its limits. The kernel has long had a
mechanism for working around these limitations, but it turns out that this
subsystem has some interesting problems of its own.
DMA limitations are usually a result of a device having fewer address lines
than would be truly useful. The 24 lines described by the ISA
specification are an obvious example; there is simply no way for an
ISA-compliant device to address more than 16MB of physical memory. PCI
devices are normally limited to a 32-bit address space, but a number of
devices are limited to a smaller space as a result of dubious hardware
design; as is so often the case,
hardware designers have shown a great deal of creativity in this area. But
users are not concerned with these issues; they just want their peripherals
to work. So the kernel has to find a way to respect any given device's
special limits while still using DMA to the greatest extent possible.
The kernel's DMA API (described in Documentation/DMA-API.txt) abstracts and hides
most of the details of making DMA actually work with any specific device.
This API will, for example, endeavor to allocate memory that falls within
the physical range supported by the target device. It will also
transparently implement "bounce buffering" — copying data between a
device-inaccessible buffer and an accessible buffer — if necessary. To do
so, however, the DMA API must be informed of a device's addressing limits.
That is done through the provision of a "DMA mask," a bitmask describing
the memory range reachable by the device. The documentation describes the
mask this way:
The dma_mask represents a bit mask of the addressable region for
the device. I.e., if the physical address of the memory anded with
the dma_mask is still equal to the physical address, then the
device can perform DMA to the memory.
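To make that concrete, a driver typically hands such a mask to the DMA API
during device setup. The fragment below is only a sketch (the 24-bit limit
and the mydev_probe() function are invented for illustration), but it shows
the dma_set_mask() and dma_set_coherent_mask() calls a driver would use:

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /* Hypothetical probe routine for a device that, like old ISA hardware,
       can only generate 24-bit DMA addresses. */
    static int mydev_probe(struct device *dev)
    {
        /* Tell the DMA API that this device can only reach the low 16MB;
           streaming mappings and coherent allocations will then be
           constrained (or bounced) accordingly. */
        if (dma_set_mask(dev, DMA_BIT_MASK(24)))
            return -EIO;        /* the platform cannot satisfy this mask */

        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(24)))
            return -EIO;

        return 0;
    }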
The problem, as recently pointed out by
Russell King, is that the DMA mask is not always interpreted that way. He
points to code like the following, found in block/blk-settings.c:
void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
{
unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
What is happening here is that the code is right-shifting the DMA mask to
turn it into a "page frame number" (PFN). If one envisions a system's
memory as a linear array of pages, the PFN of a given page is simply its
index into that array (though memory is not always organized so simply).
By treating a DMA mask as, for all practical purposes, another way of
expressing the PFN of the highest addressable page, the block code is
changing the semantics of how the mask is interpreted.
Russell explained how that can be problematic. On some ARM systems,
memory does not start at a physical address of zero; the physical
address of the first byte can be as high as 3GB (0xc0000000). If a
system configured in this way has a device with a 26-bit address limitation
(with the upper bits
being filled in by the bus hardware), then its DMA mask should be set to
0xc3ffffff. Any physical address within the device's range will be
unchanged by a logical AND operation with this mask, while any address
outside of that range will not.
But what then happens when the block code right-shifts that mask to get a
PFN from the mask? The result (assuming 4096-byte pages) is 0xc3fff, which
is a perfectly valid PFN on a system where the PFN of the first page will
be 0xc0000. And that is fine until one looks at the interactions with a
global memory management variable called max_low_pfn. Given that
name, one might imagine that it is the maximum PFN contained within low
memory — the PFN of the highest page that is directly addressable by the
kernel without special mappings. Instead, max_low_pfn is a
count of page frames in low memory. But not all code appears to
treat it that way.
On an x86 system, where memory starts at a physical address of zero (and,
thus, a PFN of zero), that difference does not matter; the count and the
maximum are the same. But on more
complicated systems the results can be interesting. Returning to the same
function in blk-settings.c:
blk_max_low_pfn = max_low_pfn - 1; /* Done elsewhere at init time */
if (b_pfn < blk_max_low_pfn)
dma = 1;
q->limits.bounce_pfn = b_pfn;
Here we have a real page frame number (calculated from the DMA mask)
compared to a count of page frames, with decisions on how DMA must be done
depending on the result. It would not be surprising to see erroneous
results from such an operation; with regard to the discussion in question,
it seems to have caused bounce buffering to be done when there was no need.
One can easily see other kinds of trouble that could result from this type
of confusion; inconsistent views of what a variable means will rarely lead
to good things.
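A small user-space program makes the inconsistency easy to see. The numbers
below follow Russell's ARM example (512MB of RAM starting at 0xc0000000 and
a device with a 26-bit limit); they are illustrative only, but the two
comparisons reach opposite conclusions about the same device:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        /* An ARM-like layout: 512MB of RAM starting at physical 0xc0000000. */
        uint64_t first_pfn   = 0xc0000000ULL >> PAGE_SHIFT;   /* 0xc0000 */
        uint64_t page_count  = (512ULL << 20) >> PAGE_SHIFT;  /* 0x20000 pages */
        uint64_t highest_pfn = first_pfn + page_count - 1;    /* 0xdffff */

        /* A device limited to 0xc0000000..0xc3ffffff, mask per the documentation. */
        uint64_t dma_mask = 0xc3ffffffULL;
        uint64_t b_pfn    = dma_mask >> PAGE_SHIFT;            /* 0xc3fff */

        /* Comparing against the highest low-memory PFN gives the right answer... */
        printf("against highest PFN %#llx: %s\n", (unsigned long long)highest_pfn,
               b_pfn < highest_pfn ? "device cannot reach all of low memory"
                                   : "device reaches all of low memory");

        /* ...while comparing against max_low_pfn treated as a count does not. */
        printf("against page count %#llx:  %s\n", (unsigned long long)page_count,
               b_pfn < page_count ? "device cannot reach all of low memory"
                                  : "device reaches all of low memory");
        return 0;
    }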
Fixing this situation is not going to be straightforward; Russell had "no
idea" of how to do it. Renaming max_low_pfn to something like
low_pfn_count might be a first step as a way to avoid further
confusion. Better defining the meaning of a DMA mask (or, at least,
ensuring that the kernel's interpretation of a mask adheres to the existing
definition) sounds like a good idea, but it could be hard to implement in a
way that does not break obscure hardware — some of that code can be fragile
indeed. One way or another, it seems that the DMA interface, which was
designed by developers working with relatively straightforward hardware, is
going to need some attention from the ARM community if it's going to meet
that community's needs.
Comments (none posted)
By Michael Kerrisk
March 20, 2013
An exploit posted on March 13
revealed a rather easily exploitable security vulnerability (CVE-2013-1858)
in the implementation of user namespaces. That exploit enables an
unprivileged user to escalate to full root privileges. Although a fix was
quickly provided, it is nevertheless instructive to look in some detail at
the vulnerability, both to better understand the nature of this kind of
exploit and also to briefly consider how this vulnerability came to appear
inside the user namespaces implementation. General background on user
namespaces can be found in part 5 and part
6 of our recent series of
articles on namespaces.
Overview
The vulnerability was discovered by Sebastian Krahmer, who posted
proof-of-concept code
demonstrating the exploit on the oss-security mailing list.
The exploit is based on the fact that
Linux 3.8 allows the following combination of flags when calling
clone() (and also unshare() and setns()):
clone(... CLONE_NEWUSER | CLONE_FS, ...);
CLONE_NEWUSER says that the new child should be in
a new user namespace, and with the completion of the user namespaces
implementation in Linux 3.8, that flag can now be employed by unprivileged
processes. Within the new namespace, the child has a full set of capabilities,
although it has no capabilities in the parent namespace.
The CLONE_FS flag says that the caller of clone() and
the resulting child should share certain filesystem-related attributes—root
directory, current working directory, and file mode creation mask
(umask). The attribute of particular interest here is the root directory,
which a privileged process can change using the chroot() system
call.
It is the mismatch between the scope of these two flags that creates
the window for the exploit. On the one hand, CLONE_FS causes the
parent and child process to share the root directory attribute. On the
other hand, CLONE_NEWUSER puts the two processes into separate
user namespaces, and gives the child full capabilities in the new user
namespace. Those capabilities include CAP_SYS_CHROOT, which gives a
process the ability to call chroot(); the sharing provided by
CLONE_FS means that the child can change the root directory of a
process in another user namespace.
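The problematic combination can be expressed in a few lines of C. This is
only a sketch of the flag usage, not the exploit itself, and it behaves as
described only on an unpatched 3.8 kernel; on a kernel carrying the fix
discussed below, the clone() call simply fails with EINVAL:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    /* Runs in a new user namespace, but still shares the root-directory
       attribute with its parent because of CLONE_FS. */
    static int child_fn(void *arg)
    {
        printf("child: euid=%d, full capability set in the new namespace\n",
               geteuid());
        /* A chroot() here would, thanks to CLONE_FS, also move the parent. */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (stack == NULL)
            exit(EXIT_FAILURE);

        pid_t pid = clone(child_fn, stack + STACK_SIZE,
                          CLONE_NEWUSER | CLONE_FS | SIGCHLD, NULL);
        if (pid == -1)
            perror("clone");        /* EINVAL on kernels with the fix */
        else
            waitpid(pid, NULL, 0);
        return 0;
    }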
In broad strokes, the exploit achieves escalation to root privileges by
executing any set-user-ID-root program that is present on the system in a
chroot environment which
is engineered to execute attacker-controlled code. That code runs with user
ID 0 and allows the exploit to fire up a shell with root privileges. The
exploit as demonstrated is accomplished by subverting the dynamic linking
mechanism, although other lines of attack based on the same foundation are
also possible.
The vulnerability scenario
The first part of understanding the exploit requires some understanding
of the operation of the dynamic linker. Most executables (including most
set-user-ID root programs) on a Linux system employ shared libraries and
dynamic linking.
At run time, the dynamic linker loads the required shared libraries in
preparation for running the program. The pathname of the dynamic linker is
embedded in the executable file's ELF headers, and is listed among the
other dependencies of a dynamically linked executable when we use the
ldd command (here executed on an x86-64 system):
$ ldd /bin/ls | grep ld-linux
/lib64/ld-linux-x86-64.so.2 (0x00000035b1800000)
There are a few important points to note about the dynamic linker. First, it
is run before the application program. Second, it is run under whatever
credentials would be accorded to the application program; thus, for
example, if a set-user-ID-root program is being executed, the dynamic
linker will run with an effective user ID of root.
Executable files are normally protected so that they can't be modified
by users other than the file owner; this prevents, for example,
unprivileged users from modifying the dynamic linker path embedded inside a
set-user-ID-root binary. For similar reasons, an unprivileged user can't
change the contents of the dynamic linker binary.
However, suppose for a moment that an unprivileged user could construct a
chroot tree containing (via a hard link) the set-user-ID-root binary and
an executable of the user's own choosing at
/lib64/ld-linux-x86-64.so.2. Running the set-user-ID-root binary
would then cause control first to be passed to the user's own code, which
would be running as root. The aim of the exploit is to bring about the
situation shown in the following diagram, where pathnames are shown linked
to various binary files:
The key point in the above diagram is that two pathnames link to the
fusermount binary (a set-user-ID-root program used for mounting
and unmounting FUSE
filesystems). If a process outside the chroot environment executes the
/bin/fusermount binary, then the real dynamic linker will be
invoked to load the binary's shared libraries. On the other hand, if a
process inside the chroot environment executes the other link to the binary
(/suid-root), then the kernel will load the ELF interpreter
pointed to by the link /lib64/ld-linux-x86-64.so.2 inside the
chroot environment. That link points to code supplied by an attacker, and
will be run with root privileges.
How does the Linux 3.8 user namespaces implementation help with this
attack? First, an unprivileged user can create a new user namespace in which
they gain full privileges, including the ability to create a chroot
environment using chroot(). Second, the differing scope of
CLONE_NEWUSER and CLONE_FS described above means that
the privileged process inside a new user namespace can construct a chroot
environment that applies to a process outside the user namespace. If that
process can in turn then be made to execute a set-user-ID binary inside
the chroot environment, then the attacker code will be run as root.
A three-phase attack
Although Sebastian's program is quite short, there are many details
involved that make the exploit somewhat challenging to understand;
furthermore, the program is written with the goal of accomplishing the
exploit, rather than educating the reader on how the exploit is carried
out. Therefore, we'll provide an equivalent program, userns_exploit.c, that performs the
same attack—this program is structured in a more understandable way
and is instrumented with output statements that enable the user to see what
is going on. We won't walk through the code of the program, but it is well
commented and should be easy to follow using the explanations in this article.
The attack code involves the creation of three processes, which we'll
label "parent", "child", and "grandchild". The attack is conducted in
three phases; in each phase, a separate instance of the attacker code is
executed. This concept can at first be difficult to grasp when reading the
code. It's easiest to think of the userns_exploit program as
simply offering itself in three flavors, with the choice being determined
by command-line arguments and the effective user ID of the process.
The following diagram shows the exploit in overview:
In the above diagram, the vertical dashed lines indicate points where a
process is blocked waiting for another process to complete some action.
In the first phase of the exploit, the program starts by discovering its
own pathname. This is done by reading the contents of the
/proc/self/exe symbolic link.
The program needs to know its own pathname for two
reasons: so it can create a link to itself inside the chroot tree and so it
can re-execute itself later.
The program then creates two processes, labeled "parent" and "child"
in the above diagram. The parent's task is simple. It will loop, using the
stat() system call to check whether the program pathname
discovered in the previous step is owned by root and has the
set-user-ID permission bit enabled. This causes the parent to wait until
the other processes have finished their tasks.
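That wait loop is conceptually simple; the following sketch (the one-second
polling interval is an arbitrary choice, not taken from Sebastian's code)
shows the /proc/self/exe and stat() steps just described:

    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char self[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", self, sizeof(self) - 1);

        if (n == -1)
            return 1;
        self[n] = '\0';

        /* Wait until the later phases have turned this binary into a
           set-user-ID-root program. */
        struct stat st;
        while (!(stat(self, &st) == 0 && st.st_uid == 0 && (st.st_mode & S_ISUID)))
            sleep(1);

        printf("%s is now set-UID root; re-executing it yields root privileges\n",
               self);
        return 0;
    }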
In the meantime, the "child" populates the directory tree that will be used
as the chroot environment. The goal is to create the set-up shown in the
following diagram:
The difference from the first diagram is that we now see that it is the
userns_exploit program that will be used as the fake dynamic
loader inside the chroot environment. Furthermore, that binary is also
linked outside the chroot environment, and the exploit design takes advantage of
that fact.
Having created the chroot tree shown above, the child then employs
clone(CLONE_NEWUSER|CLONE_FS) to create a new process—the
grandchild. The grandchild has a full set of capabilities, which allows it
to call chroot() to place itself into the chroot tree. Because the
grandchild and the child share the root directory attribute, the child is
now also placed in the chroot environment.
Its small task complete, the grandchild now terminates. At that point,
the child, which has been waiting on the grandchild, now
resumes. As its next step, the child executes the program at the path
/suid-root. This is in fact a link to the fusermount
binary. Because the child is in the initial user namespace and the
fusermount binary is set-user-ID-root, the child gains root
privileges.
However, before the fusermount binary is loaded, the kernel
first loads its ELF interpreter, the file at the path
/lib64/ld-linux-x86-64.so.2. That, as it happens, is actually the
userns_exploit program. Thus, the userns_exploit program
is now executed for a second time (and the fusermount program is
never executed).
The second phase of the exploit has now begun. This instance of the
userns_exploit program recognizes that it has an effective user ID
of 0. However, the only files it can access are those inside the chroot
environment. But that is sufficient. The child can now change the ownership
of the file /lib64/ld-linux-x86-64.so.2 and turn on the file's
set-user-ID permission bit. That pathname is, of course, a link to the
userns_exploit binary. At this point, the child's work is now
complete, and it terminates.
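In code, that second-phase step amounts to little more than a chown() and a
chmod(); the sketch below uses the path from the diagrams above and is
illustrative rather than a copy of the exploit:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Phase two runs with an effective UID of 0 inside the chroot
           environment; two system calls turn the userns_exploit binary,
           linked at the dynamic-linker path, into a set-UID-root program.
           chown() is done first because it clears any set-UID bit. */
        const char *path = "/lib64/ld-linux-x86-64.so.2";

        if (chown(path, 0, 0) == -1 ||
            chmod(path, S_ISUID | S_IRWXU |
                        S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH) == -1) {
            perror("chown/chmod");
            return 1;
        }
        return 0;
    }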
All of this time, the parent process has been sitting in the background
waiting for the userns_exploit binary to become a set-user-ID-root
program. That, of course, is what the child has just accomplished. So, at
this point, the parent now executes the userns_exploit program
outside the chroot environment. On this execution, the program is
supplied with a command-line argument.
The third and final phase of the exploit has now started. The
userns_exploit program determines that it has an effective user ID
of 0 and notes that it has a command-line argument. That latter fact
distinguishes this case from the second execution of the
userns_exploit and is a signal that this time the program is being
executed outside the chroot environment. All that the program now
needs to do is execute a shell; that shell will provide the user with full
root privileges on the system.
Further requirements for a successful exploit
There are a few other steps that are necessary to successfully
accomplish the exploit. The userns_exploit program must be
statically linked. This is necessary so that, when executed as the dynamic linker
inside the chroot environment, the userns_exploit program does not
itself require a dynamic linker.
In addition, the value in the /proc/sys/fs/protected_hardlinks
file must be zero. The protected_hardlinks file controls a feature that
was added in Linux 3.6 specifically to prevent
the types of exploit discussed in this article. If this file has the
value one, then only the owner of a file can create hard links to it. On a
vanilla kernel, protected_hardlinks unfortunately has the default
value zero, although some distributions provide kernels that change this
default.
In the process of exploring this vulnerability, your editor
discovered that set-user-ID binaries built as hardened,
position-independent executables (PIE) cannot be used for this particular
attack. (Many of the set-user-ID-root binaries on his Fedora system were
hardened in this manner.) While PIE hardening thwarts this particular line of
attack, the chroot() technique described here can still be used to
exploit a set-user-ID-root binary in other ways. For example, the
binary can be placed in a suitably constructed chroot environment
that contains the genuine dynamic linker but a compromised libc.
Finally, user namespaces must of course be enabled on the system where
this exploit is to be tested, and the kernel version needs to be precisely
3.8. Earlier kernel versions did not allow unprivileged users to create
user namespaces, and later kernels will fix this bug, as described
below. The exploit is unlikely to be possible with distributor kernels:
because the Linux 3.8 kernel does not support the use of user namespaces
with various filesystems, including NFS and XFS, distributors are
unlikely to enable user namespaces in the kernels that they ship.
The fix
Once the problem was reported, Eric
Biederman considered two possible
solutions. The more complex solution is to create an association from a
process's fs_struct, the kernel data structure that records the
process's root directory, to a user namespace, and use that association to
set limitations around the use of chroot() in scenarios such as
the one described in this article. The alternative is the simple and
obviously safe solution: disallow the combination of CLONE_NEWUSER
and CLONE_FS in the clone() system call, make
CLONE_NEWUSER automatically imply CLONE_FS in the
unshare() system call, and disallow the use of setns() to
change a process's user namespace if the process is sharing
CLONE_FS-related attributes with another process.
Subsequently, Eric concluded
that the complex solution seemed to be unnecessary and would add a small
overhead to every call to fork(). He thus opted for the simple
solution: the Linux 3.9 kernel (and the 3.8.3 stable kernel) will disallow
the combination of CLONE_NEWUSER and CLONE_FS.
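The clone() side of that fix is a simple validity check on the flags; in
rough outline (simplified here, and the actual patch also adjusts
unshare() and setns()), it amounts to a check of this form in the kernel's
process-creation path:

    /* Roughly the check added for 3.9 and 3.8.3: refuse to combine a new
       user namespace with shared filesystem attributes. */
    if ((clone_flags & (CLONE_NEWUSER | CLONE_FS)) ==
                       (CLONE_NEWUSER | CLONE_FS))
        return ERR_PTR(-EINVAL);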
User namespaces and security
As we noted in an earlier
article, Eric Biederman has put a lot of work into trying to ensure
that unprivileged users can create user namespaces without causing security
vulnerabilities. Nevertheless, a significant exploit was found soon after
the release of the first kernel version that allowed unprivileged processes
to create user namespaces. Another user namespace vulnerability that
potentially allowed unprivileged users to load arbitrary kernel modules was
also reported and fixed earlier this month. In addition, during
the discussion of the CLONE_NEWUSER|CLONE_FS issue,
Andy Lutomirski has hinted that there may
be another user namespaces vulnerability to be fixed.
Why is it that several security vulnerabilities have sprung from the
user namespaces implementation? The fundamental problem seems to be that
user namespaces and their interactions with other parts of the kernel are
rather complex—probably too complex for the few kernel developers
with a close interest to consider all of the possible security
implications. In addition, by making new functionality available to
unprivileged users, user namespaces expand the attack surface of the
kernel. Thus, it seems that as user namespaces come to be more widely
deployed, other security bugs such as these are likely to be
found. One hopes that they'll be found and fixed by the kernel developers
and white hat security experts, rather than found and exploited by black
hat attackers.
Updated on 22 March 2013 to clarify and correct some minor details of the
"simple and safe" solution under the heading, "The fix".
Comments (30 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
March 20, 2013
Stefano Zacchiroli has served as Debian Project Leader
(DPL) since April 2010, and has announced that he will not stand for
another term. The election for his
replacement will take place between March 31 and April 13. In the
meantime, much of the campaigning is taking place on the debian-vote
mailing list, where project members have posed questions to the
candidates and gotten detailed responses in reply (not to mention, in
some cases, lengthy discussion threads). When his term ends in April,
Zacchiroli will have been DPL for three years, the longest stretch
since project founder Ian Murdock's tenure in Debian's early days, so the
2013 election is attracting more attention than in previous years.
Candidates and platforms
The candidates are Gergely Nagy, Moray Allan, and Lucas Nussbaum. Each
one volunteered (or, technically speaking, nominated himself) during
the official nomination period in early March. The three have written
their own "platform" documents, which are hosted at the 2013 election page.
Each sets out the general reasons the candidate is running for DPL and
the issues he hopes to address if elected.
Nagy ran for DPL in 2012 as well, and his platform
begins by revisiting the issues he raised back then, and how the
intervening 12 months have shifted his positions. In 2012, his
platform focused on recruiting new project members, and on building a
more "passionate" community in addition to one that excels
technically. His current platform expands on that idea, with more
specifics—such as increasing the opportunities for face-to-face
events, publicly highlighting the contributions of experienced Debian
Developers, and recruiting more non-packaging contributors.
Allan's platform
focuses on the role of the DPL itself; he says that the DPL can help
overcome the occasional conflicts between the many loosely-organized
teams in the project. The DPL can mediate in conflicts, he says, but
can also encourage more turnover in team leadership (and rotating
members between teams), and can facilitate more organized discussions
between teams. He would provide guidelines for more transparency and
openness in communication within the project, and build more active
relationships with outsiders for the teams that handle press,
publicity, and corporate relations. He, too, advocates bringing a
Debian presence to more local events and user group meetings, and
actively recruiting new contributors. He also proposes forming
fundraising teams to seek out donations for the project's various
expenses.
Nussbaum begins his platform
statement by expressing concern that the core project is losing
momentum; "very cool things" happen in the ecosystem, but
outside Debian itself—which, if it is not careful, risks lapsing into
being little more than a "package supermarket." He
advocates reinforcing Debian's central position in the free software
ecosystem by fostering innovative products within the project itself
(such as rolling releases and live images), reducing barriers to
contribution, and improving communication with upstream projects. He
also says he plans to continue Zacchiroli's DPL
helpers initiative, but with the long term hope of evolving it into a
driving team of Debian Developers who act as decision makers, with the
DPL as chairperson.
Position papers like the aforementioned platforms are nice, but in
the modern world a campaign thrives on debate in order to get the
candidates to explain themselves. In lieu of a stage, podium, and moderator,
the DPL candidates have at least been offered the chance to respond to
questions from project members via the debian-vote mailing list.
Several of the questions were directed to all three candidates
and asked for elaboration on specific points raised in one or more of
the platforms. For instance, Timo Juhani Lindfors asked how each candidate would attract
new people to the project, and Paul Tagliamonte asked how the candidates would represent
Debian to the outside world. In both instances, the replies were on
the safe side, although the question of attracting new participants
did shift to address the "graying" of the project, and how to attract
more youthful participants.
The big freeze
Where the discussion got more interesting, however, was in the
threads in which a list member posed an original question. Many of
these dealt with the daily grind of keeping Debian development moving
forward, or on the long-term organization of the project. For
example, Lars Wirzenius observed that
Debian has been in feature freeze for eight months, which is
significantly longer than most distributions, and asked what the
candidates would do to fix the long and painful release process and
other development process problems.
Nussbaum responded with a number of
ideas, including prioritizing the QA process and improving support
tools like bug trackers. But he also suggested "more
exploration of gradual freezes," perhaps in stages:
I understand that it's important to freeze early packages that affect
the installer, and maybe also the base system. But freezing all packages with
the hope that it will force people to fix RC bugs in other people's packages
does not work: many people just stop working on Debian during the freeze.
This was discussed a bit already, but I'm not sure that we really were
throughout in the discussion. As a DPL, I will reopen that discussion at a
good time (after the release, and not just after the release).
Allan said he does not think the
DPL should "try to impose policies on teams like the Release
Team," but provided a few ideas, including removing buggy
packages much earlier in the freeze, and more actively flagging buggy
packages. He also said that some form of Constantly Usable Testing
(CUT) would be helpful, since many desktop users would prefer it:
This would solve the freeze problem for that group of users -- though
I worry that it might further reduce the number of people putting
energy into our current type of releases. Equally, I wouldn't expect
the existing Release Team to make CUT happen, both because of lack of
time, but also because they're likely to be self-selected as people
who like the current style of release.
Allan also observed that many packages ship with bugs that are
tagged -ignore, which sometimes means the packages in
question are unusable for some users even if they do not adversely
affect others. One solution might be to declare separate release
dates for different use cases; "we could badge the new release
as ready for the desktop before we close it off as final and suggest
that people upgrade their servers."
Nagy responded that the fundamental
problem is that fixing bugs is not perceived as rewarding, and it is
considerably harder in many cases because upstream developers cannot
be relied on for assistance. The answer, he said, "is to make
upstreams, downstreams and everyone else involved realise that if we
work together, we'll release faster."
I feel the collaboration between Debian and downstreams is far from
perfect, and that is usually a fault of both sides. Tiny little speckles
of dust in the machinery some of these problems are, but if enough dust
accumulates, the machinery will suffer.
We need to figure out if - and how - we could work together more
closely, to reduce the workload on all sides, as that also reduces the
annoyance we may have with one another.
Decisions, decisions, decisions
Zacchiroli raised two questions about how Debian tackles
distribution-wide changes. First, he asked how they would improve on the
inertia that in the past has made Debian slow at planning and
deploying large changes. Subsequently, he asked the candidates how they, as DPL,
would approach the tricky problem of making such potentially
far-reaching decisions—for example, which init system this
distribution should use. As he put it,
Some of the longest -devel thread in recent years have been about
Debian's (default) init system: SysV, SystemD, Upstart, OpenRC, etc.
Despite folklore, I don't think those threads have been (entirely)
trollish, they all hint at a concrete problem:
How do we make an inherently archive-wide technical decision when
multiple, possibly equally valid solutions do exist?
No candidate advocated drastic changes (perhaps predictably; after
all, it is hard to imagine any candidate answering with a response
that does not make technical merit the top priority). But the replies
do provide insight into the candidates' viewpoints on how the project
should be managed—for instance, how big of a role the Technical Committee
(TC) should play in making calls between competing proposals.
Nagy replied that big technical
decisions like the init system should be addressed at in-person events
like DebConf, where the stakeholders can sit down and discuss the
topic:
We need to establish the key requirements (which in this case, will be
a tough cookie to crack too), and see what compromises the stakeholders
are willing to make. The primary role of the DPL in this case would be
organisation and mediation, to make sure that those involved understand
that compromises will have to be made, or we'll be stuck with sysvinit
forever, which is likely not what most of them would want.
Nussbaum said that the decision
should be left up to developers and key package maintainers, with the
TC only stepping in on rare occasions. The DPL can be helpful in
facilitating discussions, he said, which usually result in a thorough
review of the possible solutions—while the final decision will
usually involve compromise:
Often, there's the possibility to limit the impact of the decision, e.g.
by providing a default, but also supporting alternatives. When that's
possible, that's something that should be explored. It's a good thing
for the current alternatives, but also to help future alternatives
later.
Allan also said the DPL was not
usually the best fit for making large-scale decisions, but instead
could "encourage people to improve UIs, agree on best practice,
and write better documents." He also cited his platform
document, which suggests "that we might make distribution-wide
changes quicker by more vocally authorising NMUs [Non-Maintainer
Uploads] to help with changeovers."
To the polls
While the role of project leadership is an important one, and while
Debian has long grappled with how to improve and streamline its
release process, there are naturally plenty of other issues about
which Debian project members have questions. Money came up several
times; Raphael Hertzog inquired whether Debian should spend its
money on anything other than hardware and travel reimbursement;
various responses included code camps and underwriting individual
developers. Martin Zobel-Helas asked specifically whether or not the project's planned
hardware expenditures were justifiable. Here again, the candidates do not stake
out drastically different positions, but their responses to the
specific questions illuminate some distinctions, like how they connect
budgeting for expenses to fundraising (both in terms of ongoing
contributions and short-term campaigns).
The discussion on the debian-vote list is likely to continue until
the voting period begins on March 31. The candidates are also posting
material on their individual blogs (as are other Debian project
members). There is a generally amiable tone to the campaign, as one
would expect from a mature project staffed by volunteers. But the
campaign itself is interesting to watch for a few reasons. For one,
it has been three years since the last change of DPL; when Zacchiroli
was running for re-election the campaigns largely turned into a
referendum on his continued presence in the role (which, although
certainly a valid question for the voters, is different than the
question of setting new priorities for the project).
But another interesting facet of the election is the fact that it
provides the public with an insight into the inner workings of the
project management. Debian is not unique among Linux
distributions for electing its leadership by a completely democratic
process, but it is in the minority. The Fedora Project Leader is a
full-time employee of Red Hat (who also appoints four of the remaining
nine Fedora Board members); openSUSE is governed by a
community-elected board but with a chairperson appointed by SUSE;
Ubuntu sponsor Mark Shuttleworth serves as project leader until he
decides to step down, though the distribution has an elected community
council as well. In contrast, Debian's open leadership-selection
process provides a window into the topics of debate that every
distribution leadership team faces—at one time or another,
anyway.
And Debian is different in that the DPL is its sole elected
leadership position. Selecting four or five board members all at once
allows the voting public to make compromises and put together a broad
ticket; the DPL works alone. Then again, the DPL "works alone" solely
in the sense that the project's bylaws set out, and the DPL's role
differs significantly from the decision-making powers often wielded in
other projects. In reality, the DPL does not dictate policy, but
works to coordinate and communicate between developers and to
represent the project to external projects and communities. Thus, in
other ways, the DPL remains just one active participant in the Debian
community, which has always stayed both active and vocal—and
seems likely to continue doing so, regardless of who takes home the
most votes.
Comments (none posted)
Brief items
Or, to put it another way, for Fedora 18, Fedora got a lot of
publicity for being late to release. The project was able to say,
quite reasonably, "The installer has undergone a major rewrite, and we
want to make sure all the problems are ironed out first." For Fedora
19 do we want to say, "We're late to release because we wanted to put
an 'ö' in the release name."?
--
Ian Malone
Comments (4 posted)
Distribution News
Debian GNU/Linux
Stefano Zacchiroli presents his penultimate bits as Debian Project Leader.
Topics covered include DPL elections, delegations, DPL helpers, Debian
assets, and more.
Full Story (comments: none)
The Debian Project has announced that the backports service for Debian 7.0
"Wheezy" will be part of the main archive. "
Backports are packages
mostly from the testing distribution (and in few cases from unstable too,
e.g. security updates) recompiled in a stable environment so that they will
run without new libraries (whenever it is possible) on the Debian stable
distribution. While as for now this service was provided on a separated
archive, starting with wheezy-backports the packages will be accessible
from the regular pool."
Full Story (comments: none)
The Debian release managers have a report on the progress of Debian 7.0
"Wheezy". The freeze is in its final stages. There are still some RC bugs
that need to be squashed and the release notes are not ready yet, but
overall the release is getting ever closer.
Full Story (comments: none)
openSUSE
The openSUSE ARM team will be holding a hackathon April 8-12, 2013 at the
SUSE offices in Nuremberg, Germany. People may participate online.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Matthias Klumpp
introduces Tanglu,
a Debian-testing derivative still in early development. "
Tanglu is designed to be able to solve the issue that Debian is frozen for a long time and Debian Developers can’t make new upstream versions available for testing easily. During a Debian freeze, DDs can upload their software to the current Tanglu development version and later start the new Debian cycle with already tested packages from Tanglu. The delta between Tanglu and Debian should be kept as minimal as possible. However, Tanglu is not meant as experimental distribution for Debian, so please upload experimental stuff to Experimental. Only packages good enough for a release should go into Tanglu."
Comments (none posted)
The H
reports
that support for Ubuntu's non-LTS releases will be shortened to nine
months. "
In a meeting
of the Ubuntu Technical
Board last night, the technical leadership of Canonical's Linux
distribution decided to halve the support time for non-LTS releases to nine months. At the same time, the developers want to make it easier for users of the distribution to get up-to-date packages on a regular basis without the need to perform explicit upgrades of the whole distribution. Attending the meeting, Matt Zimmerman, Colin Watson and Stéphane Graber unanimously agreed on these points and also clearly voted against moving Ubuntu into a rolling release model. The changes will be implemented in the maintenance schedule starting with the release of Ubuntu 13.04 ("Raring Ringtail") on 25 April."
Comments (40 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
March 20, 2013
The GNU Compiler Collection (GCC)
is nearing the release of version 4.8.0, approximately one year after
the release of 4.7.0. The new release is the first to be implemented in C++, but for most
developers the new optimizations and language support improvements are
of greater interest. Jakub Jelinek announced the first release candidate
builds of GCC 4.8.0 on March 16, noting that if all goes well the
final release could land in less than a week's time.
Chunks, dwarfs, and other optimization
The new release
merges in some important changes to the Graphite memory-optimization
framework, updating it to work with the upstream Chunky Loop Generator (CLooG) and Integer Set Library (ISL) libraries
(where it had previously used internal implementations), and
implementing the PLUTO algorithm as a
polyhedral optimizer. This work moves Graphite significantly closer
to being able to provide a generic polyhedral
interface, though there is still work remaining (such as Static
Control Part detection). Polyhedral loop optimization is a technique
in which nested loop iterations are mapped out in two dimensions, forming
lattice-like graphs to which various geometric transformations (such
as skews) can be applied in an attempt to generate an equivalent
structure that exhibits better performance.
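As a contrived illustration of the sort of rewrite such an optimizer can
derive, consider the loop nest below; interchanging the two loops (one of
the legal transformations in the polyhedral toolbox) turns a cache-unfriendly
column-order walk into a row-order one without changing the result. The
example is ours, not taken from GCC:

    #define N 1024

    static double a[N][N];

    /* Column-order walk: consecutive inner-loop iterations touch memory
       N*sizeof(double) bytes apart. */
    void scale_columns(void)
    {
        int i, j;

        for (j = 0; j < N; j++)
            for (i = 0; i < N; i++)
                a[i][j] *= 2.0;
    }

    /* The interchanged form an optimizer could derive: the same computation,
       but walking memory in cache-friendly row order. */
    void scale_rows(void)
    {
        int i, j;

        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                a[i][j] *= 2.0;
    }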
There is a new general-purpose optimization level available in GCC
4.8.0 with the -Og switch, which should provide fast
compilation while still resulting in better runtime performance than the "straightforward" -O0.
The -ftree-partial-pre switch has also been added, which
activates the partial
redundancy elimination (PRE) optimization. PRE eliminates
expressions and values that are redundant in some execution paths,
even if they are not redundant in every path. In
addition, there is a new, more aggressive analysis used by default in
4.8.0 to determine upper bounds on the number of loop iterations. The
analysis relies on constraints imposed by language standards, but this
may cause problems for non-conforming programs which had worked
previously. Consequently, GCC has added a new
-fno-aggressive-loop-optimizations switch to turn off the new
analysis. Although breaking the constraints of the language standard
is frowned upon, there are some notable real-world examples that do
so, such as the SPEC CPU
2006 benchmarking suite.
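A contrived example of the kind of non-conforming code in question: the loop
below indexes one element past the end of the array, so the compiler is
entitled to assume that iteration never happens and to bound or transform
the loop accordingly (newer GCC versions will typically warn, when
optimizing, that the final iteration invokes undefined behavior):

    #include <stdio.h>

    static int a[4];

    int main(void)
    {
        int i;

        /* The final iteration writes a[4], one element past the end of the
           array; a conforming program cannot do this, so the optimizer may
           assume that iteration never executes. */
        for (i = 0; i <= 4; i++)
            a[i] = i;

        for (i = 0; i < 4; i++)
            printf("%d\n", a[i]);
        return 0;
    }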
Several other improvements to optimization arrive in the new
release, including a rewritten link-time optimizer (LTO) and a new symbol
table implementation. Together they should improve performance by
catching more unusual symbol situations (such as aliases) that result
in unreachable code—which can be safely cut out by the LTO. GCC
has also updated its support for the DWARF debugging format from
DWARF2 to DWARF4,
which brings it up to speed with newer versions of GDB and Valgrind.
Two other new features debuting in GCC 4.8.0 are AddressSanitizer
and ThreadSanitizer.
The first is a memory-error detector that is reportedly fast at
finding dangling pointers as well as heap-, stack-, and global-buffer
overflows. The second is a data-race detector, which spots conditions
where two threads try to access the same memory location—and at
least one of them is attempting a write. ThreadSanitizer offers a hybrid
algorithm not found in competing race detectors like Helgrind. Both
new additions are actively being developed at Google.
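As a quick illustration, the deliberately broken program below, built with
the new -fsanitize=address option (and ideally -g for readable output),
aborts at run time with a heap-use-after-free report pointing at the
offending line:

    /* Build with: gcc -fsanitize=address -g use-after-free.c */
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(10 * sizeof(*p));

        p[0] = 42;
        free(p);
        return p[0];    /* heap-use-after-free: ASan reports and aborts here */
    }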
Language support
The release
notes accompanying 4.8.0 highlight a number of improvements in C,
C++, and Fortran support. The C improvements are all of a diagnostic
nature, such as -Wsizeof-pointer-memaccess, which is a new
option to issue a warning when the length parameters passed to certain
string and memory functions are "suspicious"—namely when the
parameter uses sizeof foo in a situation where an explicit
length is more likely the intent. This option can also suggest
possible fixes.
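The pattern in question looks like the following made-up fragment, where
sizeof(dst) is the size of the pointer rather than of the buffer it points
to:

    #include <string.h>

    /* Intended to clear the whole buffer, but sizeof(dst) is the size of the
       pointer (4 or 8 bytes), not of the buffer it points to; GCC 4.8 warns
       and asks whether an explicit length was intended. */
    void clear(char *dst)
    {
        memset(dst, 0, sizeof(dst));
    }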
All diagnostic messages now include the offending source
line and place a caret (^) underneath the appropriate column, to
(hopefully) guide the eye right to the error in question. A
similarly debugging-friendly option that displays the macro expansion
stack in diagnostic messages (-ftrack-macro-expansion=2) is
now enabled by default. In addition, -pedantic has been
deprecated (in favor of -Wpedantic), and -Wshadow has
been fixed. -Wshadow now permits a common use case that certain
kernel developers have long complained was
erroneously flagged as invalid.
C++11 support has been improved, with the addition of the
thread_local keyword, C++11's attribute syntax, and
constructor inheritance. There is also a -std=c++1y flag which
allows developers to experiment with features proposed for the
next revision of the C++ standard (although at the moment GCC
only supports one proposed feature, return
type deduction for normal functions). The libstdc++ library now
provides improved experimental C++11 support as well, plus several improvements
to <random>.
Fortran fans have quite a bit to look forward to, including the
addition of the BACKTRACE
subroutine, support for expressing floating point numbers using "q" as
the exponential notation (e.g., 2.0q31), and Fortran 2003's unlimited
polymorphic variables, which allow dynamic typing. There are also
several new warning flags that can report (among other things) when
variables are not C interoperable, when a pointer may outlive its
target, and when an expression compares REAL and COMPLEX data for
equality or inequality.
However, GCC 4.8.0 will also introduce some potential compatibility
dangers: the ABI changes some internal names (for procedure pointers
and deferred-length character strings), and the version number of
module files (.mod) has been incremented. Recompiling any
modules should allow them to work with any code compiled using GCC
4.8.0.
Targets
Finally, GCC 4.8.0 will introduce quite a few improvements for the
various architecture targets supported. In ARM land, AArch64 support
is brand new, initially supporting just the Cortex-A53 and Cortex-A57
CPUs. The (separate) 32-bit ARM support has added initial support for
the AArch32 extensions in ARMv8. There is also improved support for
Cortex-A7 and Cortex-A15 processors, and initial support for the
Marvell PJ4 CPU. There are also improvements to auto-vectorization
and to the scheduler; the latter can now account for the number of
live registers available (potentially improving execution performance
for large functions).
In the x86 world, GCC gains support for the "Broadwell" processor
family from Intel, the "Steamroller" and "Jaguar" cores from AMD, as
well as several new Intel instruction sets. There are also two new
built-in functions; __builtin_cpu_is is designed to detect
the runtime CPU type, and __builtin_cpu_supports is designed to
detect if the CPU supports specified ISA features. GCC now supports
function
multiversioning for x86, in which one can create multiple versions
of a function—for example, with each one optimized for a
different class of processor.
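A minimal sketch of how the two built-ins might be used to select a code
path at run time (the do_avx_version() and do_generic_version() names are
invented for the example):

    #include <stdio.h>

    static void do_avx_version(void)     { puts("using the AVX code path"); }
    static void do_generic_version(void) { puts("using the generic code path"); }

    int main(void)
    {
        __builtin_cpu_init();    /* initialize the CPU-detection support */

        if (__builtin_cpu_is("corei7"))
            puts("running on an Intel Core i7");

        if (__builtin_cpu_supports("avx"))
            do_avx_version();
        else
            do_generic_version();
        return 0;
    }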
But the less popular architectures get their share of attention as
well; support has been added for several new MIPS chips (R4700,
Broadcom XLP, and MIPS 34kn), IBM's zEC12 processor for System z, and
the Renesas Electronics V850. There is a lengthy set of improvements
for the SuperH architecture, including multiple new instructions
and improved integer arithmetic. There are also improvements for several
existing architectures: optimized instruction scheduling for SPARC's
Niagara4, miscellaneous new features for PowerPC chips running AIX,
and several features targeting AVR microcontrollers.
Over the years, GCC has maintained a steady pace of new stable
releases, which is especially noteworthy when one stops to consider
how many languages and target architectures it now supports. In
recent years, the project has still managed to introduce interesting
new features, including the Graphite work, for example. There is
still a long list of to-dos, but 4.8.0 is poised to be yet another
dependable release with its share of improvements covering a wide variety
of processors and language features.
Comments (3 posted)
Brief items
Remember, there is such thing as false hope. And if ever there was an
example of false hope it is someone hoping for a decade old issue in
Bugzilla that has been passed by by thousands of other issues.
—
Rob Weir
No More Google Reader
from Google Operating System by Alex Chitu
via What's Hot in Google Reader
— Google Reader, spreading the news of its own demise, as noticed by
Richard Hartmann
Comments (1 posted)
Mozilla has announced the 1.0 release of Open Badges, an open framework for deploying verifiable digital recognition of achievements and awards. As the announcement explains, "With Open Badges, every badge has important data built in that links back to who issued it, how it was earned, and even the projects a user completed to earn it. Employers and others can dig into this rich data and see the full story of each user’s skills and achievements." Mozilla says there are more than 600 organizations using the Open Badges infrastructure, and they have issued more than 62,000 badges.
Comments (14 posted)
Version 2.4 of the MongoDB "NoSQL" database system has been
released.
Headline features include a new text search facility, spherical geometry
support, hash-based sharding, Kerberos authentication, and more; see
the release
notes for details.
Comments (none posted)
The first release of the Plasma Media Center has been
announced.
"
KDE's Plasma Media Center (PMC) is aimed towards a unified media
experience on PCs, Tablets, Netbooks, TVs and any other device that is
capable of running KDE. PMC can be used to view images, play music or watch
videos."
Comments (6 posted)
Yorba has announced the availability of version 0.3 of its open source email client Geary. There are numerous changes; the most significant is support for multiple email accounts, but there are updates to spam detection and message flagging, and the new release supports downloading mail in the background.
Comments (none posted)
Khaled Hosny announced that he has ported the XeTeX extension to TeX to use the HarfBuzz engine for OpenType layout, and has updated support for the Graphite engine as well. In keeping with XeTeX's longstanding habit of asymptotic version numbering, the new release is numbered 0.9999.0.
Comments (none posted)
Newsletters and articles
Comments (none posted)
David Rowe, creator of the Codec 2 speech codec (which we discussed briefly in our Linux.conf.au 2013 coverage) has published a three-part series analyzing the performance of Codec 2 and his open source digital radio application FreeDV (part 1, part 2, part 3). The series examines FreeDV and Codec 2 against other HF radio modes, both digital and analog, providing some insight into codec design. "My previous tests show the excitation bits (pitch, voicing, energy) are the most sensitive. The excitation bits affect the entire spectrum, unlike LSPs where a bit error introduces distortion to a localised part of the spectrum. So I dreamt up a new 1300 bit/s Codec 2 mode that has “less” sensitive bits."
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
The Django community
mourns
the passing of Malcolm Tredinnick. "
Malcolm was a long-time
contributor to Django, a model community member, a brilliant mind, and a
friend. His contributions to Django — and to many other open source
projects — are nearly impossible to enumerate. Many on the core Django team
had their first patches reviewed by him; his mentorship enriched us. His
consideration, patience, and dedication will always be an inspiration to
us."
Comments (8 posted)
Google has
announced
that it is accepting applications for mentoring organizations for the 2013
Google Summer of Code program. "
This year we are again encouraging
experienced Google Summer of Code mentoring organizations to refer newer,
smaller organizations they think could benefit from the program to
apply. We hope the referral program will again bring many more new
organizations to the Google Summer of Code program. Last year 47 new
organizations participated." The deadline is March 29.
Comments (none posted)
The Python Software Foundation has
reached a settlement in its trademark dispute with PO Box Hosting Limited trading as Veber in Europe. "
The issue centered around Veber's use of the Python name for their cloud hosting services and their application for a figurative trademark incorporating the word "Python". While the Foundation retains the trademark for Python within the United States, it did not have a filing within the European Union. According to the terms of the settlement, Veber has withdrawn its trademark filing and has agreed to support the Python Software Foundation's use of the term."
Comments (3 posted)
The Tor Project has
announced
the availability of its
2012
annual report. (PDF) "
Tor’s daily usage continues to increase in
size and diversity, bringing secure, global channels of communication and
privacy tools to journalists, law enforcement, governments,
human rights activists, business leaders, militaries, abuse
victims and average citizens concerned about online privacy."
(Thanks to Paul Wise)
Comments (3 posted)
Upcoming Events
The CentOS Dojo is a one day event on April 8, 2013 in Antwerp, Belgium.
"
[We] have tried to cover all the major conversation areas around
CentOS these days. Starting from provisioning, management, app deployments,
system and virtualisation tuning, virtual infrastructure and more."
Full Story (comments: none)
The Rocky Mountain IPv6 Task Force (RMv6TF) has announced the keynote
speakers for the North American IPv6 Summit, April 17-19 in Denver,
Colorado. Speakers include Latif Ladid, President, IPv6 Forum and Google
Vice President, and Chief Internet Evangelist Vint Cerf (via video).
"
This year's keynote covered by Ladid and Cerf explains why the big
shift to IPv6 Internet is on by default. "When a protocol is on by default,"
explains Ladid, "vendor readiness, network readiness, and service
enablement become critical. The issue now is can IPv6 be treated like
IPv4. The service providers with advanced deployment experiences have
discovered that IPv6 is a totally different networking paradigm.""
Full Story (comments: 33)
Applications are open for AdaCamp San Francisco. AdaCamp SF is an
unconference for supporters of women in open technology and culture held
June 8-9, 2013 in San Francisco, California. "
AdaCamp is invitation-only, but we encourage everyone interested in attending to apply. You do not have to write code, have a job in open tech/culture, be a certain age, or be anything other than a supporter of women in open tech/culture. We value diversity of all sorts: age, race, geographical locations, language, sexuality, gender identity, educational background, hobbies, spiritual or religious beliefs, and other areas."
Full Story (comments: none)
Events: March 21, 2013 to May 20, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| March 13-March 21 | PyCon 2013 | Santa Clara, CA, US |
| March 19-March 21 | FLOSS UK Large Installation Systems Administration | Newcastle-upon-Tyne, UK |
| March 20-March 22 | Open Source Think Tank | Calistoga, CA, USA |
| March 23 | Augsburger Linux-Infotag 2013 | Augsburg, Germany |
| March 23-March 24 | LibrePlanet 2013: Commit Change | Cambridge, MA, USA |
| March 25 | Ignite LocationTech Boston | Boston, MA, USA |
| March 30 | Emacsconf | London, UK |
| March 30 | NYC Open Tech Conference | Queens, NY, USA |
| April 1-April 5 | Scientific Software Engineering Conference | Boulder, CO, USA |
| April 4-April 5 | Distro Recipes | Paris, France |
| April 4-April 7 | OsmoDevCon 2013 | Berlin, Germany |
| April 6-April 7 | international Openmobility conference 2013 | Bratislava, Slovakia |
| April 8 | The CentOS Dojo 2013 | Antwerp, Belgium |
| April 8-April 9 | Write The Docs | Portland, OR, USA |
| April 10-April 13 | Libre Graphics Meeting | Madrid, Spain |
| April 10-April 13 | Evergreen ILS 2013 | Vancouver, Canada |
| April 14 | OpenShift Origin Community Day | Portland, OR, USA |
| April 15-April 17 | Open Networking Summit | Santa Clara, CA, USA |
| April 15-April 17 | LF Collaboration Summit | San Francisco, CA, USA |
| April 15-April 18 | OpenStack Summit | Portland, OR, USA |
| April 17-April 18 | Open Source Data Center Conference | Nuremberg, Germany |
| April 17-April 19 | IPv6 Summit | Denver, CO, USA |
| April 18-April 19 | Linux Storage, Filesystem and MM Summit | San Francisco, CA, USA |
| April 19 | Puppet Camp | Nürnberg, Germany |
| April 22-April 25 | Percona Live MySQL Conference and Expo | Santa Clara, CA, USA |
| April 26 | MySQL® & Cloud Database Solutions Day | Santa Clara, CA, USA |
| April 27-April 28 | LinuxFest Northwest | Bellingham, WA, USA |
| April 27-April 28 | WordCamp Melbourne 2013 | Melbourne, Australia |
| April 29-April 30 | Open Source Business Conference | San Francisco, CA, USA |
| April 29-April 30 | 2013 European LLVM Conference | Paris, France |
| May 1-May 3 | DConf 2013 | Menlo Park, CA, USA |
| May 9-May 12 | Linux Audio Conference 2013 | Graz, Austria |
| May 14-May 15 | LF Enterprise End User Summit | New York, NY, USA |
| May 14-May 17 | SambaXP 2013 | Göttingen, Germany |
| May 15-May 19 | DjangoCon Europe | Warsaw, Poland |
| May 16 | NLUUG Spring Conference 2013 | Maarssen, Netherlands |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol