By Jonathan Corbet
December 19, 2012
Your editor was strongly tempted to skip out on the task of writing the
2012 year-end retrospective article. After all, the process of going
through January's predictions and seeing how wrong they were is somewhat
painful and embarrassing. Even worse, according to reliable sources, the
world is
scheduled to end
just after this article is to be published, so it
hardly seems worth the effort. But a quick consideration of how your
editor's investment portfolio has performed under the guidance of similarly
reliable sources suggests that covering all the bases might be prudent;
it seems that some other people's
predictions are even worse than those found here.
So, with a heavy heart, the process of reviewing last January's predictions began. And, in
fact, the news wasn't all bad. The mobile patent wars did, indeed, get
worse, to the point that it has gotten difficult to market certain devices
in certain parts of the world. The fight for a free Internet continues,
and SOPA was turned back as predicted. Red Hat did indeed have a good
year. A number of the other, relatively obvious predictions also came
through reasonably well. There is value, it seems, in not going too far
out on a limb.
Another prediction read that we would see more focused competition between
distributors, with each seeking to differentiate itself more from the others. To an extent, that has certainly happened; witness all the work that has gone
into making Ubuntu different from the rest. Other times, though,
differences have not been seen as a selling point; Oracle's attempt to woo
CentOS users is a case in point. So this prediction was, at best,
only partially right. In the end, we are still far from having an
understanding of what makes a perfect Linux distribution, even when judged
through the narrow lens of commercial success.
At the beginning of the year, it appeared that the Linux Mint project had
taken on too many projects; given that it had several versions of its
distribution to support, along with two desktop forks, your editor
reasoned that a
"reckoning with reality" was in the works. At the end of the year, one
might conclude that progress has slowed in some areas, especially with the
MATE desktop. But, if a "reckoning with reality" has occurred, it must be
concluded that reality does not drive a particularly hard bargain. The
Linux Mint project appears to be vital and healthy.
Similarly, one might well conclude that the LibreOffice project is
broader-based and stronger than Apache OpenOffice, but the former has not
eclipsed the latter as predicted. The Apache project has managed to get
its organizational issues worked out and graduate from the Apache
incubator; clearly, it has more staying power than many of us
might have thought.
Perhaps the worst, most wishful-thinking-tinged prediction, though, was the
one that "the GNOME 3 wars will be long forgotten by the end of the year."
One need only have a look at the comment stream that appears on any
GNOME-related news to see that the wounds are still open and fresh.
Someday the community will accept GNOME as it is, but that did not come to
pass in 2012. Learning from experience, your editor is unlikely to predict
a calming of the waters around GNOME 3 (or systemd) in 2013.
So it appears that this year's predictions were, as usual, a mixed bag.
A true evaluation of a set of predictions is not complete, though, without
looking at what was not predicted. As is often the case, your
editor missed a few things that, in retrospect, should have been on the
list.
For example, the regime change in the GNU libc
project was well underway when the 2012 predictions were written. A
more attentive eye would have called attention to the increasingly
consensus-oriented way in which that project was being run. Making this
project more contributor-friendly without a fork was a major achievement
for everybody involved; one can only hope that this type of change will be
repeated in other projects where it is necessary.
Mandriva's decision to hand control of its
distribution to the community also makes sense in retrospect. Letting go
of a project and hoping for community help is, after all, often the
response of a company with
intractable financial problems, especially if the company is somewhat
community-oriented to begin with. It has been clear for a while that
Mandriva SA has not been able to pull together the resources to develop its
distribution properly; hoping that the community can do better is an
obvious response to that situation.
Two items that would have been easy to predict in general — but difficult
in the specifics — were the leap-second bug
and the
backdooring of Piwik.
It is well understood that infrequently tested code will develop bugs over
time; the leap second code had not been invoked in the real world since
2008. One could argue that somebody should have checked the code
for cobwebs as the 2012 leap second approached, but nobody foresaw the
problem.
Meanwhile, it has been a while since the addition of backdoors to
software distributions seemed to be a regular occurrence. But free
software projects, especially those producing net-facing software like
Piwik, will remain an attractive target for those who would like easy ways
into otherwise well-secured systems. We will see this kind of thing
happening again.
Finally, sometimes the most difficult-to-predict events can do the most to
strengthen one's faith in humanity. An interesting trial in Oracle's
software patent suit against Google was easy to foresee. But who would
have imagined that the judge would learn Java and implement some of the
claimed techniques on his own? We are far from fixing the patent system in
the US (or anywhere else, for that matter), but there are signs that
influential people are starting to figure out that there is a problem.
Of course, some things are just too routine to predict. Once upon a time, the idea of a community that could release six major kernels and an uncountable number of major releases of higher-level software in one year would have been dismissed as hopeless fantasy. Now such things go almost unnoticed. Our community is
strong, and free software continues relentlessly toward world domination.
As a whole, it has been another good year.
We at LWN would like to thank our readers who have supported us through yet
another year; it is worth noting that LWN will celebrate its 15th
anniversary in January. We never predicted that we would be doing this for so
long; it has been a good ride and we are far from done. Thanks to all of
you who make it possible for us to continue to write from the heart of the
Linux development community. On a personal note, your editor would like to
especially thank all of you who offered your support through an
exceptionally difficult year; you made a difference and it is much
appreciated.
By Michael Kerrisk
December 20, 2012
Copyright assignment is a topic that brings out strong passions in
the free software community, especially when the assignee is a corporate
entity. Assignment to a nonprofit entity such as the Free Software
Foundation (FSF) may eliminate some of the problems that can occur
when assigning copyright to a company. However, recent events in the
GnuTLS project are a reminder that copyright assignment can have a number
of downsides even when assigning to a nonprofit chartered with the goal
of protecting free software.
The problem with (corporate) copyright assignment
Copyright assignment tends to evoke polarized opinions in the free
software community. On the one hand, various companies have promoted copyright
assignment agreements as being in the best interests of free
software—Canonical is perhaps the
most prominent example in recent times. On the other hand, the community at
large seems more skeptical of the value of these agreements; Michael Meeks
is among those who put the counterarguments well. Project Harmony, an attempt to create a
standardized set of copyright licensing agreements, has met with a chilly reception from various quarters of the
community, and (so far) seems to have gained little traction in the free
software world.
A blog
post by Richard Stallman on the FSF web site highlights the most
significant of the risks of assigning copyright when contributing code to a
commercially owned free software project. One of the powers conferred by a
copyright assignment agreement is control of the choice of license of the
software: as the copyright owner, the assignee (alone) has the power to
change the license of the project. Commonly, corporate copyright assignment
agreements place no restriction on the choice of license that the assignee
may in the future choose, thus putting a great deal of power in the hands
of the assignee. As Richard notes:
[The company] could release purely proprietary modified or extended
versions including your code. It could even include your code only in
proprietary versions. Your contribution of code could turn out to be, in
effect, a donation to proprietary software.
On the other hand, the Free Software Foundation requires copyright
assignment for many of the projects hosted under the GNU umbrella. This is a choice made by a project's creators when they make it a GNU project. The main reason given
for this is that being the sole copyright holder puts the FSF in the best
position to enforce copyright in the event of a GPL violation.
Of course, being the sole copyright owner also gives the FSF the
ability to change the license on a GNU project. However, the
motivations of a company and the FSF differ substantially: whereas
a company is ultimately motivated by profit (and its motivations can change
with shifting financial circumstances and changes of company ownership), the
FSF is a non-profit charity chartered to further the interests of free
software. Thus, its copyright assignment agreement includes
an explicit promise that any future distribution of the work will be
under some form of free software license.
GnuTLS and the FSF
GnuTLS is "a secure
communications library implementing the SSL, TLS and DTLS
protocols". The project was founded in 2000, under the GNU umbrella,
by Nikos Mavrogiannopoulos. Over the life of the project, the other major
contributor has been Simon Josefsson.
That there was a problem in the project became unmistakable on December
10, when Nikos posted the following note
(entitled "gnutls is moving") to the gnutls-devel mailing list:
I'd like to announce that the development of gnutls is moving outside the
infrastructure of the GNU project. I no longer consider GnuTLS a GNU
project and future contributions are not required to be under the copyright
of FSF. The reason is a major disagreement with the FSF's decisions and
practices. We note however, that we _do_ support the ideas behind the FSF.
This elicited a rather blunt response
(entitled "GNUTLS is not going anywhere") from Richard Stallman:
Nikos, when you volunteered to maintain GNUTLS, the GNU Project
entrusted its development to you. Your contributions so far are
appreciated. However, the project GNUTLS does not belong to you.
If you want to stop doing this job, you can. If you want to develop a
fork of GNUTLS under another name, you can, since it is free software.
But you cannot take GNUTLS out of the GNU Project. You cannot
designate a non-GNU program as a replacement for a GNU package.
We will continue the development of GNUTLS.
Richard's response raises a number of interesting issues. The matter of
ownership of the project name is perhaps the simplest, and was
acknowledged by Nikos:
I pretty much regret transferring all rights to FSF, but it seems there is
nothing I can do to change that. If I receive a formal request
from FSF I'll change the name of gnutls and continue from there.
In the days since then, however, the name hasn't changed and there does
not seem to have been a formal (public) request to do so. One possible
reason for this might be found in a response to Richard's mail from Werner Koch
(maintainer and primary author of GnuPG and libgcrypt, both of which are
GNU projects):
Nikos started the development of GNUTLS under the GNU label on my
suggestion. He is the principal author of GNUTLS and gave away his
legal rights on that work without any compensation or help. The success
of GNUTLS is not due to the GNU project but due to Nikos' and Simon's
work […]
Claiming that the FSF has any moral rights on the name of that software
is absurd.
Indeed, of the somewhat more than 11,000 commits in the GnuTLS Git
repository, all but around 400 are by either Nikos or Simon. Simon has not
spoken up in the current mail thread, but he remains an active
contributor to the project.
Thus, while the FSF might have some legal claim on the project name
based on common law trademarks, such a claim is, morally speaking, less
clear. Furthermore, there are existing projects, such as gnuplot and Gnutella that riff on the
"GNU" name without being official
GNU projects; indeed, Gnutella does this despite an FSF request that
the name should be changed. Also noteworthy in this context is the fact
that the gnutls.org domain is registered to Nikos.
Nikos worked within the framework of the GNU project for 12 years, so his reasons for wanting to move out of the project must have been important ones. In response to a question
from Eli Zaretskii about his reasons, Nikos said:
The main issue is that I'm tired of pretending that I'm
participating to a project I am only allowed to contribute code (and not
even express a different opinion).
Nikos then went on to outline three criticisms of the FSF and GNU
projects. The first of these related to copyright assignment:
(a) I felt particularly frustrated when FSF (when gnutls started around
2000) was insisting the transfer of the copyright to it, even though I had
decided to transfer the copyright to FSFE (this is a very old issue but it
had great influence on me as I realized that the transfer of rights was not
simply for protection against copyright violations).
As Richard confirmed, assignment of
copyrights for all GNU projects (that employ assignment) is solely to the
US-based FSF, rather than to one of the regional sister organizations (located
in Europe, India, and South America). One can easily imagine a number of
reasons for this state of affairs. Given the FSF's desire to have a single
copyright holder, it makes sense to assign all copyrights in an individual
GNU project to a single entity, and for administrative and legal reasons it
is probably simpler to assign copyrights for all projects to the same
entity.
However, one can also imagine that when the primary developers of a
project reside outside the US—both Nikos and Simon are in
Europe—the requirement to assign to the US-based FSF, rather than FSF
Europe, is irksome. In addition, the FSF Europe tends to have a quieter,
less confrontational style of working than its US counterpart, which may
also have been a factor in Nikos's desire to assign copyright to the European
organization.
The other theme that came out in Nikos's criticisms was the problem of
feeling figuratively distanced from the parent project:
(b) The feeling of participation in the GNU project is very low, as even
expressing a different opinion in the internal mailing lists is hard if
not impossible.
(c) There is no process for decision making or transparency in GNU.
The only existing process I saw is "Stallman said so"…
The lack of openness was a theme echoed by Werner in reply to Eli's question:
I can't speak for Nikos, but there are pretty obvious reasons knowable
to all GNU maintainers. I don't know whether you, as GDB maintainer,
are subscribed and follow gnu-prog-discuss@gnu.org. We had a long
discussion a year ago about the way the GNU project is managed and first
of all about all of the secrecy involved there. The occasion was a
request to have at least an open archive of the g-p-d list, so that
non-GNU hackers would be able to learn about architectural discussions
pertaining to the GNU project.
The content of the discussion that Werner refers to is, of course,
unavailable, so it is difficult to gain further insight into why
discussions on the gnu-prog-discuss mailing list need to be
secret.
Clearly, at least a few GNU project maintainers are quite unhappy with
the current governance of the umbrella project. And when a maintainer of
twelve years' standing wants out of the GNU project, that suggests that
there are some serious governance problems.
Of course, this is hardly the first time that governance issues have
caused significant division in GNU projects. The GCC project is one of the
most notable cases, providing an example both in the late 1990s, with the
egcs fork of the GCC
compiler (where the fork ultimately supplanted
the original GCC project inside the GNU project), and more recently when questions on plugin
licensing led the FSF to pressure the GCC project to delay the GCC 4.4
release, to the disgruntlement of many GCC hackers.
The problems of assigning copyright to a nonprofit
The events in the GnuTLS project reveal a number of the problems of
copyright assignment that remain even when assigning to a nonprofit such as
the FSF.
The first of these problems has already been shown above: who owns the
project? The GnuTLS project was initiated in good faith by Nikos as a GNU
project. Over the lifetime of the project, the vast majority of the code
contributed to the project has been written by two individuals, both of whom
(presumably) now want to leave the GNU project. If the project had been
independently developed, then clearly Nikos and Simon would be considered
to own the project code and name. However, in assigning copyright to the
FSF, they have given up the rights of owners.
The mailing list thread also revealed another loss that developers
suffer when signing a copyright assignment agreement. As noted above, the
ability of the FSF—as the sole copyright holder—to sue license
violators is touted as one of the major advantages of copyright
assignment. However, what if, for one reason or another, the FSF chooses
not to exercise its rights? Juho Vähä-Herttua raised this point in the mail thread:
As a side note, I find Werner's accusations, as written on his blog, of FSF
not defending its rights in case of GnuPG copyright violations very
serious. When a copyright holder transfers their rights to FSF they also
transfer their rights to defend against copyright violations.
The blog
post that Juho refers to questions a number of assumptions around FSF
copyright assignment. In the post, Werner states:
My experience with GnuPG and Libgcrypt seems to show that the FSF does not
care too much about [copyright violations]. For example, at least two
companies in Germany sold crypto mail gateways with OpenPGP support
provided by GnuPG; they did not release the source code or tell the
customers about their rights. The FSF didn't act upon my requests to stop
them violating their (assigned) copyright on GnuPG.
Once a developer assigns copyright, they are at the mercy of the
assignee to enforce the copyright. In this particular case, one can speculate that the failure to pursue the violation was due to a shortage of human resources. As Richard noted,
"We have staff for GPL enforcement, […] but there are so many
violations that they can't take action on all."
But that very response throws into question the wisdom of assigning a
large number of copyrights to a resource-starved central organization. A
more distributed approach to dealing with copyright violations would seem
more sensible. And indeed, organizations such as gpl-violations.org and the Software Freedom Conservancy have
shown that GPL violations can be successfully fought without being the
sole copyright holder in a work of code. By now, the argument that
copyright assignment is necessary to successfully enforce free software
licenses is rather weak.
Werner outlines a few other problems with FSF copyright assignment. One
of these is the seemingly arbitrary nature of copyright assignment across
GNU projects. He points out that there are two cryptographic libraries that
are part of the GNU project, one of which (libgcrypt) requires copyright
assignment while the other (Nettle) does not. The seeming arbitrariness in
this example was further emphasized by the fact that GnuTLS (which, as we
already saw, requires copyright assignment) switched from using libgcrypt to
using Nettle.
The rationale that copyright assignment is necessary to allow
relicensing also strikes Werner as dubious. He considers the two likely
scenarios for relicensing GPLed software. One of these is relicensing to a
later version of the GPL. This scenario is in most cases already covered by
the default "or later" language that is usually applied to software licensed
under the GPL. Although there are projects that deliberately exclude the
"or later" clause when applying the GPL—most notably the Linux
kernel—it's likely that few or no GNU projects exclude that
clause. Projects that exclude the "or later" language of the GPL are likely
also to avoid using copyright assignment.
The other likely scenario for relicensing a GPL project is to relax the
license constraints—for example, switching from GPL to LGPL, so as to
allow interoperability with software that is not under a GPL-compatible
license. Such relicensing can be performed even when the project lacks a
copyright assignment (as was recently done
for portions of the VLC code base). However, this requires a formidable
effort to obtain permissions from all contributors. But, Werner
points out, the FSF has in practice rarely been interested in
relaxing licensing constraints in this way.
In summary, using copyright assignment as a tool to allow relicensing
seems to serve little practical use for the FSF, and comes at the cost of
removing the contributor's freedom to relicense their code.
Werner's blog post highlighted one more problem with copyright
assignment—a problem that occurs with assignment both to companies
and to nonprofits. The requirement to sign a copyright assignment agreement
imposes a barrier on participation. Some individuals and companies simply
won't bother with doing the paperwork. Others may have no problem
contributing code under a free software license, but they (or their
lawyers) balk at giving away all rights in the code. In general, those
contributors just silently fail to appear. By chance, a discussion on
copyright assignment is currently taking place in the Gentoo project, where
Greg Kroah-Hartman, a long-standing contributor to the project, commented on this point:
On a personal note, if any copyright assignment was in place, I would
never have been able to become a Gentoo developer, and if it were to be
put into place, I do not think that I would be allowed to continue to be
one. I'm sure lots of other current developers are in this same
situation, so please keep that in mind when reviewing this process.
To illustrate his point, Werner
related his recent experience with the libgcrypt project. Concluding that
copyright assignment served little practical use, he relaxed that
requirement for libgcrypt: starting in April of this year, he permitted
contributions accompanied by an emailed kernel-style developer
certificate of origin. The result was a noticeable increase in patches
sent in to the project.
Concluding remarks
The risks of assigning copyright to corporate entities have in the past
been well publicized. Assigning copyright to a nonprofit eliminates the
most egregious of those risks, but carries its own burdens and risks, which
have not necessarily been so well publicized. One of those burdens is the
necessity of buying into an associated governance model, one that may or
may not work well for project developers. The FSF governance model is, it
seems, not working well for a number of GNU projects. Developers should
consider (in advance) the questions of project ownership that are bound to
arise if the governance model does not work for them and they want to
separate from the governing project.
In addition, the costs of assigning copyright to a nonprofit should be
balanced against the (supposed) benefits. The assertion that assigning
copyright to a single entity improves the chances of successful prosecution of copyright violations looks dubious when one considers that the assignee may not be well enough resourced to prosecute every violation. To that, add the facts that the assigner gives up the ability to prosecute copyright violations themselves and that copyright violations have been successfully prosecuted even when there is no single copyright holder. Finally, the value of copyright assignment as a
tool that permits the FSF to relicense code seems rather limited in
practice. In summary, the arguments for copyright assignment start to look
decidedly weak, even when the assignee is a nonprofit such as the FSF, and
it is hard to find any justification for the FSF maintaining this
requirement.
(Thanks to Carlos Alberto Lopez Perez for the heads-up.)
By Michael Kerrisk
December 20, 2012
Here is LWN's fifteenth annual timeline of significant events in the
Linux and free software world. We have broken the timeline up into
quarters, and this is our report on October-December 2012 (updated on
December 31). Eventually, the
quarterly timelines will be stitched together to create a timeline for the
year as a whole, but in the meantime, you can find the other quarterly
articles here:
This is version 0.8 of the 2012 timeline. There are almost certainly
some errors or omissions; if you find any, please send them to timeline@lwn.net.
LWN subscribers have paid for the development of this timeline, along
with previous timelines and the weekly editions. If you like what you see
here, or elsewhere on the site, please consider subscribing to LWN.
If you'd like to look further back in time, our timeline index page has links to the
previous timelines and some other retrospective articles going all the way
back to 1998.
That is not how open source works, you need to do 90% of
the work upfront, people only join when you have something useful.
-- Miguel
de Icaza
Samsung releases the F2FS filesystem (blurb and article).
KDE releases a manifesto (LWN blurb).
HTTPS Everywhere 3.0 is released (announcement).
Systemtap 2.0 is released (announcement).
The first Korea Linux Forum is held in Seoul, October 11-12 (LWN report).
When I was on Plan 9, everything was connected and
uniform. Now everything isn't connected, just connected to the cloud, which
isn't the same thing. And uniform? Far from it, except in mediocrity. This
is 2012 and we're still stitching together little microcomputers with HTTPS
and ssh and calling it revolutionary. I sorely miss the unified system view
of the world we had at Bell Labs, and the way things are going that seems
unlikely to come back any time soon.
-- Rob Pike
Canonical provides users with a mechanism to directly fund development
of Ubuntu (LWN article).
NetBSD 6.0 is released (announcement).
The Whonix distribution makes an alpha release (LWN article).
The Privacyfix browser plugin is released (LWN article).
Plasma Active Three is released (LWN blurb).
The 2012 Realtime Minisummit is held in Chapel Hill, North Carolina,
in conjunction with the 14th Real Time Linux Workshop, October 18-20
(LWN minisummit coverage; LWN coverage of
workshop sessions: Modeling systems with Alloy; Realtime Linux for aircraft).
An ext4 data corruption bug receives wide media coverage, but in
practice is rather difficult to trigger (LWN blurb and article).
The Debian technical committee renders a judgement regarding
long-standing difficulties between the maintainers of various Debian Python
packages (LWN article).
Ubuntu 12.10 (Quantal Quetzal) is released (announcement).
Apache OpenOffice graduates from the Apache Incubator (announcement).
If you want to pick a fight, insult a designer by asking
why we don’t “just learn to code.”
-- Crystal
Beasley
Git 1.8.0 is released (announcement).
Wayland and Weston 1.0 are released (announcement).
Arduino 1.5 is released (announcement).
Yocto 1.3 "danny" is released (announcement).
The Linaro Enterprise group is formed (announcement).
The openSUSE project releases openSUSE 12.2 for ARM (announcement).
Put another way, having the career of the beloved CIA
Director and the commanding general in Afghanistan instantly destroyed due
to highly invasive and unwarranted electronic surveillance is almost enough
to make one believe not only that there is a god, but that he is an ardent
civil libertarian.
-- Glenn
Greenwald, commenting on the process leading to the fall of CIA
Director David Petraeus
The Fedora project announces an alpha release of Fedora 18 that
supports ARM; the Fedora ARM developers hope
to make F18 the first release that supports ARM as a primary architecture (announcement).
OpenBSD 5.2 is released (LWN blurb and article on some challenges that OpenBSD and the other BSDs face in trying to keep pace with Linux).
Asterisk 11 is released (LWN blurb).
The release date of Fedora 18 slips significantly, from the
originally expected November to January (announcement, LWN article).
LinuxCon Europe is held in Barcelona, Spain, November 5-9 (LWN
coverage: Challenges for Linux networking;
Systemd two years on; The failure of operating systems and how we can
fix it; All watched over by machines of
loving grace; Realtime, present and
future; Checkpoint/restore in user space:
are we there yet?; Don't play dice with
random numbers).
The GNOME project announces that fallback mode will be dropped in the
upcoming 3.8 release (LWN blurb; a
short time later, the project announces
plans for a "classic" mode).
Our patent system is the envy of the world.
-- David
Kappos, head of the United States Patent and Trademark Office
Android 4.2 is released (LWN article).
The VLC project completes the relicensing of much of its code from GPL to LGPL (LWN article).
The Portuguese government adopts ODF (LWN blurb).
A backdoor is inserted into the Piwik web analytics package; the problem is quickly fixed and disclosed (LWN blurb).
Linux Mint 14 is released (announcement).
Upstart 1.6 is released (LWN blurb).
The CyanogenMod project starts releasing stable builds of
CyanogenMod 10 (LWN article on running this version on the Nexus 7
tablet).
Wikipedia rolls out an HTML5 video player (LWN article).
Ubuntu makes a distribution for the Nexus 7 (LWN article).
Darktable 1.1 is released (LWN article).
So next time you're not happy about something: just prefix your criticism
with "I think". You may be surprised what difference it makes to the
conversation.
Oh, two other magic words: "for me". Compare "This workflow is
completely broken" vs "This workflow is completely broken for me". Amazing
what difference those two words make...
-- Peter
Hutterer
The Google Summer of Code Doc Camp 2012 takes place in Mountain
View, California, December 3-7 (LWN coverage: Documentation unconference; Book sprints).
NetBSD 5.2 is released (announcement).
The MariaDB Foundation is formed (announcement).
Ekiga 4.0 is released (announcement,
LWN article).
The first "shim" UEFI secure bootloader is released (announcement).
Linux 3.7 is released (announcement; KernelNewbies summary; LWN
merge window summaries: part 1, part 2, and part
3; LWN development statistics article).
I once scoffed at the idea that anyone would write in
COBOL anymore, as if the average COBOL programmer was some sort of
second-class technology citizen. COBOL programmers in 1991, and even today,
are surely good programmers — doing useful things for their jobs. The same
is true of Perl these days: maybe Perl is finally getting a bit old
fashioned — but there are good developers, still doing useful things with
Perl. Perl is becoming Free Software's COBOL: an aging language that still
has value.
Perl turns 25 years old today. COBOL was 25 years old in 1984, right at
the time when I first started programming. To those young people who start
programming today: I hope you'll learn from my mistake. Don't scoff at the
Perl programmers. 25 years from now, you may regret scoffing at them as
much as I regret scoffing at the COBOL developers. Programmers are
programmers; don't judge them because you don't like their favorite
language.
-- Bradley
Kuhn
Firefox OS Simulator 1.0 is released (LWN article).
SparkleShare 1.0 is released (LWN blurb).
Bison 2.7 is released (announcement).
Richard Stallman criticizes desktop searching in Ubuntu Unity,
which relays search terms to Canonical servers (LWN blurb and article).
Samba 4.0 is released (announcement and earlier article).
A number of Samsung Android phones are revealed to have a
significant security hole, a device file that gives write access to all
physical memory on the phone (LWN blurb).
Eudev, a project to create a Gentoo-based fork of udev,
is launched (announcement, LWN
article).
Qt 5.0 is released (LWN blurb).
PulseAudio 3.0 is released
(announcement).
The status.net service is phased out, and replaced by pump.io
(LWN blurb).
Gnumeric 1.12 is released (announcement).
The Perl programming language turns 25 this month
(timeline
from Perl Foundation News).
So after I released 3.7-nohz1, I shut down the light then sat down
in front of an empty wall in my flat and waited in the darkness with
a black tea for december 21th's apocalypse.
But then after a few days, I've been thinking I should have taken a
second cup of tea with me.
So I eventually got up and turned the light on. Then I booted my
computer and started working on that new release of the full dynticks
patchset.
-- Frederic Weisbecker learns that the apocalypse is not nigh
A hash-based DoS attack on Btrfs is disclosed (LWN blurb).
LLVM 3.2 is released (announcement, release notes).
Discontent in the GNU project becomes evident as the GnuTLS
maintainer moves the project outside GNU and the GNU sed maintainer
resigns (sed maintainer resignation note, LWN article on events in the GnuTLS project).
Awesome 3.5 is released (LWN blurb
and earlier article on this window manager).
Enlightenment 0.17 is released (announcement and earlier LWN article on this window manager).
The GNU C library (glibc) version 2.17 is released (announcement).
FreeBSD 9.1 is released (announcement,
release
notes).
BlueZ 5.0 is released (announcement, LWN article).
GNU Automake 1.13 is released (announcement).
This is the last LWN Weekly Edition for 2012; following our longstanding
tradition, we will be taking a break during the last week of the year to
rest, be with our families, and recover from too much good food and wine.
We wish the best holidays to all of our readers, and a happy new year as
well. The Weekly Edition will return on January 4.
Security
By Nathan Willis
December 19, 2012
As Fedora 18 nears its final release, some project members are
concerned that churn in the update tools has placed users in a risky
position by eliminating secure paths to update an existing
installation from Fedora 17. Although it looks like workarounds will
be available, the resulting choice will pit convenience against
security, which is rarely a trade-off that ends up in security's
favor. But as the whole debate nicely illustrates, when it comes to
verifying the authenticity of software acquired over the network, no
matter how many steps are involved, eventually the chain of trust must
start somewhere — because users always make a leap of trust when first
clicking on the download option.
The root of the issue is FedUp, the brand-new
updater currently being developed for deployment with F18, which does
not check the cryptographic signatures of the RPM packages it downloads
over the network (FedUp does, however, verify the integrity of the initramfs and vmlinuz files that it downloads at the start of an upgrade by checking their SHA256 checksums). A bug
highlighting that deficiency was filed in November, noting that
fetching and installing unverified binaries over the network leaves
the system vulnerable to man-in-the-middle attacks and to trojaned
packages on compromised mirrors. Fedora's RPM packages are
signed, and various package installer tools do verify the signatures; it
is only the FedUp release-upgrading tool that does not check the signatures
before installing them.
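The integrity check that FedUp does perform is easy to express. Here is a minimal Python sketch of that kind of SHA256 verification (the file name and expected digest are hypothetical placeholders); note that a checksum fetched over the same unauthenticated channel as the file itself protects against accidental corruption, not against a man in the middle, which is exactly the distinction at issue here:

    import hashlib

    def sha256_of(path, bufsize=1 << 20):
        # Hash the file in chunks to avoid reading it all into memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    # Digest as published in the repository's .treeinfo file (placeholder value).
    expected = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
    if sha256_of("vmlinuz") != expected:
        raise SystemExit("checksum mismatch: refusing to proceed")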
The Fedora Engineering Steering Committee (FESCo) discussed the
subject in its December 5 meeting
and decided that the absence of network verification would not block
the release of F18. Rahul Sundaram raised the importance of the issue
on the fedora-devel list a week and a half later, arguing that the
lack of a secure method to update from F17 to F18 was irresponsible.
Part of Sundaram's initial concern was that the still-unfinished FedUp
did not yet support updating from an optical disc, which would provide
some measure of protection against attack because Fedora's
downloadable ISO images are provided with checksums. That concern
turned out to be partly unfounded; as Red Hat's Tim Flink explained in a
different thread, FedUp does support extracting packages from
optical media — it just does not support booting from an
optical disc to upgrade an existing installation, or pulling packages
from an ISO file.
Upgrades and release blockers
But the optical disc upgrade option is not the default behavior for FedUp;
the user must know to employ it with the --device switch.
FedUp's default is still to fetch packages from a remote repository. At the
moment, FedUp is also fetching these pre-release packages over an
unencrypted HTTP connection, although that could change when the final
F18 is released. One can always argue that users in the know will
seek out the secure means to update their machines, but it is far less
risky to make the default behavior secure. After all, "users in the
know" are not the ones most in need of protection. On that point, Sundaram
noted in his original email that FedUp's lack of package verification
is not a regression, since the update tool that it replaces, PreUpgrade,
did not check package signatures either. Neither does Anaconda, the
Fedora installer.
The only release-upgrade path that has ever verified remote repository package signatures is upgrading via the yum package manager,
which is an unsupported option that non-advanced users are steered
away from — and with good reason. Yum is a lower-level package
installation utility; it can
be used to manually upgrade the entire system to a new release, but
the procedure is not recommended because a release upgrade involves
more than simple package installation. Dedicated upgrade tools
like PreUpgrade and FedUp are designed to handle large or complicated
system changes between releases, such as the merging of
/bin and other directories into the /usr hierarchy.
Some of these changes require a reboot, which PreUpgrade and FedUp are
built to manage, but yum is not.
Sundaram opened a ticket asking FESCo to classify the FedUp signature-verification issue as a blocker
— although he later closed it after determining that the
local-media option does offer a secure upgrade path for those that
seek it out. But the issue is also tangled up with the question of
whether or not FedUp is ready for release on other grounds. It still
lacks a GUI front-end, logging, and progress indication. The GUI in
particular is a limitation that some see as critical in its own right
(Sundaram called it "a severe regression").
The fedora of trust
FESCo decided to defer the FedUp GUI feature until F19, and the
question of providing any secure way to upgrade an F17
machine to F18 appears, for the moment, to be settled. The deeper
question of making such a secure upgrade the default, however, remains
unanswered.
Fedora's distribution upgrade tools have never baked signature
verification into the upgrade process, and the project knows it. The
discussion dates all the way back to the infamous bug 998,
first opened in 1999 and filled with mysterious references to
forgotten technologies like floppy disks (hint for younger readers:
imagine a really thin SSD). But securely checking package signatures
requires a secure method for the user to acquire the right public key
(with which to check said signatures) in the first place. Even yum,
which was raised during the FedUp discussion as the secure upgrade
option, needs the user to manually find and import the correct GPG key
for the new release. That is a weak link even in the manual case, as
Fedora QA team member Adam Williamson observed:
If you're doing things Properly, you should somehow verify you're
importing the correct key and not just blindly typing what a wiki page
tells you to, but of course what most people do is blindly type what
the wiki page tells them to...
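One way to do things "Properly" is to pin the expected key fingerprint and check it before handing the key to RPM. A minimal sketch in Python, assuming gpg's colon-delimited listing output; the key file name and pinned fingerprint are hypothetical:

    import subprocess

    KEYFILE = "RPM-GPG-KEY-fedora-18-primary"            # hypothetical key file
    PINNED = "0123456789ABCDEF0123456789ABCDEF01234567"  # hypothetical fingerprint

    # List the key file; "fpr" records carry the fingerprint in field 10.
    out = subprocess.check_output(
        ["gpg", "--with-colons", "--with-fingerprint", KEYFILE])
    fprs = [line.split(":")[9] for line in out.decode().splitlines()
            if line.startswith("fpr:")]
    if PINNED not in fprs:
        raise SystemExit("key fingerprint does not match the pinned value")
    subprocess.check_call(["rpm", "--import", KEYFILE])

Of course, this just moves the problem: the pinned fingerprint itself must come from somewhere trustworthy.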
Finding a way for the Fedora N release upgrade tool to
securely fetch and install the correct key for release N+1
automatically is a tricky problem indeed. For example, Will Woods
proposed a plan
to add GPG signatures for the initramfs and vmlinuz
files FedUp fetches at the beginning of the upgrade. As mentioned
above, currently only a SHA256 checksum is used to verify the
integrity of these files — courtesy of checksums found in the
repository's .treeinfo file. Woods proposed adding GPG
signatures to the repository at .treeinfo.signed, but noted
that there still needs to be a way to get the F18 key onto F17
systems, and some way for the user to decide if the F18 key is trustworthy.
If we consider the contents of /etc/pki/rpm-gpg trusted, the F18
key(s) could be packaged for F17. The package would be signed with the
F17 key, which sort of establishes a chain of trust from F17 to F18.
This won't work for completely offline systems, though. But if the F18
public key *itself* was signed with the F17 key, the F18 key could be
included on the install media, and tools could use that to get the F18
key into the F17 keyring.
Another option is to sign .treeinfo with both the F17 and F18 keys,
but this would require using a detached signature and tweaking things
to accept the file if *any* signature can be verified.
And so on. There's a lot of options. This is a distro-wide policy
decision about establishing trust between versions of Fedora, so I'm
guessing it's going to require a bunch of meetings and discussions and
voting.
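Woods's "accept the file if *any* signature can be verified" variant is simple to express. Here is a sketch that shells out to gpgv with one keyring per release; the keyring and file names are hypothetical:

    import subprocess

    def verified(sig, data, keyrings):
        # Accept the detached signature if any of the keyrings can verify it.
        for keyring in keyrings:
            try:
                subprocess.check_call(["gpgv", "--keyring", keyring, sig, data])
                return True
            except subprocess.CalledProcessError:
                continue
        return False

    ok = verified(".treeinfo.signed", ".treeinfo",
                  ["./fedora-17.gpg", "./fedora-18.gpg"])  # hypothetical keyrings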
Establishing a chain of trust can be a bit of a chicken-and-egg
problem: eventually one has to place one's trust in something
on which the subsequent links build. Till Maas suggested to Woods that the F18 key could be securely fetched from a well-known location on the Fedora project's site over HTTPS. Of course, that relies
on the security of HTTPS and the DNS system, which relies on the
security of the certificate authorities, and so on.
For most mortals, however, some security — however limited
— is still better than no security, which is what FedUp's
default usage provides now. As recently as the end of November, users
were still writing to the
list confused about how best to upgrade their machines.
There are clearly many more discussions ahead; even if Fedora establishes a workable plan for its main releases, the handling of downstream distributions (including Fedora "spins") and customized ISOs remains problematic. Whatever solution develops, one thing is for
certain: it will not arrive until the F19 release cycle at the
earliest.
Brief items
In short, given everything known today about the possible potential of
quantum computers, it is already possible to do all the sorts of things
we do with cryptography today in a way that is secure against future
adversaries with quantum computers. Unfortunately, "Quantum Computing
Not Really A Big Deal For Security" doesn't make for a very good
magazine article.
--
Matt Mackall
Overall, we've been doing a pretty good job at teaching US-based law
enforcement about Tor. At the end of the conference, one of the FBI agents
took me aside and asked "surely you have *some* sort of way of tracking
your users?" When I pointed at various of his FBI colleagues in the room
who had told me they use Tor every day for their work, and asked if he'd be
comfortable if we had a way of tracing *them*, I think he got it.
--
Roger Dingledine
The whole idea that we're now allowing countries with horrid human rights records, and with little to no experience in supporting innovation-enabling technologies, to control direction of these discussions suggests that the entire ITU process is broken beyond belief.
--
Mike Masnick
Governments around the world continue to eye the Internet and the open communications it fosters to be primarily a threat, with its technology ripe for surveillance, and its users to be controlled, censored, flogged, imprisoned, and even worse. The ITU's newfound fetish for DPI -- Deep Packet Inspection -- makes the wet dreams of tyrants and others in this sphere all the more explicit.
These dynamics are continuing going forward. The risks of Internet censorship, fragmentation, and other severe damage to the Internet we've worked so hard to build will continue to be exacerbated, despite our holding the ITU pretty much at bay this time around.
--
Lauren Weinstein
Pascal Junod has
disclosed
a pair of denial-of-service attacks against the Btrfs filesystem based on
hash collisions. "
I have created several files with random names in
a directory (around 500). The time required to remove them is
negligible. Then, I have created the same number of files, but giving them
only 55 different crc32c values. The time required to remove them is so
large that I was not able to figure it out and killed the process after 220
minutes (!)." This is a local attack only, but administrators of
Btrfs-using sites with untrusted users may want to pay attention.
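The underlying weakness is that crc32c is a 32-bit, non-cryptographic checksum with no collision resistance: the birthday paradox produces collisions among random names after roughly 2^16 tries, and an attacker can search for as many same-valued names as desired. A small Python illustration of the principle follows; it hashes the bare name with a standard crc32c, while Btrfs's actual name hash seeds the CRC differently, so this demonstrates the concept rather than reproducing the filesystem's exact values:

    import itertools, os

    # Table-driven CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
    POLY = 0x82F63B78
    TABLE = []
    for i in range(256):
        c = i
        for _ in range(8):
            c = (c >> 1) ^ POLY if c & 1 else c >> 1
        TABLE.append(c)

    def crc32c(data):
        crc = 0xFFFFFFFF
        for b in data:
            crc = TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
        return crc ^ 0xFFFFFFFF

    # Generate random names until two of them hash to the same value.
    seen = {}
    for n in itertools.count(1):
        name = os.urandom(8).hex()
        h = crc32c(name.encode())
        if h in seen:
            print("collision after %d names: %s %s -> %08x"
                  % (n, seen[h], name, h))
            break
        seen[h] = name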
Here's
a
report on the xda-developers site stating that Samsung Android phones
have an interesting feature added to the kernel: a
/dev/exynos-mem
device, world-writable, that gives access to all physical memory on the handset.
"
The good news is we can easily obtain root on these devices and the
bad is there is no control over it." Owners of such phones might
want to be especially careful about which software they install for a
little while.
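Checking whether a given device is affected is straightforward; a minimal sketch (the device node name comes from the xda-developers report):

    import os, stat

    DEV = "/dev/exynos-mem"
    try:
        st = os.stat(DEV)
    except OSError:
        print("%s not present; this device is likely unaffected" % DEV)
    else:
        if st.st_mode & stat.S_IWOTH:
            print("%s is world-writable: physical memory is exposed" % DEV)
        else:
            print("%s exists but is not world-writable" % DEV)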
Version 1.4 of the Suricata intrusion detection/prevention system is
available. "
The biggest new features of this release are the Unix Socket support, IP
Reputation support and the addition of the Luajit keyword. Each of these
new features are still in active development, and should be approached
with some care." There's a lot of other new features and a number
of performance improvements as well.
New vulnerabilities
apport: AppArmor policy is too lenient
Package(s): apport
CVE #(s): (none)
Created: December 18, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:
Dan Rosenberg discovered that an application running under an AppArmor
profile that allowed unconfined execution of apport-bug could escape
confinement by calling apport-bug with a crafted environment. While not a
vulnerability in apport itself, this update mitigates the issue by
sanitizing certain variables in the apport-bug shell script.
apt: information disclosure
Package(s): apt
CVE #(s): CVE-2012-0961
Created: December 19, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:
It was discovered that APT set inappropriate permissions on the term.log
file. A local attacker could use this flaw to possibly obtain sensitive
information.
aptdaemon: man-in-the-middle attack
Package(s): aptdaemon
CVE #(s): CVE-2012-0962
Created: December 17, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:
It was discovered that Aptdaemon incorrectly validated PPA GPG keys when
importing from a keyserver. If a remote attacker were able to perform a
man-in-the-middle attack, this flaw could be exploited to install altered
package repository GPG keys.
drupal6-ctools: cross-site scripting
Package(s): drupal6-ctools
CVE #(s): CVE-2012-5559
Created: December 19, 2012    Updated: December 19, 2012
Description: From the Red Hat bugzilla entry:
The Chaos tool suite is primarily a set of APIs and tools to improve the
developer experience. The page manager node view task does not sufficiently
escape node titles when setting the page title, allowing XSS. This
vulnerability is partially [mitigated] by the node task being disabled by
default and limited to users that have the ability to submit or edit nodes.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2012-5517
Created: December 19, 2012    Updated: December 20, 2012
Description: From the Red Hat advisory:
A NULL pointer dereference flaw was found in the way a new node's hot
added memory was propagated to other nodes' zonelists. By utilizing this
newly added memory from one of the remaining nodes, a local, unprivileged
user could use this flaw to cause a denial of service.
librdmacm: bogus address resolution
Package(s): librdmacm
CVE #(s): CVE-2012-4516
Created: December 17, 2012    Updated: December 19, 2012
Description: From the Red Hat bugzilla:
A security flaw was found in the way librdmacm, a userspace RDMA
Communication Managment API allowing to specify connections using TCP/IP
addresses even though it opens RDMA specific connections, performed binding
to the underlying ib_acm service (librdmacm used default port value of 6125
to bind to ib_acm service). An attacker able to run a rogue ib_acm service
could use this flaw to make librdmacm applications to use potentially bogus
address resolution information.
mate-settings-daemon: insecure timezones
Package(s): mate-settings-daemon
CVE #(s): CVE-2012-5560
Created: December 17, 2012    Updated: March 4, 2013
Description: From the Red Hat bugzilla:
mate-settings-daemon's datetime mechanism provides a D-Bus method to set
the timezone, which is guarded by polkit's action
org.mate.settingsdaemon.datetimemechanism.settimezone; this has the default
policy "auth_self_keep", which allows any local user to perform the
operation with only knowing their own password.
This seems not to be currently exposed in the mate UI, but it is available
through manual D-Bus calls, e.g.
> dbus-send --system --print-reply --type=method_call --dest=org.mate.SettingsDaemon.DateTimeMechanism / org.mate.SettingsDaemon.DateTimeMechanism.SetTimezone string:/usr/share/zoneinfo/Cuba
Because the time zone setting is a global resource, it should be restricted
to system administrators (== root or users in the "wheel" group), by having
a policy auth_admin_*. That's also what the other timezone setting
mechanisms (in systemd and control-center) do.
nova: information disclosure
Package(s): nova
CVE #(s): CVE-2012-5625
Created: December 19, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:
Eric Windisch discovered that Nova did not properly clear LVM-backed images
before they were reallocated which could potentially lead to an information
leak. This issue only affected setups using libvirt LVM-backed instances.
pki-core: cross-site scripting
Package(s): pki-core
CVE #(s): CVE-2012-4543
Created: December 17, 2012    Updated: March 11, 2013
Description: From the Red Hat bugzilla:
Multiple cross-site scripting (XSS) flaws were found in the way:
1) 'displayCRL' script of Certificate System sanitized content of
'pageStart' and 'pageSize' variables provided in the query string,
2) 'profileProcess' script of Certificate System sanitized content of
'nonce' variable provided in the query string.
A remote attacker could provide a specially-crafted web page that, when
visited by an unsuspecting Certificate System user would lead to arbitrary
HTML or web script execution in the context of Certificate System user
session.
qt: information disclosure
Package(s): qt
CVE #(s): CVE-2012-5624
Created: December 19, 2012    Updated: January 23, 2013
Description: From the Red Hat bugzilla entry:
An information disclosure flaw was found in the way XMLHttpRequest object
implementation in Qt, a software toolkit for developing applications,
performed management of certain HTTP responses. Previous implementation
allowed redirection from HTTP protocol to file schemas. Also the
redirection handling was performed automatically by QML application and
could not be disabled. A remote attacker could use this flaw to cause QML
application in an unauthorized way to read local file content by causing
the HTTP response for the application to be a redirect to a file: URL
(file scheme).
squashfs-tools: two code execution flaws
Package(s): squashfs-tools
CVE #(s): CVE-2012-4024, CVE-2012-4025
Created: December 19, 2012    Updated: January 7, 2013
Description: From the Red Hat bugzilla entries [1, 2]:
CVE-2012-4024: Stack-based buffer overflow in the get_component function in
unsquashfs.c in unsquashfs in Squashfs 4.2 and earlier allows remote
attackers to execute arbitrary code via a crafted list file (aka a
crafted file for the -ef option). NOTE: probably in most cases, the
list file is a trusted file constructed by the program's user;
however, there are some realistic situations in which a list file
would be obtained from an untrusted remote source.
CVE-2012-4025: Integer overflow in the queue_init function in unsquashfs.c
in unsquashfs in Squashfs 4.2 and earlier allows remote attackers to
execute arbitrary code via a crafted block_log field in the superblock
of a .sqsh file, leading to a heap-based buffer overflow.
tomcat: multiple vulnerabilities
Package(s): tomcat
CVE #(s): CVE-2012-4534, CVE-2012-4431, CVE-2012-3546
Created: December 19, 2012    Updated: January 24, 2013
Description: From the CVE entries:
org/apache/tomcat/util/net/NioEndpoint.java in Apache Tomcat 6.x before
6.0.36 and 7.x before 7.0.28, when the NIO connector is used in conjunction
with sendfile and HTTPS, allows remote attackers to cause a denial of
service (infinite loop) by terminating the connection during the reading of
a response. (CVE-2012-4534)
org/apache/catalina/filters/CsrfPreventionFilter.java in Apache Tomcat 6.x
before 6.0.36 and 7.x before 7.0.32 allows remote attackers to bypass the
cross-site request forgery (CSRF) protection mechanism via a request that
lacks a session identifier. (CVE-2012-4431)
org/apache/catalina/realm/RealmBase.java in Apache Tomcat 6.x before 6.0.36
and 7.x before 7.0.30, when FORM authentication is used, allows remote
attackers to bypass security-constraint checks by leveraging a previous
setUserPrincipal call and then placing /j_security_check at the end of a
URI. (CVE-2012-3546)
unity-firefox-extension: information disclosure
Package(s): unity-firefox-extension
CVE #(s): CVE-2012-0958
Created: December 19, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:
It was discovered that unity-firefox-extension bypassed the same origin
policy checks in certain circumstances. If a user were tricked into opening
a malicious page, an attacker could exploit this to steal confidential data
or perform other security-sensitive operations.
Kernel development
Brief items
The 3.8 merge window is still open and patches continue to flow into
the mainline repository. See the separate article below for a summary of
significant changes for 3.8.
Stable updates: 3.0.57,
3.4.24, 3.6.11 and 3.7.1 were all released on December 17.
Note that 3.6.11 is the last planned 3.6 update.
Those who develop kernels for Android devices know how frustrating
porting a kernel to a new device has always been. Well if you share
that notion and would like this process to get easier than it is
right now, you will be pleased to know that Linus Torvalds has
announced ARM support in Linux.
—
Android
Authority has a less-than-authoritative moment.
So the math is confused, the types are confused, and the naming is
confused. Please, somebody check this out, because now *I* am
confused.
—
Linus Torvalds
Kernel development news
By Jonathan Corbet
December 19, 2012
Linus has been busy in the last week; as of this writing, some 6200
changesets have been
pulled into the mainline repository since
last
week's summary. As a result, just over 10,000 changes have been merged
overall, making 3.8 the busiest merge window ever and the first to exceed
10,000 patches. And the merging process is not done yet.
Quite a few significant changes have been merged. Among other things, we
have seen a decision made on how the development of better NUMA balancing
will proceed. Without further ado, the most significant user-visible
changes merged in the last week include:
- The disagreement over how the kernel's
NUMA performance problems should be addressed was partially resolved
when Ingo Molnar agreed that Mel
Gorman's "balancenuma" patch
set should be merged as a base for future development. Balancenuma is
intended to get the fundamental infrastructure in place to allow
experimentation with placement and migration policies; it adds little
in the way of such policies itself. That base code has
been merged for 3.8; expect policy-oriented code to be pushed for the
3.9 development cycle.
- The huge zero page feature has been
merged, greatly reducing memory usage for some use cases.
- The kernel memory usage accounting
infrastructure has been merged, allowing the placement of
limitations on kernel memory use by any specific control group. See
the updated Documentation/cgroups/memory.txt file for
details on how to use this feature.
- The inline data patch set has been
merged into the ext4 filesystem. Ext4 can now store data for small
files directly in the inode, improving performance and space
efficiency. Ext4 also now supports the SEEK_HOLE and
SEEK_DATA lseek() operations; a short example appears below, after this list.
- The Btrfs filesystem has a new "replace" operation to allow the
efficient replacement of a single drive in a volume.
- The tmpfs filesystem now supports the SEEK_HOLE and
SEEK_DATA lseek() operations.
- The user namespace completion patch
set has been pulled. Eric Biederman says: "This set of
changes adds support for unprivileged users to create user namespaces
and as a user namespace root to create other namespaces. The tyranny
of supporting suid root preventing unprivileged users from using cool
new kernel features is broken."
- The new system call:
int finit_module(int fd, const char *args, int flags);
can be used to load a kernel module from the given file descriptor.
This call was added by the ChromeOS developers so that they can accept
or reject a module depending on where it is stored in the filesystem;
a usage sketch appears below, after this list.
- The batman-adv mesh networking subsystem has gained distributed
ARP table support.
- The tun/tap network driver and the virtio net driver both now support
multiple queues per device.
- The QFQ packet scheduler has been upgraded to "QFQ+", which is said to
be faster and more capable; see this
paper [PDF] for details.
- The s390 architecture has gained support for attached PCI buses.
- UEFI boot-time variables are now accessible via the new "efivars"
virtual filesystem.
- The ptrace() system call has a new option flag,
PTRACE_O_EXITKILL, which causes all traced processes to
receive a SIGKILL signal if the tracing process exits
unexpectedly; see the sketch below the list.
- New hardware support includes:
- Audio:
Wolfson Microelectronics WM8766 and WM8776 codecs,
Philips PSC724 Ultimate Edge sound cards,
Freescale / iVeia P1022 RDK boards,
Maxim max98090 codecs, and
Silicon Laboratories 476x AM/FM radio chips.
- Block:
LSI MPT Fusion SAS 3.0 host adapters, and
Chelsio T4-based 10Gb adapters (FCoE offload support).
- Graphics:
NVIDIA Tegra20 display controllers and HDMI outputs.
- Input:
ION iCade arcade controllers,
Wolfson Microelectronics "Arizona" haptics controllers,
Roccat Lua gaming mice,
TI ADC/touchscreen controllers, and
Dialog Semiconductor DA9055 ONKEY controllers.
The kernel has also gained support for human input devices
connected via i²c as described in
this document downloadable from Microsoft.
- Miscellaneous:
TI TPS51632 power regulators,
TI TPS80031/TPS80032 power regulators,
Versatile Express power regulators,
Versatile Express hardware monitoring controllers,
Maxim MAX8973 voltage regulators,
Dialog Semiconductor DA9055 regulators,
NXP Semiconductor PCF8523 realtime clocks (RTCs),
Dialog Semiconductor DA9055 RTCs,
CLPS711X host SPI controllers,
Nvidia Tegra20/Tegra30 SLINK controllers,
Nvidia Tegra20 serial flash controllers,
Nokia RX-51 (N900) battery controllers,
Solomon SSD1307 OLED controllers,
Nano River Technologies Viperboard multifunction controllers,
Nokia "Retu" multifunction controllers,
AMS AS3711 power management chips, and
Nokia CBUS-attached devices.
- Network:
CDC mobile broadband interface model USB-attached adapters,
Atheros AR5523-based wireless adapters,
Realtek RTL8723AE wireless adapters,
Aeroflex Gaisler GRCAN and GRHCAN CAN controllers, and
Kvaser CAN/USB interfaces.
- Video4Linux:
Samsung S3C24XX/S3C64XX SoC camera interfaces (full-memory write
access not required).
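Before moving on, a few of the items above merit quick illustrations. First, the SEEK_HOLE and SEEK_DATA support mentioned for ext4 and tmpfs: this sketch walks the data extents of a (possibly sparse) file. It assumes a C library new enough to define the two constants; the file name is invented:

#define _GNU_SOURCE /* for SEEK_DATA and SEEK_HOLE */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("sparse.dat", O_RDONLY);
    off_t end, data, hole;

    if (fd < 0)
        return 1;
    end = lseek(fd, 0, SEEK_END);
    data = lseek(fd, 0, SEEK_DATA); /* first data byte at or after offset 0 */
    while (data >= 0 && data < end) {
        hole = lseek(fd, data, SEEK_HOLE); /* end of this data extent */
        printf("data extent: %lld..%lld\n", (long long)data, (long long)hole);
        data = lseek(fd, hole, SEEK_DATA); /* next extent; -1 (ENXIO) if none */
    }
    close(fd);
    return 0;
}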
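Next, the finit_module() sketch promised above. There is no C-library wrapper for the new call at this point, so the raw syscall() interface is used; the module path here is hypothetical, and __NR_finit_module must be defined by the installed kernel headers:

#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical module path, for illustration only */
    int fd = open("/lib/modules/extra/example.ko", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* second argument is the module parameter string, third is flags */
    if (syscall(__NR_finit_module, fd, "", 0) != 0) {
        perror("finit_module");
        return 1;
    }
    return 0;
}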
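And finally, the PTRACE_O_EXITKILL sketch: a tracer sets the flag as one of its ptrace options once the tracee has stopped. This fragment is minimal and assumes headers that define the new flag:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* called after the child has done PTRACE_TRACEME and stopped */
static long request_exitkill(pid_t child)
{
    int status;

    waitpid(child, &status, 0);
    /* from here on, tracer death implies SIGKILL for the tracee */
    return ptrace(PTRACE_SETOPTIONS, child, 0, PTRACE_O_EXITKILL);
}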
In contrast with the large number of new features, the number of
significant internal changes has been relatively small.
Changes visible to kernel developers include:
- The Video4Linux2 layer now supports the use of shared DMA buffers for frame I/O. See
the DocBook documentation for details on how to use this feature.
Also: the videobuf2 subsystem now
supports the use of scatterlists with user-space buffers in the
"contiguous" DMA mode.
- The input subsystem supports the use of "managed" devices via the new
devm_input_allocate_device() function.
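As an illustration of that managed interface, a probe() routine might look like this sketch (the driver and device names are invented, and error handling is abbreviated); the allocated device is freed automatically when the owning device is unbound, so no explicit input_free_device() call is needed on error paths:

#include <linux/errno.h>
#include <linux/input.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
    /* freed automatically when the owning device goes away */
    struct input_dev *input = devm_input_allocate_device(&pdev->dev);

    if (!input)
        return -ENOMEM;

    input->name = "example-button"; /* illustrative */
    input_set_capability(input, EV_KEY, KEY_POWER);

    return input_register_device(input);
}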
One feature that has not been merged is RAID5/6 support for the Btrfs
filesystem. Those patches are being prepared for the mainline, though, and
can be expected in the 3.9 cycle. Meanwhile, the merge window could stay
open until as late as December 24, though Linus has threatened to
close it early. The final changes to be merged for 3.8 will be summarized
once that closure has happened.
Comments (1 posted)
By Jake Edge
December 19, 2012
Breaking the application binary interface (ABI) between the kernel and user
space is a well-known taboo for Linux. That line may seem a little
blurrier to some when it comes to the ABI for tools like perf that ship
with the kernel. As a recent discussion on the linux-kernel mailing list
shows, though, Linus Torvalds and others still have that line in sharp focus.
The issue stems from what appears to be a fairly serious bug in some x86
processors. Back in
July, David Ahern reported
that KVM-based virtual machines would crash when recording certain
events on the host. On some x86 processors, the "Precise Events
Based Sampling" (PEBS) mechanism can be used to gather precise counts of
events like CPU cycles. Unfortunately, PEBS and hardware virtualization
don't play nicely together.
As Ahern reported, running:
perf record -e cycles:p -ag -- sleep 10
on the host would reliably crash all of the guests. That
particular command will record the events specified, CPU
cycles in this case, to a file; more information about
perf can be
found
here. It turns out that PEBS
incorrectly treats the contents of the Data Segment (DS) register as a guest address,
rather than as a host address. That leads to memory
corruption in the guest, which will crash all of the virtual machines on the
system.
The "
:p" (precise) attribute on the
cycles event (which can be
repeated for higher precision levels as in
cycles:pp) asks for more
precise measurements,
which leads to PEBS being used. Without that attribute, the
cycle counts measured are less accurate, but do not cause the VM crashes.
That problem led Peter Zijlstra to change
perf_event.c in the kernel to disallow precise measurements
unless guest
measurement
has been specifically excluded. Using the ":H" (host-only)
attribute will still allow precise measurements as perf will
set the exclude_guest flag on the event. That flag will inhibit
PEBS activity while in the guest. In addition, Ahern changed
perf so that exclude_guest would be automatically
selected if the "precise" attribute was set. There's just one problem with those solutions: existing
perf binaries do not set exclude_guest, so users
would get an EOPNOTSUPP error.
It turns out that one of those existing users is Torvalds, who complained that:
perf record -e cycles:pp
no longer worked for him. Ahern suggested
using "cycles:ppH", but that elicited an annoyed response from
Torvalds. Why should he have to add a new flag to deal with
virtualization when he isn't running it? "That whole 'exclude_guest'
test is insane when there isn't any virtualization going on."
Ahern countered that it's worse to have VMs
explode because someone runs a precise perf. But that's beside
the point, as Torvalds pointed out:
You broke the WORKING case for old binaries in order to give an error
return in a case that NEVER EVEN WORKED with those binaries. Don't you
see how insane that is?
The 'H' flag is totally the wrong way around. Exactly because it only
"fixes" a case that was already working, and makes a case that never
worked anyway now return an error value. That's not sane. Since the
old broken case never worked, nobody can have depended on it. See why
I'm saying that it's the people who use virtualization who should be
forced to use the new flag, not the other way around?
Forcing existing perf binary users to change their habits is the
crux of the matter. Beyond breaking the ABI, which is clearly
not allowed, it makes perf break for real users as Ingo Molnar said: "Old, working binaries are actually our _most_
important usecase: it's 99.9% of our current installed base ...".
While it is certainly a problem that older kernels can have all their
guests crashed with a simple command, the proper solution is not to require
either upgrading perf or changing the flags (which could well be
buried in scripts or other automation).
Existing perf binaries set the exclude_guest flag to
zero, while binaries that have Ahern's change set it to one.
That means newer kernels that seek to fix the crashing
guest bug cannot rely on a particular value for that flag. The "proper"
way to have handled the problem is to use a new include_guest
flag (or similar), which defaults to zero. Older binaries cannot change
that flag (since they don't know about it), so the kernel code can use it
to exclude the precise flag for guests on x86 systems. Other architectures
may not suffer from the same restriction.
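To make those flag semantics concrete, here is a sketch of how a perf-like tool would ask the kernel for precise cycle samples while excluding guest execution (the effect of the "pp" and "H" modifiers discussed above); the values are illustrative and error handling is omitted:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static long open_precise_host_cycles(void)
{
    struct perf_event_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.precise_ip = 2;    /* ":pp": request a higher precision level (PEBS) */
    attr.exclude_guest = 1; /* ":H": do not count while a guest is running */

    /* measure the calling thread on any CPU */
    return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}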
Beyond that, Torvalds argues that if the
user asks for a precise measurement but doesn't specify either the
"H" or "G" (include
guests) attribute, the code should try to do the right thing. That means it
should measure both the host and guests on systems that support it, while
backing off to just the host for x86. Meanwhile it could return
EOPNOTSUPP if the user explicitly asks for a broken combination
(e.g. precise and include guests on x86). Molnar concurred. Ahern seemed a
bit unhappy about things, but said that he would start working on a
patch; as of this writing, it has not appeared.
It is worth noting that Torvalds admitted
that he could trivially recompile perf to get around the whole
problem; it was a principle that he was standing up for. Even though some
tools
like perf are distributed with the kernel tree, that does not
relax the "no regressions" rule. Some critics of the move to add tools to
the kernel tree were concerned that it would facilitate ABI changes,
glossed over by a requirement that the tools and kernel be kept in
sync. This discussion clearly shows that not to be the case.
Having a way to crash all the VMs on a system is clearly undesirable, but
as Torvalds pointed out, that had been true for quite some time.
Undesirable behavior does not rise to the level of allowing ABI breakage,
however.
In addition, distributions and administrators can always limit access to
perf
to the root user—though that obviously may still lead to unexplained
VM crashes
as
Ahern noted. Molnar pointed out that the virtualization use case
is a
much smaller piece of the pie, so making everyone else pay for a problem they
may never encounter just doesn't make sense. Either through a patch or a
revert, it would seem that the
"misbehavior" will disappear before 3.8 is released.
Comments (none posted)
By Jonathan Corbet
December 19, 2012
Compiler warnings can be life savers for kernel developers; often a
well-placed warning will help to avert a bug that, otherwise, could have
been painful to track down. But developers quickly tire of warnings that
appear when the relevant code is, in fact, correct. It does not take too
many spurious warnings to cause a developer to tune out compiler warnings
altogether. So developers will often try to suppress warnings for correct
code — a practice which can have undesirable effects in the longer term.
GCC will, when run with suitable options, emit a warning if it believes
that the value of a variable might be used before that variable is set.
This warning is based on the compiler's analysis of the paths through a
function; if it believes it can find a path where the variable is not
initialized, an "uninitialized variable" warning will result. The problem
is that the compiler is not always smart enough to know that a specific
path will never be taken. As a simple example, consider
uhid_hid_get_raw() in drivers/hid/uhid.c:
size_t len;
/* ... */
return ret ? ret : len;
A look at the surrounding code makes it clear that, in the case where
ret is set to zero, the value of len has been set
accordingly. But the compiler is unable to figure that out and warns that
len might be used in an uninitialized state.
The obvious response to such a warning is to simply change the declaration
of len so that the variable starts out initialized:
size_t len = 0;
Over the years, though, this practice has been discouraged on the kernel
mailing lists. The unneeded initialization results in larger code and a
(slightly) longer run time. And, besides, it is most irritating to be
pushed around by a compiler that is not smart enough to figure out that the
code is correct; Real Kernel Hackers don't put up with that kind of thing.
So, instead, a special macro was added to the kernel:
/* <linux/compiler-gcc.h> */
#define uninitialized_var(x) x = x
It is used in declarations in this manner:
size_t uninitialized_var(len);
This macro has the effect of suppressing the warning, but it doesn't cause
any additional code to be generated by the compiler. This macro has proved
reasonably popular; a quick grep shows over 280 instances in the 3.7+
mainline repository. That popularity is not surprising: it allows a kernel
developer to
turn off a spurious warning and to document the fact that the use of the
variable is, indeed, correct.
Unfortunately, there are a couple of problems with
uninitialized_var(). One is that, at the same time that it is
fooling GCC into thinking that the variable is initialized, it is also
fooling it into thinking that the variable is used. If the variable is
never referenced again, the compiler will still not issue an "unused
variable" warning. So, chances are, there are a number of excess variables
that have not been removed because nobody has noticed that they are not
actually used. That is a minor irritation, but one could easily decide
that it would be tolerable if it were the only problem.
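A contrived example (not from the kernel tree) shows why: the macro's self-assignment makes the variable look both initialized and used, so GCC stays silent on both counts:

#define uninitialized_var(x) x = x /* as in <linux/compiler-gcc.h> */

unsigned long demo(void)
{
    unsigned long uninitialized_var(len); /* never assigned or used below */

    return 0; /* yet no "unused variable" warning appears for len */
}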
The other problem, of course, is that the compiler might just be right.
During the 3.7 merge window, a
patch was merged that moved some extended attribute handling code from
the tmpfs filesystem into common code. In the process of moving that code,
the developer noticed that one variable initialization could be removed,
since, it seemed, it would pick up a value in any actual path through the
function. GCC disagreed, issuing a warning, so, when this developer wrote
a
second patch to remove the initialization, he also suppressed the
warning with uninitialized_var().
Unfortunately, GCC knew what it was talking about in this case; that code
had just
picked up a bug where, in a specific set of circumstances, an uninitialized
value would be passed to kfree() with predictably pyrotechnic
results. That bug had to be tracked down by
other developers; it was fixed by David
Rientjes on October 17. At that time, Hugh Dickins commented that it was a good example of how
uninitialized_var() can go wrong.
And, of course, this kind of problem need not be there from the outset.
The code for a given function might indeed be correct when
uninitialized_var() is employed to silence a warning. Future
changes could introduce a bug that the compiler would ordinarily warn
about, except that the warning will have been suppressed. So, in a sense,
every uninitialized_var() instance is a trap for the unwary.
That is why Linus threatened to remove it
later in October, calling it "an abomination" and saying:
The thing is moronic. The whole thing is almost entirely due to
compiler bugs (*stupid* gcc behavior), and we would have been
better off with an explicit (unnecessary) initialization that at
least doesn't cause random crashes etc if it turns out to be wrong.
In response, Ingo Molnar put together a
patch removing uninitialized_var() outright. Every use is
replaced with an actual initialization appropriate to the type of the
variable in question. A special comment
("/* GCC */") is added as well to make the
purpose of the initialization clear.
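So a declaration that previously read "size_t uninitialized_var(len);" would, in the patched tree, look something like the following; the surrounding function is invented for illustration:

#include <stddef.h>

size_t demo_length(int have_data)
{
    size_t len = 0; /* GCC */

    if (have_data)
        len = 42; /* the normal path still assigns a real value */
    return len;
}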
The patch was generally well received and appears to be ready to go. In
October, Ingo said that he would keep it
out of linux-next (to avoid creating countless merge conflicts), but would
post it for merging right at the end of the 3.8 merge window. As of this
writing, that posting has not occurred, but there have been no signs that
the plans have changed. So, most likely, the 3.8 kernel will lack the
uninitialized_var() macro and developers will have to silence
warnings the old-fashioned (and obviously correct) way.
Comments (20 posted)
Patches and updates
Kernel trees
Build system
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jake Edge
December 20, 2012
A feature in Red Hat Enterprise Linux (RHEL) that supports multiple, parallel
installations of programming languages and other normally system-wide tools was
recently discussed on the fedora-devel mailing list. Matthew Miller, who
recently started as the Fedora cloud
architect, raised the idea of bringing
Software Collections from RHEL to Fedora. The idea behind Software
Collections is interesting, but no clear consensus on how appropriate
they might be for Fedora emerged. As with many Fedora discussions of late,
this one at least partly comes back to the question of the role that the
distribution is meant to fill.
The problem that Miller initially presented is particularly acute in the
Ruby and Java
worlds, though Python and other tools (e.g. databases) sometimes suffer
from it as well.
Various packages may depend on different versions of the underlying tools,
which makes it difficult to have them coexist on the same system.
As an example he noted that the Fedora packages for the
Puppet configuration
management tool are broken because the Fedora Ruby version is too new. One
way to
solve that problem is to have multiple Ruby versions available that can be
installed in parallel and chosen at runtime. That's exactly the problem
that Software Collections sets out to solve.
A Software Collection (SC) uses the same packaging tools (RPM, Yum) already
used by RHEL and
Fedora, but installs the packages and their dependencies in the
/opt/provider hierarchy. The provider piece is a specific
string assigned to a vendor, which will allow multiple software providers
to share the hierarchy without name collisions. There is also an
scl tool that allows choosing one or more SCs to be active when
running a specific command. Whatever is needed in the environment for the
particular SC will be set up by "scriptlets" that get installed with the
collection and are run when the SC is selected.
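As a concrete illustration (the collection name here is hypothetical), a user with a "ruby193" collection installed could run:
scl enable ruby193 'ruby --version'
to execute that collection's Ruby, while a bare "ruby --version" in the same shell would still run the system default.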
As Miller notes, there is nothing inherent in Java or Ruby that leads to
version-mismatch problems; instead, they are caused by the expectations
of developers building software using those languages. There is a
strong preference for bundling
various
toolkits and libraries with such packages, which runs counter to the way
Fedora and many other distributions do things. Red Hat Eclipse team member
Alexander Kurtakov put it more bluntly:
As a Java guy I'm more and more sure that the problem is not in the
packaging view but in the wrong view of developers not being capable of
making an application if they don't bundle everything. You're [right] the
problem is not in the languages it's in the developers :(.
But, the fast-paced nature of Fedora (normally a release every six months
or so) may not make for a good match with SCs. Former Fedora project
leader Jared Smith was a bit skeptical about
the fit:
Given the short shelf-life of a Fedora release and the complication
involved in Software Collections, I'm still not convinced that we
really need this in Fedora. Can you give me a concrete case where
Fedora really needs to be running two different versions of the same
software, in a production environment? Given it's longer shelf life
and different target audience, RHEL is a better candidate -- and [for] the
record, the company I work for uses Software Collections that way.
I'm just having a hard time justifying it in my mind for Fedora.
Miller had a ready answer. He outlined
three separate uses he saw for SCs in Fedora, starting with handling problems like the
Puppet issue. Allowing multiple languages would give more choices for
Fedora as a development platform. He also noted that RHEL and Fedora make up an ecosystem
where developers targeting the former may well be developing on the latter.
Access to SCs on Fedora might be quite useful since they are available for
RHEL. Those two potential use cases did
not require too much discussion, but the other one did:
On a long-lived platform, Software Collections can provide a way to move
faster than the base. On a fast-moving platform like Fedora, we could use
it in the other way: providing longer-lived versions of certain
components even as the base is upgraded.
Bill Nottingham responded with a
self-proclaimed "heretical" suggestion that Fedora be turned into a much
smaller platform, with packages from the "grand Fedora
universe" that target one or more of those platform
releases. In that model, the enormous pile of software that Fedora deals
with for each release would be greatly reduced. Miller and
others—notably enterprise-leaning participants—looked
favorably on the heresy, but it was recognized that it would be a difficult
direction for Fedora to take.
There are, of course, downsides to managing software via SCs. The
"library bundling problem" comes to mind,
for example. If multiple SCs all include a library (or other component) that
needs to be upgraded for security reasons, it may require a great deal of
work. One could imagine several different vulnerable versions of a library
lurking in SCs that are still being used. Each of those needs to be fixed
and all of the SCs need to be updated. For RHEL, that's par for the
course, but Fedora generally moves on before those kinds of problems
become acute.
The conversation soon pivoted from Nottingham's suggestion to whether SCs
might help external projects or companies in making their software
available for Fedora. By providing a stable platform and a way for those
entities to bundle up and install all of the needed pieces, more software
might be made available for Fedora. Much of that software is, of course,
proprietary, but there are advocates for making Fedora an easier target
for those kinds of applications.
One of the problems that Fedora has faced over the years is the explosion
of packages that it tries to maintain, test, and release in a short
six-month time frame. Several commenters pointed to SCs (or something like
them) as a way to decouple Fedora from that enormous list of packages.
Projects could target Fedora, without actually becoming part of
Fedora, as Fernando Nasser suggested.
Those kinds of ideas hearken back to an earlier arrangement for the Fedora
distribution. While Adam Williamson's humorous
idea of a return to Fedora Core and Fedora Extras was not taken
seriously, some kind of similar split seemed to gain quite a bit of
traction in the discussion. Whether it goes any further than that down the
road remains to be seen, but SCs could potentially help the process if it does.
There are logistical and licensing questions—along with plenty of others—that would have to be
resolved, of course. If there were a split, how would non-core (for lack of
a better term) components manage their SCs and repositories? If they were
under the Fedora umbrella, the distribution would have some responsibility
for the contents of the packages. If they were not under the umbrella,
users would have to somehow enter those repositories into their Yum
configuration. Richard Jones outlined a
number of issues that would need to be resolved under such a system.
While the conversation veered in a direction that Miller may not have
expected, it does give an interesting view into the thinking of some
(many?) in the Fedora development community. Some of the interest in a
fairly radical change may come from
frustration with the delays in the Fedora 18 cycle, but there seems to be
more to it than that. For some time now, Fedora has been trying to find
(or define) its niche. This conversation is
another step along that
path.
Comments (15 posted)
Brief items
One person's "corner case" is another person's default operating mode.
--
Greg Kroah-Hartman
Before anyone says to use a news item, let me say that publishing a news
item to inform users that we decided to break their systems will not
make it better.
--
Richard Yao
Comments (none posted)
We recently
complained that CyanogenMod
builds based on the Android 4.2 release were not available for most
devices, meaning
that CyanogenMod lacked features found in stock Android builds. So it
seems only fair to point out that the
CyanogenMod nightly builds page now
includes CM10.1-based (and, thus, Android 4.2-based) builds for a wide
variety of targets. CM10.1 works well on the Nexus 7 tablet, with no
real problems found so far.
(Just be sure to install updated
Google Apps as well or things will not go well...not that your editor
would ever make such a mistake.)
Comments (7 posted)
IPFire 2.11 core update 65 will be the last release in the 2.x series.
IPFire is a hardened Linux appliance distribution designed for use as a
firewall. New features include a GUI to configure OpenVPN roadwarrior
clients individually and OpenVPN path MTU discovery.
Full Story (comments: none)
The PC-BSD team has
announced
that PC-BSD 9.1 is now available. "
This release includes many exciting new features and enhancements, such as a vastly improved system installer, ZFS “Boot Environment” support, TrueOS (A FreeBSD based server with additional power-user utilities), and much more!"
Comments (2 posted)
Distribution News
Fedora
A bug was discovered in the Fedora Project OpenID provider on December 12.
The bug had been introduced along with an unrelated fix on October 23 and
was patched on December 12, shortly after its discovery. "
While the bug was present, anyone with
a valid Fedora Account System (FAS) account who tried to log into a remote
website using any FAS OpenID identity would have that identity validated by
FAS even if the identity belonged to a *different FAS user*. The fix we
put in place rejects the attempt if the user who logs in does not own the
identity that they requested." Potentially affected accounts have
been notified.
Full Story (comments: none)
openSUSE
The openSUSE Board election is over. Raymond Wooninck (tittiacoke) and
Robert Schweikert (robjo) will join the board at the January 9
transitional meeting.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Univention has
released version 3.1 of its Univention Corporate Server
(UCS). The H
takes
a look at this release. "
The server distribution offers Active-Directory-compatible domain services using Samba 4. The new Univention App Centre simplifies the installation of third-party products such as groupware, document management and backup solutions; the app catalogue provides an overview of available applications."
Comments (none posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
December 19, 2012
The Inkscape vector graphics
editor is approaching its next milestone release, version 0.49. As
always, the update rolls together a wealth of new tools and features.
This development cycle is relatively light on large-scale additions,
but there is a long list of small usability enhancements that will add
up to a smoother design experience for most users.
The project just released a bugfix update to the stable 0.48 series, and
although Inkscape is decidedly a "released when ready" application, the
murmuring is that Inkscape 0.49 could hit virtual shelves as soon as
January 2013. In the meantime, there are fairly stable
nightly
builds available from the trunk for those who wish to
experiment.
Drawing
Two new tools debut in Inkscape 0.49. The first is the measure tool,
a long-awaited addition that has applications in computer-aided
drafting (CAD) and other drawing tasks that require precision object
placement. As is the case with most graphics applications, one can
measure an object by clicking with the tool and dragging out a line.
However, where a raster application like Gimp can only measure the
length of the line segment defined by the mouse movement, Inkscape
measures the distance between drawing objects. Wherever the measure
tool's line intersects a path or object, an "x" appears, and the tool
overlays the distance in pixels between intersections, the total
distance if there are multiple intersections, and the angle of the
line.
The other new tool is the PowerStroke pen previewed at Libre Graphics
Meeting (LGM) 2012, which we discussed
in May. PowerStroke is an effect that produces calligraphy-like
lines that change width, much like hand-inked pen or brush strokes.
Of course, the advantage of using an Inkscape tool rather than
actual ink is that the paths drawn as well as their
attributes are fully adjustable after the fact. PowerStroke is better
integrated now than it was at LGM, with improvements to the sometimes
tricky joints at sharp corners.
Several of the existing tools pick up noteworthy features in this
release. The node editor (which allows the user to adjust and edit
the on-curve points and control points of Bézier splines) can now
automatically add points at the curve's maxima and minima. Not every
design task is improved by ensuring that a curve has points at its
extrema, but there are a lot of scenarios where it makes calculations
easier — consider calculating the bounding box of a path, for
example. Consequently, having curve points at the extrema is
sometimes required, and having Inkscape create them with one click is
a genuine time-saver.
The gradient editor has been reworked. In previous releases,
editing a gradient opened a floating window in which one could add or
remove gradient stops and adjust colors. The new editor works
on-canvas, with add/remove buttons and a color selection widget on the
toolbar. The "snap to" functionality is not a tool in its own right, but over
the past few releases, Inkscape's snapping has evolved into a complex
beast. New in this release is the ability to set
snap-to-path-intersection and snap-to-guide-intersection, plus the
ability to snap a path perpendicularly or tangentially while drawing
it. Snapping to text is also improved.
Similarly, Inkscape's new "symbols library" is not a tool per
se, though it does add functionality. The library is based on
SVG's <symbol> element, which is used to define
reusable graphical objects. One could always duplicate a normal SVG
object, of course, but the idea is that SVG can be used to create
common collections of frequently-referenced symbols, in the same way
that named colors can simplify SVG documents that need to reuse the
same color in multiple spots. Whether one finds that idea
enticing or not, Inkscape can now take advantage of it, providing
access to "libraries" of symbols and enabling users to create their
own. Two symbol libraries are built in: one with logic gates,
and the other with international road and travel symbols.
Beyond the basic tools, a handful of new extensions offering additional
functionality is included. Inkscape extensions
tend to be more task-centric, so not every user will find them
helpful, but they can also offer surprisingly sophisticated
features. For example, one new extension allows N-up page layout for
printing large documents. Another converts drawings to the
G-code format used
by computer-controlled cutting machines. There are also extensions
for generating QR codes, isometric grids, and Voronoi
diagrams, plus smaller extensions to replace fonts in a document and
to extract text, and the "guillotine," which splits a drawing along
guide lines and exports the pieces as multiple PNG images.
Environmental and usability tweaks
In practice, Inkscape's tool set is extensive enough that it can be
mind-boggling to hear that some users still find it lacking, but such
is the lot of the general-purpose creative application. On the other
hand, the diverse tool set can frequently result in enough complexity
that the application begins to get in the way. On that front, the
project is making frequent small improvements that fix minor
annoyances.
For instance, layers have often felt like an afterthought in
Inkscape. The layer controls are small and tucked away at the bottom
of the window. Changing layer visibility or order was not intuitive.
Inkscape 0.49 improves things by allowing the user to re-order layers
with a simple drag-and-drop. Gimp and Krita still provide a nicer
layers interface (in Inkscape, a drawing's layers are listed in a
drop-down selection widget, rather than a list with all layers visible
at once), but the fix is an improvement.
Selecting the right object from a position where multiple objects
are stacked or overlap has been another pain point; Inkscape 0.49
allows the user to cycle through the options at the cursor point using
the mouse scroll-wheel. Users can also increase the size of the
"handles" shown for grabbing and manipulating path nodes, toggle the
visibility of guides, and edit keyboard shortcuts. Inkscape will now
remember from session to session which tool palettes are open and
their various screen positions, too, eliminating the need to reconfigure
the application at every start-up.
A particularly novel feature is the ability to enter arithmetic
operations into spinboxes. For example, if a rectangle is 1071 pixels
wide and needs to be shrunk to 1/7 of that size, the user can simply
enter /7 into the width box and hit Return, rather
than waste precious minutes hunting for a pencil or searching for the
calculator in GNOME Shell's "Activities overview." The Inkscape wiki
does not provide a canonical list of which calculations are supported,
but it does provide
examples with nested parentheses and even physical units (e.g., mm
instead of pixels). No word yet on logarithms, infinite series, or
complex numbers — mathematicians take note; this is clearly a
feature in need of serious stress-testing....
There are also several improvements aimed at using Inkscape for
print work. The background display color of the canvas can be changed
without changing the color of the document itself. That would be
useful, for example, if designing a flyer that will be printed on tan
paper — one needs to see how the design will look against tan,
but adding a solid-color rectangle as a background object is a
kludge. The interface can be toggled between normal full-color
display and grayscale, which again allows the user to preview output
— although this feature has other uses; it is often a good idea to
design a logo in grayscale first, to establish contrast and
readability, and grayscale is important for supporting people with
color-impaired vision.
PDF and TeX export have been improved, which can benefit print or
electronic output. The PDF exporter can now automatically add a "bleed"
margin, and TeX export now supports text styles — including,
font nerds will be happy to learn, distinguishing between oblique and
italic styles. In addition, support for export to Gimp's XCF format
has been improved, and it is now possible to export drawings to Flash
XML Graphics (FXG), XAML, and to the native format for the open source
vector animation application Synfig.
Over and out
The new Inkscape series is not all bling, however. There are a number
of performance and rendering improvements rolled into 0.49, too. The
first is the long-awaited merging of Google Summer of Code work from
2010 and 2011 that ported the rendering engine to the Cairo raster graphics
library. This results in improved responsiveness when editing and
closes a number of outstanding rendering bugs. Responsiveness is also
improved through caching, and memory usage is reduced; the release
notes cite a four-fold reduction in the memory required over
Inkscape 0.48. Cairo is also responsible for the grayscale preview
mode mentioned earlier, and is used as the PNG export engine.
The other under-the-hood change of significance is the addition of
multithreading through OpenMP. This
change is primarily felt through speed improvements when using
filters; the OpenMP parallelization can take advantage of all CPU
cores on the system. Inkscape implements many effects through SVG
filters even when they are not labeled as such explicitly (such as the
"blur" slider available on every object). In addition, each release
adds more live previews to path effects and extensions. The upshot is
that multithreading is likely to benefit a lot of users even if they
do not employ complex filters in their normal workflow.
Overall, Inkscape 0.49 is shaping up to be a solid improvement. It
is hard to generalize about the impact of features like Cairo
rendering and multithreading; some users may feel no improvement at
all, while others may be ecstatic. The same goes for the new tools
and features — if you use G-code or TeX, the benefits are clear; if
you have some other design needs, you might not even notice the new
features. On the other hand, the usability improvements (particularly
to selection and window management) are more or less universal. But
the truly interesting aspect of any new Inkscape release is seeing how
users will take to a new drawing mode like PowerStroke, because that
is ultimately unpredictable. Inkscape is a "creative" application:
its biggest enhancements are in how it allows end users to think, and
act, a little more creatively with every new release.
Comments (12 posted)
Brief items
I once scoffed at the idea that anyone would write in COBOL anymore, as if the average COBOL programmer was some sort of second-class technology citizen. COBOL programmers in 1991, and even today, are surely good programmers — doing useful things for their jobs. The same is true of Perl these days: maybe Perl is finally getting a bit old fashioned — but there are good developers, still doing useful things with Perl. Perl is becoming Free Software's COBOL: an aging language that still has value.
Perl turns 25 years old today. COBOL was 25 years old in 1984, right at the time when I first started programming. To those young people who start programming today: I hope you'll learn from my mistake. Don't scoff at the Perl programmers. 25 years from now, you may regret scoffing at them as much as I regret scoffing at the COBOL developers. Programmers are programmers; don't judge them because you don't like their favorite language.
—
Bradley Kuhn
Comments (5 posted)
Eudev is the Gentoo-based fork of udev which was
covered here in November. The project has now
officially announced its existence. "
udev often
breaks compatibility with older systems by depending upon recent Linux
kernel releases, even when such dependencies are avoidable. This became
worse after udev became part of systemd, which has jeopardized our
ability to support existing installations. The systemd developers are
uninterested in providing full support in udev to systemd alternatives.
These are problems for us and we have decided to fork udev to address
them."
Full Story (comments: 190)
Digia, the current owner of the Qt code base, has sent out
a
press release announcing the Qt 5.0 release. "
Key benefits
of Qt 5 include: graphics quality; performance on constrained hardware;
cross-platform portability; support for C++11; HTML5 support with QtWebKit
2; a vastly improved QML engine with new APIs; ease of use and
compatibility with Qt 4 versions."
Comments (20 posted)
Evan Prodromou of StatusNet announced that the company's microblog-hosting service running at the status.net domain will close to new customers over the coming weeks, as the company begins migrating its offerings from the StatusNet software to its successor, the just-unveiled pump.io. User accounts on the existing sites running from the status.net domain will continue to function, as will the Identi.ca site. Self-hosted StatusNet instances will be unaffected by the move.
Comments (64 posted)
Version 3.0 of the PulseAudio subsystem is out; see
the
release notes for details. "
The tl;dr version for the lazy is: easier setup when your device is a
Bluetooth source, some ARM NEON optimisations, configurable latency
offsets, ALSA UCM [use-case manager] support for embedded folks, and a
_lot_ of other fixes
and infrastructure changes."
Full Story (comments: none)
Version 1.12 of the Gnumeric spreadsheet application is available. This is a major stable release wrapping up two years of development. Among the improvements are porting the interface to GTK+ 3, improved accuracy in computed cells, and additions to the graph tool. Gnumeric is also now available under two licenses, GPLv2 or GPLv3.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
At Opensource.com, Red Hat's Richard Fontana expresses his admiration for the "coordinated, centralized manner in which CC licenses are conceived, drafted and revised, and the successful occupation of the full policy field of open and quasi-open content licensing by Creative Commons." By comparison, he says, the de-centralized and organic growth of free software licenses lacks the "great emphasis on simplification of use, understanding, and identification of the various license categories."
Comments (25 posted)
The Perl Foundation News has posted
a
detailed history of the first 25 years of the Perl language.
"
Before the advent of Perl 5 the resources for collecting these
scripts were few and far between, and one or two have fallen into legend
and are now taken out by wizened Perlers around a flickering light where
the Tales of Terror are shared and Matt's Script Archive comes into its
own magnificent glory. In these Enlightened days it is easy to mock those
early pioneers and to smile fondly at some of the erroneous efforts, but
they were the only resource of their time and they were formative in the
evolution."
Comments (none posted)
At his blog, Jelmer Vernooij has written a detailed retrospective on the history of the Bazaar version control system, including a lot of analysis of the project's ups and downs over the years. "We just made these changes to the file format as they came along, rather than accumulating them. This meant that at one point there was a new format every couple of months. Later on, we did slow down on format changes and no new format has been introduced since 2009. Unfortunately we have been unable to shake the image that we introduce a new file format every fortnight."
Comments (32 posted)
Björn Balazs reports on some surprising results from the recent LibreOffice Writer icon test, which (among other things) pitted Tango's "floppy disc" icon against Oxygen's "filing cabinet" icon for the save action. "The results are stunning. There was not the slightest problem with using the floppy disc, while the filing cabinet metaphor more or less failed [...] Even when looking at the group of young users the results do not change significantly and the antiquated floppy disc still scores a perfect 10.0." Balazs speculates on possible explanations; whatever the cause, surely additional interesting findings are still to come from this survey project.
Comments (4 posted)
Page editor: Nathan Willis
Announcements
Brief items
The Free Software Foundation Europe has
published its annual
report for 2012.
"
In order to be in charge of our own lives, we need to be able to control the computers we use. We can only do this if they run Free Software that we can use, study, share and improve. We can only do this if our computers aren't neutered to restrict their functionality, or loaded with spyware. We also need neutral networks to connect them to, so we can freely choose what to say, and to whom.
With this in mind, 2012 was both a good and a bad year for our freedom."
Comments (none posted)
For those with 2.5 minutes to spare: the Linux Foundation has posted
a video looking back
at the most important Linux-related events (from its point of view) that
happened in 2012.
Comments (13 posted)
Articles of interest
The European Union's open source license, EUPL,
will
be revised to make it more compatible with GPLv3. The EUPL forum on
Joinup is open for comments until mid-March 2013. "
The main reason to update the licence is to remove barriers that could hinder others in the open source communities from using software licensed under the EUPL. "Making it explicitly compatible with the GPLv3 should increase interoperability", explains Patrice-Emmanuel Schmitz, a Brussels-based legal specialist involved in the drafting of the EUPL.
This should for instance make it easier to combine EUPL and GPLv3 software
components or to use both licences to publish a project, says Schmitz. "It
should also put an end to the categorisation by the Free Software
Foundation of the EUPL as not fully GPL compatible."" (Thanks to
Martin Michlmayr)
Comments (38 posted)
Jennifer Cloer
interviews
Gabriella Coleman about her new book
Coding Freedom: The Ethics and
Aesthetics of Hacking. "
To hack effectively requires the freedom to determine the shape, contour and direction of technological production. Freedom, in other words, is essential for quality. Sociologist Richard Sennet has defined this drive in terms of “craftsmanship,” which is “an enduring, basic human impulse, the desire to do a job well for its own sake." It is not always easy to put this ethic into practice and open source hackers have figured out how to do so, using the right mix of law, tools and project governance to make it happen.
Comments (none posted)
Opensource.com has an
interview
with Leslie Hawthorn about the 2012 Grace Hopper Celebration of Women
in Computing conference. "
This has been the 3rd year that I've been involved in the Grace Hopper conference. Three years ago, in Atlanta, we had a group of folks come together and decided that it was kind of a bummer that there wasn't a lot of open source related content on the program. So, we got together a program committee and put on a full day of tracks related to contributing to open source software—everything from how you get started as a contributor to different projects you may wish to join, and how to get involved in open source if you're a student (from an academic point of view, how working in open source can enhance your career prospects)."
Comments (none posted)
The H
talks
with Bradley Kuhn about GPL compliance.
"
Certainly we're in an era where lots of people are scrambling to create business models dancing around the issue of GPL compliance, and in using GPL enforcement in nefarious ways. Our community already has too much of that kind of activity, and I certainly don't want more of that.
If, however, someone wanted to start another non-profit charity to do enforcement, I'd certainly welcome it and help them do it. I also encourage any individuals who hold copyrights in projects that Conservancy currently does active enforcement for – namely, BusyBox, Linux, and Samba – to get in touch with me and join our coalition. That's an easy way for those who hold copyrights to get involved with the work Conservancy's already doing in this area."
Comments (23 posted)
Calls for Presentations
GNU Tools Cauldron will take place July 12-14, 2013, in Mountain View,
California. The abstract submission deadline is February 28. "
The
purpose of this workshop is to gather all GNU tools developers, discuss
current/future work, coordinate efforts, exchange reports on ongoing
efforts, discuss development plans for the next 12 months, developer
tutorials and any other related discussions."
Full Story (comments: none)
The 15th annual O'Reilly Open Source Convention (OSCON) will take place
July 22-26, 2013 in Portland, Oregon. Proposals are due by February 4.
There will be 20 full tracks covering all things open source.
Full Story (comments: none)
Upcoming Events
Sir Tim Berners-Lee will be a keynote speaker at the January 2013
linux.conf.au in Canberra. "
Sir Tim Berners-Lee was knighted in 2004 for his work on HTTP and the World Wide Web, and was elected as a foreign associate of the United States Academy of Sciences in 2009. He also holds the Founders Chair at MIT's Computer Science and Artificial Intelligence Laboratory. This is Sir Tim Berners-Lee's first visit to Australia, and his linux.conf.au keynote speech is set to be the only technical talk during his Down Under tour."
Full Story (comments: none)
Events: December 20, 2012 to February 18, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| December 27–December 29 | SciPy India 2012 | IIT Bombay, India |
| December 27–December 30 | 29th Chaos Communication Congress | Hamburg, Germany |
| December 28–December 30 | Exceptionally Hard & Soft Meeting 2012 | Berlin, Germany |
| January 18–January 19 | Columbus Python Workshop | Columbus, OH, USA |
| January 18–January 20 | FUDCon:Lawrence 2013 | Lawrence, Kansas, USA |
| January 20 | Berlin Open Source Meetup | Berlin, Germany |
| January 28–February 2 | Linux.conf.au 2013 | Canberra, Australia |
| February 2–February 3 | Free and Open Source software Developers' European Meeting | Brussels, Belgium |
| February 15–February 17 | Linux Vacation / Eastern Europe 2013 Winter Edition | Minsk, Belarus |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol