LWN.net Weekly Edition for January 8, 2015
Some unreliable predictions for 2015
Welcome to the first LWN Weekly Edition for 2015. We hope that the holiday season was good to all of you, and that you are rested and ready for another year of free-software development. It is a longstanding tradition to start off the year with a set of ill-informed predictions, so, without further ado, here's what our notoriously unreliable crystal ball has to offer for this year.
We will hear a lot about the "Internet of things," of course. For larger "things" like cars and major appliances, Linux is the obvious system to use. For tiny things with limited resources, the picture is not so clear. If the work to shrink the Linux kernel is not sufficiently successful in 2015, we may see the emergence of a disruptive competitor in that space. We may feel that no other kernel can catch up to Linux in terms of features, hardware support, and development community size, but we could be surprised if we fail to serve an important segment of the industry.
We'll hear a lot about "the cloud" too, and we'll be awfully tired of it by the end of the year. Some of the hype over projects like OpenStack will fade as the project deals with its growing pains. With some luck, we'll see more attention to projects that allow users to own and run their own clouds rather than depending on one of the large providers — but your editor has often been overly optimistic about such things.
While we're being optimistic: the systemd wars will wind down as users realize that their systems still work and that Linux as a whole has not been taken over by some sort of alien menace. There will still be fights — we, as a community, do seem to like fighting about such things — but most of us will increasingly choose to simply ignore them.
There is a wider issue here, though: we are breaking new ground in systems design, and that will necessarily involve doing things differently than they have been done in the past. There will certainly be differences of opinion on the directions our systems should take; if there aren't, we are doing something wrong. There is a whole crowd of energetic developers out there looking to do interesting things with the free software resources we have created. Not all of their ideas will be good ones, but it is going to be fun to watch what they come up with.
There will be more Heartbleed-level security incidents in 2015. There are a lot of dark, unmaintained corners in our software ecosystem, many of which undoubtedly contain ancient holes that, if we are lucky, nobody has yet discovered. But they will be discovered, and we'll not be getting off the urgent-update treadmill this year.
Investments in security will grow considerably as a consequence of 2014's high-profile vulnerabilities, high-profile intrusions at major companies, and ongoing spying revelations. How much good that investment will do remains to be seen; much will be swallowed up by expensive security companies that have little interest in doing the hard work required to actually make our systems more secure.
Investments in other important development areas will grow more slowly despite the great need. We all depend on code which is minimally maintained, if at all, and there are many unsolved problems out there that nobody seems willing to pick up. The Linux Foundation's Core Infrastructure Initiative is a good start, but it cannot come close to addressing the whole problem.
Speaking of important development areas, serious progress will be made on the year-2038 problem in 2015. The pace picked up in 2014, but developers worked mostly on the easy part of the problem — internal kernel interfaces. But a real solution will involve user-space changes, and the sooner those are made, the better. The relevant developers understand the need; by the end of this year we'll know at least what the shape of the solution will be.
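To make the deadline concrete, here is a small illustrative sketch (ours, in Python; no kernel code involved) of exactly where a signed 32-bit time_t runs out:

    # The year-2038 problem in brief: a signed 32-bit time_t counts
    # seconds since 1970-01-01 UTC and tops out at 2**31 - 1.
    import datetime

    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    print(epoch + datetime.timedelta(seconds=2**31 - 1))
    # -> 2038-01-19 03:14:07+00:00. One second later, a 32-bit counter
    # wraps to -2**31, which lands in 1901:
    print(epoch + datetime.timedelta(seconds=-(2**31)))
    # -> 1901-12-13 20:45:52+00:00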
Some long-awaited projects will gain some traction this year. The worst Btrfs problems are being addressed thanks to stress testing at Facebook and real-world deployment in distributions like openSUSE. Wayland is reaching a point of usability for brave early adopters. Even Python 3, which has been ready for a while, will see increasing use. We'll have programs like X.org and Python 2 around for a long time, but the world does eventually move on.
There has been some talk of a decline in the number of active Linux distributions. If such a decline is indeed underway, it will be short-lived. We may not see a whole lot more general-purpose desktop or server distributions; that ground has been pretty well explored by now, and, with the possible exception of the systemd-avoidance crowd, there does not appear to be a whole lot to be done in that area. But we will see more and more distributions that are specialized for particular applications, be it network-attached storage, routing, or driving small gadgets. The flexibility of Linux in this area is one of its greatest strengths.
Civility within our community will continue to be a hot-button issue in 2015. Undoubtedly somebody will say something offensive and set off a firestorm somewhere. But, perhaps, we will see wider recognition of the fact that the situation has improved considerably over the years. With luck, we'll be able to have a (civil!) conversation on how to improve the environment we live in without painting the community as a whole in an overly bad light. We should acknowledge and address our failures, but we should recognize our successes as well.
Finally, an easy prediction is that, on January 22, LWN will finish its 17th year of publication. We could never have predicted that we would be doing this for so long, but it has been a great ride and we have no intention of slowing down anytime soon. 2015 will certainly be an interesting year for those of us working in the free software community, with the usual array of ups, downs, and surprises. We're looking forward to being a part of it with all of you.
Dark Mail publishes its secure-email architecture
The Dark Mail Alliance has published the first description of the architecture that enables its secure-and-private alternative to the existing Internet email system. Called the Dark Internet Mail Environment (DIME), the system involves a new email message format and new protocols for email exchange and identity authentication. Nevertheless, DIME also makes an effort to be backward-compatible with existing email deployments. DIME includes several interesting ideas, but its main selling point remains security: it not only offers end-to-end encryption, but it also encrypts much of the message metadata that other systems leave in cleartext, and it offers resistance to attacks that target servers between the sender and the recipient.
The Alliance
Dark Mail was started in 2013, led
by Ladar Levison of the privacy-centric email service Lavabit and by PGP
creator Phil Zimmermann of Silent Circle. Both of those companies
abruptly shut down their email offerings in August 2013 in reaction to
a US government request for access to Edward Snowden's Lavabit
account—including a copy of the Lavabit SSL keys, which would
have enabled the government to decrypt all of the traffic between
Lavabit and its customers. Subsequently, Levison and Zimmermann
announced that they would be developing an "email 3.0"
system through Dark Mail, with the goal of preventing just the sort of
attacks that occurred in the Snowden case.
One key problem that the Snowden incident revealed was that, even if two users employ strong encryption on their email messages (such as with OpenPGP or S/MIME), the metadata in those messages remains unencrypted. And that metadata can contain vital information: the sender and receiver addresses, the subject line, various mail headers, and even the trail of mail servers that relayed the message from sender to destination. Changing that would necessitate a new email message format, new protocols for email transfer and retrieval, and some sort of new infrastructure to let users authenticate each other. A new authentication framework is needed to avoid revealing key owners' email addresses, as currently happens with public PGP keyservers—and to avoid the well-documented vulnerabilities of the certificate authority (CA) system used for SSL/TLS.
DIME is designed to be that replacement email system. It describes a message format that encrypts every part of the message separately, using separate encryption keys for the different parts. Thus, mail transfer agents (MTAs) along the way can decrypt the portions of the message they need to deliver the message—but nothing else—and mail delivery agents (MDAs) can deliver messages to the correct user's inbox without learning anything about their content or about the sender. DIME also describes a transport protocol for sending such encrypted messages—one in which the multiple key retrieval and authentication steps are handled automatically—and a framework for how the authentication tokens required by the system should be published and formatted.
The DIME package
The DIME system is detailed in a 108-page PDF
specification—although it should be noted that several sections
in the specification are empty, either blank or labeled "TBD." The
most significant of these is DIME's IMAP replacement, DMAP, about
which the document says: "This protocol specification will not
be released as part of the initial publication of this
document", followed by an assurance that a later release with
more details will follow.
There is also source code for a suite of DIME-related libraries available
through the Lavabit GitHub account. So far, none of those GitHub
repositories indicates what software license the code is under.
Mozilla's Hubert Figuiere filed an issue
requesting a license, but it does not yet seem to have been addressed. At this point, however, digesting and understanding the
architecture and formats described in the DIME specification is
probably the more important concern.
A bird's-eye view of the system starts with the message format. A
DIME message object contains three separate sections: the Next-Hop
section (which is unencrypted and holds the routing information needed
for the current transport method), the Envelope section (which
includes two "chunks" for the origin and destination information, each
encrypted separately), and the Content section (which contains the
email message headers and body, with each header and each body part
encrypted separately).
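As a rough mental model of that layout (ours, for illustration; the field names follow the description above, but the types are stand-ins, not DIME's wire format), the message object can be pictured like this:

    # Illustrative model only -- not DIME's actual wire format.
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        ciphertext: bytes               # each chunk is encrypted separately
        wrapped_keys: dict[str, bytes]  # session key, wrapped per trusted party

    @dataclass
    class DimeMessage:
        next_hop: dict[str, str]        # cleartext routing data for this hop
        envelope: dict[str, Chunk]      # "origin" and "destination" chunks
        content: list[Chunk]            # each header and body part, separately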
Within the Envelope and Content sections, it
is critical that each chunk is encrypted separately and with a variety of
keys. This allows applications to decrypt only some parts of a
message if not all are of immediate importance (such as a mobile
client that only decrypts the Subject and Sender of new messages for a
summary screen, rather than downloading and decrypting everything).
It also allows the software to control which applications can decrypt
which sections by using several different keys.
By encrypting things like attachments and headers separately, there
is a clear security and privacy improvement—consider, for
example, that mailing-list thread information and return paths could
allow an attacker to collect a significant amount of information about
a conversation even without seeing the message body. Still, it may
come as a surprise to some that DIME also encrypts the sender and
recipient email addresses and names. The names of the sender and
recipient are optional, of course, but encrypting the addresses might
seem to make mail routing and delivery impossible.
DIME's solution to this problem is to adopt a domain-based
authentication scheme that the origin and destination mail servers can
use to validate each other's identities. Each mail server is also
responsible for authenticating the user on its end, but the
user-to-server authentication is logically separate from the
server-to-server authentication.
In other words, the scheme proceeds hop by hop.
For each step (sender-to-origin, origin-to-destination,
destination-to-recipient), the necessary information to complete the
next step is encrypted separately, so that only the need-to-know
parties for that step have access to the information. The various
fields in the message are each encrypted with an ephemeral session
key, and a separate copy of that session key is included in the
message for each party trusted to access that field—with each
copy encrypted using a known public key for the appropriate party.
So there are three copies of the session key that protects the
recipient's email address: one encrypted with the sending user's
public key, one encrypted with the destination server's public key, and one
encrypted with the recipient user's public key. There are also three
copies of the (different) session key that protects the sender's
address: one for the sender, one for the recipient, and one for the
origin server. All of the keys in question are intended to be
generated automatically: users may naturally wish to have control over
their personal public/private key pairs (which will require software
support), but the session-key generation and retrieval of remote keys
is designed to be handled without explicitly involving the user.
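Here is a toy sketch of that arrangement (ours; PyNaCl's sealed and secret boxes stand in for DIME's actual formats and cipher choices):

    # Each field gets an ephemeral session key; one copy of that key is
    # wrapped for every party trusted to read the field. Illustrative only.
    import nacl.public
    import nacl.secret
    import nacl.utils

    def encrypt_field(plaintext, party_keys):
        session_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
        ciphertext = nacl.secret.SecretBox(session_key).encrypt(plaintext)
        wrapped = [nacl.public.SealedBox(pk).encrypt(session_key)
                   for pk in party_keys]
        return ciphertext, wrapped

    # The recipient's address is readable by the sender, the destination
    # server, and the recipient, but not by the origin server:
    sender, dest_server, recipient = (nacl.public.PrivateKey.generate()
                                      for _ in range(3))
    ct, wrapped = encrypt_field(b'recipient@example.com',
                                [sender.public_key, dest_server.public_key,
                                 recipient.public_key])

    # A trusted party unwraps its own copy of the session key, then the field:
    key = nacl.public.SealedBox(recipient).decrypt(wrapped[2])
    assert nacl.secret.SecretBox(key).decrypt(ct) == b'recipient@example.com'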
The last piece in the puzzle is the actual transport method used to
send the message from the origin server to the destination server.
Here, DIME allows for several options: TLS, DIME's own SMTP
replacement DMTP, or even connecting over a Tor circuit.
Authenticating identities
Left up to the implementer are details such as exactly how the
users authenticate to their servers. There is a "paranoid" mode in
which the servers have no access to the user's key material and a full
key-exchange process is required for every connection, as well as a
"cautious" mode in which the server can store encrypted copies of the
user's keys to simplify the process somewhat, and a "trustful" mode in
which the server has full access to the user's secret keys.
The server-to-server authentication, however, is more precisely
specified. There are two authentication methods, both of which ought
to be used to protect against a well-funded adversary. The first is a
dedicated keyserver system akin to the OpenPGP keyserver network. The
other is based on DNS: each server publishes its DIME public key in a new DNS
resource record type, which (for security reasons) ought to be looked
up using DNSSEC. Thus, each server can look up the public key of its
peer in multiple ways, and verify that it generates an encrypted
session key matching the one included in the message before agreeing
to the message exchange.
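The DNS half of that lookup might look roughly like the following sketch using dnspython; DIME's resource record type is new, so a TXT record at a hypothetical "_dime" label stands in for it here:

    # Hypothetical: fetch a domain's published key material over DNS with
    # the DNSSEC-OK bit set, so that signatures can be validated.
    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)   # request DNSSEC records
    for record in resolver.resolve('_dime.example.com', 'TXT'):
        print(record.to_text())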
So far, we have been using the term "public key" to describe the
DIME keys published for both mail servers and users, but DIME's actual
identity system is a bit more complicated than that. The credentials
used are called signets, and they include not just a public
key, but also a series of signatures and a set of fields describing
the DIME options, ciphers, and other settings supported by that user
or server. Since DIME's functionality places a great deal of trust in
domain-wide identity, each user signet has to be signed by the key for
the controlling organization.
DIME is, by any measure, a complex system. Interested users are
encouraged to read the full specification, which (naturally) goes into
considerably more detail than is feasible here. But by looking at
DIME's constituent parts separately, it becomes easier to follow the
overall design. The relevant fields of each message are encrypted
separately, and a copy of the decryption key for each field is
transmitted for each party that must decrypt the field for
processing. The per-party keys are published in a federated manner:
each mail domain is responsible for maintaining its own DIME DNS
records and keyserver, which places ultimate control of the
authentication scheme in the hands of the mail-server administrators,
not in a CA that can be compromised.
It is also noteworthy that the project seems to be taking pains to
consider how email providers and users might transition to
DIME—even if it is a wild success, there will necessarily be a
need for DIME users to interoperate with traditional email for many
years still to come. The new DNS records and the signet data format
include information that can be used to fall back to the most secure
alternative available, and several pieces of the overall architecture
are optional. Webmail providers, for example, could employ either the
"cautious" or "trustful" user-authentication models—the users
would have to decide if they indeed trust the provider enough to use
the service.
What next
The DIME specification also examines a number of possible attack
scenarios against the new system, and shows how DIME is designed to
cope with such attacks. Public scrutiny will, of course, be required
before most potential adopters consider implementing the
architecture. For now, even Lavabit and Silent Circle have not yet
announced any intention to deploy DIME-based mail services. When they
do so, no doubt the offerings will attract a great many users
interested in testing the system.
The other major dimension to any widespread roll-out scenario is
acceptance of the DIME architecture by some appropriate standards
body. Levison told
Ars Technica that he intends to pursue eventual IETF approval via a
set of RFCs. That will be a slow process, though, starting when he
begins " That said, there is clearly considerable interest within the
technology community for the additional protections that DIME offers
beyond existing email encryption systems. The government surveillance
revealed in the Snowden case alarmed many a software developer (and
regular citizen), but the law-enforcement chase that followed
it—particularly where it affected Lavabit and Silent
Circle—was, in many ways, an even bigger call to arms for
privacy advocates.
Plotting tools for Linux: gnuplot
Gnuplot is a program for creating plots, charts, and graphs that
runs on Linux as well as on a wide
variety of free and proprietary operating systems.
The purpose of a plot, in general, is to help to understand data or
functional relationships by representing them visually.
Some plotting programs, including gnuplot, may perform calculations and massage data,
which can also be convenient. Some data-plotting tools are complete solutions, standalone
programs that can be controlled through a command line, a GUI,
or both. Others exist as subsystems of various tools, or as
libraries available for a specific programming language.
This article will introduce a prominent example of the first
type. Gnuplot is one of the earliest open-source programs in wide use.
It's free enough to be packaged with Debian, for example, but has
an idiosyncratic license, with unusual
restrictions on how modifications to the source code may be distributed.
The name is not derived from the GNU project, with which it has no
particular relationship, but came about when the original authors, who had
decided on the name "newplot", discovered that this name was already
in use.
You may already be using gnuplot without knowing it. The
plotting facilities of Maxima, Octave, gretl, the Emacs graphing calculator, and statist,
for example, all use gnuplot. Most of gnuplot is written in C and is quite fast and memory-efficient.
Its output is highly customizable, and can be seen in a multitude of
scientific and technical publications. It's also a popular choice with
system administrators who want to generate graphs of server performance,
as it can be run from a script on a remote machine and forward its graphs
over X11, without having to transfer the usually voluminous data sets. The same
arrangement makes gnuplot useful for monitoring the progress of simulations
running on remote machines or clusters. Gnuplot has an interactive command-line prompt, can run script
files stored on disk, can be controlled through a socket connection
from any language, and has interfaces in everything from Fortran to Clojure.
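As a quick illustration of that kind of external control (a sketch in Python, though any language that can spawn a process and write to a pipe works the same way):

    # Drive gnuplot by writing commands to its standard input;
    # -persist keeps the plot window open after gnuplot exits.
    import subprocess

    gp = subprocess.Popen(['gnuplot', '-persist'],
                          stdin=subprocess.PIPE, text=True)
    gp.communicate("set grid\nplot sin(x)/x lw 2 title 'sinc'\n")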
There are also several GUI interfaces for gnuplot, including an Emacs mode,
but they are not widely used, since much of gnuplot's power arises from its
scriptability. Gnuplot is actively developed, with desirable new features added regularly.
Installation
If you have Octave or Maxima installed, then you already have gnuplot somewhere,
although you might not have a recent version. Binaries are probably available
from your distribution's package management system, but they are likely to lag approximately
one major version behind the shiniest. The solution is to follow the Download link from gnuplot headquarters
to get the source tarball of the latest stable release (or a pre-release
version if you can't live without some feature in development). A simple
./configure and make will get you a working gnuplot, but you
probably want to check for some dependencies first. Having the right packages installed before compiling gnuplot
will ensure that the resulting binary supports the "terminals"
that you want to use. In gnuplot land, a terminal is the form taken by the
output: either a file on disk or a (possibly interactive) display on the
screen. Gnuplot is famous for the long list of output formats that it
supports. You can create graphs using ASCII art on the console,
in a canvas on a web page, in various ways for LaTeX and ConTeXt,
as a rotatable, zoomable object in an X window,
for Tektronix terminals, for pen plotters, and much else, including
PostScript, EPS, PNG, SVG, and PDF. Support for most of this will happen without any special action
on your part. But you will want to make sure that you have compiled
in the highest quality, anti-aliased graphics formats, using the
Cairo libraries; this makes a noticeable difference in the quality
of the results. You will need to have the development libraries for
Cairo and Pango installed. On my Ubuntu laptop, installing
the packages libcairo2-dev and libpango1.0-dev is sufficient
for the latest stable (v. 4.6.6) gnuplot version. Pick up libwxgtk2.8-dev while
you're at it: it will add support for a wxWidgets interactive terminal
that's a higher quality alternative to the venerable X11 display.
Finally, if you envision using gnuplot with LaTeX, you might want the Lua
development package, which enables gnuplot's tikz terminal.
Using gnuplot
Gnuplot comes with
extensive help. For extra information about any of the commands used below, try typing
"help command" at the gnuplot interactive prompt. For more, try the
official documentation [PDF], the many examples on the web,
or the two books about gnuplot:
one by Philipp K. Janert and one by me.
The command stanzas here can be entered as shown at the gnuplot prompt or saved in
a file and executed with: gnuplot file. Here is how to plot a pair of curves:

    set title 'Bessel Functions of the First and Second Kinds'
    set samp 1000
    set xrange [-.05:20]
    set y2tics nomirror
    set ytics nomirror
    set ylabel 'Y0'
    set y2label 'J0'
    set grid
    plot besy0(x) axes x1y1 lw 2 title 'Y0', besj0(x) axes x1y2 lw 2 title 'J0'

The set ytics etc. commands create independent sets of tics and labels
on the two vertical axes. The final line illustrates the usual form of gnuplot's
2D plot command, and some of the program's support for special functions.
The axes parameters tell gnuplot what axis to associate with which curve,
lw is an abbreviation for "linewidth" (gnuplot's default is pretty thin), and
each curve has an individual title assigned, which is used in the automatically
generated legend. The sequence of colors used to distinguish the curves is chosen
automatically, but can, of course, be specified manually as well. Gnuplot also excels at all kinds of 3D plots. Here is a surface plot with contours
projected on the x-y plane. There is a vector field embedded in the surface as well.

    set samp 200
    set iso 100
    set xrange [-4:4]
    set yrange [-4:4]
    set hidd front
    set view 45, 75
    set ztics .5
    set key off
    set contour base
    set style arrow 1 filled lw 3 lc 'black'
    f(x,y) = x**2+y**2 < 2.0 ? x**2+y**2 > 0.5 ? besj0(x**2+y**2) : NaN : NaN
    splot besj0(x**2+y**2), '++' using 1:2:(f($1,$2)):\
    ( -.5*sin(atan2($2,$1)) ):( .5*cos(atan2($2,$1)) ):(0)\
    every 4:2 w vec as 1

The set hidd front command has the effect of making the surface
opaque to itself but transparent to the other elements in the plot. The
set style command is an example of gnuplot's commands for defining
detailed styles for lines, arrows, and anything else that can be made into
a plot element. After this command is entered, arrowstyle 1 (or as 1)
can be referred to wherever we want a black arrow with a filled arrowhead. This script defines a function, f(x,y), using gnuplot's
ternary notation (with an embedded ternary form to implement two conditions)
in concert with NaNs, to skip a range of coordinates when
plotting. The function is used on the following line to plot the vector
field over only part of the surface. Two additional details may be worth noting in this example. First, in gnuplot, NaN (for "not a number") is a special value that you
can use in conditional statements where you want to disable plotting,
as we did here. You can also use "1/0" and some other undefined
values, but using NaN makes the code easier to understand. Second,
gnuplot's ternary notation is borrowed from C. In the statement

    A ? B : C

B will be executed if A is true, otherwise C
will be executed. In order to have two conditions, as we have here, B
needs to be replaced by another ternary statement. The splot command is the 3D version of plot.
The part before the comma plots our Bessel function again, this time
as a surface depending on x and y.
The rest of it plots the vector field of a circular flow as an array of arrows
originating on the surface. Vector plotting uses gnuplot's data graphing syntax,
which refers to columns of data ($1 and $2 instead of x
and y). There are six components per vector, for the three spatial
coordinates on each side of the arrow. Finally, the every clause
skips some grid points to avoid crowding, and we invoke our defined arrow style at the end.
LaTeX support
Gnuplot can integrate with the LaTeX document processing system in several ways.
Most of these allow gnuplot to calculate and draw the graphic elements
while handing off the typesetting of any text within the plot
(including, of course, mathematical expressions) to LaTeX.
This is desirable because, first, TeX's typesetting algorithms produce superior
results, and, second, the labels that are typeset as part of the graph will
harmonize with the text of the paper in which it is embedded. The results
look like the figure here, which is a brief excerpt from an imaginary math textbook. Notice that the fonts used in the figure labels and the text in the paragraph
are the same — everything is typeset by LaTeX (even the numbers on the axes). There is a two-step procedure to produce this result. First, we create the figure
in gnuplot, using the cairolatex terminal:

    set term cairolatex pdf
    set out 'fig3.tex'
    set samp 1000
    set xrange [-4:4]
    set key off
    set label 1 '\huge$\frac{1}{\sqrt{2\pi}\sigma}\,e^{-\frac{x^2}{2\sigma^2}}$' at -3.5,.34
    set label 2 '\Large$\sigma = 1$' at 0.95,.3
    set label 3 '\Large$\sigma = 2$' at 2.7,.1
    plot for [s=1:2] exp(-x**2/(2*s**2))/(s*sqrt(2*pi)) lw 3
    set out

We've used LaTeX syntax for the labels. Running this through gnuplot
creates a file called fig3.tex, which we include in the LaTeX
document, listed in the Appendix. The final step is to process the document with pdflatex.
This is just one of several workflows for integrating gnuplot with LaTeX.
If you use tikz to draw diagrams in your LaTeX documents,
for example, you can extend it with calls to gnuplot from within the
tikz commands. Gnuplot and LaTeX share a family resemblance. They are both early
open-source programs that demand a certain amount of effort on the part of the user
to achieve the desired
results, but that repay that effort handsomely. They're both popular
with scientists and other authors of technical publications. Both programs
are unusually well documented, by both their creators and a cadre
of third parties. And both systems, originating in an era of more anemic
hardware, do a great deal with a modest amount of machine memory.
Gnuplot has a good reputation for the ability to plot large data files
that cause most other plotting programs to crash or exhaust the available
RAM.
Analysis
Gnuplot can do more than just plot data and functions. It can perform
several types of data analysis and smoothing — nothing like a specialized
statistics platform, but enough to fit functions or plot a smoothed curve
through noisy data. To illustrate, we first need to create some noisy data.
The Appendix contains a
little Python program that will write the coordinates of a Gaussian curve
to a file, called rn.dat, with some pseudorandom noise added to the ordinates. Suppose we are presented with this data and we want to fit a function
to it. Since it looks bell-shaped to us, we'll attempt to fit a Gaussian.
That kind of curve has two parameters, its amplitude and its width, or
standard deviation. We could write a program to search the parameter
space of these two numbers to optimize the fit of the curve to the data,
or we could ask gnuplot to do it for us.
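The Appendix itself is not reproduced here, but a minimal stand-in that writes a suitable rn.dat might look like this:

    # A stand-in for the article's Appendix program: a Gaussian curve
    # with pseudorandom noise added to the ordinates, written to rn.dat.
    import math
    import random

    with open('rn.dat', 'w') as f:
        for i in range(-300, 301):
            x = i / 100
            y = math.exp(-x**2 / 2) + random.uniform(-0.05, 0.05)
            f.write(f"{x} {y}\n")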
Gnuplot's built-in fitting routine is invoked like this:

    fit a*exp(-b*x**2) 'rn.dat' via a,b

After that command is entered at gnuplot's interactive prompt, it
will return its best guess for the free parameters a
and b, as well as its confidence in its estimates. It also
remembers the estimated values, so we can plot the fit function
on top of the data:

    plot 'rn.dat' pointtype 7, a*exp(-b*x**2) lw 5 lc 'black'

This command plots the fitted curve over a scatterplot of the data.
The pointtype specifier selects the style
of marker used in the scatterplot of the data. There is a different
list for every terminal type, which you can see by typing
test at the gnuplot prompt. We've selected a thick line width
(lw 5) and a black line color (lc 'black'). Gnuplot is endowed with some simple language constructs providing
blocks, loops, and conditional execution. This is enough to do
significant calculation without having to resort to external
programs. Using looping, you can create animations on the screen.
Try the following gnuplot script to get a rotating surface plot:

    set term wxt persist
    set yr [-pi:pi]
    set xr [-pi:pi]
    end = 200.0
    do for [a=1:end] {set view 70, 90*(a/end); splot cos(x)+sin(y); pause 0.1}

The first line tells gnuplot not to delete the window after the
script is complete, which it will otherwise do if these commands are
not run interactively. The last line contains the loop that creates
the animation. The pause command adds a tenth of a second
delay between each frame.
Conclusion
Gnuplot in the wild is not a rare encounter.
Its output can be found in
many of the math and science entries on Wikipedia; my article about calculating Fibonacci numbers;
the book Mechanics by Somnath Datta, an example of a complex text with
closely integrated intricate plots, using LaTeX and gnuplot; the book Modeling with Data: Tools and Techniques for Scientific Computing
by Ben Klemens, using gnuplot’s latex terminals;
and the free online text Computational Physics
by Konstantinos Anagnostopoulos, just to give a few examples.
In the system administrator field, check out the articles on benchmarking Apache,
graphing performance statistics on Solaris,
and using gnuplot with Dstat.
Gnuplot is a good choice if you have large data sets, if you prefer a
language-agnostic solution, if you need to automate your graphing,
and especially if you use LaTeX.
Security
Docker image "verification"
One might be forgiven for expecting that a message stating that a download has been "verified" would actually indicate some kind of verification. But, as Jonathan Rudenberg discovered, getting that message when downloading a Docker image is, at best, misleading—at worst it is flat-out wrong. Worse still, perhaps, a definitely corrupted image file is only supposed to provoke a warning, and Rudenberg was unable to make even that happen. All told, his post should serve as an eye-opener for those Docker users who are concerned about the security of the images they run.
After downloading an official container image using the Docker tools, Rudenberg saw
the following message: "ubuntu:14.04: The image you are pulling has
been verified
". At the time, he believed it was the result of a
feature described
in the Docker 1.3 release announcement, which touted a "tech
preview" of digital-signature verification for images. Subsequently, however, he had
reason to look a bit deeper and was not impressed with what he found:
Docker’s report that a downloaded image is “verified” is based solely on the presence of a signed manifest, and Docker never verifies the image checksum from the manifest. An attacker could provide any image alongside a signed manifest. This opens the door to a number of serious vulnerabilities.
Beyond that, the processing pipeline for images also suffers from a number of flaws: it does three separate processing steps using the unverified (potentially malicious) image. To begin with, the image is decompressed using one of three different algorithms: gzip, bzip2, or xz. The first two use the memory-safe Go language library routines, which should provide resilience against code-execution flaws, he said, but xz decompression is a different story.
To decompress an image that uses the xz algorithm, Docker spawns the xz binary, as root. That binary is written in C, thus it does not have any of the memory safety provided by Go, so it could well be vulnerable to (unknown) code-execution vulnerabilities. That means that a simple "docker pull" command could potentially lead to full system compromise, which is probably not quite what the user expected.
Docker uses TarSum to deterministically generate a checksum/hash from a tar file, but doing so means that the tar file must be decoded. The program calculates a hash for specific portions of the tar file, but that is done before any verification step. So an attacker-controlled tar file could potentially exploit a TarSum vulnerability to evade the hashing process. That might allow additions or subtractions to a tar file without changing its TarSum-calculated hash.
The final step in the processing pipeline is to unpack the tar file into the "proper" location. Once again, this is done pre-verification, so any path traversal or other vulnerability in the unpacking code (Rudenberg points to three vulnerabilities that have already been found there) could be exploited. All three of those problems could be alleviated by verifying the entire image before processing it.
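A minimal sketch of that verify-first approach (illustrative Python, not Docker's actual code) hashes the raw download against a digest from a properly verified manifest before any decompression or unpacking runs:

    # No unverified bytes ever reach the decompression or tar-extraction
    # code paths; the trusted digest is assumed to arrive out of band.
    import hashlib, io, lzma, tarfile

    def unpack_verified(blob: bytes, trusted_sha256: str, dest: str) -> None:
        if hashlib.sha256(blob).hexdigest() != trusted_sha256:
            raise ValueError("digest mismatch; refusing to process image")
        with tarfile.open(fileobj=io.BytesIO(lzma.decompress(blob))) as tar:
            # The 'data' filter (Python 3.12+) rejects absolute paths and
            # path traversal during extraction.
            tar.extractall(dest, filter='data')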
Unfortunately, even after those three processing steps have been done, Docker does not actually verify much of anything before emitting its "verified" message. In fact, Rudenberg reported that the presence of a signed manifest that passes libtrust muster is enough to trigger the message. No checking is done to see if the manifest corresponds to the rest of the image. In addition, the public key that is used to sign the manifest is retrieved each time an image is pulled, rather than provided as part of the Docker tool suite, for example.
Overall, the image-verification feature is, so far, sloppy work that is likely to mislead Docker users. In a thread on Hacker News, Docker founder and CTO Solomon Hykes complained that Rudenberg's analysis did not quote the "work in progress" disclaimer in the Docker announcement. Notably, though, he did not argue with any of the technical points made in the analysis.
Rudenberg made several suggestions for improving Docker image verification in the post. Verifying the entirety of the image, rather than just parts using TarSum, is one. Another is to employ privilege separation so that tasks like decompression are not run as root. Furthermore, he suggested adopting The Update Framework rather than using the largely undocumented libtrust for signature verification.
Perhaps the biggest mistake made by Docker here was to enable the feature by default when it was clearly not even close to ready. As pointed out by Red Hat, there are other ways to get Docker images that are more secure, so just avoiding the docker pull command until image verification is fully baked may be the right course for security-conscious users.
Brief items
Security quotes of the week
If the “I CAN'T LET YOU DO THAT, DAVE” message is being generated by a program on your desktop labeled HAL9000.exe, you will certainly drag that program into the trash. If your computer's list of running programs shows HAL9000.exe lurking in the background like an immigration agent prowling an arrivals hall, looking for sneaky cell phone users to shout at, you will terminate that process with a satisfied click.
So the only way to sustain HAL9000.exe and its brethren—the programs that today keep you from installing non-App Store apps on your iPhone and tomorrow will try to stop you from printing gun.stl on your 3-D printer—is to design the computer to hide them from you. And that creates vulnerabilities that make your computer susceptible to malicious hacking.
The Darkmail Internet Mail Environment
From Phillip Zimmermann and Ladar Levison (among others) comes the Darkmail Internet Mail Environment, an attempt to replace SMTP with a more secure protocol. It has a 108-page specification [PDF] for those wanting details, and code is available on GitHub. "In addition to the usual protection of content, a design goal for secure email must be to limit what meta-information is disclosed so that a handling agent only has access to the information it needs to see. The Dark Internet Mail Environment (DIME) achieves this with a core model having multiple layers of key management and multiple layers of message encryption."
New vulnerabilities
apache: mis-handling of Require directives
Package(s): apache2
CVE #(s): CVE-2014-8109
Created: December 29, 2014
Updated: March 16, 2015
Description: From the openSUSE advisory:
Fixes handling of the Require line when a LuaAuthzProvider is used in multiple Require directives with different arguments.
asterisk: multiple vulnerabilities
Package(s): asterisk
CVE #(s): CVE-2014-8412 CVE-2014-8414 CVE-2014-8417 CVE-2014-8418 CVE-2014-9374
Created: December 29, 2014
Updated: January 9, 2015
Description: From the CVE entries:
The (1) VoIP channel drivers, (2) DUNDi, and (3) Asterisk Manager Interface (AMI) in Asterisk Open Source 1.8.x before 1.8.32.1, 11.x before 11.14.1, 12.x before 12.7.1, and 13.x before 13.0.1 and Certified Asterisk 1.8.28 before 1.8.28-cert3 and 11.6 before 11.6-cert8 allows remote attackers to bypass the ACL restrictions via a packet with a source IP that does not share the address family as the first ACL entry. (CVE-2014-8412)
ConfBridge in Asterisk 11.x before 11.14.1 and Certified Asterisk 11.6 before 11.6-cert8 does not properly handle state changes, which allows remote attackers to cause a denial of service (channel hang and memory consumption) by causing transitions to be delayed, which triggers a state change from hung up to waiting for media. (CVE-2014-8414)
ConfBridge in Asterisk 11.x before 11.14.1, 12.x before 12.7.1, and 13.x before 13.0.1 and Certified Asterisk 11.6 before 11.6-cert8 allows remote authenticated users to (1) gain privileges via vectors related to an external protocol to the CONFBRIDGE dialplan function or (2) execute arbitrary system commands via a crafted ConfbridgeStartRecord AMI action. (CVE-2014-8417)
The DB dialplan function in Asterisk Open Source 1.8.x before 1.8.32, 11.x before 11.1.4.1, 12.x before 12.7.1, and 13.x before 13.0.1 and Certified Asterisk 1.8 before 1.8.28-cert8 and 11.6 before 11.6-cert8 allows remote authenticated users to gain privileges via a call from an external protocol, as demonstrated by the AMI protocol. (CVE-2014-8418)
Double free vulnerability in the WebSocket Server (res_http_websocket module) in Asterisk Open Source 11.x before 11.14.2, 12.x before 12.7.2, and 13.x before 13.0.2 and Certified Asterisk 11.6 before 11.6-cert9 allows remote attackers to cause a denial of service (crash) by sending a zero length frame after a non-zero length frame. (CVE-2014-9374)
cgmanager: information disclosure
Package(s): cgmanager
CVE #(s): CVE-2014-1425
Created: January 6, 2015
Updated: January 7, 2015
Description: From the Ubuntu advisory:
cgmanager could be made to expose sensitive information or devices to containers running on the system.
cxf: denial of service
Package(s): cxf
CVE #(s): CVE-2014-3584
Created: December 31, 2014
Updated: January 7, 2015
Description: From the CVE entry:
The SamlHeaderInHandler in Apache CXF before 2.6.11, 2.7.x before 2.7.8, and 3.0.x before 3.0.1 allows remote attackers to cause a denial of service (infinite loop) via a crafted SAML token in the authorization header of a request to a JAX-RS service.
ettercap: denial of service
Package(s): ettercap
CVE #(s): CVE-2014-9380 CVE-2014-9381
Created: December 30, 2014
Updated: March 27, 2015
Description: From the CVE entries:
The dissector_cvs function in dissectors/ec_cvs.c in Ettercap 8.1 allows remote attackers to cause a denial of service (out-of-bounds read) via a packet containing only a CVS_LOGIN signature. (CVE-2014-9380)
Integer signedness error in the dissector_cvs function in dissectors/ec_cvs.c in Ettercap 8.1 allows remote attackers to cause a denial of service (crash) via a crafted password, which triggers a large memory allocation. (CVE-2014-9381)
ettercap: multiple vulnerabilities
Package(s): ettercap
CVE #(s): CVE-2014-6396 CVE-2014-6395 CVE-2014-9377 CVE-2014-9376 CVE-2014-9379 CVE-2014-9378
Created: January 5, 2015
Updated: March 27, 2015
Description: From the CVE entries:
The dissector_postgresql function in dissectors/ec_postgresql.c in Ettercap before 8.1 allows remote attackers to cause a denial of service and possibly execute arbitrary code via a crafted password length, which triggers a 0 character to be written to an arbitrary memory location. (CVE-2014-6396)
Heap-based buffer overflow in the dissector_postgresql function in dissectors/ec_postgresql.c in Ettercap before 8.1 allows remote attackers to cause a denial of service or possibly execute arbitrary code via a crafted password length value that is inconsistent with the actual length of the password. (CVE-2014-6395)
Heap-based buffer overflow in the nbns_spoof function in plug-ins/nbns_spoof/nbns_spoof.c in Ettercap 8.1 allows remote attackers to cause a denial of service or possibly execute arbitrary code via a large netbios packet. (CVE-2014-9377)
Integer underflow in Ettercap 8.1 allows remote attackers to cause a denial of service (out-of-bounds write) and possibly execute arbitrary code via a small (1) size variable value in the dissector_dhcp function in dissectors/ec_dhcp.c, (2) length value to the dissector_gg function in dissectors/ec_gg.c, or (3) string length to the get_decode_len function in ec_utils.c or a request without a (4) username or (5) password to the dissector_TN3270 function in dissectors/ec_TN3270.c. (CVE-2014-9376)
The radius_get_attribute function in dissectors/ec_radius.c in Ettercap 8.1 performs an incorrect cast, which allows remote attackers to cause a denial of service (crash) or possibly execute arbitrary code via unspecified vectors, which triggers a stack-based buffer overflow. (CVE-2014-9379)
Ettercap 8.1 does not validate certain return values, which allows remote attackers to cause a denial of service (crash) or possibly execute arbitrary code via a crafted (1) name to the parse_line function in mdns_spoof/mdns_spoof.c or (2) base64 encoded password to the dissector_imap function in dissectors/ec_imap.c. (CVE-2014-9378)
glpi: SQL injection
Package(s): glpi
CVE #(s): CVE-2014-9258
Created: January 2, 2015
Updated: January 12, 2015
Description: From the CVE entry:
SQL injection vulnerability in ajax/getDropdownValue.php in GLPI before 0.85.1 allows remote authenticated users to execute arbitrary SQL commands via the condition parameter.
kernel: two vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-9419 CVE-2014-9420
Created: January 7, 2015
Updated: January 13, 2015
Description: From the CVE entries:
The __switch_to function in arch/x86/kernel/process_64.c in the Linux kernel through 3.18.1 does not ensure that Thread Local Storage (TLS) descriptors are loaded before proceeding with other steps, which makes it easier for local users to bypass the ASLR protection mechanism via a crafted application that reads a TLS base address. (CVE-2014-9419)
The rock_continue function in fs/isofs/rock.c in the Linux kernel through 3.18.1 does not restrict the number of Rock Ridge continuation entries, which allows local users to cause a denial of service (infinite loop, and system crash or hang) via a crafted iso9660 image. (CVE-2014-9420)
libevent: denial of service
Package(s): libevent
CVE #(s): CVE-2014-6272
Created: January 6, 2015
Updated: March 28, 2016
Description: From the Debian advisory:
Andrew Bartlett of Catalyst reported a defect affecting certain applications using the Libevent evbuffer API. This defect leaves applications which pass insanely large inputs to evbuffers open to a possible heap overflow or infinite loop. In order to exploit this flaw, an attacker needs to be able to find a way to provoke the program into trying to make a buffer chunk larger than what will fit into a single size_t or off_t.
libpng: memory overwrite
Package(s): libpng
CVE #(s): CVE-2014-9495
Created: January 7, 2015
Updated: March 9, 2015
Description: From the Mageia advisory:
libpng versions 1.6.9 through 1.6.15 have an integer-overflow vulnerability in png_combine_row() when decoding very wide interlaced images, which can allow an attacker to overwrite an arbitrary amount of memory with arbitrary (attacker-controlled) data.
libreoffice: denial of service
Package(s): libreoffice
CVE #(s): CVE-2014-9093
Created: December 29, 2014
Updated: February 20, 2015
Description: From the CVE entry:
LibreOffice before 4.3.5 allows remote attackers to cause a denial of service (invalid write operation and crash) and possibly execute arbitrary code via a crafted RTF file.
libssh: denial of service
Package(s): libssh
CVE #(s): CVE-2014-8132
Created: January 5, 2015
Updated: January 19, 2015
Description: From the CVE entry:
Double free vulnerability in the ssh_packet_kexinit function in kex.c in libssh 0.5.x and 0.6.x before 0.6.4 allows remote attackers to cause a denial of service via a crafted kexinit packet.
libvirt: three denial of service flaws
Package(s): libvirt
CVE #(s): CVE-2014-8131 CVE-2014-8135 CVE-2014-8136
Created: December 25, 2014
Updated: February 17, 2015
Description: From the Debian security tracker entry:
CVE-2014-8131: deadlock and segfault in qemuConnectGetAllDomainStats.
CVE-2014-8135: From the CVE entry: The storageVolUpload function in storage/storage_driver.c in libvirt does not check a certain return value, which allows local users to cause a denial of service (NULL pointer dereference and daemon crash) via a crafted offset value in a "virsh vol-upload" command.
CVE-2014-8136: From the CVE entry: The (1) qemuDomainMigratePerform and (2) qemuDomainMigrateFinish2 functions in qemu/qemu_driver.c in libvirt do not unlock the domain when an ACL check fails, which allow local users to cause a denial of service via unspecified vectors.
mantis: multiple vulnerabilities
Package(s): mantis
CVE #(s): CVE-2014-8553 CVE-2014-8986 CVE-2014-8988 CVE-2014-9269 CVE-2014-9270 CVE-2014-9271 CVE-2014-9272 CVE-2014-9281 CVE-2014-9388
Created: January 7, 2015
Updated: January 7, 2015
Description: From the Debian advisory:
Multiple security issues have been found in the Mantis bug tracking system, which may result in phishing, information disclosure, CAPTCHA bypass, SQL injection, cross-site scripting or the execution of arbitrary PHP code.
mime-support: code execution
Package(s): mime-support
CVE #(s): CVE-2014-7209
Created: December 29, 2014
Updated: January 8, 2015
Description: From the Debian advisory:
Timothy D. Morgan discovered that run-mailcap, an utility to execute programs via entries in the mailcap file, is prone to shell command injection via shell meta-characters in filenames. In specific scenarios this flaw could allow an attacker to remotely execute arbitrary code.
nvidia: code execution
Package(s): nvidia
CVE #(s): CVE-2014-8298
Created: January 7, 2015
Updated: January 7, 2015
Description: From the CVE entry:
The NVIDIA Linux Discrete GPU drivers before R304.125, R331.x before R331.113, R340.x before R340.65, R343.x before R343.36, and R346.x before R346.22, Linux for Tegra (L4T) driver before R21.2, and Chrome OS driver before R40 allows remote attackers to cause a denial of service (segmentation fault and X server crash) or possibly execute arbitrary code via a crafted GLX indirect rendering protocol request.
openvas-manager: SQL injection
Package(s): openvas-manager
CVE #(s): CVE-2014-9220
Created: January 6, 2015
Updated: July 14, 2015
Description: From the Mageia advisory:
It has been identified that OpenVAS Manager before 4.0.6 is vulnerable to sql injections due to a improper handling of the timezone parameter in modify_schedule OMP command. It has been identified that this vulnerability may allow read-access via sql for authorized user account which have permission to modify schedule objects.
privoxy: two vulnerabilities
Package(s): privoxy
CVE #(s):
Created: January 6, 2015
Updated: January 7, 2015
Description: From the Mageia advisory:
A memory leak occurred in privoxy 3.0.21 compiled with IPv6 support when rejecting client connections due to the socket limit being reached. (CID 66382)
A use-after-free bug was found in privoxy 3.0.21 and two additional potential use-after-free issues were detected by Coverity scan. (CID 66394, CID 66376, CID 66391)
See the Privoxy changelog for details.
python-django-horizon: denial of service
Package(s): python-django-horizon
CVE #(s): CVE-2014-8124
Created: January 5, 2015
Updated: January 7, 2015
Description: From the CVE entry:
OpenStack Dashboard (Horizon) before 2014.1.3 and 2014.2.x before 2014.2.1 does not properly handle session records when using a db or memcached session engine, which allows remote attackers to cause a denial of service via a large number of requests to the login page.
python-pip: denial of service
Package(s): python-pip
CVE #(s): CVE-2014-8991
Created: January 6, 2015
Updated: January 15, 2015
Description: From the CVE request:
There is a local DoS in pip 1.3, 1.3.1, 1.4, 1.4.1, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, and 1.5.6. In an attempt to fix CVE-2013-1888 pip modified it's build directories from pip-build to pip-build-<username> and added in checks that would ensure that only a directory owned by the current user would be used. However because the build directory is predictable a local DoS is possible simply by creating a /tmp/pip-build-<username>/ directory owned by someone other than the defined user. This issue has also been reported to the Debian bug tracker as https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=725847.
strongswan: denial of service
Package(s): strongswan
CVE #(s): CVE-2014-9221
Created: January 5, 2015
Updated: August 19, 2015
Description: From the Debian advisory:
Mike Daskalakis reported a denial of service vulnerability in charon, the IKEv2 daemon for strongSwan, an IKE/IPsec suite used to establish IPsec protected links. The bug can be triggered by an IKEv2 Key Exchange (KE) payload that contains the Diffie-Hellman (DH) group 1025. This identifier is from the private-use range and only used internally by libtls for DH groups with custom generator and prime (MODP_CUSTOM). As such the instantiated method expects that these two values are passed to the constructor. This is not the case when a DH object is created based on the group in the KE payload. Therefore, an invalid pointer is dereferenced later, which causes a segmentation fault. This means that the charon daemon can be crashed with a single IKE_SA_INIT message containing such a KE payload. The starter process should restart the daemon after that, but this might increase load on the system. Remote code execution is not possible due to this issue, nor is IKEv1 affected in charon or pluto.
torque: two vulnerabilities
Package(s): torque
CVE #(s): CVE-2011-2907 CVE-2011-4925
Created: December 29, 2014
Updated: January 7, 2015
Description: From the CVE entries:
Terascale Open-Source Resource and Queue Manager (aka TORQUE Resource Manager) 3.0.1 and earlier allows remote attackers to bypass host-based authentication and submit arbitrary jobs via a modified PBS_O_HOST variable to the qsub program. (CVE-2011-2907)
Terascale Open-Source Resource and Queue Manager (aka TORQUE Resource Manager) before 2.5.9, when munge authentication is used, allows remote authenticated users to impersonate arbitrary user accounts via unspecified vectors. (CVE-2011-4925)
unzip: code execution
Package(s): unzip
CVE #(s): CVE-2014-8139 CVE-2014-8140 CVE-2014-8141
Created: December 29, 2014
Updated: March 29, 2015
Description: From the Debian advisory:
Michele Spagnuolo of the Google Security Team discovered that unzip, an extraction utility for archives compressed in .zip format, is affected by heap-based buffer overflows within the CRC32 verification function (CVE-2014-8139), the test_compr_eb() function (CVE-2014-8140) and the getZip64Data() function (CVE-2014-8141), which may lead to the execution of arbitrary code.
webmin: malicious symlinks
Package(s): webmin
CVE #(s): CVE-2015-1377
Created: January 7, 2015
Updated: January 27, 2015
Description: From the Mageia advisory:
The webmin package has been updated to version 1.730 to fix possible security issues that could be caused by malicious symlinks when reading mail.
xlockmore: X error
Package(s): xlockmore
CVE #(s): (none assigned)
Created: December 29, 2014
Updated: January 10, 2015
Description: From the Mageia advisory:
xlockmore before 5.45 contains a security flaw related to a bad value of fnt for pyro2 which could cause an X error. This update backports the fix for version 5.43.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.19-rc3, released on January 5. "It's a day delayed - not because of any particular development issues, but simply because I was tiling a bathroom yesterday. But rc3 is out there now, and things have stayed reasonably calm. I really hope that implies that 3.19 is looking good, but it's equally likely that it's just that people are still recovering from the holiday season."
3.19-rc2 was released, with a minimal set of changes, on December 28.
Stable updates: there have been no stable updates released in the last two weeks. As of this writing, the 3.10.64, 3.14.28, 3.17.8, and 3.18.2 updates are in the review process; they can be expected on or after January 9. Note that 3.17.8 will be the final update in the 3.17 series.
Quotes of the week
Kernel development news
Haunted by ancient history
Kernel development policy famously states that changes are not allowed to break user-space programs; any patch that does break things will be reverted. That policy has been put to the test over the last week, when two such changes were backed out of the mainline repository. These actions demonstrate that the kernel developers are serious about the no-regressions policy, but they also show what's involved in actually living up to such a policy.
The ghost of wireless extensions
Back in the dark days before the turn of the century, support for wireless networking in the kernel was minimal at best. The drivers that did exist mostly tried to make wireless adapters look like Ethernet cards with a few extra parameters. After a while, those parameters were standardized, after a fashion, behind the "wireless extensions" interface. This ioctl()-based interface was never well loved, but it did the job for some years until the developers painted themselves into a corner in 2006. Conflicting compatibility issues brought development of that API to a close; the good news was that there was already a plan to supersede it with the then under-development nl80211 API.
Years later, nl80211 is the standard interface to the wireless subsystem. The wireless extensions, which are now just a compatibility interface over nl80211, have been deprecated for years, and the relevant developers would like to be rid of them entirely. So it was perhaps unsurprising to see a patch merged for 3.19 that took away the ability to configure the wireless extensions into the kernel.
Equally unsurprising, though, was the flurry of complaints that came shortly thereafter. It seems that the wicd network manager still uses the wireless extensions API. But, perhaps more importantly, the user-space tools (iwconfig for example) that were part of the wireless extensions still use it — and they, themselves, are still in use in countless scripts. So this change looked set to break quite a few systems. As a result, Jiri Kosina posted a patch reverting the change and Linus accepted it immediately.
There were complaints from developers that users will never move away from the old commands on their own, and that some pushing is required. But it is not the place of the kernel to do that pushing. A better approach, as Ted Ts'o suggested, would be:
Such an approach would avoid breaking user scripts. But it would still take a long time before all users of the old API would have moved over, so the kernel is stuck with supporting the wireless extensions API into the 2020s.
Bogomips
Rather older than the wireless extensions is the concept of "bogomips," an estimation of processor speed used in (some versions of) the kernel for short delay loops. The bogomips value printed during boot (and found in /proc/cpuinfo) is only loosely correlated with the actual performance of the processor, but people like to compare bogomips values anyway. It seems that some user-space code uses the bogomips value for its own purposes as well.
If bogomips deserved the "bogo" part of the name back in the beginning, it has only become more deserving over time. Features like voltage and frequency scaling will cause a processor's actual performance to vary over time. The calculated bogomips value can differ significantly depending on how successful the processor is in doing branch prediction while running the calibration loop. Heterogeneous processors make the situation even more complicated. For all of these reasons, the actual use of the bogomips value in the kernel has been declining over time.
The ARM architecture code, on reasonably current processors, does not use that value at all, preferring to poll a high-resolution timer instead. On some subarchitectures the calculated bogomips value differed considerably from what some users thought was right, leading to complaints. In response, the ARM developers decided to simply remove the bogomips value from /proc/cpuinfo entirely. The patch was accepted for the 3.12 release in 2013.
Nearly a year and a half later, Pavel Machek complained that the change broke pyaudio on his system. Noting that others had complained as well, he posted a patch reverting the change. It was, he said, a user-space regression and, thus, contrary to kernel policy.
Reverting this change was not a popular idea in the ARM camp; Nicolas Pitre tried to block it, saying that "No setups actually relying on this completely phony bogomips value bearing no links to hardware reality could have been qualified as 'working'." Linus was unsympathetic, though, saying that regressions were not to be tolerated and that "The kernel serves user space. That's what we do." The change was duly reverted; ARM kernels starting with 3.19 will export a bogomips value again; one assumes the change will make it into the stable tree as well.
That still leaves the little problem that the bogomips value calculated on current ARM CPUs violates user expectations; people wonder when their shiny new CPU shows as having 6.0 bogomips. Even ARM systems are expected to be faster than that. The problem, according to Nicolas, is that a constant calculated to help with the timer-based delay loops was being stored as the bogomips value; the traditional bogomips value was no longer calculated at all. There is no real reason, he said, to conflate those two values. So he has posted a patch causing bogomips to be calculated by timing the execution of a tight "do-nothing" loop — the way it was done in the beginning.
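To make the idea concrete, here is a minimal user-space sketch of a bogomips-style calibration: time a tight do-nothing loop and report how many million iterations per second it manages. This is an illustration only, not the kernel's calibrate_delay() code (which counts loops per timer tick), and its output is every bit as bogus as the name suggests:

    /* bogoloop.c: user-space sketch of a bogomips-style measurement.
     * Build with: cc -O2 -o bogoloop bogoloop.c */
    #include <stdio.h>
    #include <time.h>

    static void __attribute__((noinline)) delay_loop(unsigned long loops)
    {
        volatile unsigned long i;

        for (i = 0; i < loops; i++)
            ;   /* do nothing; just burn cycles */
    }

    int main(void)
    {
        unsigned long loops = 1UL << 20;
        struct timespec start, end;
        double elapsed;

        /* Double the loop count until a run takes at least 100ms,
           so that timer resolution does not dominate the result. */
        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &start);
            delay_loop(loops);
            clock_gettime(CLOCK_MONOTONIC, &end);
            elapsed = (end.tv_sec - start.tv_sec) +
                      (end.tv_nsec - start.tv_nsec) / 1e9;
            if (elapsed >= 0.1)
                break;
            loops <<= 1;
        }

        printf("~%.1f million loop iterations per second\n",
               loops / elapsed / 1e6);
        return 0;
    }

Run it a few times on a laptop and the numbers will bounce around with frequency scaling and branch-prediction luck — which is exactly the problem described above.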
The bogomips value has long since outlived its value for the kernel itself. It is calculated solely for user space, and, even there, its value is marginal at best. As Alan Cox put it, bogomips is mostly printed "for the user so they can copy it to tweet about how neat their new PC is". But, since some software depends on its presence, the kernel must continue to provide this silly number despite the fact that it reflects reality poorly at best. Even a useless number has value if it keeps programs from breaking.
The problem with nested sleeping primitives
Waiting for events in an operating system is an activity that is fraught with hazards; without a great deal of care, it is easy to miss the event that is being waited for. The result can be an infinite wait — an outcome which tends to be unpopular with users. The kernel has long since buried the relevant code in the core kernel with the idea that, with the right API, wait-related race conditions can be avoided. Recent experience shows, though, that the situation is not always quite that simple.

Many years ago, kernel code that needed to wait for an event would execute something like this:
    while (!condition)
        sleep_on(&wait_queue);
The problem with this code is that, should the condition become true between the test in the while loop and the call to sleep_on(), the wakeup could be lost and the sleep would last forever. For this reason, sleep_on() was deprecated for a long time and no longer exists in the kernel.
The contemporary pattern looks more like this:
    DEFINE_WAIT(wait);

    while (1) {
        prepare_to_wait(&queue, &wait, state);
        if (condition)
            break;
        schedule();
    }
    finish_wait(&queue, &wait);
Here, prepare_to_wait() will enqueue the thread on the given queue and put it into the given execution state, which is usually either TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE. Normally, that state will cause the thread to block once it calls schedule(). If the wakeup happens first, though, the process state will be set back to TASK_RUNNING and schedule() will return immediately (or, at least, as soon as it decides this thread should run again). So, regardless of the timing of events, this code should work properly. The numerous variants of the wait_event() macro expand into a similar sequence of calls.
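As a hypothetical example of the macro form, a driver waiting for incoming data might write something like the following; dev->wq and dev->data_ready are invented names for illustration:

    int ret;

    /* Sleeping side: wait_event_interruptible() expands into a
     * prepare_to_wait()/schedule()/finish_wait() loop like the one
     * shown above.  It returns zero once dev->data_ready is true,
     * or -ERESTARTSYS if a signal arrives first. */
    ret = wait_event_interruptible(dev->wq, dev->data_ready);

    /* Waking side, perhaps in an interrupt handler: */
    dev->data_ready = true;
    wake_up(&dev->wq);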
Signs of trouble can be found in messages like the following, which are turning up on systems running the 3.19-rc kernels:
do not call blocking ops when !TASK_RUNNING; state=1 set at [<ffffffff910a0f7a>] prepare_to_wait+0x2a/0x90
This message, the result of some new checks added for 3.19, is indicating that a thread is performing an action that could block while it is ostensibly already in a sleeping state. One might wonder how that can be, but it is not that hard to understand in the light of the sleeping code above.
The "condition" checked in that code is often a function call; that function may perform a fair amount of processing on its own. It may need to acquire locks to properly check for the wakeup condition. That, of course, is where the trouble comes in. Should the condition-checking function call something like mutex_lock(), it will go into a new version of the going-to-sleep code, changing the task state. That, of course, may well interfere with the outer sleeping code. For this reason, nesting of sleeping primitives in this way is discouraged; the new warning was added to point the finger at code performing this kind of nesting. It turns out that kind of nesting happens rather more often than the scheduler developers would have liked.
So what is a developer to do if the need arises to take locks while checking the sleep condition? One solution was added in 3.19; it takes the form of a new pattern that looks like this:
    DEFINE_WAIT_FUNC(wait, woken_wake_function);

    add_wait_queue(&queue, &wait);
    while (1) {
        if (condition)
            break;
        wait_woken(&wait, state, timeout);
    }
    remove_wait_queue(&queue, &wait);
The new wait_woken() function encapsulates most of the logic needed to wait for a wakeup. At first glance, though, it looks like it would suffer from the same problem as sleep_on(): what happens if the wakeup comes between the condition test and the wait_woken() call? The key here is in the use of a special wakeup function called woken_wake_function(). The DEFINE_WAIT_FUNC() macro at the top of the above code sequence associates this function with the wait queue entry, changing what happens when the wakeup arrives.
In particular, that change causes a special flag (WQ_FLAG_WOKEN) to be set in the flags field of the wait queue entry. If wait_woken() sees that flag, it knows that the wakeup already occurred and doesn't block. Otherwise, the wakeup has not occurred, so wait_woken() can safely call schedule() to wait.
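In greatly simplified form (omitting the memory barriers and task-state handling that the real code in kernel/sched/wait.c must get right), the two sides cooperate roughly like this:

    /* Simplified sketch only; see kernel/sched/wait.c for the
     * real implementations. */
    long wait_woken(wait_queue_t *wait, unsigned mode, long timeout)
    {
        if (!(wait->flags & WQ_FLAG_WOKEN))
            /* No wakeup has been recorded yet; sleeping is safe. */
            timeout = schedule_timeout(timeout);

        /* Consume the wakeup so the next iteration can sleep again. */
        wait->flags &= ~WQ_FLAG_WOKEN;
        return timeout;
    }

    int woken_wake_function(wait_queue_t *wait, unsigned mode,
                            int sync, void *key)
    {
        /* Record the wakeup so that a racing wait_woken() call
         * will see it and decline to sleep. */
        wait->flags |= WQ_FLAG_WOKEN;
        return default_wake_function(wait, mode, sync, key);
    }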
This pattern solves the problem, but there is a catch: every place in the kernel that might be using nested sleeping primitives needs to be found and changed. There are a lot of places to look for problems and potentially fix, and the fix is not an easy, mechanical change. It would be nicer to come up with a version of wait_event() that doesn't suffer from this problem in the first place or, failing that, with something new that can be easily substituted for wait_event() calls.
Kent Overstreet thinks he has that replacement in the form of the "closure" primitive used in the bcache subsystem. Closures work in a manner similar to wait_woken() in that the wakeup state is stored internally to the relevant data structure; in this case, though, an atomic reference count is used. Interested readers can see drivers/md/bcache/closure.h and closure.c for the details. Scheduler developer Peter Zijlstra is not convinced about the closure code, but he agrees that it would be nice to have a better solution.
The form of that solution is thus unclear at this point. What does seem clear is that the current nesting of sleeping primitives needs to be fixed. So, one way or another, we are likely to see a fair amount of work going into finding and changing problematic calls over the next few development cycles. Until that work is finished, warnings from the new debugging code are likely to be a common event.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
OpenMediaVault: a distribution for NAS boxes
The Linux community has no shortage of general-purpose distributions that can be made to serve almost any need. But many Linux deployments are not on general-purpose machines; often the owner has a more specific objective in mind. One such objective is to put together a network-attached storage (NAS) box. A general-purpose distribution can easily be used in such a setting, but there are also several specialized distributions that make the task easier. This article, the first in a series, will look at OpenMediaVault, a Debian-based NAS-oriented distribution.

Given that the market is full of Linux-based NAS products, one might well wonder whether building a NAS server from scratch is worthwhile. There are a few reasons for doing so beyond the obvious "because we can." Most of the commercial products are relatively closed devices, depriving the owner of much of the freedom that Linux offers. They may not offer the specific combination of features and services that a user wants. It's a rare commercial box that gets regular security updates, but security is important for a storage server system. There may be a system sitting around already that is well suited to the task and just needs the right operating system. Or, if nothing else, it is comforting to have root access on the storage server and to be able to manage it with familiar commands and interfaces.
Installation
One of the advantages of a specialized distribution is that it tends to lack a lot of the baggage found in other distributions; a full OpenMediaVault 1.0.20 installation image weighs in at under 400MB, and the installed image takes just over 1GB. Booting that image yields a fairly standard sequence of Debian text-oriented installation screens. One thing that jumped out early on is that OpenMediaVault insists on taking a full disk for its own installation; it cannot work from a smaller partition, and it cannot export any part of the system disk to the network. That, of course, turns a four-bay server into a three-bay device; it also means that OpenMediaVault does not play well with any other distributions one might want to install on the system. Given that the system itself is quite small, it would be nice if it could accept life in a small partition and leave the bulk of the system drive available for other uses.
It's amusing that a storage server operating system's installation sequence ends by recommending that the user remove any floppies before rebooting into the installed system.
OpenMediaVault is based on the Debian stable ("wheezy") distribution, so it runs that distribution's venerable 3.2 kernel. That kernel has been consistently maintained since its release, so it will be well debugged and stable — but it won't be the place to look for exciting new features. There is no graphical desktop included with the system (unsurprisingly); it is Debian underneath, though, and is set up to use Debian's repositories, so a desktop environment could be installed if that truly seemed like a good idea.
Administration
One can log into the console as root and do all of the usual administrative tasks from the command line. But the real added value in OpenMediaVault is in its web-oriented administration interface. At the outset, though, there were a couple of things that caught your editor's eye: (1) the whole thing is implemented in PHP, and (2) by default, only port 80 (HTTP) is supported. Supporting HTTPS out of the box is hard, of course; somebody has to come up with a server certificate from somewhere. One could also argue that a NAS box should run in a friendly environment, well sheltered from the Internet, so higher security might just get in the way. But it still feels wrong to have only insecure access to an important administrative function.
The administrative screens provide access to most of the functionality that users will want. At the storage level, one can manage individual disks, including wiping them completely if desired. There is access to SMART monitoring, and, happily, an extensive set of power-management controls allowing disks to be configured to spin down when they are idle. One thing that is missing, again, is partitioning; OpenMediaVault really wants to work exclusively with whole drives.
There is a RAID management layer, providing access to the MD subsystem in the kernel. Assembling a RAID array is a simple matter of filling out the forms. The experience could be a little smoother; did it really have to spend five hours synchronizing a simple two-disk mirror array that had no data on it? But, little glitches like that aside, the RAID setup and management interface works well enough.
The filesystem screen allows the creation and mounting of filesystems on the available physical devices. The system can manage ext4, JFS, and XFS filesystems; there is no support for filesystems like Btrfs. There is also no logical volume manager support, thus no ability to create pools of space to be divided across filesystems. There is a screen for the management of disk quotas.
There is another set of screens for user and group management. They work well enough for a small number of users, but the interface is clearly oriented toward the management of individual user accounts, one at a time, in a local database. There is an "import" functionality, but it has its own special format; one can't, thus, just paste the contents of a password file into it. There is no provision for obtaining user information from an LDAP or NIS database. One might be able to set that up at the command-line level, but the web-based interface clearly doesn't envision tying into a larger network.
Exporting of filesystems via CIFS, NFS, and FTP is easily managed via the appropriate screens. One can also turn on services like rsync. There is no access to some of the fancier aspects of the NFS server — user-ID mapping, for example — but the basics are all there. Users can be allowed to access the server via SSH, but only if (1) the service has been explicitly enabled, and (2) the user in question is in the "ssh" group. Most of the time, one assumes, there will be no reason to allow ordinary users to log into a NAS box.
Screens exist to provide system information in a number of forms; there are nice plots for network bandwidth usage or the system load average, for example. Conspicuously missing is any kind of plot of I/O bandwidth usage — a parameter that might be of interest on a box dedicated to storage! There is no provision for monitoring an uninterruptible power supply, unfortunately.
Closing notes
For the most part, the user interface works well. It does, though, have an annoying habit of requiring a click to save configuration changes, then another (at a distant location on screen) to confirm that the changes should really be saved. It might prevent a novice user from tripping, but it gets tiresome quickly. Also tiresome are the "do you really want to leave this page?" dialogs that pop up when the user does, indeed, want to leave an OpenMediaVault page.
One other little nit: there is a five-minute idle timeout by default; after that, the browser puts up this rather disconcerting image:
One does not normally want to hear about "software failures" on a storage box. In this case, the only failure is putting up a hair-raising warning when all that has happened is that the session has timed out.
For somebody wanting to set up a simple storage box for a home or a small office, OpenMediaVault might well be an attractive option. It takes away all of the fiddly details of setting up network services and, for the most part, things Just Work. Users wanting more advanced features or integration into a corporate network, instead, might find OpenMediaVault to be a bit more limiting than they would like. That is fine; those users do not appear to be the ones the project is targeting at this point. In the end, your editor is tempted to keep this distribution on the test server, but there are others to try out first; stay tuned.
Brief items
Distribution quotes of the week
If dealing with this kind of stuff seems unpleasant to you, take some comfort in the fact that it isn't any more pleasant for the rest of us. :)
CyanogenMod CM12 nightly builds available
For those of you who have been waiting for a CyanogenMod release based on Android "Lollipop," the first nightly builds are now available. "We would like to note that at this point we consider ourselves 85% complete for our initial CM12 M release. We’ll spend the remainder of this month bringing up additional devices and finishing up the features you’ve come to love from CM11 – implementing them into the new Material UI."
Distribution News
Debian GNU/Linux
Debian Bug Squashing Party in Salzburg
There will be a Bug Squashing Party in Salzburg, Austria, April 17-19. "Even if you are not a Debian Developer or Contributor yet, but interested in fixing bugs and helping Debian, don't hesitate to come! There will be enough people around to sponsor your uploads."
Fedora
Fedora 19 End of Life
Fedora 19 reached its end of life for updates and support on January 6. Users of F19 are encouraged to upgrade to Fedora 20.
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (December 29)
- DistroWatch Weekly, Issue 591 (January 5)
- Ubuntu Weekly Newsletter, Issue 398 (January 4)
Cuthbertson: NixOS and Stateless Deployment
Here is a lengthy post from Tim Cuthbertson on the virtues of building servers with NixOS. "It should hopefully be obvious at this point why NixOS is better than puppet: Both are declarative, but puppet is impure and non-exhaustive - when you apply a config, puppet compares everything specified against the current state of the system. Everything not specified is left alone, which means you’re only specifying a very tiny subset of your system. With NixOS, if something is not specified, it is not present."
McIntyre: Bootstrapping arm64 in Debian
Steve McIntyre provides a progress report on the status of the arm64 port for Debian 8 "Jessie". "arm64 is officially a release architecture for Jessie, aka Debian version 8. That's taken a lot of manual porting and development effort over the last couple of years, and it's also taken a lot of CPU time - there are ~21,000 source packages in Debian Jessie! As is often the case for a brand new architecture like arm64 (or AArch64, to use ARM's own terminology), hardware can be really difficult to get hold of. In time this will cease to be an issue as hardware becomes more commoditised, but in Debian we really struggled to get hold of equipment for a very long time during the early part of the port."
The building blocks of a distribution with Linux from Scratch (Opensource.com)
Opensource.com takes a look at Linux from Scratch and its variants. "Linux from Scratch creates a very basic system, but there are two variants you can choose from—one uses sysvinit and the other uses systemd. The package list for each of these is almost identical, except for the init system and a few supporting packages. The other packages in both variants are the same, so pick the version with the init system you prefer and then move on to Beyond Linux from Scratch to further customize your system to your liking."
Hands-on with Makulu Linux Xfce 7.0 (ZDNet)
Over at ZDNet, Jamie A. Watson reviews the latest release of Makulu Linux. "The Release Announcement / Release Notes give some interesting insight into the background and development of this release, as well as the major features of this release. As always with Makulu Linux, aesthetics was a major focus, and it includes loads of beautiful wallpapers, themes, icons, and fonts. The other major focus was speed, thus the choice of Xfce for the desktop, and Firefox rather than Chrome, the synapse launcher rather than slingscold, and the inclusion or omission of various other packages."
Page editor: Rebecca Sobol
Development
An "open governance" fork of Node.js
In early December, a group of key Node.js developers announced the creation of a fork that they call io.js. The new project insists that its effort will remain compatible with Node.js and that the principal reasons for starting the new project are issues of governance and management style. Nevertheless, part of the project-management style adopted by the io.js team involves faster feature development and releases—so it certainly remains to be seen whether the new codebase diverges in a significant way, much less how the user community will respond to the competing projects.
For those unfamiliar with it, Node.js is a web application framework that emphasizes I/O performance between the client (the browser) and the server. It uses the V8 JavaScript engine developed by Google for Chrome/Chromium. Since its inception, Node.js has been maintained by the cloud-computing vendor Joyent, who employed Node.js's creator Ryan Dahl as well as several other core team members.
History
In recent years, however, several leading Node.js developers not employed by Joyent began to express dissatisfaction with the manner in which the company managed the project. In mid-2014, some of those developers launched Node Forward, which is described as "a broad community effort" to improve Node.js and projects that depend on it.
On Node Forward's issue tracker, participants aired a number of specific complaints about Joyent's management. They include the long delay between versions 0.11 and 0.12 (over one year and counting), the lag between a new release of V8 and that release being rolled into Node.js, and the length of time that the project takes to respond to pull requests.
There were, of course, quite a few technical issues raised by individual Node.js users and developers as well. But, regarding the project-management concerns, perhaps the fundamental issue underneath all of them was the perception that Joyent's project team was overruling the senior community developers. Those community members wanted faster releases, for example, but the company wanted to take its time. So Node.js takes its time between releases, even if more than half of the core team disagrees.
In October, after the Node Forward group went public with its grievances, Joyent responded by forming a Node Advisory Board designed to give community members a way to "advise Joyent and the Node.js project core committers team leadership" about Node.js development—but not, notably, to serve in any formal leadership or governance role. That was not, evidently, a sufficient solution to the key Node Forward players, however: on December 2 they announced the io.js fork. Video recordings for several of the Node Forward meetings have been released on YouTube, as have text minutes—although, so far, the meeting that precipitated the official fork has not been released.
Governance and openness
Officially, the io.js GitHub repository calls the new project a "spork," not a fork. The distinction, unfortunately, is not elaborated upon, but perhaps it comes down to the io.js team's insistence that their project differs from Node.js only in terms of governance. As the io.js repository's README file puts it:
(NPM is the Node.js package manager, which is developed independently of Node.js and Joyent.)
The new project's governance model is explained in detail in a section of the CONTRIBUTING file. It starts with a technical committee (TC) that has final authority over technical direction, the governance process, all policies, and code-of-conduct guidelines for participants. The TC is said to hold weekly meetings on Google Hangouts, and will attempt to resolve all questions by unanimous consensus:
The initial TC members are Ben Noordhuis, Bert Belder, Fedor Indutny, Isaac Schlueter, Nathan Rajlich, and Trevor Norris. Indutny, Rajlich, and Norris were members of the Node.js core team prior to the fork, while Belder, Noordhuis, and Schlueter were listed as core team alumni. In a blog post, Schlueter said that TJ Fontaine, the current Node.js project lead at Joyent, was invited to participate, but declined.
Development
Schlueter's post is also interesting reading because it asserts that the io.js project is a continuation of the Node Forward effort, which was "created with the express intent of merging changes with the joyent/node repository" and is not intended to be a competing project. The name change, he said, is solely an effort to avoid stepping on Joyent's trademarks.
That is, no doubt, a comforting outcome for Node.js users. Having the two projects compete and diverge on technical matters rather than merely adopt different release schedules would fracture the Node.js ecosystem and force downstream development projects into making a difficult choice. On the other hand, even when projects diverge for non-technical reasons, it is still easy for fissures to emerge between the feature sets and APIs.
The io.js TC says it has an initial release planned for January 13. That release will be based on the Node.js 0.12 development branch, but will be tagged 1.0-alpha1, leading up to an eventual io.js 1.0 release. The goal moving forward is to make new releases every week, with continuous integration keeping the codebase stable in between releases, and to adopt new V8 releases as quickly as possible.
A few commenters on the initial-release plan expressed concerns about leaping immediately into a rapid-release schedule but, for the most part, the proposal seemed to have the support of the io.js community. Nevertheless, it is difficult not to observe ambivalence on the part of the new project when it comes to maintaining compatibility with Node.js. There is an io.js roadmap discussion taking place at the GitHub site, some points of which might result in compatibility troubles—from merging in external projects to building io.js as a shared library rather than as an executable.
The way forward
Which is not to say that breaking compatibility with Node.js would necessarily be a bad thing. As Wesley Smith wrote on his blog, it is entirely possible for both projects to move forward and build active communities—just as it is possible for one or the other to achieve dominance and the rival to fizzle out.
The io.js project does have some work ahead of it, though, if it aims to take the lead away from Node.js. The initial-release plan asserts that Node.js is "pretty damn stable" as-is—the product of Joyent's slow release model—but it also asserts that moving to the "as fast as possible" model will make io.js more stable, not less. That will have to be proven with stable releases.
The io.js leadership may also have to grapple with project management, which is often not as easy as it sounds. "Openness" in governance, after all, is frequently in the eye of the beholder—and a lack of openness is often only perceived by those whose wishes are overruled by a project's existing governance. Cynics might note, for example, that io.js's Technical Committee is a self-appointing body with no term limits, no fixed size, and no formal eligibility requirements. Some users and developers might not call that open governance, either.
Forks are a natural part of the open development model—so much so that GitHub famously plasters a "fork your own copy" button on almost every page. But forks that attempt to set up and maintain an actively developed project in parallel to the original are rarer. Sometimes they work out well in their own right, sometimes they serve to catalyze changes in the original project, and sometimes they fade away. The io.js effort clearly demonstrates how critical Node.js has become to modern developers. What impact it will have in the long term remains to be seen.
Brief items
Quotes of the weeks
mozjpeg 3.0 released
Version 3.0 of mozjpeg, Mozilla's high-performance JPEG encoder, has been released. The most significant change in the new release is ABI compatibility with libjpeg-turbo. While making mozjpeg backward-compatible with libjpeg-turbo is important for its viability as a drop-in replacement, the change does break ABI compatibility with previous mozjpeg releases.
MusE 2.2 available
Version 2.2 of the MusE sound sequencer has been released. This is a major update, rolling in changes from more than two years of development. The major new feature in this release is support for LV2 synths and effects, which the announcement notes constitutes "yet another MAJOR audio engine and plugin/synth process chain re-write". Also important are several UI improvements, a metronome with support for accent clicks and replaceable clicks, several new scripts, and improved undo/redo support.
Auditory icon theme 3d available
At the Emacspeak blog, T.V. Raman announced
the initial release of 3d, a
new "auditory icon" theme for the Emacspeak audio desktop
environment, created with CSound. "CSound is a sophisticated music sound synthesis
system with a strong community of developers. I've played off and on
with CSound ever since the early 90's and have always been intrigued
by the possibility of algorithmically creating new auditory icon
themes for Emacspeak using CSound.
"
Ubuntu Make 0.4 released, with Go support
At his blog, Didier Roche announced
the availability of version 0.4 of Ubuntu Make, the distribution's new
metapackage for installing development tools. Initially called Ubuntu
Developer Tools Center and focusing on the Android IDE, the new
release adds support for Google's popular Go language. "To hack
using Go under Ubuntu, just open a terminal and type: umake go
and here we "go"! This will enable developers to always install the
latest Google golang version and setting up some needed environment
variables for you.
" Also included in the update is support for
the Stencyl game-development tool.
Newsletters and articles
Development newsletters from recent weeks
- What's cooking in git.git (December 29)
- What's cooking in git.git (January 6)
- LLVM Weekly (December 29)
- LLVM Weekly (January 5)
- OCaml Weekly News (December 30)
- OCaml Weekly News (January 6)
- OpenStack Community Weekly Newsletter (December 26)
- Perl Weekly (December 29)
- Perl Weekly (January 5)
- PostgreSQL Weekly News (December 28)
- PostgreSQL Weekly News (January 4)
- Python Weekly (December 25)
- Python Weekly (January 1)
- Ruby Weekly (January 1)
- This Week in Rust (December 29)
- This Week in Rust (January 5)
- Tor Weekly News (December 31)
- Tor Weekly News (January 7)
- TUGboat (December 31)
- Wikimedia Tech News (December 29)
GIMP 2014 report
Alexandre Prokoudine has posted a detailed 2014 development report for the GIMP and GEGL projects. The report highlights development on a number of GIMP tools, as well as changes in the underlying codebase and libraries. "The text tool was updated by Mukund Sivamaran to use HarfBuzz library directly instead of relying on deprecated Pango functions. This will make sure we always provide excellent support for complex writing systems such as Arabic, Devanagari etc. To make things even more fun, we added 64bit per color channel precision to GIMP. One part of GIMP that already uses it is the FITS loader/saver for astrophysicists. But that bears the question: can GIMP reliably perform when dealing with such resources-hungry images?" The answer, evidently, is "yes"—thanks to multi-threading in GEGL and OpenCL-based hardware acceleration.
Page editor: Nathan Willis
Announcements
Brief items
Parallels to merge OpenVZ and Cloud Server
Parallels has announced that it will be merging its open-source OpenVZ and proprietary Parallels Cloud Server projects. "Now it's time to admit -- over the course of years OpenVZ became just a little bit too separate, essentially becoming a fork (perhaps even a stepchild) of Parallels Cloud Server. While the kernel is the same between two of them, userspace tools (notably vzctl) differ. This results in slight incompatibilities between the configuration files, command line options etc. More to say, userspace development efforts need to be doubled." The result of the merger will be open source; the name will be "Virtuozzo Core."
Articles of interest
Free Software Supporter - Issue 81, December 2014
The Free Software Foundation's newsletter for December covers What does it mean for your computer to be loyal?, the High Priority Projects list, reclaiming the PDF from Adobe Reader, LibrePlanet scholarships, EU to fund Free Software code review, and much more.
FSFE Newsletter – January 2015
The Free Software Foundation Europe's newsletter for January covers the organization's annual report for 2014, a look ahead at 2015, Fellowship elections, and several other topics.
Ringing in 2015 with 40 Linux-friendly hacker SBCs (LinuxGizmos)
For anybody looking for a single-board computer to experiment with: LinuxGizmos has a survey of 40 of them. "Over the last year we’ve seen some new quad- and octa-core boards with more memory, built-in WiFi, and other extras. Yet, most of the growth has been in the under $50 segment where the Raspberry Pi and BeagleBone reign. Based on specs alone, standouts in price/performance that have broken the $40 barrier include the new Odroid-C1 and pcDuino3 Nano, but other good deals abound here as well."
Purism Librem 15 (Linux Journal)
Linux Journal looks at the Purism Project and the Purism Librem 15 laptop. "The Librem 15 uses the Trisquel distribution which wasn't a distribution I had heard of before now. Basically it's a Debian-based distribution that not only removes the non-free repository by default, but it has no repositories at all that provide non-free software. It was picked for the Librem 15 because it is on the list of official FSF-approved GNU/Linux distributions and since that laptop is aiming to get the FSF stamp of approval, that decision makes sense. Since it's a Debian-based distribution, the desktop environment and most of the available software shouldn't seem too different for anyone who has used a Debian-based distribution before. Of course, if you do want to use any proprietary software (like certain multimedia codecs or official Flash plugins) you will have to hunt for those on your own. Then again, the whole point of this laptop is to avoid any software like that."
Calls for Presentations
NetDev 0.1 cfp deadline extension
The call for papers for NetDev 0.1 has been extended until January 24. Some hotel rooms are still available at the earlybird rates, but they are filling up fast. The conference will take place February 14-17, in Ottawa, Canada.
ApacheCon North America
ApacheCon NA will be held in Austin, TX, April 13-17. Apache OpenOffice will be celebrating 15 years of open source success. The call for papers is open until February 1.
Linux Audio Conference 2015 - Call for Participation
The Linux Audio Conference (LAC) will be held April 9-12 in Mainz, Germany. The submission deadline is February 1. "We invite submissions of papers addressing all areas of audio processing and media creation based on Linux and other open source software. Papers can focus on technical, artistic and scientific issues and should target developers or users. In our call for music, we are looking for works that have been produced or composed entirely/mostly using Linux and other open source music software."
Call for Participation: Libre Graphics Meeting 2015
Libre Graphics Meeting 2015 will take place April 29–May 2 in Toronto, Canada. The call for participation closes February 1. "Because 2015 marks the tenth edition of LGM, the focus this year is on the past and future of Libre Graphics. We have a special interest this year in projects that show where Libre Graphics has been, what distances it has travelled, and where it might go in the next ten years. We welcome submissions representing the broad ecology of Libre Graphics, from full-fledged software packages, to artist-built scripts, to art and publications, and even the cultural touchstones and issues present in Libre Graphics communities. We’re looking for talks, workshops and other events that show the accomplishments and strides that Libre Graphics has made in its first decade and what it can do next."
CFP Deadlines: January 8, 2015 to March 9, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location
---|---|---|---
January 9 | March 23-25 | Embedded Linux Conference | San Jose, CA, USA
January 10 | May 16-17 | 11th Intl. Conf. on Open Source Systems | Florence, Italy
January 11 | March 12-14 | Studencki Festiwal Informatyczny / Academic IT Festival | Cracow, Poland
January 11 | March 11 | Nordic PostgreSQL Day 2015 | Copenhagen, Denmark
January 16 | March 9-10 | Linux Storage, Filesystem, and Memory Management Summit | Boston, MA, USA
January 19 | June 16-20 | PGCon | Ottawa, Canada
January 19 | June 10-13 | BSDCan | Ottawa, Canada
January 24 | February 14-17 | Netdev 0.1 | Ottawa, Ontario, Canada
January 30 | April 25-26 | LinuxFest Northwest | Bellingham, WA, USA
February 1 | April 13-17 | ApacheCon North America | Austin, TX, USA
February 1 | April 29-May 2 | Libre Graphics Meeting 2015 | Toronto, Canada
February 2 | July 20-24 | O'Reilly Open Source Convention | Portland, OR, USA
February 6 | July 27-31 | OpenDaylight Summit | Santa Clara, CA, USA
February 8 | April 9-12 | Linux Audio Conference | Mainz, Germany
February 9 | May 18-22 | OpenStack Summit | Vancouver, BC, Canada
February 10 | June 1-2 | Automotive Linux Summit | Tokyo, Japan
February 12 | June 3-5 | LinuxCon Japan | Tokyo, Japan
February 15 | March 1-6 | Circumvention Tech Festival | Valencia, Spain
February 15 | May 1-4 | openSUSE Conference | The Hague, Netherlands
February 16 | May 12-13 | PyCon Sweden 2015 | Stockholm, Sweden
February 16 | April 13-14 | 2015 European LLVM Conference | London, UK
February 20 | March 26 | Enlightenment Developers Day North America | Mountain View, CA, USA
February 20 | May 13-15 | GeeCON 2015 | Cracow, Poland
February 24 | April 24 | Puppet Camp Berlin 2015 | Berlin, Germany
February 28 | May 19-21 | SAMBA eXPerience 2015 | Goettingen, Germany
February 28 | July 15-19 | Wikimania Conference | Mexico City, Mexico
February 28 | June 26-27 | Hong Kong Open Source Conference 2015 | Hong Kong, Hong Kong
March 1 | April 24-25 | Grazer Linuxtage | Graz, Austria
March 1 | April 17-19 | Dni Wolnego Oprogramowania / The Open Source Days | Bielsko-Biała, Poland
March 2 | May 12-14 | Protocols Plugfest Europe 2015 | Zaragoza, Spain
March 6 | May 8-10 | Open Source Developers' Conference Nordic | Oslo, Norway
March 7 | June 23-26 | Open Source Bridge | Portland, Oregon, USA
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Traditional LCA events, with some important changes
Cherie Ellis provides some information about a few linux.conf.au events, including the speakers' dinner, the Penguin dinner, and more.
Python FOSDEM 2015 - Selected Talks
A schedule of talks is available for the Python devroom at FOSDEM. The devroom will be open on January 31. There will be a dinner following the talks.
Events: January 8, 2015 to March 9, 2015
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
January 10-11 | NZ2015 mini-DebConf | Auckland, New Zealand
January 12 | Linux.conf.au 2015 Multimedia and Music Miniconf | Auckland, New Zealand
January 12-16 | linux.conf.au 2015 | Auckland, New Zealand
January 12 | LCA Kernel miniconf | Auckland, New Zealand
January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand
January 13 | Linux.Conf.Au 2015 Systems Administration Miniconf | Auckland, New Zealand
January 23 | Open Source in the Legal Field | Santa Clara, CA, USA
January 31-February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium
January 31-February 1 | FOSDEM 2015 | Brussels, Belgium
February 2-5 | Python Namibia | Windhoek, Namibia
February 6-8 | DevConf.cz | Brno, Czech Republic
February 6-8 | Taiwan mini-DebConf 2015 | Yuli Township, Taiwan
February 9-13 | Linaro Connect Asia | Hong Kong, China
February 11-12 | Prague PostgreSQL Developer Days 2015 | Prague, Czech Republic
February 14-17 | Netdev 0.1 | Ottawa, Ontario, Canada
February 18-20 | Linux Foundation Collaboration Summit | Santa Rosa, CA, USA
February 19-22 | Southern California Linux Expo | Los Angeles, CA, USA
March 1-6 | Circumvention Tech Festival | Valencia, Spain
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol