LWN.net Weekly Edition for January 15, 2015
Extracting the abstract syntax tree from GCC
Richard Stallman recently revived a nearly year-old thread in the emacs-devel mailing list, but the underlying issue has been around a lot longer than that. It took many years before the GNU Compiler Collection (GCC) changed its runtime library exemption in a way that allowed for GCC plugins, largely because of fears that companies might distribute proprietary, closed-source plugins. But efforts to use the plugin API to add features to another GNU project mainstay, Emacs, seem to be running aground on that same fear—though there has never been any real evidence that there is much interest in circumventing the runtime library exemption to provide proprietary backends to GCC.
Last March, a debate about exporting the abstract syntax tree (AST) information from GCC to Emacs (and other programs) was ongoing in various mailing lists. That discussion was about adding an Emacs feature to auto-complete various program-specific identifiers (e.g. variable names, types, and structure member names). It was an offshoot of another wide-ranging discussion that we covered back in January 2014. When the conversation wound down last March, Stefan Monnier had responded to Stallman.
On January 2, Stallman renewed the conversation by noting that he hoped "we can work out a kind of 'detailed output' that is enough for what Emacs wants, but not enough for misuse of GCC front ends". He was looking for people to help define that "detailed output", but instead found a number of people who felt that exporting the full AST information would be more sensible.
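GCC does not expose its internal trees this way, but Python's own ast module gives a rough feel for the kind of information at stake: even a shallow walk over a parse tree yields the identifiers a completion engine could offer. This is only an analogy (GCC's tree representation is far richer than Python's), but it shows why AST access and auto-completion are so closely linked:

```python
import ast

source = """
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def distance(a, b):
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
"""

tree = ast.parse(source)

# Collect the names a completion engine could offer: classes, functions,
# function parameters, and attribute accesses.
names = set()
for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
        names.add(node.name)
    elif isinstance(node, ast.arg):
        names.add(node.arg)
    elif isinstance(node, ast.Attribute):
        names.add(node.attr)

print(sorted(names))
```

A real completion engine would also need scope and type information, which is exactly the "full AST" territory under debate.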
Stallman is concerned that proprietary backends could take the AST output and generate code from it. While no one in the thread wanted to see that happen, most also saw it as an unlikely outcome. David Engster said that he had been working on a way to get the AST information out of GCC for Emacs and noted that there was no technical barrier to doing so.
While the original discussion was largely about auto-completion, Engster and others would eventually like to go further than that. Their vision is to turn Emacs into a more full-featured integrated development environment (IDE), which would require all of the AST information, at least in their eyes. Stallman would prefer providing far less information: "just enough to do the completion and other operations that Emacs needs, and NO MORE."
Engster replied that he understood Stallman's concerns, but felt that there was no real problem.
He went on to say that if Stallman was opposed to using the AST in Emacs, he would drop the work he was doing in that area (for auto-completion and other IDE-like features). Stallman remained unconvinced of the need for the AST information, saying "it is so important to avoid the full AST that I'm not going to give up on it just because someone claims that is necessary".
But Perry E. Metzger noted that he is doing "a bunch of complicated refactoring work in conjunction with my current academic research" and that the lack of AST information from GCC forced him to use Clang/LLVM. He would like to see Emacs gain the IDE-like features that other tools, such as Apple's Xcode, already have.
Stallman would like to see some kind of investigation to determine what pieces of information are needed for which purposes (e.g. auto-completion, refactoring, and so on), but that seems short-sighted to some. Metzger gave several examples of things that could only be done easily by having the full AST. Object-oriented languages, in particular, have programs with complex inter-relationships that can only be untangled by using all of the information that the compiler has collected in the AST.
Even if various subsets of the AST information could be defined to enable particular IDE features, new IDE features come about regularly. As David Kastrup pointed out, adding new interfaces for each new set of requirements makes little sense and would require that the Emacs IDE features and GCC versions be tightly coupled. Either that or the GCC plugin interface needs to be stable and provide a highly general API into the guts of the compiler, which could also be used in "bad" ways:
Keeping all of our babies while throwing out all of the bath water for evolving notions of babies and bathwater is not likely to work well.
I don't really see that we have much of a chance not to be hobbling ourselves more in the process than currently hypothetical leeches.
There appear to be a few disconnects in the conversation. Stallman is focused on auto-completion, and believes that it can be done without access to the full AST. Others disagree, especially for languages with operator overloading such as C++. Stallman said that he has never used C++, so he is trying to understand what would be needed to support auto-completion for C++ in Emacs. However, the discussion has also included adding support for more than just auto-completion, but Stallman is not ready to look into additional features yet. Even if it is possible to handle C++ (and the other languages GCC supports) auto-completion without all of the AST information, there are plenty of examples in the thread (mostly from Metzger) of IDE features that do require the AST (or enough information in other forms that it would be functionally equivalent to the AST).
It is clear that Stallman has not used any of the "competing" IDEs (e.g. Xcode, IntelliJ IDEA), which is not a huge surprise, but he evidently feels browbeaten about the issue: "Rather you are trying to pressure me to do what you want, at the expense of something I consider important." But Metzger and others have clearly stated that they understand (and, in general, agree with) Stallman's concerns; they just see the tradeoff differently than he does. In fact, Metzger said, there is a freedom issue at stake.
That is, of course, a hot-button issue for Stallman, who takes umbrage at that characterization. Furthermore, he wants to take some time to study the problem(s) without being pressured.
Metzger volunteered to help with the process, but noted that it goes well beyond auto-completion. Emacs is a powerful tool that, unlike most other IDEs, gives its users the ability to reprogram the way it works. But that requires flexibility.
Beyond that, Kastrup and others felt that Stallman was being unfair to Metzger by characterizing his comments as "changing the subject". Karl Fogel pointed out that Metzger and others have all acknowledged Stallman's concerns, but that Stallman has not done the same:
This would be poor behavior even if those people were wrong. I think they're actually right, though, which makes it even worse, because now our goal is being damaged too.
It would seem that the goal of more IDE-like features for Emacs has suffered a setback. Based on Stallman's responses, Engster said he would not continue working on his project to incorporate AST information into Emacs.
The problem that Stallman foresees is that the AST information could be used by a non-GCC backend that wouldn't use libgcc and would thus evade the GCC plugin restrictions that were added to the runtime library exemption. But as Óscar Fuentes pointed out, that has already been done by the DragonEgg project, which "is now abandoned, mostly because Clang is a better front-end than GCC". Because LLVM does not have the freedom-respecting requirements that Stallman holds dear, it also takes a much more modular approach that is easier for other projects to interface with.
Another potential problem is the possibility of a fork of Emacs to support using a GCC plugin that exports the AST, which would be legal under both the runtime library exemption and the Emacs license. The only barrier would be if Stallman were unwilling to accept the code into Emacs. Monnier said that, under those circumstances, he would be "willing to consider a fork if that's what it takes".
Kastrup noted that Stallman has laid out a plan for getting the understanding he needs to proceed with these features. That will take time, however, which may have other negative impacts.
There appears to be a fairly wide chasm between the two sides of the debate. It is hard to see how an Emacs IDE mode for GCC can compete with the proprietary alternatives without some mechanism to extract the AST from the compiler. To Stallman, at least, that is not of paramount importance, while others see things differently. The risk is that both Emacs and GCC decline in usage and developer mindshare while a solution is sought. That would seem to make it worth coming to a solution sooner rather than later, no matter which side of the debate one is on.
Bob Young on freedom, control, and the GPL
Bob Young, known to the free-software community as the co-founder of Red Hat, founder of the print-on-demand service Lulu, and creator of the non-profit Center For The Public Domain, delivered the morning keynote address on the first full day of linux.conf.au (LCA) 2015 in Auckland. Although Young confessed several times to not being as plugged-in to the Linux and free-software economy as he once was, he had plenty of wisdom to dispense to the crowd, in particular by looking back on which (perhaps unexpected) factors allowed Red Hat to become a successful enterprise in the face of large competitors selling proprietary software.
Young's talk was laid-back, informal, and peppered with wisecracks and comedic stories—he introduced himself as "the light entertainment interlude" between the week's other keynotes from Eben Moglen and Linus Torvalds, and he consistently referred to himself as "just a typewriter salesman," to pick out two examples from many. But whatever description one might use, Young oversaw the rise of Red Hat in the early 1990s from a small software shop to a major IT vendor, and at the start of that process, it was far from a proven strategy that free software could compete for customers in the commercial space.
Young said he was initially a bigger skeptic about free software than Bill Gates, but while he was editing an early software-development newsletter, he repeatedly heard from developers that free software was what they wanted to hear more about. By the time Linux 1.0 was released, he decided to find out for himself how it all worked, so he went to meet with Richard Stallman, Donald Becker, and other "really smart guys" and figure out why altruism—an idea that Young had never thought would be capable of spawning a sophisticated technological system—seemed to be succeeding.
Each of those encounters had its own lesson: Stallman advocated the egalitarian notion that each free-software developer contributed according to their ability and reaped rewards in accordance to their skill level; Becker argued that sharing software was simply "the right thing to do;" Becker's boss Thomas Sterling explained the notion of the barter economy, saying "I contribute a driver and get back a gigabyte of software with the license to do what I want with it—and you think they are exploiting me?" But what finally clicked for Young seemed to be what he heard from engineers: that free software gives them control over their own system, and control was the one thing that proprietary software vendors were unwilling to part with.
That idea was what led to the formation of Red Hat. It was fundamentally against the proprietary IT vendors' business models to sell a product that allowed the customer to have control, so Red Hat had an immediate competitive advantage against all of them. Young added that he recently stopped by Red Hat's offices (his first visit in a long time) to try and ensure that he didn't put something problematic into his LCA talk that would cause the company's stock price to plummet. On that visit, he said he was thrilled to see that the corporate messaging remains essentially the same: customers are treated as partners with the company; giving them control over their systems is still paramount.
Young then spoke briefly about software licenses. He has long been a staunch advocate of the GPL, he said, but not because he has detailed opinions of the differences between it and other, similar licenses. Rather, he said, he learned early on that customers do not care about the details of free-software licenses, and that diving into an explanation of them was the fastest way to send a potential customer running toward the competition. (In an aside, Young made it clear that even during his days at Red Hat, he never considered other Linux distributors to be "the competition;" he used the word only to refer to the proprietary vendors.)
With the GPL, everyone knows what the rules are, more or less, so the debate is over quickly. They might not actually know the rules, he added, but they think they do, which amounts to the same thing. As a "business guy," he said, he is always trying to simplify the "pitch"—the GPL tells everyone "what we're about," so it has the greatest impact. Furthermore, he advised developers not to waste any time trying to come up with a new, specially-crafted free-software license for their project. Whatever effort it takes to write that new license corresponds to creating more complications for the potential customer. In the end, it is a net loss.
Young took a few questions from the audience at the end, some about Red Hat and some about Lulu. The two companies are quite different on the surface, but Young explained that they are both attempts to do the same thing: give customers control over what they want to do. When Red Hat was competing for customers with Microsoft, he said, Microsoft was totally unwilling to offer customers the control that Red Hat was, so Red Hat gained a lot of contracts and grew into an enormous enterprise of its own.
Lulu is different, he noted, in the sense that the competition—most notably Amazon—has seen Lulu's business model and responded by offering its own, similar service. Thus, Lulu has a different kind of battle to fight. It is still a really fun project, he said, but time will tell whether or not what it offers to authors is significantly better than Amazon's pitch, only marginally better, or if Amazon comes up with something better.
Several audience members asked about how the community should respond to resistance toward software-freedom issues. One asked specifically how to communicate free-software ideals to politicians; another asked about businesses' resistance toward the GPL. On the political front, Young said that there are a number of players today who take the correct approach (Lawrence Lessig, the EFF, and Public Knowledge, for example). They emphasize transparency, which is important to politicians. But Young also advised everyone listening to understand that they, too, have a role to play in such conversations. If you ever meet a politician, he said, make sure the politician knows what you consider important. Money in politics is a problem, but all politicians ultimately care more about votes than money.
As for businesses' resistance toward the GPL, Young said that it would be better to ask Moglen or Karen Sandler for the real details. But he noted that in the early days, Stallman had considered putting his software into the public domain. That would not work, Stallman concluded, since the "public domain" is not a concept that is protected against abuse (in this case, against turning a public-domain program into a proprietary one). The GPL and the concept of "copyleft" that it implements, he said, create the legal protection for the public domain. There are other licenses, but there is no more effective tool for the "real" public domain. And that real public domain is the thing that gives users (or customers) control, which, in turn, is the one thing proprietary software cannot offer.
"Control" as Young articulated it in his talk is, essentially, the same idea as the freedom that has always been central to free software. But it is interesting to hear it described in such different terms. Young jokingly called himself "an evil capitalist" several times, and perhaps control is terminology that fits better into vendor-to-customer conversations and sales pitches than does freedom. But Red Hat's success over the years certainly demonstrates that Young's way of explaining the issues is one that resonates with quite a few users.
[The author would like to thank LCA 2015 for travel assistance to Auckland.]
Software-defined radio at the OpenRadio miniconf
The first two days of linux.conf.au (LCA) are reserved for a series of miniconfs proposed and run by the conference delegates. This year, one of Monday's miniconfs centered around free-software and amateur (ham) radio, especially where it involves software-defined radio (SDR). The ham and free-software fields have a great deal in common—especially at the philosophical level—but, practically speaking, using free software with SDR can be a bit clunky. The OpenRadio miniconf attempted to kickstart interest in the topic, both by including relevant presentations and by giving attendees the opportunity to construct their own device: a new SDR peripheral that is more flexible and more open than most of the competition.
Sessions
The miniconf day started and ended with plenary talks. Don Wallace, who is the Overseas Liaison Officer for the New Zealand Association of Radio Transmitters (NZART, New Zealand's amateur radio association), gave an overview of the radio-spectrum allocation process and how New Zealand's allocation rules fit in with the multi-tier regional and global system of radio regulation. He also discussed the current landscape of ham radio allocations. Notably, he pointed out that almost all existing frequency allocations are at risk, as more and more spectrum is being allocated for cellular telephone systems every year.
He also explained the background of the unlicensed industrial, scientific, and medical (ISM) bands, where the miniconf's SDR peripheral is designed to operate. The 2.4GHz and 5GHz ISM bands were initially set aside for uses like welding, microwave-oven heating, and medical devices. They were only co-opted for use in wireless networking when Cisco and other networking hardware manufacturers balked at the costs that would accompany a new spectrum allocation. This strange pairing continues to be a problem for ham radio and for wireless networking as new devices come to market: the cumulative RF noise generated by the radio sources in compact-fluorescent and LED bulbs is rapidly reaching a problematic level. During a recent city-wide blackout, he said, the RF noise floor dropped by 20dB. It can be hard to work in such a noisy radio band, he said, and ham radio operators are often perceived as the least important group when national and international regulations are being re-examined.
Next, Paul Warren gave a talk (the slides from which are available online [ODP]) comparing and contrasting the current crop of "open" ham radio transceivers. There were a great many options covered, from stand-alone hardware devices that take microphone input and produce audio output (which makes those devices the most similar to traditional ham radio gear) to peripherals that require a computer for their functionality. The sense in which each device is "open" varies considerably as well, he explained. Some of the devices include schematics and printed circuit board (PCB) designs—which makes them "open hardware"—but they might use firmware that is not released under a free software license (if its source is published at all). Others might be the reverse: fully free software, but without plans or schematics. In selecting which devices qualified for inclusion in the talk, Warren said, he just made his best guess for which projects "were attempting to do the right thing."
A thorough examination of the options is important for anyone considering a purchase. Warren singled out a current favorite, the PortableSDR: a powerful but handheld-sized device that recently won third prize in a project-building competition held by the Hackaday web site. He also looked at several devices he called "SDR exciters" rather than true transceivers. These include well-publicized projects like the HackRF and BladeRF. Some of them are quite nice, he said, but they belong in a different class because they come with serious limitations: very low transmission power and a lack of filtering on input signals. Due to these limitations, most are good for experimentation, but would require modification to do any long-range or sensitive communication.
Last but certainly not least, Paul Campbell gave an overview of his work designing "smaller, cheaper, open wireless" for Internet-connected devices. The project is still in the experimental stage, but more details can be found at his web site, Moonbase Otago. The project in question is a type of extremely low-power mesh-networking device. The devices communicate using IEEE 802.15.4, the low-power wireless Personal Area Network (WPAN) networking layer. 802.15.4 is best known from ZigBee products, although ZigBee communication is actually implemented as a higher-level protocol on top of 802.15.4.
Moonbase Otago's cheap RF devices are built around a tiny CC2533 CPU and very few physical parts: just a handful of inexpensive resistors and capacitors. The CC2533 provides serial communication, I2C, and several GPIOs; just 32K of storage is available, plus 4K of RAM and five timers—but there is an AES encryption engine. Campbell's team has developed a tiny operating system (occupying 6K) that provides threading and task queueing for applications, but also incorporates several interesting features. Using the AES hardware, all communication is encrypted automatically: paired devices receive messages from each other, and drop any that use a different key ID in the message packet. The devices can also update application software over the air, propagating updates from device to device automatically.
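The pairing behavior can be modeled in a few lines. This is a toy sketch of the drop-on-mismatched-key-ID rule only (the AES decryption and authentication of the payload, which the real devices do in hardware, is elided, and all names here are illustrative):

```python
def make_packet(key_id, payload):
    # A packet carries the key ID it was encrypted under, plus its payload.
    return {"key_id": key_id, "payload": payload}

class Device:
    """A paired device that silently drops traffic under any other key ID."""

    def __init__(self, key_id):
        self.key_id = key_id
        self.received = []

    def on_packet(self, packet):
        if packet["key_id"] != self.key_id:
            return False                    # different pairing: drop silently
        self.received.append(packet["payload"])
        return True

a = Device(key_id=0x17)
a.on_packet(make_packet(0x17, b"sensor reading"))        # accepted
a.on_packet(make_packet(0x42, b"someone else's traffic"))  # dropped
print(a.received)
```

The same key-ID field is what lets many independent meshes share a single noisy ISM band without hearing each other.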
The project is still in active development. Because GCC does not support the CC2533 CPU, the team has been using the free-software SDCC compiler, and has had to reverse-engineer a number of features of the device where the manufacturer's sample code does not work. Work on a GUI development tool is underway as well, and the boards will support a variety of Arduino-compatible macros to assist new developers.
Hardware
The majority of the OpenRadio miniconf schedule, however, was devoted to the construction of the OpenRadio SDR device. The board is a radio transceiver, designed by Mark Jessop. The basic architecture is modeled on the open-hardware SoftRock SDR board, with some modifications intended to reduce cost and a few enhancements to its functionality. Kits were manufactured ahead of LCA, and attendees purchased them online to pick up at the beginning of the workshop session.
An Arduino Nano controls the board's local oscillator over I2C, setting the receive frequency. The Nano also does phase-shift-keying (PSK) modulation and generates transmitter output. Incoming signals are mixed with the local oscillator signal to produce a baseband audio signal that can be fed into a host computer via any sound card with a stereo input.
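The mixing step is just a pointwise multiplication of the incoming signal with the local oscillator; by the product-to-sum identity, that puts a copy of the signal at the difference frequency, where a sound card can digitize it. Here is a scaled-down numerical sketch (the frequencies are chosen for illustration and are far below the board's actual 27MHz):

```python
import cmath
import math

def tone(freq, fs, n):
    # n samples of a unit-amplitude cosine at the given frequency.
    return [math.cos(2 * math.pi * freq * i / fs) for i in range(n)]

def dft_mag(x, freq, fs):
    # Normalized magnitude of the DFT at a single frequency (naive but fine here).
    n = len(x)
    acc = sum(x[i] * cmath.exp(-2j * math.pi * freq * i / fs) for i in range(n))
    return abs(acc) / n

fs = 100_000                  # sample rate for the simulation
n = 10_000                    # 0.1 s of samples
f_rf, f_lo = 10_500, 10_000   # scaled-down stand-ins for RF and local oscillator

rf = tone(f_rf, fs, n)
lo = tone(f_lo, fs, n)
mixed = [a * b for a, b in zip(rf, lo)]   # the mixer: pointwise product

# cos(A)*cos(B) = 0.5*cos(A-B) + 0.5*cos(A+B), so the product has energy
# at 500 Hz (the audible baseband copy) and 20500 Hz (to be filtered out).
print(dft_mag(mixed, 500, fs))      # ~0.25
print(dft_mag(mixed, 20_500, fs))   # ~0.25
print(dft_mag(mixed, 5_000, fs))    # ~0, nothing in between
```

The real board does this multiplication in analog circuitry; the host computer's software then handles the 500Hz-scale audio the way this script handles `mixed`.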
The board is designed to run on the 27MHz ISM radio band, although it can be used with other bands. Another of the distinctions between the OpenRadio and the SoftRock design on which it is built is that the OpenRadio hardware includes a prototyping area: a set of large pads onto which the user can construct a set of filters to change the operating frequency. The SoftRock can also be constructed to tune to one of several specific radio bands, but the OpenRadio has a pluggable option, too: the frequency filters can be built on removable daughterboards (instead of being soldered directly to the prototyping area) so users can make several filters and swap them out as desired.
The Arduino code that runs on the Nano is available on GitHub. Several applications are available to run on the host computer, to read samples from the stereo input and process the signal. Both the Python-based Quisk and fldigi were in use around the room once testing began. Both programs can demodulate the radio signals received by the board, decoding a wide variety of signal modes—from single-sideband (SSB) voice to various kinds of digital data packets.
Unfortunately, even given the simplicity of the kits, time did run fairly short by the end of the day. Not everyone was able to complete the assembly process, although quite a few did. Afterward, OpenRadio miniconf organizer Kim Hawtin said that he hopes the kits will be just the first step, and that the project will be able to build on the initial design at subsequent events.
Ham radio has long been a popular hobby among free-software fans, so it can come as an understandable surprise to hear supporters of both movements say that there is important work left to be done in a popular area like SDR. As the LCA OpenRadio miniconf demonstrated, though, even when there is considerable interest, the hardware is far from standardized (even on items like licensing) and the free-software tools for working with SDR tend to be of the "toolkit" variety rather than the full-blown application variety. But the miniconf sold out in advance (there were a fixed number of spaces available in the workshop); one would hope that this and the increasing number of projects available are indicators that the community is actively maturing and good things are still to come.
[The author would like to thank LCA 2015 for travel assistance to Auckland.]
Security
Protecting Python package downloads
Python is looking at ways to protect its users from installing compromised packages from the Python Package Index (PyPI) repository. Currently, packages are downloaded using SSL/TLS encryption, which is enough to ensure package integrity between PyPI and the client, as long as PyPI itself—or some mirror or content-delivery network (CDN) server—is not compromised. But dealing with a compromise of the repository or its mirrors requires another level of security, which is what is now under discussion.
A two-part proposal has been made by three NYU researchers (Vladimir Diaz, Trishank Kuppusamy, and Justin Cappos) with assistance from Python core developer Donald Stufft. The first part takes the form of Python Enhancement Proposal (PEP) 458, which provides a mechanism to sign packages in PyPI using The Update Framework (TUF). It is largely non-controversial, partly because there is no visible impact on users or package developers.
TUF is a project by those same researchers to provide a library that can be used to handle the software-update process securely. It is designed so that the updating client can securely determine that an update is available and download a verified copy of the latest version. The intent is to place most of the work into the library, so that the software-update problem can be easily dealt with for a wide variety of different projects. TUF was also mentioned as a possible solution for the problems outlined in our article last week about the state of Docker image verification.
The second piece of the proposal is contained in PEP 480. It would change the workflow for package developers, which is part of why it seems headed for the back burner until the user-interface issues can be fully considered. The problems largely come down to key management—something that is always difficult in cryptographic verification schemes.
The basic idea behind PEP 458 is that the PyPI administrators would attach TUF metadata (which includes signatures) to packages in the repository. The pip installer (which is now shipped with any recent version of Python), as well as other installers, could then be changed to use the library to look at the metadata and verify the information found therein. This would thwart a wide variety of attacks, but still leave PyPI users vulnerable to others, which is what PEP 480 (the "maximum security model") is meant to address.
The changes needed on the client side are not directly addressed in PEP 458, though there has been work done to make pip work with TUF metadata. Those changes are fairly small, largely because the TUF library handles most of the complexity. By far the biggest pieces of the change are files containing various trusted public keys; the actual download function just needed a tuf.interposition.open_url decorator placed on it.
The bigger piece of the puzzle (and most of what is contained in the PEP) is changing PyPI to handle, store, and serve the TUF metadata. That metadata is signed by various kinds of keys that are described in the "PyPI and TUF Metadata" section of the PEP. A "root" key, which is stored offline (its public portion is distributed with update clients like pip), provides the root of trust; it signs all of the top-level keys.
Another offline key is "targets", which is used to sign the metadata files for the available packages. To allow uploaded packages to be immediately available, the signing ability of the targets key is actually delegated to the online "bins" key. For scalability reasons, that key has its signing authority delegated to up to 1024 subsidiary "bin" keys that are actually used to sign package metadata.
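The delegation can be pictured as a stable mapping from package name to bin role. The exact scheme below (taking the top ten bits of a SHA-256 of the name) is an illustrative assumption, not the PEP's normative algorithm, but any such scheme has the properties that matter: it is deterministic and spreads packages evenly over the bins.

```python
import hashlib

NUM_BINS = 1024   # the PEP's delegation fan-out

def bin_for(package_name):
    # Hash the name and keep the top 10 bits, giving an index in 0..1023.
    digest = hashlib.sha256(package_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:2], "big") >> 6
    return f"bins/{index:04d}"

# Every package lands in exactly one bin, and the mapping is stable, so a
# client always knows which delegated key should have signed the metadata
# for a given package.
for name in ("requests", "numpy", "pip"):
    print(name, "->", bin_for(name))
```

Because only the online bin keys sign routine uploads, compromising one of them exposes only the packages in that bin, not the offline targets key.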
The package metadata consists of sizes and hashes of each file that a client will actually download. Those values can then be verified on the client side to ensure that the proper files were downloaded. The PEP specifies SHA-256 for the hashing algorithm, but does not recommend a specific digital signature algorithm, though it assumes RSA is used. The PEP says that other algorithms could be substituted since the state of cryptography changes over time.
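On the client side, the check is mechanical once the signed metadata has been validated. A minimal sketch (the metadata field names here are assumptions for illustration, not the TUF wire format):

```python
import hashlib

def verify_download(data, expected_length, expected_sha256):
    # Check a downloaded file against its signed metadata entry: the length
    # check catches truncation, the hash check catches any tampering.
    if len(data) != expected_length:
        return False
    return hashlib.sha256(data).hexdigest() == expected_sha256

pkg = b"fake-wheel-contents"
meta = {
    "length": len(pkg),
    "hashes": {"sha256": hashlib.sha256(pkg).hexdigest()},
}

print(verify_download(pkg, meta["length"], meta["hashes"]["sha256"]))       # genuine file
print(verify_download(pkg + b"!", meta["length"], meta["hashes"]["sha256"]))  # tampered file
```

The security of the scheme rests entirely on the metadata itself being trustworthy, which is what the key hierarchy and the snapshot/timestamp files described below are for.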
There are also two other metadata files, each with its own key, that need to be maintained by the repository. The "snapshot" file provides information on the latest version of all the metadata files for each package, which ensures that a client gets a consistent view of the entire repository. Similarly, the "timestamp" file simply provides the latest version number for the snapshot file, so that clients get the latest even in the presence of multiple simultaneous updaters. Those files are signed with separate keys (named, unsurprisingly, snapshot and timestamp). They are stored online (to allow instant availability of new packages) and signed by the root key.
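The effect of the snapshot and timestamp files is easiest to see as a chain of version pins: the timestamp pins the snapshot, and the snapshot pins every other metadata file. This toy model (field names are assumptions, not the TUF wire format) shows how a stale metadata file served by an attacker would be caught:

```python
timestamp = {"snapshot_version": 7}
snapshot = {"version": 7, "meta": {"targets.json": 12, "bins/0001.json": 3}}
fetched = {"targets.json": 12, "bins/0001.json": 3}   # versions the client got

def consistent(timestamp, snapshot, fetched):
    # The snapshot must be the one the timestamp points at...
    if snapshot["version"] != timestamp["snapshot_version"]:
        return False
    # ...and every fetched metadata file must match the snapshot's pin.
    return all(snapshot["meta"].get(name) == ver for name, ver in fetched.items())

print(consistent(timestamp, snapshot, fetched))   # a consistent view

fetched["targets.json"] = 11   # an attacker replays an old targets file
print(consistent(timestamp, snapshot, fetched))   # detected
```

Since timestamp is small, frequently re-signed, and always fetched first, a client also notices when a mirror simply stops serving updates.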
The idea behind all of the different keys is to try to prevent a compromise of one (other than root, obviously) leading to the compromise of all of the different pieces of metadata. The metadata will all expire with some frequency (yearly for root or target, daily for the others) as a way to reduce the impact of key disclosure. Unless the offline keys are disclosed, the short expiration times of the metadata will limit the window of time in which attacks can take place because the attacker cannot sign new versions without access to the offline keys. The PEP also contains an analysis of the effects of compromising individual keys and combinations of those keys.
In fact, the PEP contains a lot of information about TUF and how the authors recommend it be applied to PyPI. Those interested in more details should refer to that document and the TUF specification.
Ideally, packages should be end-to-end signed, so that users can ensure that the same code uploaded by a package developer is what gets installed. That requires developers to have their own keys that can be verified, distributed, revoked, and expired by PyPI. That is the subject of PEP 480, but there are lots of questions about how, exactly, that all might work. In the meantime, though, implementing PEP 458 (the "minimum security model") still protects users against malicious mirrors and CDNs once update clients start incorporating the validation.
The kinds of attacks that can be prevented are those where a compromised repository can cause the client to install malicious code. That includes installing arbitrary code controlled by the attacker or older, known-vulnerable code. TUF also prevents things like the repository specifying a dependency on a malicious or known-vulnerable package or sending a file that is not what was requested by the client.
The biggest complaint about the proposals in the Python distutils mailing list thread is that two of them are being discussed at the same time. Overall, PEP 458 and the TUF security model have been largely met with approval, but PEP 480 is another story. As Nick Coghlan put it:
As a result, my perspective is that it's the UX [user experience] design concept that will make or break PEP 480 - the security model of TUF looks great to me, what gives me pause is concern over the usability and maintainability of signed uploads for "developers in a hurry".
As Coghlan noted, there is still an unresolved issue with regard to externally hosted packages that are listed at PyPI. There are a number of alternatives listed in PEP 458 to handle those kinds of packages, but one needs to be chosen in coordination with those who host those packages. That particular problem has come up before; we looked at it last May and it is the subject of PEP 470.
The confusion stemming from both PEPs being discussed at once led Stufft to propose putting PEP 480 on the back burner while PEP 458 gets polished and finalized. That was met with multiple "+1" posts as well as agreement by Diaz, who is the researcher who posted the PEPs and who has been fielding questions and concerns. Working out the end-to-end problem can come later.
Given that TUF has been suggested for Docker and has been prototyped for Ruby Gems, it would seem to be a solution that numerous projects are interested in. While TUF uses well-studied cryptographic primitives, it is not entirely clear how much vetting by the security and cryptographic communities has been done on the overall framework. Obviously the researchers have looked it over carefully, but one hopes that other, independent security folks have or will do so as well. As we have seen over the years, it is not just cryptographic primitives that need scrutiny; the algorithms that are built atop them can be vulnerable too.
Brief items
Security quotes of the week
All keystrokes are logged online and locally. SMS alerts are sent upon trigger words, usernames or URLs, exposing passwords. If unplugged, KeySweeper continues to operate using its internal battery and auto-recharges upon repowering. A web based tool allows live keystroke monitoring.
- All Britons' communications must be easy for criminals, voyeurs and foreign spies to intercept
- Any firms within reach of the UK government must be banned from producing secure software
- [...]
- Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons
- Free/open source operating systems -- that power the energy, banking, ecommerce, and infrastructure sectors -- must be banned outright
New vulnerabilities
binutils: two vulnerabilities
Package(s): binutils
CVE #(s): CVE-2014-8484 CVE-2014-8485
Created: January 12, 2015
Updated: November 24, 2015
Description: From the CVE entries:

The srec_scan function in bfd/srec.c in libbfd in GNU binutils before 2.25 allows remote attackers to cause a denial of service (out-of-bounds read) via a small S-record. (CVE-2014-8484)

The setup_group function in bfd/elf.c in libbfd in GNU binutils 2.24 and earlier allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via crafted section group headers in an ELF file. (CVE-2014-8485)
condor: code execution
Package(s): condor
CVE #(s): CVE-2014-8126
Created: January 13, 2015
Updated: July 20, 2015
Description: From the Red Hat advisory:

The HTCondor scheduler can optionally notify a user of completed jobs by sending an email. Due to the way the daemon sent the email message, authenticated users able to submit jobs could execute arbitrary code with the privileges of the condor user.
curl: access restriction bypass
Package(s): curl
CVE #(s): CVE-2014-8150
Created: January 9, 2015
Updated: January 16, 2015
Description: From the Debian advisory:

Andrey Labunets of Facebook discovered that cURL, an URL transfer library, fails to properly handle URLs with embedded end-of-line characters. An attacker able to make an application using libcurl to access a specially crafted URL via an HTTP proxy could use this flaw to do additional requests in a way that was not intended, or insert additional request headers into the request.
drupal6-flag: code execution
Package(s): drupal6-flag
CVE #(s): CVE-2014-3453
Created: January 14, 2015
Updated: January 14, 2015
Description: From the CVE entry:

Eval injection vulnerability in the flag_import_form_validate function in includes/flag.export.inc in the Flag module 7.x-3.0, 7.x-3.5, and earlier for Drupal allows remote authenticated administrators to execute arbitrary PHP code via the "Flag import code" text area to admin/structure/flags/import. NOTE: this issue could also be exploited by other attackers if the administrator ignores a security warning on the permissions assignment page.
exiv2: denial of service
Package(s): exiv2
CVE #(s): CVE-2014-9449
Created: January 8, 2015
Updated: July 7, 2015
Description: From the Ubuntu advisory:

It was discovered that Exiv2 incorrectly handled certain tag values in video files. If a user or automated system were tricked into opening a specially-crafted video file, a remote attacker could cause Exiv2 to crash, resulting in a denial of service.
gcab: directory traversal
Package(s): gcab
CVE #(s): CVE-2015-0552
Created: January 12, 2015
Updated: June 1, 2015
Description: From the Mageia advisory:

Jakub Wilk reported a directory traversal vulnerability due to gcab not filtering leading slashes from paths in CAB files.
glpi: two vulnerabilities
Package(s): glpi
CVE #(s): CVE-2014-5032 CVE-2014-8360
Created: January 12, 2015
Updated: January 14, 2015
Description: From the Mageia advisory:

Due to a bug in GLPI before 0.84.7, a user without access to cost information can in fact see the information when selecting cost as a search criteria (CVE-2014-5032).

An issue in GLPI before 0.84.8 may allow arbitrary local files to be included by PHP through an autoload function (CVE-2014-8360).
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-9529 CVE-2014-9428 CVE-2014-8989
Created: January 12, 2015
Updated: January 14, 2015
Description: From the CVE entries:

Race condition in the key_gc_unused_keys function in security/keys/gc.c in the Linux kernel through 3.18.2 allows local users to cause a denial of service (memory corruption or panic) or possibly have unspecified other impact via keyctl commands that trigger access to a key structure member during garbage collection of a key. (CVE-2014-9529)

The batadv_frag_merge_packets function in net/batman-adv/fragmentation.c in the B.A.T.M.A.N. implementation in the Linux kernel through 3.18.1 uses an incorrect length field during a calculation of an amount of memory, which allows remote attackers to cause a denial of service (mesh-node system crash) via fragmented packets. (CVE-2014-9428)

The Linux kernel through 3.17.4 does not properly restrict dropping of supplemental group memberships in certain namespace scenarios, which allows local users to bypass intended file permissions by leveraging a POSIX ACL containing an entry for the group category that is more restrictive than the entry for the other category, aka a "negative groups" issue, related to kernel/groups.c, kernel/uid16.c, and kernel/user_namespace.c. (CVE-2014-8989)
libsndfile: multiple vulnerabilities
Package(s): libsndfile
CVE #(s): CVE-2014-9496 CVE-2014-9756
Created: January 8, 2015
Updated: November 18, 2015
Description: From the Mageia advisory:

libsndfile contains multiple buffer-overflow vulnerabilities in src/sd2.c because it fails to properly bounds-check user supplied input, which may allow an attacker to execute arbitrary code or cause a denial of service (CVE-2014-9496).

libsndfile contains a divide-by-zero error in src/file_io.c which may allow an attacker to cause a denial of service. This issue was assigned CVE-2014-9756 in November 2015.
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2014-9475 CVE-2014-9476
Created: January 8, 2015
Updated: January 14, 2015
Description: From the Mandriva advisory:

In MediaWiki before 1.23.8, thumb.php outputs wikitext message as raw HTML, which could lead to cross-site scripting. Permission to edit MediaWiki namespace is required to exploit this (CVE-2014-9475).

In MediaWiki before 1.23.8, a malicious site can bypass CORS restrictions in $wgCrossSiteAJAXdomains in API calls if it only included an allowed domain as part of its name (CVE-2014-9476).
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2014-8634 CVE-2014-8638 CVE-2014-8639 CVE-2014-8641
Created: January 14, 2015
Updated: February 17, 2015
Description: From the Red Hat advisory:

Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-8634, CVE-2014-8639, CVE-2014-8641)

It was found that the Beacon interface implementation in Firefox did not follow the Cross-Origin Resource Sharing (CORS) specification. A web page containing malicious content could allow a remote attacker to conduct a Cross-Site Request Forgery (XSRF) attack. (CVE-2014-8638)
mpfr: buffer overflow
Package(s): mpfr
CVE #(s): CVE-2014-9474
Created: January 8, 2015
Updated: December 30, 2015
Description: From the Red Hat bug report:

A buffer overflow was reported [1] in mpfr. This is due to incorrect GMP documentation for mpn_set_str about the size of a buffer (discussion is at [1]; first fix in the GMP documentation is at [2]). This bug is present in the MPFR versions from 2.1.0 (adding mpfr_strtofr) to this one, and can be detected by running "make check" in a 32-bit ABI under GNU/Linux with alloca disabled (this is currently possible by using the --with-gmp-build configure option where alloca has been disabled in the GMP build). It is fixed by the strtofr patch [3]. Corresponding changeset in the 3.1 branch: 9110 [4].

[1]: https://gmplib.org/list-archives/gmp-bugs/2013-December/003267.html
[2]: https://gmplib.org/repo/gmp-5.1/raw-rev/d19172622a74
[3]: http://www.mpfr.org/mpfr-3.1.2/patch11
[4]: https://gforge.inria.fr/scm/viewvc.php?view=rev&root=mpfr&revision=9110
openssl: multiple vulnerabilities
Package(s): openssl
CVE #(s): CVE-2014-3569 CVE-2014-3570 CVE-2014-3571 CVE-2014-3572 CVE-2014-8275 CVE-2015-0204 CVE-2015-0205 CVE-2015-0206
Created: January 12, 2015
Updated: March 20, 2015
Description: From the CVE entries:

The ssl23_get_client_hello function in s23_srvr.c in OpenSSL 0.9.8zc, 1.0.0o, and 1.0.1j does not properly handle attempts to use unsupported protocols, which allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via an unexpected handshake, as demonstrated by an SSLv3 handshake to a no-ssl3 application with certain error handling. NOTE: this issue became relevant after the CVE-2014-3568 fix. (CVE-2014-3569)

The BN_sqr implementation in OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k does not properly calculate the square of a BIGNUM value, which might make it easier for remote attackers to defeat cryptographic protection mechanisms via unspecified vectors, related to crypto/bn/asm/mips.pl, crypto/bn/asm/x86_64-gcc.c, and crypto/bn/bn_asm.c. (CVE-2014-3570)

OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted DTLS message that is processed with a different read operation for the handshake header than for the handshake body, related to the dtls1_get_record function in d1_pkt.c and the ssl3_read_n function in s3_pkt.c. (CVE-2014-3571)

The ssl3_get_key_exchange function in s3_clnt.c in OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k allows remote SSL servers to conduct ECDHE-to-ECDH downgrade attacks and trigger a loss of forward secrecy by omitting the ServerKeyExchange message. (CVE-2014-3572)

OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k does not enforce certain constraints on certificate data, which allows remote attackers to defeat a fingerprint-based certificate-blacklist protection mechanism by including crafted data within a certificate's unsigned portion, related to crypto/asn1/a_verify.c, crypto/dsa/dsa_asn1.c, crypto/ecdsa/ecs_vrf.c, and crypto/x509/x_all.c. (CVE-2014-8275)

The ssl3_get_key_exchange function in s3_clnt.c in OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k allows remote SSL servers to conduct RSA-to-EXPORT_RSA downgrade attacks and facilitate brute-force decryption by offering a weak ephemeral RSA key in a noncompliant role. (CVE-2015-0204)

The ssl3_get_cert_verify function in s3_srvr.c in OpenSSL 1.0.0 before 1.0.0p and 1.0.1 before 1.0.1k accepts client authentication with a Diffie-Hellman (DH) certificate without requiring a CertificateVerify message, which allows remote attackers to obtain access without knowledge of a private key via crafted TLS Handshake Protocol traffic to a server that recognizes a Certification Authority with DH support. (CVE-2015-0205)

Memory leak in the dtls1_buffer_record function in d1_pkt.c in OpenSSL 1.0.0 before 1.0.0p and 1.0.1 before 1.0.1k allows remote attackers to cause a denial of service (memory consumption) by sending many duplicate records for the next epoch, leading to failure of replay detection. (CVE-2015-0206)
otrs2: privilege escalation
Package(s): otrs2
CVE #(s): CVE-2014-9324
Created: January 12, 2015
Updated: February 10, 2015
Description: From the Debian advisory:

Thorsten Eckel of Znuny GMBH and Remo Staeuble of InfoGuard discovered a privilege escalation vulnerability in otrs2, the Open Ticket Request System. An attacker with valid OTRS credentials could access and manipulate ticket data of other users via the GenericInterface, if a ticket webservice is configured and not additionally secured.
php5: denial of service
Package(s): php5
CVE #(s):
Created: January 13, 2015
Updated: January 14, 2015
Description: From the Debian advisory:

It was discovered that libmagic, as used by PHP, would trigger an out-of-bounds memory access when trying to identify a crafted file. Additionally, this update fixes a potential dependency loop in dpkg trigger handling.
python-django: multiple vulnerabilities
Package(s): python-django
CVE #(s): CVE-2015-0219 CVE-2015-0220 CVE-2015-0221 CVE-2015-0222
Created: January 14, 2015
Updated: February 6, 2015
Description: From the Ubuntu advisory:

Jedediah Smith discovered that Django incorrectly handled underscores in WSGI headers. A remote attacker could possibly use this issue to spoof headers in certain environments. (CVE-2015-0219)

Mikko Ohtamaa discovered that Django incorrectly handled user-supplied redirect URLs. A remote attacker could possibly use this issue to perform a cross-site scripting attack. (CVE-2015-0220)

Alex Gaynor discovered that Django incorrectly handled reading files in django.views.static.serve(). A remote attacker could possibly use this issue to cause Django to consume resources, resulting in a denial of service. (CVE-2015-0221)

Keryn Knight discovered that Django incorrectly handled forms with ModelMultipleChoiceField. A remote attacker could possibly use this issue to cause a large number of SQL queries, resulting in a database denial of service. This issue only affected Ubuntu 14.04 LTS and Ubuntu 14.10. (CVE-2015-0222)
smack: IQ response spoofing
Package(s): smack
CVE #(s): CVE-2014-0364
Created: January 12, 2015
Updated: January 14, 2015
Description: From the CVE entry:

The ParseRoster component in the Ignite Realtime Smack XMPP API before 4.0.0-rc1 does not verify the from attribute of a roster-query IQ stanza, which allows remote attackers to spoof IQ responses via a crafted attribute.
unrtf: denial of service
Package(s): unrtf
CVE #(s):
Created: January 12, 2015
Updated: January 15, 2015
Description: From the Mageia advisory:

Hanno Böck also reported a number of other crashes in unrtf besides the ones associated with CVE-2014-9275. These could allow a denial of service when opening a malicious malformed RTF file which causes unrtf to crash.
webkitgtk: multiple vulnerabilities
Package(s): webkitgtk
CVE #(s): CVE-2014-1344 CVE-2014-1384 CVE-2014-1385 CVE-2014-1386 CVE-2014-1387 CVE-2014-1388 CVE-2014-1389 CVE-2014-1390
Created: January 12, 2015
Updated: January 27, 2016
Description: From the CVE entries:

WebKit, as used in Apple Safari before 6.1.4 and 7.x before 7.0.4, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted web site, a different vulnerability than other WebKit CVEs listed in APPLE-SA-2014-05-21-1. (CVE-2014-1344)

WebKit, as used in Apple Safari before 6.1.6 and 7.x before 7.0.6, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted web site, a different vulnerability than other WebKit CVEs listed in HT6367. (CVE-2014-1384, CVE-2014-1385, CVE-2014-1386, CVE-2014-1387, CVE-2014-1388, CVE-2014-1389, and CVE-2014-1390 are each distinct vulnerabilities with this same description.)
wireshark: denial of service
Package(s): wireshark
CVE #(s): CVE-2015-0562 CVE-2015-0563 CVE-2015-0564
Created: January 12, 2015
Updated: January 27, 2015
Description: From the Mageia advisory:

The DEC DNA Routing Protocol dissector could crash (CVE-2015-0562). The SMTP dissector could crash (CVE-2015-0563). Wireshark could crash while decrypting TLS/SSL sessions (CVE-2015-0564).
xen: denial of service
Package(s): xen
CVE #(s): CVE-2013-3495
Created: January 9, 2015
Updated: January 14, 2015
Description: From the CVE entry:

The Intel VT-d Interrupt Remapping engine in Xen 3.3.x through 4.3.x allows local guests to cause a denial of service (kernel panic) via a malformed Message Signaled Interrupt (MSI) from a PCI device that is bus mastering capable that triggers a System Error Reporting (SERR) Non-Maskable Interrupt (NMI).
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.19-rc4, released on January 11. "Another week, another -rc. Things have remained reasonably calm, although we also had a few last-minute MM regressions. Happily, most of them got fixed really quickly, with one remaining arm64 issue still pending."
Stable updates: 3.18.2, 3.17.8, 3.14.28, and 3.10.64 were all released on January 8. 3.17.8 is the final release in the 3.17.x series. As of this writing, the 3.18.3, 3.14.29, and 3.10.65 updates are in the review process; they can be expected on or after January 16.
Linux’s Creator Wants Us All to Chill Out About the Leap Second (Wired)
Wired talks with Linus Torvalds about the potential for another leap-second bug. "Really, to the rest of us, just take the leap second as an excuse to have a small nonsensical party for your closest friends. Wear silly hats, get a banner printed that says 'Leap Second Doomsday Party', and get silly drunk. You’ll blink, and it’s over, but at least you’ll have the hangover next day to remind you of that glorious but fleeting extra second."
Kernel development news
Improving Linux networking performance
100Gb network adapters are coming, said Jesper Brouer in his talk at the LCA 2015 kernel miniconference (slides [PDF]). Driving such adapters at their full wire speed is going to be a significant challenge for the Linux kernel; meeting that challenge is the subject of his current and future work. The good news is that Linux networking has gotten quite a bit faster as a result — even if there are still problems to be solved.
The challenge
As network adapters get faster, the time between packets (i.e. the time the kernel has to process each packet) gets smaller. With current 10Gb adapters, there are 1,230ns between two 1538-byte packets. 40Gb networking cuts that time down significantly, to 307ns. Naturally, 100Gb exacerbates the problem, dropping the per-packet time to about 120ns; the interface, at this point, is processing 8.15 million packets per second. That does not leave a lot of time to figure out what to do with each packet.
So what do you do if you, like almost all of us, do not have a 100Gb adapter around to play with? You use a 10Gb adapter with small frames instead. The smallest Ethernet frame that can be sent is 84 bytes; on a 10Gb adapter, Jesper said, there are 67.2ns between minimally-sized packets. A system that can cope with that kind of load should be positioned to do something reasonable with 100Gb networking when it becomes available. But coping with that load is hard: on a 3GHz CPU, there are only about 200 CPU cycles available for the processing of each packet. That, Jesper noted, is not a lot.
The kernel has traditionally not done a great job with this kind of network-intensive workload. That has led to the existence of a number of out-of-tree networking implementations that bypass the kernel's network stack entirely. The demand for such systems indicates that the kernel is not using the hardware optimally; the out-of-tree implementations can drive adapters at full wire speed from a single CPU, something the mainline kernel is hard put to do.
The problem, Jesper said, is that the kernel developers have focused on scaling out to large numbers of cores. In the process, they have been able to hide regressions in per-core efficiency. The networking stack, as a result, works well for many workloads, but workloads that are especially latency-sensitive have suffered. The kernel, today, can only forward somewhere between 1M and 2M packets per core every second, while some of the bypass alternatives approach a rate of 15M packets per core per second.
Time budgets
If you are going to address this kind of problem, you have to take a hard look at the cost of every step in the processing of a packet. So, for example, a cache miss on Jesper's 3GHz processor takes about 32ns to resolve. It thus only takes two misses to wipe out the entire time budget for processing a packet. Given that a socket buffer ("SKB") occupies four cache lines on a 64-bit system and that much of the SKB is written during packet processing, the first part of the problem is apparent — four cache misses would consume far more than the time available.
Beyond that, using the x86 LOCK prefix for atomic operations takes about 8.25ns. In practice, that means that the shortest spinlock lock/unlock cycle takes a little over 16ns. So there is not room for a lot of locking within the time budget.
Then there is the cost of performing a system call. On a system with SELinux and auditing enabled, that cost is just over 75ns — over the time budget on its own. Disabling auditing and SELinux reduces the time required to just under 42ns, which is better, but that is still a big part of the time budget. There are ways of amortizing that cost over multiple packets; they include system calls like sendmmsg(), recvmmsg(), sendfile(), and splice(). In practice, he said, they do not work as well as he expected, but he did not get into why. From the audience, Christoph Lameter noted that latency-sensitive users tend to use the InfiniBand "IB verbs" mechanism.
Given all of these costs, Jesper asked, how do the network-bypass solutions achieve higher performance? The key appears to be batching of operations, along with preallocation and prefetching of resources. These solutions keep work CPU-local and avoid locking. It is also important to shrink packet metadata and reduce the number of system calls. Faster, cache-optimal data structures also help. Of all of these techniques, batching of operations is the most important. A cost that is intolerable on a per-packet basis is easier to absorb if it is incurred once per dozens of packets. 16ns of locking per packet hurts; if sixteen packets are processed at once, that overhead drops to 1ns per packet.
Improving batching
So, unsurprisingly, Jesper's work has been focused on improving batching in the networking layer. It includes the TCP bulk transmission work that was covered here in October; see that article for details on how it works. In short, it is a mechanism for informing network drivers that there are more packets waiting for transmission, allowing the driver to delay expensive operations until all of those packets have been queued. With this work in place, his system can transmit 14.8M packets per second — at least if it's the same little packet sent over and over again.
The tricky part, he said, is adding batching APIs to the networking stack without increasing the latency of the system. Latency and throughput must often be traded off against each other; here the objective is to optimize both. An especially hard trick to resist is speculative transmission delays — a bet that another packet is coming soon. Such tricks tend to improve benchmark results but are less useful for real-world workloads.
Batching can — and should — be done at multiple layers in the stack. So, for example, the queuing discipline ("qdisc") subsystem is a good place for batching; after all, delays are already happening as the result of queueing. In the best case, currently, the qdisc code requires six LOCK operations per packet — 48ns of pure locking overhead. The full cost of queuing a packet is 58-68ns, so the bulk of that time is spent on locking operations. Jesper has worked to add batching, spreading that cost over multiple packets, but that only works if there is actually a queue of packets.
The nominal fast path through the qdisc code happens when there is no queue; in such situations, packets can often be passed directly to the network interface and not queued at all. Currently, such packets incur the cost of all six LOCK operations. It should, he said, be possible to do better. A lockless qdisc subsystem could eliminate almost all the cost of queuing packets. Jesper has a test implementation to demonstrate what can be done; eliminating a minimum of 48ns of overhead, he said, is well worth doing.
While transmission performance now looks reasonably good, he said, receive processing can still do with some improvement. A highly tuned setup can receive a maximum of about 6.5M packets per second — and that's when the packets are simply being dropped after reception. Some work on optimizing the receive path is underway, raising that maximum to just over 9M packets per second. But there is a problem with this benchmark: it doesn't show the costs of interaction with the memory-management subsystem.
Memory management
And that interaction, it turns out, is painful. The network stack's receive path, it seems, has some behavioral patterns that do not bring out the best behavior in the slab allocators. The receive code can allocate space for up to 64 packets at a time, while the transmit path can free packets in batches of up to 256. This pattern seems to put the SLUB allocator, in particular, into a relatively slow path. Jesper did some microbenchmarking and found that a single kmem_cache_alloc() call followed by kmem_cache_free() required about 19ns. But when 256 allocations and frees were done, that time increased to 40ns. In real-world use in the networking stack, though, where other things are being done as well, the allocation/free overhead grows even more, to 77ns — more than the per-packet time budget on its own.
Thus, Jesper concluded, there need to be either improvements to the memory-management code or some way of bypassing it altogether. To try the latter approach, he implemented a subsystem called qmempool; it does bulk allocation and free operations in a lockless manner. With qmempool, he was able to save 12ns in simple tests, and up to 40ns in packet forwarding tests. There are a number of techniques used in qmempool to make it faster, but the killer feature is the batching of operations.
Jesper wound down by saying that qmempool was implemented as a sort of provocation: he wanted to show what was possible and inspire the memory-management developers to do something about it. The response from the memory-management camp was covered in the next talk, which will be reported on separately.
[Your editor would like to thank linux.conf.au for funding his travel to the event.]
Toward a more efficient slab allocator
Following up on Jesper Brouer's session on networking performance, Christoph Lameter's LCA kernel miniconf session covered ways in which the performance of the kernel's low-level object allocators (the "slab" allocators) could be improved to meet current and future demands. Some of the work he covered is new, but some of it has been around, in concept at least, for some time.
Batch allocation
Jesper talked about the need to process packets in batches; that, in turn, leads to the need to allocate and free data structures in batches. The overhead of a single-object allocation is too high for the needs of the networking subsystem, but, if that overhead can be spread out over a large number of objects, it becomes more tolerable. Christoph's work, which was posted to the linux-kernel list for review in December, provides an interface for multiple-object allocation.
To allocate a set of objects from a slab cache, one would call:
kmem_cache_alloc_array(struct kmem_cache *cache, gfp_t gfp, int nr,
void **objects, unsigned int flags);
If all goes well, this function will allocate nr objects, placing pointers to them in the objects array.
The flags argument is there to support a few different modes of allocation. SLAB_ARRAY_ALLOC_LOCAL says to allocate the objects from a local, per-CPU array. Allocation is lockless and quite fast, but there is a limited number of objects available from this cache. Larger batches can be allocated with SLAB_ARRAY_ALLOC_PARTIAL, which tries to grab the objects from the per-CPU list of partially-allocated pages. This mode may be a bit slower, but it avoids draining the per-CPU object cache. Finally, large numbers of objects can be allocated with SLAB_ARRAY_ALLOC_NEW, which allocates objects from freshly allocated pages.
That last mode may seem especially slow since it requires calls into the page allocator. But, for large batches, Christoph said, it could actually be the fastest mode of them all. Normally the SLUB allocator (which Christoph maintains) must manipulate the free list of objects used in the management of slab pages; working with fresh pages avoids that need, and, in the process, cuts out a lot of cache misses associated with list traversal. The cost, in the current implementation, is that only full pages of objects can be allocated, so the returned number of objects may be less than what was asked for. Dave Chinner said that such an interface may be useful in the filesystem layer, but the allocator would have to return the requested number of objects, so that behavior might change in the future.
Objects can also be freed in batches, using:
kmem_cache_free_array(struct kmem_cache *cache, int nr, void **objects);
The current plan is to add this array-allocation API with a fallback mode for slab allocators that do not support it. That allows testing the API without the need to implement it in all three allocators supported by the kernel.
Implementation of this API in the SLUB allocator is done. The biggest challenge is the manipulation of the free lists, which can add a lot of cache misses to an allocation operation. As mentioned above, allocation using fresh pages avoids that problem, since the free list need never exist in the first place. Implementation in the SLAB allocator is easier, since it already maintains a per-page array of free objects; there is no free list to traverse. There was no mention of the SLOB allocator, but SLOB users are not primarily focused on performance anyway.
Fixing slab fragmentation
The second part of Christoph's talk had to do with slab page fragmentation issues. All of the slab allocators work by allocating full pages, breaking them up into equal-sized objects, then passing those objects out to the rest of the kernel on request. One result of this strategy is that, over time, the allocators accumulate lists of partially allocated pages — pages with some objects allocated, and others free. These fragmented pages are costly to track; they also represent a fair amount of wasted memory that cannot be freed for other uses. There would be value in a mechanism that could free some of these partially allocated pages.
There are a number of patches out there addressing parts of the fragmentation problem. The first of these takes a relatively simple approach: the lists of partially allocated pages are sorted to put those with the fewest free objects at the beginning. The hope is that subsequent allocation requests will allocate the last remaining objects in those pages, at which point the allocator can stop tracking them. At the other end of the list, the pages which contain few allocated objects will, with luck and if further objects are not allocated from them, become fully free when the remaining objects are returned. Those pages can then be handed back to the page allocator.
The next step is off-node allocation. The slab allocators normally try to keep memory allocations on the same NUMA node as the requester. But, on occasion, the SLUB allocator will allocate from a remote node in the hope of clearing some partially-allocated pages from that node. This off-node access happens relatively rarely, and only if the allocation request does not explicitly ask for node-local memory. But, carefully used, it can help to get mostly-allocated pages off the partial-page lists.
A more invasive approach is what Christoph called "defragmentation by eviction." It was first proposed in 2009, but was rejected at the time. It allows callbacks to be associated with objects allocated from a slab cache. There are two of these: get() and kick(). A call to get() establishes a stable reference to an object so that it will not be freed while the allocator is trying to free the entire page. A call to kick(), instead, requests that the object be freed. The callback can refuse to free the object, but, clearly, the mechanism will work better if these requests are honored whenever possible. After all, it only takes one refused request to thwart an attempt to free a page.
Finally, Christoph mentioned that, sometime in the future, there will be a need to support movable objects in the slab caches. Much work has gone into making memory pages movable; at this point, the slab caches represent the bulk of unmovable pages in the system. Solving that problem will not be easy, Christoph said, but it may, in the end, be the only way to truly solve the problem of slab page fragmentation.
[Your editor would like to thank linux.conf.au for funding his travel to the event.]
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
Rockstor — A Btrfs-based NAS distribution
This is the second article in a short series on distributions designed for use in a network-attached storage (NAS) box. The first was a look at OpenMediaVault, a fairly traditional NAS distribution. The subject this time around — Rockstor — is a different beast; its purpose is to make the features of the Btrfs filesystem available behind an easy-to-use, web-oriented management interface.
Given that, by some accounts, Btrfs is still not ready for production use, one might wonder about the wisdom of using it in an NAS box, which, after all, could be the definition of "production use." The Rockstor developers are clearly sensitive to that concern, to the point that they have dedicated an FAQ entry to it: "In our experience, BTRFS has become very reliable. Also, Rockstor confines users from using BTRFS more freely, thus reducing the chances of hitting deep intricate bugs." The Rockstor forums do not contain any reports of data loss as of this writing, which is an encouraging sign. But it is not clear that Rockstor has a lot of users yet, and advice in the project documentation like "so wear your Linux ninja hat to troubleshoot serious data loss problems." might be seen as troubling. Rockstor is probably of most interest to users who want to explore the leading edge and are prepared for the possibility of trouble.
The distribution appears to be built by a small group of developers at a company (Rockstor, Inc.) that hopes to monetize it via support contracts and partnerships with other companies. That, too, could be an area of concern; if Rockstor's commercial plans don't work out, the distribution could end up in an unsupported state. The best defense against that outcome, of course, would be to grow an active development community outside of the corporation; it is not clear that this has happened so far, though. The Rockstor GitHub repository shows commits from four contributors, three of which are listed as employees of the company.
The Rockstor distribution is based on CentOS 7. Given the rapid pace of Btrfs development, one might wonder about using a set-in-stone enterprise-oriented distribution. Rockstor ships a newer kernel than CentOS, though; the 3.5 release tested by your editor runs a 3.17 kernel seemingly taken from ElRepo. Assuming that no bugs are introduced in the newer kernel, a CentOS base should provide the kind of stability that is more than welcome in a storage server. It also allows Rockstor to depend on CentOS to provide the bulk of its security updates.
Installation
The initial installation experience was not entirely pleasant. The installation documentation says that the installation image can be copied to a USB drive and booted from there — as one would expect these days. Nothing your editor tried would render that drive bootable, though. A question on the forum yielded the truth: booting from USB simply does not work. That would have been nice to know before sinking some hours into the attempt.
The acquisition of a cheap USB optical drive was sufficient to get past this problem. Once booted, the CD provides a fairly straightforward, Anaconda-based graphical installation experience. Like OpenMediaVault, Rockstor wants an entire drive for its own use; it will not share a drive with the filesystems it serves. Happily, the small server used as a testing platform has a MicroSD slot on the motherboard; purchasing a 16GB card along with the USB drive enabled an installation that leaves all of the drive bays free for data storage.
Managing Rockstor
Unlike OpenMediaVault, Rockstor uses SSL with its web-based administration interface from the outset. At this point, of course, the user has had no opportunity to install an SSL certificate on the machine, so Rockstor generates its own. That leads to the inevitable "unknown certificate" warnings when first connecting to the new machine — unfortunate, but it would be hard to do better. The warning seems preferable to managing a storage server over an unencrypted connection.
The first step requires the user to agree to the end-user license agreement. It was with some trepidation that your editor went off to see what he was agreeing to, but the actual EULA is sufficiently short and sweet that most users should have little trouble with it.
The administrative interface itself starts with a "dashboard" view with a number of little, constantly updating widgets. The usual information is there, showing parameters like available space, network bandwidth, and CPU usage at a glance. The Rockstor interface also shows I/O bandwidth to the individual storage devices, which is a useful feature.
The first step, naturally, is to set up some filesystems to export to the network. The only available filesystem type is Btrfs, and its use forces a different view of the task than one sees with more traditional filesystems. The first step is to organize the available devices (Rockstor only deals with full drives; no partitioning is available) into "pools" for use. Pools can be set up with any of the usual RAID configurations from standalone drives through RAID6; data compression can also be enabled at this level.
The pool setup process feels a bit fragile at times; it will fail if it looks like the drive is in use for any other purpose. Seemingly the presence of a partition table on the drive is all it takes to block the pool-creation process. Getting around such issues can require some manual command-line work — not the experience that the web interface is meant to provide. The error messages can also be misleading; it claimed that a disk was unusable due to having a Btrfs filesystem on it, when that drive was previously part of an MD RAID setup hosting an ext4 filesystem.
Existing pools can be resized by adding or removing disks. The resizing dialog, though, warns that the pool must be rebalanced manually after adding or removing drives. There is no indication of how to do that rebalancing. So it is safe to say that, for all practical purposes, the pool-resizing functionality in the web interface does not really work yet.
One other place where Rockstor falls a bit short is that it has no support for SMART monitoring of drive health. Drives fail, and anything that can be done to gain some advance warning of a failure can only be welcome. Rockstor's web interface also lacks any sort of power-management configuration for drives.
After at least one pool has been set up, it is time to create "shares," otherwise known as filesystems. Each share draws space from one pool; it can share that space with other filesystems. The space in the pool can be overcommitted if one desires, as long as the relevant filesystems don't all fill at the same time.
A separate "exports" screen controls the exporting of shares to the net; it can manage NFS, CIFS, and Apple (AFP) shares. The NFS screen is simple but straightforward; it allows the specification of a special host that is allowed root access to the filesystem. The CIFS (Samba) screen is also simple; one thing that is lacking here is support for the Samba home directories feature.
If one digs far enough into the "shares" screens, one finds the ability to work with Btrfs snapshots. Taking a snapshot is a simple matter of hitting a button and giving the snapshot a name. Snapshots can be cloned and deleted. There is also a function to roll a filesystem back to a previous snapshot, but that will only work if that filesystem is not exported to the net.
There is a set of screens for managing users and groups. There also appears to be the ability to obtain this information from a NIS or LDAP server on the net. In general, one gets the sense that the Rockstor developers assume that user management will be handled elsewhere.
Command-line administration — sort of
The web interface is nice, but it seems obvious that not all tasks can be accomplished that way currently. Obtaining a shell on the device is easy enough, of course, but one naturally wonders how much command-line work can be done before one runs into conflicts with the Rockstor software. There is, for example, a PostgreSQL database humming away in the background, so there is clearly some significant state being maintained; putting the system in a condition that doesn't match that state is unlikely to lead to pleasant results.
The Rockstor developers seem to have thought of this problem; for that reason, they have provided a command-line interface providing access to a variety of management functions. The documentation says that this interface is entered automatically when an administrative user logs into the server with SSH. That did not happen; an SSH login leads to an ordinary bash prompt. Some digging around turned up the actual interface in /opt/rockstor/bin/rcli, though.
The functionality provided there is useful, and could make the writing of shell scripts easier. Unfortunately, the command-line interface is fragile at best. Various functions don't work and just about any error leads to a Python traceback and the program's untimely demise. Clearly some work is still needed to get this tool into a usable condition.
Closing notes
One interesting feature that your editor was unable to play with is server replication; Rockstor can be configured to automatically replicate filesystems across servers. The Btrfs send/receive feature is used to implement this functionality. If it works as advertised, it is a feature that could prove useful, especially in larger installations.
The web interface advertises an "analytics" function that is supposed to provide information on what is keeping the NFS server busy. It seems to be based on a set of SystemTap probes. Your editor was unable to get it to produce any output, though.
To summarize: Rockstor is an interesting distribution with a number of useful features. It could well serve as the base for a production NAS device. That said, much of this code has the appearance of being rather new and immature. The Rockstor distribution could certainly benefit from a larger group of developers who could round out its functionality and deal with the various glitches. If Rockstor the company survives long enough to build a development community and get that work done, it could have a bright future; this is a project to watch.
Brief items
Distribution quote of the week
Debian 7.8 released
The eighth update to Debian 7 "wheezy" has been released. As usual this update adds corrections for security issues and serious problems. "Those who frequently install updates from security.debian.org won't have to update many packages and most updates from security.debian.org are included in this update."
Distribution News
Debian GNU/Linux
Marvell donation accelerates Debian ARM package builds
The Debian Project has announced that Marvell has donated equipment to help the ARM port. "Starting in April, several Debian ARM port builder machines have been upgraded to substantially faster Marvell Armada XP based servers. Marvell has donated eight Marvell MV78460 SoC development boards using Marvell Armada 370/XP CPUs running at 1.6GHz."
Bug Squashing party for Debian and Ubuntu
There will be a bug squashing party for Debian and Ubuntu January 31-February 1 in Oslo, Norway. There will be a workshop for new contributors.
Fedora
FESCo and Env and Stacks WG upcoming elections nominations are open
Nominations are open for the Fedora Engineering Steering Committee (FESCo) and the Environment and Stacks Working Group (WG). There are five seats open on FESCo and four seats open for the WG. Nominations are open through January 19.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 592 (January 12)
- Ubuntu Weekly Newsletter, Issue 399 (January 11)
6 new things Fedora 21 brings to the open source cloud (Opensource.com)
Jason Baker takes a look at Fedora 21 Cloud, on Opensource.com. "When you're talking about paying for space for hundreds of virtual machines, or even just waiting for a configured image to upload from your local machine to your cloud environment, size matters. The Fedora maintainers made strong progress bringing down the base size of the Fedora cloud image for the new release. Cloud images now clock in at a 10% smaller size than the previous release, with a qcow2 formatted version under 200MB."
Deepin Linux: A Polished Distro That's Easy to Install and Use (Linux.com)
Linux.com has a review of Deepin Linux. "Even if it looks like Gnome Shell, Deepin is not using the shell. They started off with the Shell, but encountered many problems as they tried to customize it to their needs (that could be one of the many reasons there are so many forks or alternatives to Gnome Shell: Cinnamon, Mate, Elementary OS' Pantheon, and Unity, among many others). Similar to other projects Deepin went ahead and developed their own Shell which was simply called Deepin Desktop Environment. DDE is based on HTML5 and WebKit and uses a mix of QML and Go Language for different components. Core components of DDE include the desktop itself, the brand new launcher, bottom Dock, and the control center."
Page editor: Rebecca Sobol
Development
Android gets a toybox
The toybox project has been around for a while, and now it looks to be achieving one of its longtime goals: inclusion into Android. Toybox is a replacement for many Linux command-line utilities that is targeted at embedded systems—similar to BusyBox. It was created by Rob Landley, who is also a former BusyBox maintainer, and has a BSD license rather than BusyBox's GPLv2—something that was likely important for the notoriously GPL-averse Android project.
Landley started toybox in 2006 under the GPLv2, but soon switched to the BSD license. The project was largely dormant until he restarted development in 2011 with the goal of being a drop-in replacement for Android's toolbox that, like BusyBox and toybox, encompasses multiple command-line utilities in a single binary.
Toolbox, though, has long been a source of frustration for Android developers, some of whom replace it with BusyBox as one of their first acts on any new Android system. Developers have found that toolbox lacks many of the commands they need, while also providing commands with different behavior and command-line options than they are used to from Linux.
Landley released toybox 0.5.1 on November 19. That release is a bug fix update to 0.5.0 that we looked at in October.
Prior to that, though, Jason Spiro filed a bug in the Android Open Source Project (AOSP) bug tracker asking that the project incorporate toybox. There was some back-and-forth in the bug comments between Bionic (Android's C library) maintainer Elliott Hughes and Landley that resulted in a toybox that would build for Android. After that, toybox was added to the AOSP Git repository.
But there is more than just adding the code. Hughes posted a status report on December 18 to the toybox mailing list (which was updated on January 1) that listed various utilities in the AOSP master branch that use toybox as the underlying binary. The list has quite a number of new commands that were not available in toolbox, as well as more than two dozen commands where the toybox version replaces the one in toolbox. There are also some notes for commands that need to be looked at or fixed in toybox before Android can switch to using them.
For example, Hughes lists missing command-line arguments for several commands (e.g. cat, cmp, grep), but patches are pending for many of those. Other commands, such as ifconfig and inotifyd, appear to provide a superset of the functionality required by Android, but more testing needs to be done.
There are a couple of advantages that toybox brings to the table. The utilities work more like the GNU versions, which should be more familiar to Linux users than the NetBSD-based versions that are in toolbox. It also neatly avoids the GPL that comes with both the GNU utilities and BusyBox, which is frowned upon (at least) for Android user space.
In fact, the license difference is one reason that another embedded Linux distribution, Tizen, is looking at toybox. It currently uses the GNU coreutils package that is GPLv3-licensed. The anti-Tivo-ization language in that version of the GPL evidently worries some device manufacturers that have a "managed platform" (e.g. phones, TVs, in-vehicle infotainment or IVI). Tizen test systems have been built to use toybox instead without any problems, as reported on the wiki page, but it appears that more testing needs to be done.
Landley's larger vision is to use stock Android phones as a development platform by adding USB devices for keyboard, mouse, and display. There are hurdles, however, since Android kernels tend to only support USB gadget mode—plugging in a hub with input and output devices does not work on stock Android devices. He also described his plans, with links to a talk he gave about them, in a comment on the AOSP bug. It's not at all clear that Google (or even some Google developers such as Hughes) share those goals, but it is clear that Android will be moving away from toolbox and toward toybox in coming releases. That certainly gives the project a nice boost.
Shortly after the project restarted, it came under fire from Matthew Garrett and others because it was perceived as providing a way to avoid the license-compliance efforts that surround BusyBox. While Google is not circumventing the GPL in its work—other than the kernel, it tries to avoid the license entirely—the Android downstreams have sometimes played fast and loose with the GPL-licensed kernel (and possibly other GPL code). But other devices that currently ship BusyBox could switch to toybox to avoid compliance efforts based on BusyBox, which is Garrett's complaint.
The BusyBox compliance front has been quiet lately, at least visibly, so the Software Freedom Conservancy (SFC) (which has been leading those efforts) may have turned its attention elsewhere. There may also be ongoing compliance efforts that have not been made public, which is the norm unless a lawsuit is filed. The SFC also has a compliance project for the Linux kernel that could be used against vendors that are not publishing their kernel source as required by the GPL. Once again, though, any kernel compliance work from that project is proceeding quietly at this point.
Landley has been a vocal opponent of the BusyBox lawsuits since he ceased being one of the plaintiffs in those suits. So any reduction in those kinds of actions due to the adoption of toybox will likely be a satisfying outcome for him. In any case, more free code seeing wider adoption is certainly a good thing for the community. It may mean that certain kinds of GPL enforcement will need to change, but enforcing the license on the actual code of interest, rather than using BusyBox as a proxy, is likely to work out better in the long run.
Brief items
Quotes of the week
Rust 1.0 alpha released
The alpha version of the Rust 1.0 release has been announced. There is a long list of new features added to the language; see the release notes for details. "The language is feature-complete for 1.0. While we plan to make many usability improvements before the final release, all 1.0 language features are now in place and we do not expect major breaking changes to them."
KDE Frameworks 5.6.0 available
KDE Frameworks 5.6.0 has been released. Among the 60 addon libraries included in the release are two new offerings: KPackage, a library for loading and installing packages of non-binary files, and NetworkManagerQt, a Qt wrapper for the NetworkManager API.
Plasma 5.2 beta out for testing
KDE has announced the release of Plasma 5.2 beta. Some new components in this release include BlueDevil to manage Bluetooth devices, the Muon software manager, login theme configuration (SDDM), KScreen to set up multiple monitors, and more.
U-Boot v2015.01 released
Version 2015.01 of the U-Boot bootloader has been released. This update is noteworthy because it begins the project's effort to migrate its configuration data into Kconfig files. Also worth pointing out is that device-model I2C and SPI support has been merged upstream.
Firefox 35.0
Firefox 35.0 has been released. New in this release: Firefox Hello with new rooms-based conversations model, new search UI improved and enabled for more locales, access the Firefox Marketplace from the Tools menu and optional toolbar button, improved high quality image resizing performance, and more. See the release notes for details.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (January 12)
- Haskell Weekly News (January 8)
- LLVM Weekly (January 12)
- OCaml Weekly News (January 13)
- OpenStack Community Weekly Newsletter (January 12)
- Perl Weekly (January 21)
- PostgreSQL Weekly News (January 11)
- Python Weekly (January 8)
- Ruby Weekly (January 8)
- This Week in Rust (January 14)
- Tor Weekly News (January 14)
- Wikimedia Tech News (January 5)
Django Software Foundation in 2014
The Django Software Foundation has published a 2014 retrospective, looking at (among other things) the past year's development and adoption efforts. Included are several successful fundraising campaigns, Django Fellowships, and a major redesign of the project's web resources. "2014 was a very busy year for the Django Software Foundation, with a number of high profile projects and initiatives seeing the light of day. We hope to continue to grow and expand in 2015."
Blender 2.73: Houston, 2D animation apps have a problem (Libre Graphics World)
Libre Graphics World takes a look at the Blender 2.73 release, concluding (among other things) that Blender's recent work to support 2D animation is good enough that other free-software 2D animation projects might want to worry. "Arguably, the most prominent update is a bunch of new features in Grease Pencil which used to be getting barely any updates since its inception around 2008. Editable, animated strokes? Reading pressure level from your stylus to adjust strokes' width? Configurable fills? Configurable onion skin? Volumetric strokes? It's all in the package."
Page editor: Nathan Willis
Announcements
Articles of interest
Top 10 FOSS legal developments of 2014 (Opensource.com)
Mark Radcliffe covers some legal developments from 2014, on Opensource.com. "Governments are one of the most important users of software but have had a mixed record in using and contributing to FOSS (free and open source software). The EC recently announced that it intends to remove the barriers that may hinder code contributions to FOSS projects. In particular, the EC wants to clarify legal aspects, including intellectual property rights, copyright, and which author or authors to name when submitting code to the upstream repositories. Pierre Damas, Head of Sector at the Directorate General for IT, hopes that such clarification will motivate many of the EC’s software developers and functionaries to promote the use of FOSS at the EC."
Calls for Presentations
EFL Dev Day US 2015
Enlightenment Developers Day will take place March 26 in Mountain View, CA. The call for papers deadline is February 20.
CFP Deadlines: January 15, 2015 to March 16, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| January 16 | March 9–March 10 | Linux Storage, Filesystem, and Memory Management Summit | Boston, MA, USA |
| January 19 | June 16–June 20 | PGCon | Ottawa, Canada |
| January 19 | June 10–June 13 | BSDCan | Ottawa, Canada |
| January 24 | February 14–February 17 | Netdev 0.1 | Ottawa, Ontario, Canada |
| January 30 | April 25–April 26 | LinuxFest Northwest | Bellingham, WA, USA |
| February 1 | April 13–April 17 | ApacheCon North America | Austin, TX, USA |
| February 1 | April 29–May 2 | Libre Graphics Meeting 2015 | Toronto, Canada |
| February 2 | July 20–July 24 | O'Reilly Open Source Convention | Portland, OR, USA |
| February 6 | July 27–July 31 | OpenDaylight Summit | Santa Clara, CA, USA |
| February 8 | April 9–April 12 | Linux Audio Conference | Mainz, Germany |
| February 9 | May 18–May 22 | OpenStack Summit | Vancouver, BC, Canada |
| February 10 | June 1–June 2 | Automotive Linux Summit | Tokyo, Japan |
| February 12 | June 3–June 5 | LinuxCon Japan | Tokyo, Japan |
| February 15 | March 1–March 6 | Circumvention Tech Festival | Valencia, Spain |
| February 15 | May 1–May 4 | openSUSE Conference | The Hague, Netherlands |
| February 16 | May 12–May 13 | PyCon Sweden 2015 | Stockholm, Sweden |
| February 16 | April 13–April 14 | 2015 European LLVM Conference | London, UK |
| February 20 | March 26 | Enlightenment Developers Day North America | Mountain View, CA, USA |
| February 20 | May 13–May 15 | GeeCON 2015 | Cracow, Poland |
| February 24 | April 24 | Puppet Camp Berlin 2015 | Berlin, Germany |
| February 28 | May 19–May 21 | SAMBA eXPerience 2015 | Goettingen, Germany |
| February 28 | July 15–July 19 | Wikimania Conference | Mexico City, Mexico |
| February 28 | June 26–June 27 | Hong Kong Open Source Conference 2015 | Hong Kong, Hong Kong |
| March 1 | April 24–April 25 | Grazer Linuxtage | Graz, Austria |
| March 1 | April 17–April 19 | Dni Wolnego Oprogramowania / The Open Source Days | Bielsko-Biała, Poland |
| March 2 | May 12–May 14 | Protocols Plugfest Europe 2015 | Zaragoza, Spain |
| March 6 | May 8–May 10 | Open Source Developers' Conference Nordic | Oslo, Norway |
| March 7 | June 23–June 26 | Open Source Bridge | Portland, Oregon, USA |
| March 9 | June 26–June 28 | FUDCon Pune 2015 | Pune, India |
| March 15 | May 7–May 9 | Linuxwochen Wien 2015 | Wien, Austria |
| March 15 | May 16–May 17 | MiniDebConf Bucharest 2015 | Bucharest, Romania |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
NetDev 0.1 new proposals accepted
NetDev 0.1 will take place February 14-17 in Ottawa, Canada. This announcement covers some new proposals that were recently accepted. "All of the accepted proposals are new work. A couple of proposals have been returned for rework prior to acceptance. There are still more excellent proposals currently making their way through the technical committee vetting process. The committee has so far been very impressed with the quality of proposals submitted."
Ready for SCALE 13x?
The Southern California Linux Expo will take place February 19-22. Registration is open for the event and also a Linux Basics Class. Submissions are still open for UpSCALE talks.
Announcing AdaCamp Montreal
AdaCamp Montreal is a bilingual English/French event that will take place April 13-14 in Montreal, Quebec, Canada. "The event will involve an unconference held over the two days, along with evening social events."
Events: January 15, 2015 to March 16, 2015
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| January 12–January 16 | linux.conf.au 2015 | Auckland, New Zealand |
| January 23 | Open Source in the Legal Field | Santa Clara, CA, USA |
| January 31–February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium |
| January 31–February 1 | FOSDEM 2015 | Brussels, Belgium |
| February 2–February 5 | Python Namibia | Windhoek, Namibia |
| February 6–February 8 | DevConf.cz | Brno, Czech Republic |
| February 6–February 8 | Taiwan mini-DebConf 2015 | Yuli Township, Taiwan |
| February 9–February 13 | Linaro Connect Asia | Hong Kong, China |
| February 11–February 12 | Prague PostgreSQL Developer Days 2015 | Prague, Czech Republic |
| February 14–February 17 | Netdev 0.1 | Ottawa, Ontario, Canada |
| February 18–February 20 | Linux Foundation Collaboration Summit | Santa Rosa, CA, USA |
| February 19–February 22 | Southern California Linux Expo | Los Angeles, CA, USA |
| March 1–March 6 | Circumvention Tech Festival | Valencia, Spain |
| March 9–March 10 | Linux Storage, Filesystem, and Memory Management Summit | Boston, MA, USA |
| March 9–March 12 | FOSS4G North America | San Francisco, CA, USA |
| March 11–March 12 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA |
| March 11 | Nordic PostgreSQL Day 2015 | Copenhagen, Denmark |
| March 12–March 14 | Studencki Festiwal Informatyczny / Academic IT Festival | Cracow, Poland |
| March 13–March 15 | FOSSASIA | Singapore |
| March 13–March 15 | GStreamer Hackfest 2015 | London, UK |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol