Weekly Edition for November 21, 2013

The Linus and Dirk show comes to Seoul

By Jonathan Corbet
November 20, 2013
Korea Linux Forum
Linus Torvalds famously does not like to give prepared talks, so, when he makes a conference appearance, it tends to be in the form of a free-ranging conversation, usually with diving buddy Dirk Hohndel. Those two put in an appearance on the second day of the 2013 Korea Linux Forum. What resulted was a lively conversation covering a wide range of topics; there was little overlap with their LinuxCon Japan session earlier this year.

In the beginning

Dirk started off with a simple question to Linus: looking back over the history of Linux, what would you change if you could? Linus answered that, given how well things have gone, he either did everything right or was incredibly lucky. His vote, he added, would be for the "did everything right" alternative. In truth, though, there was a lot of luck combined with doing one important thing right: he had no preconceived ideas of where he wanted Linux to go. Instead, he wanted the system's users to say where things should go; everything was open to change. That gave Linux the freedom to evolve into something that was useful for a lot of people.

What did he expect when he first put Linux out there? "Riches, fame, and fast cars" was his first, flippant response. What he actually expected, he said, was rather more prosaic. He had been developing software for long enough to see a common pattern with new projects: initially developers get excited about what they are doing, but then things slow down about halfway through. Once developers encounter the vast numbers of boring details that have to be dealt with to make things work, they often decide it's not what they jumped into the project for, so they leave it for something shiny and exciting. Linus fully thought that he would do the same with the system that came to be called "Linux," but it never happened. He quickly reached a point where it did what he wanted, but then "crazy people started doing insane things." New challenges came along, along with new users doing even stranger things. That has kept it interesting all this time.

Even now, he does not really know what to expect, except that Linux will be around for a long time. He had never thought that Linux would turn out to be his life, but, at this point, he has been working on it for more than half of his life. He doesn't, he added, see himself stopping now.

Was Linus surprised by how many people responded to the early Linux releases and sent patches? Yes, he was, but that was also what he had wanted to have happen. The one time he was truly surprised was in early 1992, when he suddenly realized that he didn't know all of his users anymore. That's unlikely to change, he noted, as it wouldn't work well to have all Linux users approach him now — even if sometimes the state of his mailbox makes it seem like that is exactly what is happening.

Pace, process, and patents

The previous day, Greg Kroah-Hartman had given a quick talk on how the pace of kernel development was high and continuing to increase. Dirk asked: do those numbers make your head spin? Linus replied in the negative, saying that we have learned how to build kernels at this point. There are increasing numbers of developers, but Linus personally deals with about 100 of them in any given release cycle. The rest of the work is nicely distributed. Nobody, he said, worries about the number of patches being merged per hour. He worries, instead, about testing and the scalability of the process as a whole. Thus far, things seem to be working out well.

An audience member stepped up to ask: given that the amount of code and functionality going into the kernel is increasing, is there a need for more testing time between releases? How does the balance between development and testing work out? Linus responded that most projects have a vision for what they want to accomplish, with nice lists of planned features for the next release. The kernel community doesn't do that; releases are purely time-based, and the release numbers themselves don't really mean anything. If something isn't ready for a given cycle, it simply does not get merged. Yes, more code implies a need for more testing, but that work is distributed too — there are more testers. In the end, individual developers are probably not working any faster, even if the process as a whole is accelerating.

Dirk added that Linus's mainline development activities, while being the highest-profile part of the process, are really just the final integration testing step. All the code merged at that point has been in maintainer trees and linux-next and is, hopefully, well tested.

Another audience question touched on patent issues: does the kernel project have any policies in place to detect and avoid patent-related problems? Linus responded that software patents are clearly a huge problem, but they are no bigger a problem for the kernel than for the technology industry as a whole. Indeed, the kernel tends to have fewer patent problems than software at higher levels because kernels have been around for a long time. In a sense, little that is new or novel is done at the kernel level. That said, kernel developers do try to avoid patent problems; he gave Samsung's exFAT filesystem implementation as an example. This code has been released under the GPL, but it has not yet been merged into the mainline; lawyers are talking about it now, and he hopes that the problems will soon be resolved. But, Linus said, as technical people we can't do much about patent problems; we have to let the legal people deal with them.

The next question had to do with patches causing incompatible changes. Linus asserted that most patches merged into the kernel are obvious improvements. Others may fix security issues, which can raise problems: some programs may be relying on the behavior that resulted in the security problem in the first place. In the case of security problems the necessary outcome is usually clear, and it may be necessary to break things. But, sometimes, a patch can be tweaked so that the kernel still provides the needed behavior while closing the security hole. A couple of times every release cycle it becomes necessary to think more carefully about a patch: is it more important to improve performance, or to maintain compatibility for some obscure behavior? In the end, he said, engineering is about tradeoffs.

Africa, outreach, and inspiration

What about people in places like Africa, who lack the resources to work with Linux; what can the kernel project do for them? Linus responded that he is not trying to solve all the world's problems. A nice thing about open-source software is that it makes it possible to lower barriers for people anywhere in the world. But if you live in a remote part of Africa and don't even have electricity, you will not be able to participate in open-source development. But, he added, Linux and open-source software have helped to bring infrastructure support to a lot of poor places; it allows things to be done more cheaply and efficiently. He is happy that Linux has been used in that kind of project.

When he was in Korea ten years ago, Linus said, most Korean developers seemed to be busy doing localization work. That is important work, but not particularly challenging or gratifying. Now Korean developers are doing no end of interesting things; open source, he said, has helped to bring that change about. In general, we used to worry about getting more developers in Asia. A lot of progress has been made in that area, but it will take rather longer to reach places like Africa.

Has the Outreach Program for Women been successful in the kernel project? Linus answered that he is the wrong person to ask, since he doesn't usually get involved in initiatives like that. In general, he said, participation by women has been getting better, but it is still "really bad."

Linus was then asked: back at the beginning, what inspired you to start Linux in the first place? The answer is that it comes down to a lot of hard-to-describe personal things, starting with a background that made Linux possible in the first place. The country he grew up in offers free, widely available education; it also features a culture where money is less of an issue than in a lot of other places. That made releasing Linux an easier choice — he was never going for the money. In general, the culture outside of technology matters a lot; that is why open-source software took off much more quickly in certain countries than in others. It was simply less of a cultural jump in those countries.

How did he decide which patches to accept when he started? It took a while to start getting actual patches; early users usually sent in feature requests instead. When the first patches started coming in, he usually would not apply them directly; instead, he used them as a template describing the desired functionality. The TTY patches from Ted Ts'o were, perhaps, the first patches that he didn't rewrite; they were obvious features that he wanted to have anyway.

Things got more difficult around 2000, when he was bombarded with patches and the community as a whole had huge process problems. He had not yet given up control to the subsystem maintainers, and so had to look at every patch himself. It took him a long time, Linus said, to learn not to argue with people — most of the time. He also just doesn't look at most patches that closely now; he simply cannot afford to do it anymore.

Security and surveillance

Is open source a hindrance to security? No, Linus said, but the open-source process can be inconvenient when it comes to security issues. Security people tend to want to deal with companies that will keep issues hidden, but in an open environment you can't keep things quiet for long. He is completely convinced that open source is "good for security in every way," but it does mean that we need to have a different process for dealing with security reports. That creates some friction with how some people want to work.

What about the reports of widespread surveillance? Who is responsible for that problem — governmental agencies or the companies that work with them? Linus responded that there are a lot of people with stronger opinions on this topic than him. But, he said, companies often do not have a choice; they put backdoors into their products because security agencies require it. When faced with a situation like that, employees at the company can choose to resign rather than add the backdoor, but not everybody has the freedom to make that choice. Or they can go public, leading to a need to move abruptly to a different country; that, he said, is a choice for rare individuals.

In the end, he said, even the security agencies behind all this surveillance activity believe that they are doing things for the right reasons. So he doesn't get as emotional about the issue as some others do. Open-source software, he said, does not guarantee the absence of backdoors, but it does make them much harder to hide. It protects us from the sort of single points of failure that can enable backdoors to be hidden in proprietary code.

Dirk concluded by asking for some predictions for the next five years. His own were that there would be at least twenty new filesystems added to the kernel, but that we would not see a new filesystem notification system call. Linus said that he knows some of what is coming, but that stuff isn't interesting and fun; it's "the boring stuff that Jon [Corbet] talked about before." The fun stuff, he said, is the things that nobody predicts. It still happens; people send in stuff that is "completely insane." His response is usually "hell, no," but, two years later, he discovers that he has accepted it.

[Your editor thanks the Linux Foundation for travel assistance to attend the Korea Linux Forum.]


A Long-term support initiative update

By Jonathan Corbet
November 20, 2013
Korea Linux Forum
Distributors of Linux-based systems typically do not ship vanilla mainline kernels; instead, they apply a set of changes that, they think, will make the kernel more useful to their users. Some distributors, most notably those of the enterprise variety, are known for making extensive changes. But companies that ship Linux within mobile and embedded systems — Linux distributors of a different sort — also often provide highly modified kernels with their devices. Maintaining those kernels can be a significant cost for embedded companies. The Long Term Support Initiative (LTSI), working under the auspices of the Linux Foundation, aims to make life easier for mobile and embedded distributors. At the Foundation's 2013 Korea Linux Forum, Noriaki Fukuyasu provided an update on the current status of this effort, along with an estimate of the value it provides.

LTSI is about two years old at this point, Noriaki said; most of its users are companies in the Asian region. He said that LTSI could be thought of as "LTS + I" — the combination of community long-term support and industry. The LTS part is simply the long-term stable kernels supported by Greg Kroah-Hartman. LTSI started with the 3.0 kernel which, having had its two years of support, has just gone into the unsupported mode. Currently the 3.4 and 3.10 kernels are deemed the LTS kernels, with 3.10 having just begun its two years.

The "I" part comes from the industry patches that are added on top of the community LTS kernels. With these changes, Noriaki said, LTSI looks something like a fork of the mainline kernel, but that is not really the case. Yes, there is a lot of backporting of features into the LTSI kernel in response to industry requests, along with non-mainline changes that [Noriaki Fukuyasu] come from within the embedded industry, but the LTSI kernel is meant to serve as a conduit to direct those patches into the mainline.

Why was LTSI started? Kernel fragmentation has turned out to be expensive and painful for the embedded industry. Maintaining a pile of in-house patches is a difficult and error-prone task. Meanwhile, the industry is struggling to keep up with the high pace of innovation in the Linux kernel. The idea behind LTSI is that a lot of these problems can be addressed with a single kernel maintained for the embedded industry.

In general, Noriaki said, there is no standard distribution for the embedded industry, so companies tend to end up assembling things on their own. Yes, Android exists for a portion of the industry, but Android is aimed at mobile applications, while the embedded market is considerably larger than that. But even creating an Android-based device is a lot of work: one starts with a mainline kernel, applies Google's patches, mixes in patches to support the specific system-on-chip in use, adds some vendor-specific patches, then tries to bring in as many upstream fixes as possible. And this job must be done many times: embedded distributors often maintain a dozen or more kernels for their range of products.

The existence of in-house patches adds another level of complication to the creation of custom kernels. Vendors often put a lot of work into drivers for their hardware and other custom kernel code, but that work is never merged upstream or shared with others. As a consequence, the vendor is required to drag all that code forward whenever the time comes to move to a new kernel release. All this results in a repeated cycle of deadline-driven, one-shot work.

Unsurprisingly, the cost of moving to a new upstream kernel release tends to discourage vendors from doing so. But the kernel is evolving rapidly, and vendors do not want to be left behind. Products are getting more complex even while product cycles are getting shorter; it is natural to want to use newer kernels for new products to take advantage of the development work that has been done there.

A number of distributions have been based on the community LTS kernels; notably, Android releases were based on 3.0 and 3.4. LTSI is another distribution based on those kernels. The LTS kernel, though, is augmented in a couple of ways. One of those is the backporting of features and (in particular) new hardware support from later upstream kernel releases; this work is done on request from LTSI users. The other source of changes is in-house patches from vendors.

The next LTSI kernel will be 3.10; patches for this release are being accepted through the end of 2013. January will be dedicated to testing and stabilization of the LTSI 3.10 kernel, which, if all goes according to plan, will see its first release toward the beginning of February. See this page for some information on how to participate in this process.

Noriaki concluded his talk with an attempt to quantify the value of the LTSI kernel to its users. If nothing else, he said, having a specific number can be useful to hand to management when trying to convince them to ship LTSI in future products. The value of this kernel, he said, is a function of the number of backported patches. Getting patches to apply to older kernels, making whatever changes might be needed, and testing the result is an expensive process — and it must be done every time a company moves to a new kernel release.

The LTSI 3.0 kernel at the end of its life included 2,238 patches from the community LTS release, augmented by 875 patches backported by LTSI itself — a total of 3,113. The 3.4 kernel, instead, has 2,750 patches from the community release plus 721 LTSI patches; that's 3,471 total, and 3.4 is only halfway through its two-year life span. Assuming that each patch takes eight hours of a developer's time to backport and that the full cost of a developer's salary is $18,000 for a 20-day working month, one can quickly do the math to assign a value to this backporting work. That number came out to about $2.8 million for the 3.0 kernel, and $3.1 million for 3.4. The value for "active users," who are able to take advantage of the process to get their in-house patches upstream (and thus avoid the need to forward-port them in the future), should be quite a bit higher.
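
The arithmetic behind those figures can be reproduced directly from the assumptions given in the talk. A minimal sketch (the per-patch effort and salary numbers are the talk's assumptions, not measured data):

```python
# Back-of-the-envelope reproduction of the LTSI value estimate described
# above. Eight hours per backported patch and $18,000 per 20-day working
# month are the talk's stated assumptions.

HOURS_PER_PATCH = 8
MONTHLY_COST = 18_000                          # fully loaded developer cost, USD
HOURS_PER_MONTH = 20 * 8                       # 20 working days of 8 hours
HOURLY_RATE = MONTHLY_COST / HOURS_PER_MONTH   # $112.50 per hour

def backport_value(lts_patches: int, ltsi_patches: int) -> float:
    """Estimated cost of backporting all patches in an LTSI release."""
    total = lts_patches + ltsi_patches
    return total * HOURS_PER_PATCH * HOURLY_RATE

# Figures from the talk: LTSI 3.0 (end of life) and LTSI 3.4 (mid-life)
print(backport_value(2238, 875))   # 2801700.0 — about $2.8 million
print(backport_value(2750, 721))   # 3123900.0 — about $3.1 million
```

The results match the rounded numbers quoted in the talk.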

These numbers were taken from a white paper published by the Linux Foundation; registration is required to obtain a copy. One might well take issue with the cost assumptions used therein, but it would still be hard to deny that the LTSI kernels provide real value for the product-based distributors that use them. If working with LTSI brings those distributors — and their patches — closer to the mainline, that value can only increase.

[Your editor thanks the Linux Foundation for travel assistance to attend the Korea Linux Forum.]


Page editor: Jonathan Corbet


GNU virtual private Ethernet

By Nathan Willis
November 20, 2013

Virtual private networks (VPNs) are designed to overlay a second, secure network on top of the existing (insecure) Internet, but that network overlay can take a number of different forms depending on the precise security needs in question, how static or dynamic the network is, and other factors. GNU Virtual Private Ethernet (GVPE) is a free software VPN suite that takes a different approach to the problem than that of popular projects like OpenVPN. In particular, GVPE creates an actual network where all participating nodes can talk directly to one another, rather than setting up a point-to-point tunnel, and it tries to simplify VPN deployment by making encryption and other settings into compile-time options.

The latest release is version 2.25, from July 2013. Prior to 2.25's release, the last update was from February 2011, and the one before that from 2009. Suffice it to say, then, that GVPE is not a rapidly moving target. But there are several changes in 2.25 that users should take note of. It is also noteworthy, however, that developer Marc Lehmann announced in the release notes that 2.25 would be the final release in the 2.x line—subsequent releases will be changing the underlying message protocol, and will be numbered 3.x to indicate the compatibility break.


GVPE is designed to handle a use case most other VPN tools do not: connecting multiple nodes—as in "more than two"—into a single virtual network. The difference is not in how many client computers can use the VPN, but in how the participating nodes connect. Most other VPN software is optimized for creating site-to-site tunnels that provide a link from one LAN to another, which serves the commonplace usage scenario of connecting to an office VPN from a single laptop or remote home office. For example, a gateway router or a machine on the LAN is set up to serve as an OpenVPN server, creating either a layer-2 (link-layer) or layer-3 (IP layer) tunnel to a remote OpenVPN client, using the kernel's TUN/TAP driver. OpenVPN is generally geared toward IP-layer tunneling, however.

While multiple network sites can be connected in such a fashion, the more sites there are, the more difficult the configuration is to set up and maintain. A separate tunnel needs to be configured between each site and at least one other—either in a star topology with one site serving as a hub, or else with routing rules for each tunnel configured at each site.
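
The scaling problem is easy to quantify: a star topology needs one tunnel per non-hub site, while fully connecting every pair of sites needs a tunnel for each pair. A quick sketch of that tunnel count:

```python
# Why point-to-point tunnels scale poorly as sites are added: a star
# topology needs n-1 tunnels (all traffic relayed through the hub), while
# a full mesh of direct links needs n*(n-1)/2 tunnels.

def star_tunnels(n: int) -> int:
    """Tunnels needed with one site acting as a hub."""
    return n - 1

def mesh_tunnels(n: int) -> int:
    """Tunnels needed so every pair of sites has a direct link."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(f"{n} sites: {star_tunnels(n)} star / {mesh_tunnels(n)} mesh")
# 3 sites: 2 star / 3 mesh
# 5 sites: 4 star / 10 mesh
# 10 sites: 9 star / 45 mesh
```

GVPE sidesteps both configurations: each node carries one shared configuration file, and direct node-to-node links come for free.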

GVPE is designed to simplify this multi-site configuration. It runs separately on each node participating in the VPN, with the same configuration file at each node. The "virtual Ethernet" segment that connects the nodes is, in a sense, a separate network—it provides a network interface for the VPN that exists alongside the normal network. The VPN provides multiple entry points, each client can talk directly to the others, and any node can be taken offline at any time without disconnecting the rest. GVPE is very much a link-layer VPN in this respect; even broadcast Ethernet frames are supported. Consequently, any network protocol stack that can run over Ethernet can run over a GVPE network, which provides considerable flexibility in setting up the virtual network.

On the other hand, the flexibility in GVPE's network topology comes at the cost of a bit less flexibility where the security design is concerned. GVPE uses public-private key pairs for each node to secure its traffic. But the ciphers and digest algorithms that are used must be chosen when GVPE is configured and built (the defaults are RIPEMD-160 for the digest and AES-128 for encryption). The cipher and digest choices are passed to the configure script. Selecting just one of each makes it possible to build smaller binaries (with, in theory, a smaller attack surface). The transport protocols over which GVPE will run (raw IP, UDP, ICMP, TCP, and even DNS are supported) are specified in the configuration file, along with a list of all of the GVPE nodes participating in the virtual network. The nodes can be specified by IP address or by hostname.

Compared to OpenVPN, this configuration file is quite simple; for example, a three-site network could make do with:

    enable-rawip = yes
    ifname = vpn0

    node = site1
    hostname =

    node = site2
    hostname =

    node = secrethq
    hostname =

and be fully connected. On the other hand, the configuration file must be distributed to every node, so adding, removing, or reconfiguring nodes and options necessitates propagating the changed file to all sites. Tools like rsync help, of course, but for large networks it could become a hassle—or worse, should the need arise to quickly remove a node from the VPN. Each node must also have an ifup script for the interface named in the configuration, which assigns it an IP address for the private network. Finally, each node needs to have its own key pair created and its public key distributed to all of the other nodes. The private keys need to be distributed to their respective nodes securely, of course.

GVPE provides a command line tool called gvpectrl that can automatically read in the configuration file and create the key pairs needed for each node. Once the configuration file and appropriate keys are all in place, each node can start the GVPE daemon with:

    gvpe -D theappropriatenodename

Subsequently, applications that need to use the private network may need to be configured either for the vpn0 virtual interface or for the private network IP address. Simple applications like ping may require no configuration; others like Apache need to be bound to the specific interface. Apart from being pointed at the VPN, however, applications should work automatically—an IMAP client just needs to know the IP address of its servers, whether they are on the VPN or the Internet. GVPE nodes establish a connection with Elliptic-Curve Diffie-Hellman key exchange, and the packets include hash message authentication codes (HMAC) as checksums.

The protocol header includes the source and destination IDs of each node, so that nodes can route messages. These IDs can be assigned in the configuration file, but by default they are the integer number of the nodes in the order they are listed in the file. That has the benefit of being generic, rather than being taken from (and revealing) some property of the node, like MAC address. Node-to-node traffic is connection-oriented, with sequence numbers in each packet; retries include exponential back-off.
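
The default ID assignment described above can be sketched in a few lines. This is an illustration of the numbering scheme only, not GVPE's actual configuration parser; the simplistic line handling is an assumption:

```python
# Hypothetical sketch of GVPE's default node-ID assignment: each node gets
# the 1-based position at which its "node = name" line appears in the
# shared configuration file. Not GVPE's real parser, just an illustration.

def node_ids(config_text: str) -> dict:
    ids = {}
    for line in config_text.splitlines():
        key, _, value = line.strip().partition("=")
        if key.strip() == "node":
            ids[value.strip()] = len(ids) + 1   # order of appearance
    return ids

config = """
enable-rawip = yes
ifname = vpn0

node = site1
node = site2
node = secrethq
"""

print(node_ids(config))  # {'site1': 1, 'site2': 2, 'secrethq': 3}
```

Because the ID is just a list position, every node derives the same numbering from the same file, and nothing about the ID leaks hardware details such as a MAC address.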

GVPE does not have to run on the actual client machines; it can also be run on gateway routers to connect entire networks. However, the routing configuration in such a scenario is understandably more complex, as is setting up firewall rules to restrict access to the private network. But GVPE is designed to eliminate much of this complexity by running on each client node—one of the project's repeated bullet-point features is that it allows clients to conduct private networking without trusting any of the intermediary network.

The future

GVPE has not changed much in recent releases. Version 2.25 introduces two changes that affect backward compatibility with existing deployments—although neither change demands much reconfiguration.

First, it is no longer possible to enable UDP as the sole transport protocol. The release notes say that this is necessary because, in some situations, nodes need to negotiate their connection to another node without knowing what transport protocols the other node can speak. That negotiation requires contacting a third GVPE node that can act as a router, and UDP's connection-less nature prohibits that negotiation. Second, the DNS transport protocol has been altered; the change breaks compatibility with previous releases (changing the encoding used for SYNs and headers, among other things), although the project warns users to use DNS transport only as a last resort to sneak through stubborn firewalls anyway. Other changes include the addition of the SHA-256 and SHA-512 digests as HMAC options and additional options for configuring the GVPE daemon's chroot behavior.

Considering the fact that changes to the DNS transport and the allowable transport protocol settings may force some current 2.x users to update their configurations, one might well ask what to expect in the forthcoming 3.x series. To that question, Lehmann's announcement only says that the GVPE message protocol itself will change, "to take advances in the last decade into account." What that means is not entirely clear, although he does note how key lengths and hash functions have evolved in the intervening years.

In the meantime, though, GVPE offers an interesting feature set that differs considerably from the "traditional" VPN model. Not only can nodes communicate without trusting the network itself, but the endpoint-to-endpoint encryption means that they do not have to trust other nodes on the network, either. True, the manual propagation of the configuration file and keys does mean that users need to trust the administrator who sets up the system, but that is ultimately true in almost all networks, private and virtual ones included.


Brief items

Security quotes of the week

Honestly, I don't believe in portable security. :-)
Guido van Rossum

Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies' cooperation has become public, they're facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.
Bruce Schneier

The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the web. Even if he ran his own traceroute to verify connectivity to the world, the paths he’d see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with.
Jim Cowie of Renesys looks at "Targeted Internet Traffic Misdirection"

This information appears to be sent back unencrypted and in the clear to LG every time you change channel, even if you have gone to the trouble of changing the setting above to switch collection of viewing information off.

It was at this point, I made an even more disturbing find within the packet data dumps. I noticed filenames were being posted to LG's servers and that these filenames were ones stored on my external USB hard drive.

DoctorBeet looks into the traffic from his new LG Smart TV


Your visual how-to guide for SELinux policy enforcement

Over at, SELinux hacker Dan Walsh describes SELinux policy enforcement using dogs and cats. It has lots of cute cartoons (by Máirín Duffy) of the interaction between various types of dogs, a cat, food meant for each, and Tux as an enforcer of the food policies. It looks at type enforcement (TE), multi-category security (MCS), and multi-level security (MLS) using dog/cat analogies as well as relating them to the "real world". "SElinux is a labeling system. Every process has a label. Every file/directory object in the operating system has a label. Even network ports, devices, and potentially hostnames have labels assigned to them. We write rules to control the access of a process label to an object label like a file. We call this policy. The kernel enforces the rules."


New vulnerabilities

chromium: multiple vulnerabilities

Package(s):chromium-browser-stable CVE #(s):CVE-2013-2931 CVE-2013-6621 CVE-2013-6622 CVE-2013-6623 CVE-2013-6624 CVE-2013-6625 CVE-2013-6626 CVE-2013-6627 CVE-2013-6628 CVE-2013-6629 CVE-2013-6630 CVE-2013-6631
Created:November 14, 2013 Updated:December 13, 2013

From the Mageia advisory:

Various fixes from internal audits, fuzzing and other initiatives (CVE-2013-2931).

Use after free related to speech input elements (CVE-2013-6621).

Use after free related to media elements (CVE-2013-6622).

Out of bounds read in SVG (CVE-2013-6623).

Use after free related to “id” attribute strings (CVE-2013-6624).

Use after free in DOM ranges (CVE-2013-6625).

Address bar spoofing related to interstitial warnings (CVE-2013-6626).

Out of bounds read in HTTP parsing (CVE-2013-6627).

Issue with certificates not being checked during TLS renegotiation (CVE-2013-6628).

libjpeg 6b and libjpeg-turbo will use uninitialized memory when decoding images with missing SOS data for the luminance component (Y) in presence of valid chroma data (Cr, Cb) (CVE-2013-6629).

libjpeg-turbo will use uninitialized memory when handling Huffman tables (CVE-2013-6630).

Use after free in libjingle (CVE-2013-6631).

openSUSE openSUSE-SU-2014:1645-1 java-1_7_0-openjdk 2014-12-15
openSUSE openSUSE-SU-2014:1638-1 java-1_7_0-openjdk 2014-12-15
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201406-32 icedtea-bin 2014-06-29
SUSE SUSE-SU-2014:0733-1 IBM Java 7 2014-05-30
SUSE SUSE-SU-2014:0728-2 IBM Java 6 2014-05-30
SUSE SUSE-SU-2014:0728-1 IBM Java 6 2014-05-29
Red Hat RHSA-2014:0508-01 java-1.6.0-ibm 2014-05-15
Red Hat RHSA-2014:0509-01 java-1.5.0-ibm 2014-05-15
Gentoo 201403-01 chromium 2014-03-05
openSUSE openSUSE-SU-2014:0065-1 chromium 2014-01-15
openSUSE openSUSE-SU-2014:0008-1 seamonkey 2014-01-03
openSUSE openSUSE-SU-2013:1918-1 MozillaFirefox 2013-12-19
openSUSE openSUSE-SU-2013:1917-1 MozillaFirefox 2013-12-19
openSUSE openSUSE-SU-2013:1916-1 MozillaFirefox 2013-12-19
openSUSE openSUSE-SU-2013:1861-1 chromium 2013-12-12
Ubuntu USN-2053-1 thunderbird 2013-12-11
Ubuntu USN-2052-1 firefox 2013-12-11
Scientific Linux SLSA-2013:1803-1 libjpeg-turbo 2013-12-10
Scientific Linux SLSA-2013:1804-1 libjpeg 2013-12-10
Oracle ELSA-2013-1803 libjpeg-turbo 2013-12-09
Oracle ELSA-2013-1804 libjpeg 2013-12-10
CentOS CESA-2013:1803 libjpeg-turbo 2013-12-10
CentOS CESA-2013:1804 libjpeg 2013-12-10
Red Hat RHSA-2013:1803-01 libjpeg-turbo 2013-12-10
Red Hat RHSA-2013:1804-01 libjpeg 2013-12-10
openSUSE openSUSE-SU-2013:1776-1 chromium 2013-11-27
openSUSE openSUSE-SU-2013:1777-1 chromium 2013-11-27
Mageia MGASA-2013-0333 libjpeg 2013-11-20
Mandriva MDVSA-2013:273 libjpeg 2013-11-21
Mandriva MDVSA-2013:274 libjpeg 2013-11-21
Debian DSA-2797-1 chromium-browser 2013-11-17
Mageia MGASA-2013-0324 chromium-browser-stable 2013-11-13

Comments (none posted)

chromium: code execution

Package(s): chromium-browser CVE #(s): CVE-2013-6632
Created: November 18, 2013 Updated: December 1, 2013
Description: From the Debian advisory:

Pinkie Pie discovered multiple memory corruption issues.

Gentoo 201403-01 chromium 2014-03-05
openSUSE openSUSE-SU-2014:0065-1 chromium 2014-01-15
Mageia MGASA-2013-0383 chromium-browser-stable 2013-12-23
openSUSE openSUSE-SU-2013:1861-1 chromium 2013-12-12
openSUSE openSUSE-SU-2013:1776-1 chromium 2013-11-27
openSUSE openSUSE-SU-2013:1777-1 chromium 2013-11-27
Debian DSA-2797-1 chromium-browser 2013-11-17

Comments (none posted)

curl: unchecked ssl certificate host name

Package(s): curl CVE #(s): CVE-2013-4545
Created: November 18, 2013 Updated: December 13, 2013
Description: From the Debian advisory:

Scott Cantor discovered that curl, a file retrieval tool, would disable the CURLOPT_SSLVERIFYHOST check when the CURLOPT_SSL_VERIFYPEER setting was disabled. This would also disable ssl certificate host name checks when it should have only disabled verification of the certificate trust chain.

Fedora FEDORA-2014-17596 mingw-curl 2015-01-02
openSUSE openSUSE-SU-2013:1865-1 curl 2013-12-12
openSUSE openSUSE-SU-2013:1859-1 curl 2013-12-12
Ubuntu USN-2048-2 curl 2013-12-06
Ubuntu USN-2048-1 curl 2013-12-05
Fedora FEDORA-2013-21887 mingw-curl 2013-12-02
Debian DSA-2798-2 curl 2013-11-20
Mandriva MDVSA-2013:276 curl 2013-11-21
Mageia MGASA-2013-0338 curl 2013-11-20
Debian DSA-2798-1 curl 2013-11-17

Comments (none posted)

Foreman: SQL injection

Package(s): Foreman CVE #(s): CVE-2013-4386
Created: November 15, 2013 Updated: November 20, 2013

Description: From the Red Hat advisory:

It was found that Foreman did not correctly sanitize values of the "fqdn" and "hostgroup" parameters, allowing an attacker to provide a specially crafted value for these parameters and perform an SQL injection attack.

Red Hat RHSA-2013:1522-01 Foreman 2013-11-14

Comments (none posted)

gnutls: off-by-one error

Package(s): gnutls CVE #(s): CVE-2013-4487
Created: November 18, 2013 Updated: November 20, 2013
Description: From the Red Hat bugzilla:

GnuTLS upstream recently fixed a bug, which seems to have emerged due to the fix implemented in CVE-2013-4466.

openSUSE openSUSE-SU-2013:1714-1 gnutls 2013-11-15
Fedora FEDORA-2013-20628 gnutls 2013-11-18

Comments (none posted)

ibus: password disclosure

Package(s): ibus CVE #(s): CVE-2013-4509
Created: November 18, 2013 Updated: February 24, 2014
Description: From the openSUSE advisory:

This is an additional fix patch for ibus to avoid the wrong IBus.InputPurpose.PASSWORD advertisement, which leads to the password text appearance on GNOME3 lockscreen

Fedora FEDORA-2014-1910 ibus-chewing 2014-02-22
openSUSE openSUSE-SU-2014:0068-1 ibus-chewing 2014-01-15
Fedora FEDORA-2014-1908 ibus-chewing 2014-02-11
openSUSE openSUSE-SU-2013:1825-1 ibus-pinyin 2013-12-04
openSUSE openSUSE-SU-2013:1686-1 ibus 2013-11-15
Fedora FEDORA-2013-20993 ibus-pinyin 2013-11-19

Comments (none posted)

mozilla: plaintext-recovery attack

Package(s): firefox CVE #(s): CVE-2013-2566
Created: November 20, 2013 Updated: November 20, 2013
Description: From the CVE entry:

The RC4 algorithm, as used in the TLS protocol and SSL protocol, has many single-byte biases, which makes it easier for remote attackers to conduct plaintext-recovery attacks via statistical analysis of ciphertext in a large number of sessions that use the same plaintext.

Gentoo 201504-01 firefox 2015-04-07
Gentoo 201406-19 nss 2014-06-22
Ubuntu USN-2032-1 thunderbird 2013-11-21
Mandriva MDVSA-2013:269 firefox 2013-11-20
Mandriva MDVSA-2013:270 nss 2013-11-20
Mageia MGASA-2013-0337 firefox, rootcerts, nspr, and nss 2013-11-20
Ubuntu USN-2031-1 firefox 2013-11-20

Comments (none posted)

nagios: symbolic link attack

Package(s): nagios CVE #(s): CVE-2013-2029 CVE-2013-4214
Created: November 19, 2013 Updated: November 20, 2013
Description: From the Red Hat advisory:

Multiple insecure temporary file creation flaws were found in Nagios. A local attacker could use these flaws to cause arbitrary files to be overwritten as the root user via a symbolic link attack. (CVE-2013-2029, CVE-2013-4214)

Red Hat RHSA-2013:1526-01 nagios 2013-11-18

Comments (none posted)

nss: multiple vulnerabilities

Package(s): mozilla-nss CVE #(s): CVE-2013-1741 CVE-2013-5605 CVE-2013-5606 CVE-2013-5607
Created: November 19, 2013 Updated: June 13, 2014
Description: From the CVE entries:

Integer overflow in Mozilla Network Security Services (NSS) 3.15 before 3.15.3 allows remote attackers to cause a denial of service or possibly have unspecified other impact via a large size value. (CVE-2013-1741)

Mozilla Network Security Services (NSS) 3.14 before 3.14.5 and 3.15 before 3.15.3 allows remote attackers to cause a denial of service or possibly have unspecified other impact via invalid handshake packets. (CVE-2013-5605)

The CERT_VerifyCert function in lib/certhigh/certvfy.c in Mozilla Network Security Services (NSS) 3.15 before 3.15.3 provides an unexpected return value for an incompatible key-usage certificate when the CERTVerifyLog argument is valid, which might allow remote attackers to bypass intended access restrictions via a crafted certificate. (CVE-2013-5606)

From the openSUSE advisory: Avoid unsigned integer wrapping in PL_ArenaAllocate. (CVE-2013-5607)

Gentoo 201504-01 firefox 2015-04-07
Oracle ELSA-2014-1948 nss 2014-12-02
Debian DSA-2994-1 nss 2014-07-31
Gentoo 201406-19 nss 2014-06-22
Ubuntu USN-2087-1 nspr 2014-01-23
Fedora FEDORA-2013-23479 nss-util 2013-12-21
Fedora FEDORA-2013-23479 nss-softokn 2013-12-21
Fedora FEDORA-2013-23683 nss 2013-12-22
Fedora FEDORA-2013-23479 nss 2013-12-21
Debian DSA-2820-1 nspr 2013-12-17
Red Hat RHSA-2013:1841-01 nss 2013-12-16
Red Hat RHSA-2013:1840-01 nss 2013-12-16
Scientific Linux SLSA-2013:1829-1 nss, nspr, and nss-util 2013-12-13
Fedora FEDORA-2013-23301 nss-util 2013-12-15
Fedora FEDORA-2013-23301 nss-softokn 2013-12-15
Fedora FEDORA-2013-23301 nss 2013-12-15
Oracle ELSA-2013-1829 nss, nspr, and nss-util 2013-12-12
Fedora FEDORA-2013-23139 nspr 2013-12-13
CentOS CESA-2013:1829 nspr 2013-12-13
CentOS CESA-2013:1829 nss 2013-12-13
CentOS CESA-2013:1829 nss-util 2013-12-13
Red Hat RHSA-2013:1829-01 nss, nspr, and nss-util 2013-12-12
Fedora FEDORA-2013-23159 nspr 2013-12-11
Scientific Linux SLSA-2013:1791-1 nss and nspr 2013-12-09
Slackware SSA:2013-339-03 seamonkey 2013-12-05
Slackware SSA:2013-339-02 mozilla-thunderbird 2013-12-05
Slackware SSA:2013-339-01 mozilla-nss 2013-12-05
Oracle ELSA-2013-1791 nss, nspr 2013-12-05
CentOS CESA-2013:1791 nspr 2013-12-05
CentOS CESA-2013:1791 nss 2013-12-05
Red Hat RHSA-2013:1791-01 nss, nspr 2013-12-05
SUSE SUSE-SU-2013:1807-1 mozilla-nspr, mozilla-nss 2013-12-02
Ubuntu USN-2032-1 thunderbird 2013-11-21
Mandriva MDVSA-2013:269 firefox 2013-11-20
Ubuntu USN-2031-1 firefox 2013-11-20
openSUSE openSUSE-SU-2013:1730-1 mozilla-nss 2013-11-19
Debian DSA-2800-1 nss 2013-11-25
Mandriva MDVSA-2013:270 nss 2013-11-20
Mageia MGASA-2013-0337 firefox, rootcerts, nspr, and nss 2013-11-20
openSUSE openSUSE-SU-2013:1732-1 mozilla-nss 2013-11-19
Ubuntu USN-2030-1 nss 2013-11-18

Comments (none posted)

python-django: cross-site scripting

Package(s): python-django CVE #(s): CVE-2013-6044
Created: November 15, 2013 Updated: November 20, 2013

Description: From the Red Hat advisory:

It was discovered that the django.utils.http.is_safe_url() function considered any URL that used a scheme other than HTTP or HTTPS (for example, "javascript:") as safe. An attacker could potentially use this flaw to perform cross-site scripting (XSS) attacks.

Red Hat RHSA-2013:1521-01 python-django 2013-11-14

Comments (none posted)

python-djblets: cross-site scripting

Package(s): python-djblets CVE #(s): CVE-2013-4519
Created: November 18, 2013 Updated: November 26, 2013
Description: From the Red Hat bugzilla:

A flaw in the display of the branch field of a review request allows an attacker to inject arbitrary HTML, allowing attackers to construct scripts that run in the context of the page.

A flaw in the display of the alt text for an uploaded screenshot or image file attachment allows an attacker to inject arbitrary HTML through the caption field, allowing attackers to construct scripts that run in the context of the page.

Fedora FEDORA-2013-20814 python-djblets 2013-11-15
Fedora FEDORA-2013-20817 ReviewBoard 2013-11-26
Fedora FEDORA-2013-20817 python-djblets 2013-11-26
Fedora FEDORA-2013-20814 ReviewBoard 2013-11-15

Comments (none posted)

samba: multiple vulnerabilities

Package(s): samba CVE #(s): CVE-2013-4475 CVE-2013-4476
Created: November 19, 2013 Updated: December 9, 2013
Description: From the CVE entries:

Samba 3.x before 3.6.20, 4.0.x before 4.0.11, and 4.1.x before 4.1.1, when vfs_streams_depot or vfs_streams_xattr is enabled, allows remote attackers to bypass intended file restrictions by leveraging ACL differences between a file and an associated alternate data stream (ADS). (CVE-2013-4475)

Samba 4.0.x before 4.0.11 and 4.1.x before 4.1.1, when LDAP or HTTP is provided over SSL, uses world-readable permissions for a private key, which allows local users to obtain sensitive information by reading the key file, as demonstrated by access to the local filesystem on an AD domain controller. (CVE-2013-4476)

Gentoo 201502-15 samba 2015-02-25
SUSE SUSE-SU-2014:0024-1 Samba 2014-01-07
openSUSE openSUSE-SU-2013:1921-1 samba 2013-12-19
Ubuntu USN-2054-1 samba 2013-12-11
Scientific Linux SLSA-2013:1806-1 samba and samba3x 2013-12-10
Oracle ELSA-2013-1806 samba 2013-12-10
Oracle ELSA-2013-1806 samba 2013-12-09
CentOS CESA-2013:1806 samba 2013-12-10
CentOS CESA-2013:1806 samba 2013-12-10
Red Hat RHSA-2013:1806-01 samba 2013-12-10
Debian DSA-2812-1 samba 2013-12-09
openSUSE openSUSE-SU-2013:1787-1 samba 2013-11-29
openSUSE openSUSE-SU-2013:1790-1 samba 2013-11-30
Mandriva MDVSA-2013:278 samba 2013-11-21
Fedora FEDORA-2013-21207 samba 2013-11-23
Fedora FEDORA-2013-21094 samba 2013-11-21
Mageia MGASA-2013-0348 samba 2013-11-22
openSUSE openSUSE-SU-2013:1742-1 samba 2013-11-22
Slackware SSA:2013-322-03 samba 2013-11-18

Comments (none posted)

torque: code execution

Package(s): torque CVE #(s): CVE-2013-4495
Created: November 14, 2013 Updated: November 21, 2013

Description: From the Debian advisory:

A user could submit executable shell commands on the tail of what is passed with the -M switch for qsub. This was later passed to a pipe, making it possible for these commands to be executed as root on the pbs_server.

Gentoo 201412-47 torque 2014-12-26
Mandriva MDVSA-2013:268 torque 2013-11-19
Mageia MGASA-2013-0327 torque 2013-11-18
Debian DSA-2796-1 torque 2013-11-13

Comments (none posted)

varnish: denial of service

Package(s): varnish CVE #(s): CVE-2013-4484
Created: November 15, 2013 Updated: May 6, 2014

Description: From the openSUSE bug report:

A denial of service flaw was found in the way Varnish Cache handled certain GET requests when using certain configurations. A remote attacker could use this flaw to crash a worker process.

Gentoo 201412-30 varnish 2014-12-15
Fedora FEDORA-2013-24018 varnish 2014-05-06
Fedora FEDORA-2013-24023 varnish 2014-05-06
Mageia MGASA-2014-0065 varnish 2014-02-13
Debian DSA-2814-1 varnish 2013-12-09
Mandriva MDVSA-2014:036 varnish 2014-02-17
openSUSE openSUSE-SU-2013:1679-1 varnish 2013-11-15
openSUSE openSUSE-SU-2013:1683-1 varnish 2013-11-15

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 3.13 merge window remains open; just over 9,900 non-merge changesets have been pulled into the mainline so far. See the article below for an update on what has been pulled for the 3.13 release.

Stable updates: 3.12.1, 3.11.9, 3.10.20, and 3.4.70 were released on November 20; each contains the usual set of important fixes.

Comments (none posted)

Quotes of the week

Unfortunately in the ARM space you often need an NDA just to start a conversation. If folks don't want to talk in public about stuff but still expect the world to go ACPI, all I am left with is tattered clothing, an old soapbox, megaphone, and inane rumblings.
Jon Masters

It is amazing how reliable broken synchronization-primitive implementations can be.
Paul McKenney

Just for the record. I'm really frightened by the phrase "UDP realtime" which was mentioned in this thread more than once. Looking at the desperation level of these posts I fear that there are going to be real world products out already or available in the near future which are based on the profound lack of understanding of the technology they are based on.

This just confirms my theory that most parts of this industry just work by chance.

Thomas Gleixner

Note: No one seems to have docs for this, so this patch here is just unreviewed black magic.
Shobhit Kumar (thanks to Greg Kroah-Hartman)

Comments (3 posted)

The "Jailhouse" hypervisor

The Jailhouse project has announced its existence. Jailhouse is a Linux-native hypervisor like KVM, but with a focus on minimalism and isolation of virtual machines on dedicated CPUs. "Jailhouse is a partitioning hypervisor that can create asymmetric multiprocessing (AMP) setups on Linux-based systems. That means it runs bare-metal applications or non-Linux OSes aside a standard Linux kernel on one multicore hardware platform. Jailhouse ensures isolation between these 'cells', as we call them, via hardware-assisted virtualization. The typical workloads we expect to see in non-Linux cells are applications with highly demanding real-time, safety or security requirements." The project is in an early stage and looking for interested developers.

Comments (4 posted)

Kernel development news

3.13 Merge window, part 2

By Jonathan Corbet
November 20, 2013
The 3.13 merge window appears to be winding down, despite the fact that, as of this writing, it should have the better part of a week yet to run. There are now just over 9,900 non-merge changesets that have been pulled for 3.13; that is about 3,300 since last week's summary. Given the patch count and its slowing rate of increase, there is a good chance that Linus will close the merge window short of the full three weeks that had been expected this time around. It turns out that even diving trips on remote islands with bad Internet service can't slow the kernel process that much.

Some of the interesting user-visible changes pulled since last week's summary are:

  • The multiqueue block layer patch set has been merged at last. This code will pave the way toward cleaner, higher-performing block drivers over time, though the conversion of drivers has not really begun in 3.13.

  • The ARM big.LITTLE switcher code has been merged, providing basic support for heterogeneous ARM-based multiprocessor systems.

  • The ARM "BE8" big-endian subarchitecture is now supported.

  • The kernel has a new "power capping framework" allowing administrator control of peripherals which can implement maximum power consumption limits. Initially, support is limited to devices implementing Intel's "Running Average Power Limit" mechanism. See Documentation/power/powercap/powercap.txt for an overview of this subsystem and Documentation/ABI/testing/sysfs-class-powercap for details on the sysfs control interface.

  • The new "tmon" tool can be used to monitor and tweak the kernel's thermal management subsystem.

  • The split PMD locks patch set has been merged into the memory management subsystem. This code should result in significantly better performance in settings with a lot of transparent huge page use.

  • The ability to wait when attempting to remove a module whose reference count has not yet dropped to zero has been disabled. This feature, accessible via rmmod --wait, has been deprecated for the last year.

  • The size of huge pages on the SPARC64 architecture has changed from 4MB to 8MB. This change was necessary to enable this architecture to support up to 47 bits of physical address space. SPARC64 also supports the full tickless mode in 3.13.

  • New hardware support includes:

    • Block: STEC, Inc. S1120 PCIe solid-state storage devices. Also note that the Compaq Smart Array driver has been disabled in this release; it will be removed altogether unless somebody complains.

    • Graphics: Marvell Armada 510 LCD controllers. Also: the radeon driver now supports dynamic power management by default on a range of newer chipsets.

    • I2C: Samsung Exynos5 high-speed I2C controllers, STMicroelectronics SSC I2C controllers, and Broadcom Kona I2C adapters.

    • Input: Microsoft Hyper-V synthetic keyboards, Neonode zForce infrared touchscreens, and LEETGION Hellion gaming mice.

    • Miscellaneous: ARM Versatile Express serial power controllers, Freescale i.MX6 PCIe controllers, Renesas R-Car Gen2 internal PCI controllers, TPO TD028TTEC1 LCD panels, ST Microelectronics STw481x power management chips, AMS AS3722 power management chips, and TI BQ24735 battery chargers.

    • Video4Linux: Conexant CX24117 dual DVB-S/S2 tuner modules, TI video processing engines, TI LM3560 dual-LED flash controllers, and ST Micro remote controls.

    • Watchdog: MOXA ART watchdog timers, Ralink SoC watchdog timers, and CSR SiRFprimaII and SiRFatlasVI watchdog timers.

Changes visible to kernel developers include:

  • The new helper function:

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

    will attempt to set both the streaming (non-coherent) and coherent DMA masks for the given device. Many drivers have been converted to this function, often with bugs fixed along the way.

  • Most locking-related code has been moved into the new kernel/locking subdirectory.

  • printk() and friends no longer implement the "%n" directive, which was seen as an invitation to security problems.

  • The confusingly-named INIT_COMPLETION() macro, part of the completion mechanism, has been renamed to reinit_completion(). Its purpose has always been to reinitialize a completion that has already been used at least once; the new name should make that clearer.

  • The new set_graph_notrace tracing filter allows the selective pruning of subtrees from graph trace output. See the commit changelog for an example of how this feature works.

Next week's LWN Kernel Page will contain an update with the final changes merged for the 3.13 kernel, which, most likely, will be released around the end of the year.

Comments (2 posted)

The past, present, and future of control groups

By Jonathan Corbet
November 20, 2013
Korea Linux Forum
Much has been said about the problems surrounding control groups and the changes that will need to be made with this kernel subsystem. At the 2013 Korea Linux Forum, control group co-maintainer Tejun Heo provided a comprehensive overview of how we got into the current situation, what the problems are, and what is being done to fix them.

The idea behind control groups is relatively simple: divide processes into a hierarchy of groups and use those groups to provision resources in the system. The reality has turned out to be rather messier. So, Tejun asked: how did we get to this point? To begin with, he said, much of what is being done with control groups is new; all of it is new to Linux in particular, and some is new in general. So the community did not have any sort of model to follow when designing this new feature.

Beyond that, though, it is worth looking at who did the work. Control groups started as a new interface to the "cpuset" mechanism, which is used to partition the CPUs in a system among groups of processes. Few people, Tejun said, cared much about this feature, which is used mostly by the high-performance computing crowd. So few kernel developers paid much attention to what was being done.

Then control groups gained the memory controller, a mechanism for restricting the amount of memory used by each group. The core memory management developers did not really care about this work, so they did not participate in it and did not want to hear about it. The block controller came about the same way; Tejun does work in the block subsystem, but he had no real interest in the block controller and mostly just wanted it to stay out of his way. This environment led to a situation where controllers were written by developers without extensive experience in the subsystems they were working with; those controllers had to work on a non-interference basis so that the core developers could ignore them. As a result, controllers have been "bolted onto the side" of the existing kernel subsystems.

The result, Tejun said, is "not pretty." Even worse, though, is that the barriers between the controllers and the subsystems they work with inevitably broke down over time. So control groups are, as a whole, isolated and poorly integrated with the rest of the kernel, but they still manage to complicate the development of the rest of the kernel.

The developers who did all this work were good programmers, Tejun said, but they were not all that experienced with kernel development. So the code they produced was "kind of alien," not conforming to the usual coding style and practices. They repeated a lot of mistakes that the community has made and fixed in the past — repetition that could have been avoided with more review, but, he said, few people were paying active attention to the work being done in this area.

Mistakes were made

What kinds of mistakes were made? Start with hierarchy support — or the lack thereof — in a number of controllers. Control groups allow the organization of processes into a true hierarchy, with policies applied at various levels in the tree. But making a truly hierarchical controller is hard, so a number of controller developers simply didn't bother; instead, they ignored the tree structure and treated every group as if it were placed directly under the root. This was not a good decision, Tejun said; if a controller could not be made hierarchical, it should have at least refused to work with nested control groups. That would have indicated to users that things wouldn't work as they might expect and avoided the creation of a non-hierarchical interface that must now be supported.

The ".use_hierarchy" flag added by the memory controller to enable hierarchical behavior in subtrees was an especially confusing touch, he said.

Another clear mistake was the "release_agent" mechanism. The idea was to notify some process when a control group becomes empty; it was a good idea, he said, in that it allows that process to clean up groups that are no longer in use. But it was implemented as a user-mode helper — every time a control group becomes empty, the kernel creates a new process to run the release agent program. This is an expensive and complex operation for the simple task of sending a notification to user space. The rest of the kernel had moved away from this kind of process-based notification years ago, but the control group developers reimplemented it. We have much better notification mechanisms that should have been used instead, but nobody who could have pointed that out was paying attention when this code was merged.

Yet another problem is the heavy entanglement with the virtual filesystem (VFS) layer. Many years ago, the original sysfs implementation was also deeply tied to the VFS with the idea that it would simplify things. But that didn't work; the results were, instead, lots of memory used and locking-related problems. So sysfs was reworked to look a lot like a distributed filesystem, and things have worked better ever since. When the control group developers set out to create their administrative filesystem, though, they repeated the sysfs mistake. So now control groups have a number of related problems, such as locking hassles whenever an operation needs to work across multiple groups. Tejun is now working on separating things properly; some of that work was merged for the 3.13 kernel.

In engineering, Tejun said, nothing is free; everything comes down to a tradeoff between objectives. Or, in other words, "extremes suck," but control groups went to an extreme with regard to flexibility. Allowing multiple, independent hierarchies is the biggest example; this feature results in a situation where the kernel cannot tell which control group a given process belongs to. Instead, that membership is expressed by a list of arbitrary length. Controllers are all entirely separate from each other, with no coordination between them; they also behave in inconsistent ways. All this flexibility makes it difficult to make changes to the code, since there is no way to know what might break.

Flexibility also led to a range of implementation issues beyond the lack of hierarchical support in some controllers. The core code is complex and fragile. Developers took a lot of shortcuts in areas like security, leading to problems like denial-of-service issues. But, perhaps worst of all, the kernel community committed to a new ABI for control groups without really thinking about it; as a result, we ended up with a lot of accidental features. The ability to assign a process's threads to different control groups is one of those — most controllers only make sense at the process level. The control interface is filesystem-based, but no thought went into permissions, so it is possible to change the ownership of subdirectories, essentially delegating ownership of a subtree of groups to another user. The control group developers have, for all practical purposes, created a new set of system calls without the kind of review that system calls must normally go through.

What now?

The first step has been to fix the controllers that do not support the full control group hierarchy. Unfortunately, they cannot simply be fixed in place without breaking existing users. So there will have to be a "version 2" of the control interface that users can opt into. Controllers must be fully hierarchical or they will simply be unavailable in the new interface. The interface change will also allow the developers to enforce a certain degree of consistency between controllers. It will be possible, Tejun said, to mix use of the old and new interfaces without breaking things.

The multiple control group hierarchies will be going away. Most users will not really notice the change, but some were using multiple hierarchies to avoid enabling expensive controllers for processes that don't need them. In the new scheme, that need will be met by making it possible to enable or disable specific controllers at any level of the hierarchy. But all controllers will see the same process hierarchy; among other things, that will make it possible for them to cooperate more effectively. The resulting system will not be as flexible as multiple hierarchies are, but there seems to be an emerging consensus that it will suffice for the known use cases out there.

A lot of controllers will need updates to work in the new scheme, he said. There are a number of people working on the problem and the work is "70-80% there" at this point.

There will be, Tejun said, "no more faking things that we can't do properly." That is especially true when it comes to security which, he said, is a matter of noting and dealing with all of the relevant details — something that has not been done in the control group subsystem. In particular, the whole concept of delegating subtrees of the control group hierarchy to untrusted users is "broken"; there is no way to prevent denial-of-service attacks (or worse) under that scenario. To allow users to move to the new API without breaking things, it will still be possible to do this kind of delegation by changing the ownership of control group directories, but, he said, it will not be secure, just like it is not secure now.

A more secure approach might be the use of a trusted user-space agent process — something that is likely to be necessary in the future anyway. A number of these agents already exist: systemd is one, Google has its own, Ubuntu has one based on Google's code, and Android has an agent as well. In the Android case, Google actually has to "break the kernel" to make it work the way it wants. There is a need for some kind of common scheme so that processes can interoperate with any agent without having to know which one it is.

Tejun had hoped to have a prototype implementation of a reworked control group subsystem available by about now, but that has not happened. It may be ready by the end of the year, with, hopefully, the work being complete around the middle of 2014.

In summary, he repeated that control groups embody a lot of functionality that has not existed in Linux before. When he looks at the current code, he often gets angry at the mistakes that were made, but he is quite confident that he is making plenty of horrible mistakes of his own. So he fully expects future developers to be just as angry with him. That just goes with the territory. The important thing, he said, is to minimize the commitment that is made to user space; in that way, he hopes, we will not get locked into too many mistakes in the future.

[Your editor thanks the Linux Foundation for travel assistance to attend the Korea Linux Forum.]

Comments (10 posted)

Device trees II: The harder parts

November 18, 2013

This article was contributed by Neil Brown

A devicetree describes the hardware in a system using a tree of nodes, with each node describing a single device. As we observed last week, there are often relationships between devices which do not fit with the model of a strict hierarchical tree. Devicetree can address these needs through a range of techniques best described as cross-tree linkages.

Cross-tree linkages

Two of the more messy things to deal with in board files are interrupts and GPIOs (General Purpose Input/Output pins). This is because there are several different interrupt controllers and several different GPIO controllers. Both interrupts and GPIOs are identified by simple numbers; keeping track of the allocation of those numbers can become clumsy.

In the GTA04, the OMAP3 SoC contains six banks of 32 GPIOs, which can reasonably be treated as a single block and will probably be numbered 0-191. The twl4030 has a further 18 GPIO lines (unused) which will presumably be 192-209. The tca6507 LED driver can be configured to treat any of the seven output lines as a GPIO and one of them is. So it is GPIO 210.

There are two approaches to tracking these numbers. One is to hard-code the numbers, or at least to use lots of #defines like:
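For example, the allocation described above might be captured with definitions such as these (a sketch with made-up macro names, not taken from an actual board file):

```c
/* Illustrative only: hard-coding the GTA04 GPIO allocation.
 * The macro names are invented for this example. */
#define OMAP_GPIO_BASE     0    /* OMAP3 SoC banks: GPIOs 0-191 */
#define TWL4030_GPIO_BASE  192  /* twl4030 lines: GPIOs 192-209 */
#define TCA6507_GPIO       210  /* tca6507 output line used as a GPIO */

/* A board file would then refer to, say, the third twl4030 line as: */
#define GPIO_SOME_SIGNAL   (TWL4030_GPIO_BASE + 2)
```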


This approach is simple but can be fragile in the face of change. The other is to use callbacks.

When the "gpio-twl4030" driver registers its 18 GPIOs, it will be assigned a range of numbers; it is best not to assume what those numbers will be until they are assigned. To this end, the platform_data provided to gpio-twl4030 can include a function to be called when initialization is complete, as board-omap3beagle.c does with beagle_twl_gpio_setup(). This function can then store the numbers where appropriate and register the platform devices which depend on those GPIOs.
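In outline, the pattern looks like this (a simplified sketch; the real code in board-omap3beagle.c handles more lines and more devices):

```c
/* Simplified sketch of the delayed-initialization callback pattern.
 * The driver invokes .setup() once it knows which GPIO numbers it
 * was assigned; the callback records them and registers dependent
 * platform devices. */
static int beagle_twl_gpio_setup(struct device *dev,
                                 unsigned gpio, unsigned ngpio)
{
        /* "gpio" is the dynamically assigned base of the 18 lines */
        mmc[0].gpio_cd = gpio + 0;      /* record a card-detect line */
        omap_hsmmc_late_init(mmc);      /* register dependent devices */
        return 0;
}

static struct twl4030_gpio_platform_data beagle_gpio_data = {
        .setup = beagle_twl_gpio_setup,
};
```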

This hand-coded, delayed initialization can get very messy and is consequently error prone. Devicetree (that bringer of joy) makes this much easier. When one device depends on the service of another, such as a GPIO, an interrupt, a regulator, a timer, etc., the target device is identified by a reference to the relevant node in the devicetree. Unfortunately there is not as much uniformity here as we might like.

To reference an interrupt, the controller node and the interrupt number within the set controlled by that node are given separately, so:

    interrupt-parent = <&intc>;
    interrupts = <76>;

means that the interrupt to attach to is number 76 of those controlled by the node called "intc".

If no "interrupt-parent" is present, the ancestors of the current node are searched until either "interrupt-parent" or "interrupt-controller" is found. In the latter case the node containing "interrupt-controller" is the target node.

If a node responds to interrupts from different controllers, that situation cannot be represented with this approach. For that reason there is work to provide a syntax like:

    interrupts-extended = <&intc 76>

so the parent and the offset are both specified for each interrupt.

Depending on the interrupt controller, it might be necessary to specify more than one number to identify an interrupt. The number of numbers needed is specified with the attribute "#interrupt-cells" in the node for the interrupt controller. The exact meaning of the numbers can only be discovered by examining the devicetree bindings documentation (or the code); often one number will contain flag bits describing whether the interrupt should be edge- or level-triggered and whether high or low (or both) levels are interesting.
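For instance, a controller whose binding uses two cells might be referenced like this (a hypothetical example; the meaning of the second cell is defined by that controller's binding document):

```dts
    intc: interrupt-controller {
        interrupt-controller;
        #interrupt-cells = <2>;   /* interrupt number, then flags */
    };

    some-device {
        interrupt-parent = <&intc>;
        interrupts = <76 8>;      /* IRQ 76; the "8" might mean, say,
                                     active-low level-triggered */
    };
```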

To reference a GPIO, a syntax similar to the proposed "interrupts-extended" is standard, so:

    gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>;

(where GPIO_ACTIVE_HIGH is defined somewhere in include/dt-bindings/) will sort out the required GPIO number.

Naturally, in each case the device which provides the interrupt or GPIO will need to be initialized before it can be found and used. It wasn't very many kernel versions ago that this was a real problem. However, in the 3.4 kernel, drivers gained the ability for their initialization (or "probe") routine to return the error EPROBE_DEFER, which causes the initialization to be retried later. So if a driver finds that a GPIO line is listed in the devicetree, but no driver has registered GPIOs for the target node yet, it can fail with EPROBE_DEFER and know it will be tried again later. This can even be used to remove the need for callbacks and delayed registration in board files, but it is truly essential for devicetree, and happily it works quite well.
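A driver's probe routine might use the mechanism along these lines (a hypothetical sketch, not taken from a real driver):

```c
static int mydev_probe(struct platform_device *pdev)
{
        int gpio = of_get_named_gpio(pdev->dev.of_node, "cd-gpios", 0);

        /* The GPIO provider has not registered yet; ask the driver
         * core to try this probe again later. */
        if (gpio == -EPROBE_DEFER)
                return -EPROBE_DEFER;
        if (gpio < 0)
                return gpio;    /* some other, permanent error */

        /* ... request the GPIO and continue initialization ... */
        return 0;
}
```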

It is worth highlighting that the standard attribute name to identify the GPIO for a device is "gpios" in the plural because, of course, a device might require multiple GPIOs and the descriptors can simply be listed on the one line. Hunting through the sample devicetree files in arch/arm/boot/dts, one finds extremely few cases where multiple GPIOs are specified in one attribute. What seems to happen more often is that there are multiple different "xx-gpios" attributes. For example, an MMC card driver might expect a "cd-gpios" to identify the "Card Detect" line, and a "wp-gpios" to identify the "Write Protect" line. This approach has the benefit of being more explicit (and so less confusing) and of making it easy to indicate that a particular line is simply not present on some board.
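An MMC node using this convention might look like the following (the node name and GPIO numbers here are invented for illustration):

```dts
    mmc@0 {
        compatible = "ti,omap3-hsmmc";
        cd-gpios = <&gpio6 11 GPIO_ACTIVE_LOW>;  /* card detect */
        wp-gpios = <&gpio6 12 GPIO_ACTIVE_HIGH>; /* write protect */
    };
```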

While interrupts and GPIOs allow a list of targets with some implicit meaning, regulators don't. Every request for a regulator must include a supply name, so the battery charger declares its dependence on a regulator with:

    bci3v1-supply = <&vusb3v1>;

Every regulator request is of the form "xxx-supply".

As described above, interrupts can sometimes specify the type of trigger, and GPIOs can sometimes specify the active level. Regulators, instead, have no extra parameters that can be passed, even though it would sometimes be useful to specify the required voltage — many regulators are programmable and the GTA04 WiFi chip requires 3.15 volts, which isn't the default (and which cannot yet be set at all using devicetree).

One final cross-tree linkage is implicit in the "reg" attribute mentioned in part 1. As with the "interrupts" attribute, the device that provides the registers is implicit, though, unlike "interrupts", it cannot even be made explicit with something like "reg-parent". Rather the device that provides the registers is always exactly the parent of the device which uses the registers.

We've already observed that a hierarchical tree often cannot accurately reflect reality. Were we to create a "reg-extended" attribute following the pattern of "interrupts-extended", we might well be able to discard the hierarchy altogether and replace the "device tree" with a "device list" where each device contains references to the other devices that it depends on to provide registers, interrupts, GPIOs, etc. This is already happening to some extent. Many so-called "platform devices" are described by devicetree nodes which appear at the top level of the tree rather than where they fit in a device hierarchy.

A simple example is the "aux-keys" device node for the GTA04.

    aux-keys {
        compatible = "gpio-keys";

        aux-button {
            label = "aux";
            linux,code = <169>;
            gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>;
            gpio-key,wakeup;
            pinctrl-names = "default";
            pinctrl-0 = <&aux_pins>;
        };
    };
The GTA04 has two physical buttons, one of which is referred to as the "AUX" button and is connected to a GPIO input on the OMAP3. This node describes that part of the hardware. As you can see it identifies a particular GPIO, notes the key-code that it should generate in Linux, asserts that the key could wake the device from suspend, and gives some "pinctrl" information which assures that the particular pad on the OMAP3 is configured as a GPIO input.

Given what we have already learned about the tree structure of devicetree you might expect this node to appear as a child of a node describing the particular GPIO, which in turn would be a child of the GPIO controller within the OMAP3. However that is not the case. Instead, this "aux-keys" node appears at the top level, immediately under "/". While this seems a little odd in the case of a single button, it would make more sense if you imagined a device with multiple related buttons, such as volume-up and volume-down. If they were wired to two separate GPIOs, then placing the node as a child of both GPIOs is impossible and as a child of either would be untidy. Having two separate nodes (one per key) would obscure the fact that there is a single conceptual device: the "volume control".

So we find that some devices, such as the accelerometer and other sensors on the I2C bus, appear in devicetree at the end of a path reflecting how the CPU would address the device, some devices such as a GPIO-based keypad exist at the top level and refer to the components that they combine, and still other devices, such as the GSM modem in the GTA04, cannot be represented as a single device at all.

Not all fricasseed frogs and eel pie.

While exploring and enabling devicetree for the GTA04 has been a lot of fun, there have been some less exciting discoveries.

Firstly, the fact that devicetree support is still quite incomplete is a mixed blessing. On the one hand, it is very easy to add devicetree support to many devices, and this results in a positive feeling of achievement. On the other, there are fairly significant elements of functionality that are far from trivial. These, such as the omap-dss display driver and the cpu-freq support for OMAP, can largely be worked around by hacking in some old-style "board-file" initialization, but that isn't nearly so rewarding.

Secondly, the devicetree compiler "dtc", which converts .dts source files to .dtb binaries, is fairly primitive. If you do something wrong you'll mostly get either:

    Error: /home/git/gta04-mainline/arch/arm/boot/dts/omap3-gta04.dts:407.12-13 syntax error
    FATAL ERROR: Unable to parse input tree

or silent success, as ARM maintainer Russell King recently observed (there are a couple of other error messages, but not many).

The compiler will often succeed for files which will make no sense to the kernel because there is no checking for the validity of attribute names or value ranges. The kernel does have fairly good schema documentation in "Documentation/devicetree/bindings", but this is not machine readable and dtc couldn't read it even if it were. Fortunately there is hope on the horizon. Tomasz Figa recently posted a proposed mechanism for writing machine-readable devicetree schemata which would allow more checking to be added to dtc. A proposal is certainly a long way from working code, but this is still an encouraging step.

What does the future hold?

Devicetree has already had clear benefits, such as the fun this author had in learning something new and the various cleanups that it has motivated in the driver support code in Linux. However it has also required a substantial amount of effort and that effort is ongoing. Such effort needs more justification than some fun and cleanup.

The significant benefit that devicetree promises is for operating system vendors and their clients. Currently, a Linux distribution can create a release for the x86_64 architecture and expect it to run on every x86_64 machine in existence, because there is just one platform to target. That is not the case for ARM, where there are many platforms. If every ARM device can be expected to come with a devicetree description, though, then Linux distributors could target "ARM + devicetree", and that is a credible platform concept.

If we could get to the point where a device plus a devicetree file could be reasonably expected to run with every subsequent release of Linux, then that would be a happy place to be. I would be able to upgrade Debian or openSUSE on my device with confidence, even though no-one else in the world has tested the combination.

My own personal experience suggests that might be overly optimistic. I've been regularly updating my GTA04 to the latest kernel, updating the board file as necessary, and I have always had regressions. Sometimes minor (the display has a green tinge), sometimes major (the battery won't charge). Every patch that broke my device was tested by its developer, often on several devices. But none had quite the same set of components as mine and so nobody noticed until I did.

A consequence of the lack of a standard platform is that there are lots of different components available which different designers interconnect in lots of different ways resulting in an impossibly large test matrix. Based on my (limited) experience, I have very little confidence that a kernel that nobody has tested on my device will actually work on my device. And so the promise that devicetree offers seems particularly hollow to me. Of course it is entirely possible that my recent experience is not the norm for others nor for the future. We can but hope.

Comments (15 posted)

Patches and updates

Kernel trees

  • Sebastian Andrzej Siewior: 3.12.0-rt2 (November 18, 2013)


Core kernel code

Development tools

Device drivers


Filesystems and block I/O

Memory management



Virtualization and containers


Page editor: Jonathan Corbet


A "personal cloud": percloud

By Jake Edge
November 20, 2013

Over the years, there have been numerous projects seeking to replace one or more of the centralized "social networking" and other services that are in use today. Diaspora, ownCloud, FreedomBox, and others have set out to "break the chains" connecting users to companies like Facebook, Twitter, Google, and the like. The revelations from Edward Snowden have only accelerated that trend. A new project, percloud, has similar goals but, unlike some of the others, is looking at the problem as a whole, rather than as a collection of underlying technologies. In fact, percloud wants to put together a full distribution that allows users to have their own cloud services, with simplified setup and configuration.

Percloud is the brainchild of Marco Fioretti, who has been involved with free software and open digital standards as a teacher, writer, and activist over the last several years. He put out a call to action back in August for a study of the alternatives to the existing proprietary cloud services along with how to integrate them into a distribution that can even be used by those without a technical background. Fioretti's thoughts go back even further than August, however, and he has written extensively on the topic at his blog.

While he is happy to see other projects out there tackling various aspects of the problem, Fioretti is convinced that they are missing the forest for the trees. The biggest problems that users will face when trying to move their data—their lives—away from the commercial services are a lack of knowledge about how to configure the various tools and how to set them up to federate with their friends' systems. The latter is what will allow users to have their own software, under their own control, but to still exchange information with their friends and colleagues. Federation is, essentially, the mechanism to get out from under the centralized control of today's cloud services.

A lack of consideration for these configuration and federation pieces is what makes some of the alternatives—Mailpile, FreedomBox, Diaspora—insufficient, Fioretti said. By pulling all of the pieces together into a single distribution, with an integrated control panel allowing users to easily set things up, percloud would provide one-stop shopping for those looking to break their chains.

The focus of Phase I of the project, as described in the roadmap, would be a feasibility study to determine what parts and pieces need to come together to create percloud. Fioretti put together a crowdfunding campaign for Phase I, but it was unsuccessful. In a status report after the failed funding drive, he shifted gears slightly:

[...] I would work almost exclusively on the top layer (=unified interface and federation), without bothering at all to build a whole, stand-alone Linux system from day one. Because once that “unified Web interface” (which is, see above, THE REAL ISSUE) were ready, then “attaching” it to pieces of arkOS or anything else, to build one complete system, would be much easier.

The kinds of services that Fioretti envisions being provided by percloud are email, blogging, social networking, and online storage for bookmarks, pictures, files, and so on. In addition, he sees integrated encryption for all of those services, with minimal setup and configuration. Beyond that, the system should allow users to freely import their data from elsewhere, as well as export it to other services. It is, obviously, a rather tall order.

Much of Fioretti's writing sounds like a manifesto or, perhaps, a rant to some. But he does have a point, and it is one that free software efforts often fail to consider. In fact, it could be argued that the lack of adoption for the Linux desktop may have been caused by some of the same kinds of issues. Free software folks tend to focus on the software and the technology, without considering the higher-level pieces—some of which are not at all technical in nature. Another of his posts describes that issue:

In my opinion the real, or at least the most urgent problem, is social and psychological, not technical. While the real solutions to PRISM-like issues are not technical, we can’t get there unless a lot of average Internet users are willing/prepared/able to get there. We need awareness and confidence much more than “platforms”.

Today, most average Internet users can’t see at all how replacing with something open the corporate walled gardens in which they currently live could ever be within their reach. Or why they should want it in the first place. I want to prove to how many of those users as possible, as soon as possible, that they can live online outside those walls. Why should they care if their first “refuge” may not be everybody’s ultimate, perfect digital home, since they could leave it whenever they wish for something better, without losing their data?

There are quite a number of projects that overlap some part of the percloud vision. Some could be incorporated into the distribution; the percloud in 10 slides [SlideShare] presentation mentions ownCloud and Webmin, for example. Fioretti clearly does not want to reinvent any wheels if that can be avoided. But he does want to focus directly on the non-technical users and their needs, rather than to create a big wad of technical solutions that fundamentally don't play well together.

While Fioretti seems willing to keep working on the project in his spare time, the lack of funding—and, seemingly, other participants—may make percloud something of a dead project. He encourages others to get involved, to work on the feasibility study or other pieces of the problem, but so far percloud appears to be a one-man show. The idea of turning the problem on its head, by starting with an integrated, user-centric interface targeted at non-technical folks, is an interesting one. Perhaps other projects will pick it up and run with it.

Comments (5 posted)

Brief items

Distribution quote of the week

Backups are tasty snacks. Let's all run a backup now.
-- Lars Wirzenius

Comments (none posted)

Mageia 4 beta 1 released

Mageia has released the first beta for its fourth release. "Mageia 4 beta 1 has been, for sure, the most difficult release we had to face since the beginning of Mageia project. Lots of good and bad reasons here and again some improvements to be done on our development process." See the release notes for details.

Comments (none posted)

openSUSE 13.1 released

The openSUSE 13.1 release is available. "Much effort was put in testing openSUSE 13.1, with improvements to our automated openQA testing tool, a global bug fixing hackathon and more. The btrfs file system has received a serious workout and while not default, is considered stable for everyday usage. This release has been selected for Evergreen maintenance extending its life cycle to 3 years." See the announcement for a long list of new features in this release.

Comments (2 posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Stability first: Ubuntu’s Mir won’t replace X in 14.04 desktop (ars technica)

Ars technica reports that Mir won't be the default display server in Ubuntu 14.04. "[Jono] Bacon expanded upon Shuttleworth's remarks in an e-mail to Ars. The goal from the start was to "deliver Mir + XMir + Unity 7 in Ubuntu 14.04 Desktop, and Mir + Unity 8 on the phone and tablet," he wrote. However, "for the 14.04 Desktop, we are very conscious that this is an LTS release," Bacon continued. "This means that our already fairly conservative assessment of when to land significant new foundational pieces is even more conservative from a quality, technology, and support perspective. We feel that while Mir is on-track (hence its current delivery on the phone and scheduled for tablet too), we have more reservations about quality in terms of the XMir, and we didn't want to risk the LTS experience for our users.""

Comments (21 posted)

PCLinuxOS Makes Desktop Linux Look Good (LinuxInsider)

LinuxInsider has a review of PCLinuxOS. "The PCLinuxOS distribution was first released on Oct. 24, 2003, by Bill Reynolds. Its current version is 2013.10 and is available in several smartly integrated desktop varieties: the KDE Desktop, the FullMonty Desktop, the LXDE Desktop and the Mate Desktop. It also comes in an interesting KDE MiniMe version."

Comments (none posted)

Page editor: Rebecca Sobol


The failure of pysandbox

By Jake Edge
November 20, 2013

Running applications securely, such that they cannot escape confinement and affect other parts of the system, is the general goal of various types of sandboxes. They can be used to safely run untrusted code, for example. Java (in)famously has sandboxing built into the language, and pysandbox was an attempt to do something similar for Python. But the developer of pysandbox, Victor Stinner, recently declared that the project has failed in its goals and warned others away from that style of sandbox. There are some useful lessons in Stinner's experience that others looking at sandboxes will benefit from.

Pysandbox takes the approach of allowing semi-arbitrary Python to be run in a protected sandbox namespace. While the goal might be to allow all of the Python language, that openness has led to several different ways to escape the confinement. The sandbox does attempt to provide a portable mechanism for running some Python code safely, using the standard CPython interpreter. Unfortunately, as his declaration made clear, Stinner is convinced that pysandbox is the wrong approach.

Stinner's post to the python-dev mailing list is worth reading in full. He starts with a bit of history that includes some of the difficulties faced by the project over the years. As time went on, more and more holes were found in the sandbox, which required restricting various Python language features so that they couldn't be used to escape. Recently, a security challenge targeted pysandbox and found two vulnerabilities in less than a day. This has led him to two conclusions. The first is that "pysandbox is broken" at a fundamental level:

I now agree that putting a sandbox in CPython is the wrong design. There are too many ways to escape the untrusted namespace using the various introspection features of the Python language. To guarantee the [safety] of a security product, the code should be [carefully] audited and the code to review must be as small as possible. Using pysandbox, the "code" is the whole Python core which is a really huge code base. For example, the Python and Objects directories of Python 3.4 contain more than 126,000 lines of C code.

The security of pysandbox is the security of its weakest part. A single bug is enough to escape the whole sandbox.

He outlined some of the kinds of problems that were found. For example, the __builtins__ dictionary could be modified in various ways to circumvent the sandbox functions and escape. Any segmentation fault in the CPython executable (there are known "crashers" of this sort) could also be used to break out of confinement.

The two most recent vulnerabilities were among the most fundamental. One used the compile() function to get access to the contents of arbitrary disk files (using a syntax error to print lines as part of the traceback message). The other used a traceback object to unwind the stack frame to one in the trusted namespace, then used the f_globals attribute to retrieve a global object. In both cases, the fix limited the usefulness of pysandbox even further.
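The frame-walking trick is easy to demonstrate outside any sandbox. This toy sketch (an illustration of the mechanism, not actual pysandbox exploit code) shows how a traceback object leads from one frame to another frame's global namespace:

```python
import sys

SECRET = "outside the sandbox"   # stands in for the "trusted" namespace

def frame_escape():
    # Raise and catch an exception to obtain a traceback object, then
    # walk from its frame to the calling frame and read f_globals --
    # the kind of introspection pysandbox had to restrict.
    try:
        raise RuntimeError
    except RuntimeError:
        tb = sys.exc_info()[2]
    caller = tb.tb_frame.f_back          # the frame that called us
    return caller.f_globals["SECRET"]    # reach into its globals

print(frame_escape())   # -> outside the sandbox
```

In a real sandbox, the caller's frame would belong to the trusted code, which is exactly why pysandbox had to restrict traceback objects.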

The second fundamental flaw is that pysandbox "cannot be used in practice", Stinner said. Because so many restrictions have needed to be added, pysandbox cannot be used for anything "more complex than evaluating '1+(2*3)'". Basic language constructs like "del dict[key]" have been removed because they can be used to modify __builtins__ and break out of the restricted namespace.

He notes that various folks had contacted him about using pysandbox in web applications, so he believes there is a real need for the functionality. He ended his message with a call for information on alternative approaches beyond the PyPy sandbox he already knows about. Based on what he has learned, he believes a different approach, outside of the standard CPython interpreter, will be required:

To build a secure sandbox, the whole Python process must be put in an external sandbox. There are for example projects using Linux SECCOMP security feature to isolate the Python process.

Several developers spoke up to thank Stinner for his analysis and to laud his admission of a defeat of sorts. He set out to create a secure CPython-based sandbox and ended up recognizing that it may well be an unattainable goal. As Python benevolent dictator for life (BDFL) Guido van Rossum put it: "Negative results are also results, and they need to be published."

In addition, several posters were not particularly surprised by the outcome Stinner reported. Nick Coghlan noted the "many JVM vulnerabilities" as one indication that sandboxing is a difficult problem to solve. He continued: "the only ones I even remotely trust are the platform level mechanisms that form the foundation of the various PaaS [platform as a service] services, including SELinux and Linux containers." He is skeptical of trying to have sandboxes that are cross-platform or in-process because of the attack surfaces of both CPython and Java.

The PyPy solution may make sense for some, as Maciej Fijalkowski pointed out, but it is a very different model. A special PyPy interpreter is created that cannot do any library or system calls, but instead sends the operation name and arguments to stdout and waits for a response on stdin. In that way, an outer process completely controls the sandboxed program's interaction with the rest of the system. It may be more difficult to use than was envisioned for pysandbox, however.

Creating a sandbox is clearly a difficult problem—solutions are often quite fragile. But the need to be able to safely run more or less arbitrary code is real. In the thread, Terry Reedy listed a number of sites that accept and run Python code, while Stinner noted a Python shell web application that uses Google's App Engine; all of those use some kind of sandbox—presumably one at the operating system level. But Stinner's tale should serve as a cautionary one to anyone considering a CPython-based solution—or the equivalent for other languages.

Comments (6 posted)

Brief items

(Cynical) quotes of the week

I no longer give Linux machines to my family to use. I still try to support the ones I have given out in the past, but it is increasingly painful — just as painful as it is trying to use Linux myself on the desktop.
-- David Woodhouse

I used to be hopeful, evangelistic even, about the possibility of a cloud service provider ecosystem built on open source. Now I am quite skeptical and feel that opportunity may be lost. Not that OpenStack doesn’t work, or at least that it can’t be made to, given certain competence and constraints, but that OpenStack doesn’t have the coherence or the will to do more than compromise itself for politics and vanity metrics.

Comments (31 posted)

Dart 1.0: A stable SDK for structured web apps (Google Open Source Blog)

The Google Open Source Blog has announced the release of Dart SDK 1.0. Dart is a language targeted at building web applications that was announced in October 2011. The 1.0 SDK release indicates that Dart is production-ready for web developers. "The Dart SDK 1.0 includes everything you need to write structured web applications: a simple yet powerful programming language, robust tools, and comprehensive core libraries. Together, these pieces can help make your development workflow simpler, faster, and more scalable as your projects grow from a few scripts to full-fledged web applications."

Comments (12 posted)

PHP 5.5.6 released

Version 5.5.6 of the PHP language is out. "This release fixes several bugs against PHP 5.5.5, and adds some performance improvements for some functions."

Full Story (comments: none)

Python 3.3.3 released

Version 3.3.3 of the Python language is out. "Python 3.3.3 includes several security fixes and over 150 bug fixes compared to the Python 3.3.2 release. Importantly, a security bug in CGIHTTPServer was fixed."

Full Story (comments: none)

PyPy 2.2 released

Version 2.2 of the PyPy implementation of the Python 2 language is out. "Our Garbage Collector is now 'incremental'. It should avoid almost all pauses due to a major collection taking place. Previously, it would pause the program (rarely) to walk all live objects, which could take arbitrarily long if your process is using a whole lot of RAM. Now the same work is done in steps." There have also been improvements to the JIT compiler, the NumPy module has been split out, and various other changes have been made.

Full Story (comments: 2)

Newsletters and articles

Development newsletters for the week

Comments (none posted)

Four years of Go

The Go Blog is running a nice retrospective look at the four-year anniversary of the Go language, which was reached earlier this week. Some numbers are available, along with a look at open source projects and businesses using Go. "The number of high-quality open source Go projects is phenomenal. Prolific Go hacker Keith Rarick put it well: 'The state of the Go ecosystem after only four years is astounding. Compare Go in 2013 to Python in 1995 or Java in 1999. Or C++ in 1987!'"

Comments (154 posted)

Exploring LXC Networking

The initial posting reads: "One of the (many) things which are not yet entirely clear to me and to the people I speak with about this topic almost on daily basis is how the networking can be done and configured when using LXC. Hopefully the first blog post on this topic is going to shed some more light on this matter and hopefully it will inspire further posts on various other topics related to the containers." What follows is a detailed and extensive tutorial on how to manage networking within and around LXC containers.

Comments (2 posted)

Page editor: Nathan Willis


Articles of interest

How to Run Your Small Business With Free Open Source Software (CIO)

CIO has a summary of open source options for business software. It is a bit thin (and annoyingly broken up over multiple pages—the printable version is better), but it does cover many of the categories of business software that small businesses are likely to be interested in. Each category offers a few different options for open source solutions. "Even if you want to stick with a closed source operating system (or, the case of OS X, partially closed source), your business can still take advantage of a vast amount of open source software. The most attractive benefit of doing so: It's generally available to download and run for nothing. While support usually isn't available for such free software, it's frequently offered at an additional cost by the author or a third party. It may be included in a low-cost commercially licensed version as well."

Comments (5 posted)

Calls for Presentations

FOSDEM 2014 Desktops DevRoom Call for Talks

There will be a Desktop DevRoom at FOSDEM (Free Open Source Developers European Meeting). FOSDEM will be held February 1-2 in Brussels, Belgium. The call for talks deadline for the Desktop DevRoom is December 14.

Full Story (comments: none)

CFP Deadlines: November 21, 2013 to January 20, 2014

The following listing of CFP deadlines is taken from the CFP Calendar.

Deadline      Event Dates           Event                                Location
November 22   March 22-23           LibrePlanet 2014                     Cambridge, MA, USA
November 24   December 13-15        SciPy India 2013                     Bombay, India
December 1    February 7-9                                               Brno, Czech Republic
December 1    March 6-7             Erlang SF Factory Bay Area 2014      San Francisco, CA, USA
December 2    January 17-18         QtDay Italy                          Florence, Italy
December 3    February 21-23        2014                                 Gandhinagar, India
December 15   February 21-23        Southern California Linux Expo       Los Angeles, CA, USA
December 31   April 8-10            Open Source Data Center Conference   Berlin, Germany
January 7     March 15-16           Chemnitz Linux Days 2014             Chemnitz, Germany
January 10    January 18-19         Paris Mini Debconf 2014              Paris, France
January 15    February 28-March 2   FOSSASIA 2014                        Phnom Penh, Cambodia
January 15    April 2-5             Libre Graphics Meeting 2014          Leipzig, Germany
January 17    March 26-28           16. Deutscher Perl-Workshop 2014     Hannover, Germany
January 19    May 20-24             PGCon 2014                           Ottawa, Canada
January 19    March 22              Linux Info Tag                       Augsburg, Germany

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

CentOS Dojo Austin

There will be a CentOS Dojo in Austin, Texas on December 6, 2013.

Full Story (comments: none)

Django Weekend Cardiff

There will be a Django Weekend in Cardiff, Wales, February 7-9, 2014. "The conference is Django-focused, but all of all aspects of Python fall within its remit - particularly in the tutorials and workshops."

Full Story (comments: none)

Events: November 21, 2013 to January 20, 2014

The following event listing is taken from the Calendar.

Date(s)                     Event                                           Location
November 17-November 21     Supercomputing                                  Denver, CO, USA
November 18-November 21     2013 Linux Symposium                            Ottawa, Canada
November 22-November 24     Python Conference Spain 2013                    Madrid, Spain
November 25                 Firebird Tour: Prague                           Prague, Czech Republic
November 28                 Puppet Camp                                     Munich, Germany
November 30-December 1      OpenPhoenux Hardware and Software Workshop      Munich, Germany
December 6                  CentOS Dojo                                     Austin, TX, USA
December 10-December 11     2013 Workshop on Spacecraft Flight Software     Pasadena, USA
December 13-December 15     SciPy India 2013                                Bombay, India
December 27-December 30     30th Chaos Communication Congress               Hamburg, Germany
January 6                   Sysadmin Miniconf at 2014                       Perth, Australia
January 6-January 10                                                        Perth, Australia
January 13-January 15       Real World Cryptography Workshop                NYC, NY, USA
January 17-January 18       QtDay Italy                                     Florence, Italy
January 18-January 19       Paris Mini Debconf 2014                         Paris, France

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds