Weekly Edition for May 23, 2013

Google releases a draft VP8 patent cross-license

By Nathan Willis
May 22, 2013

Two months after its surprise announcement that it had reached an agreement with the MPEG Licensing Authority (MPEG LA) over the VP8 video codec, Google has posted the first public draft of the patent license underpinning the agreement. The terms of the license allow free access to any patents asserted against VP8 from MPEG LA or its eleven co-signatories, subject to certain restrictions. Not everyone is happy with the terms as written; Open Source Initiative (OSI) president Simon Phipps, for instance, argues that the terms pose several specific problems for open source projects. A draft agreement may still change, of course, and Google still insists that VP8 is usable even without a patent license.

License to bill

The VP8 agreement between Google and MPEG LA was announced in March. Its terms were not revealed in detail, but MPEG LA and eleven other patent owners granted licenses to Google on all of the patents "essential" to VP8 (or its successor VP9), and granted Google the right to sublicense those patents on a royalty-free basis to anyone developing an implementation of VP8—whether based on Google's code or not. At the time, there was a lot of speculation about what changed hands behind the scenes (primarily about whether cash changed hands), as well as about the identities of the eleven entities joining MPEG LA's side of the agreement.

Google promised to post the terms of the resulting VP8 sublicense agreement shortly, and its draft incarnation was posted on the WebM project site (WebM uses VP8 for video, of course) during the company's Google I/O developer conference in May. This sublicense agreement is a patent cross-license only; it is separate from the software license for WebM code itself (namely the libvpx library) and from Google's WebM Intellectual Property Rights Grant, which grants a perpetual, worldwide, royalty-free license to all of Google's patents on VP8.

The overall terms of the draft cross-license agreement are that a licensee gets access to all of the essential patents on VP8 owned by MPEG LA and the eleven other participants, on a royalty-free basis. In exchange, the licensee agrees to cross-license all of its VP8-essential patents to Google and every other VP8 licensee. Google's grant is retroactive as well, so that licensees are released from any past infringements on the patents covered; there is also a defensive termination clause nullifying the agreement for any licensee that sues Google or another VP8 licensee on patent infringement grounds over VP8. The text, interestingly enough, always refers to VP8; whether the final version will address VP9 as well or whether there will be separate terms for the new codec remains to be seen.

However nice the broad strokes may sound, the agreement also includes several key limitations. First, accepting the license requires action on the licensee's part: either signing and mailing a copy of the agreement to Google, or clicking on an "Accept" button on the web form (although at the moment this web form is not yet available). Licensees must also provide their name and other as-yet-unspecified information, and agree to let Google publicly disclose the fact that they are parties to the agreement. Perhaps more troublingly, the agreement must be accepted by every legal entity wishing to take advantage of it—in other words, licensees cannot sublicense their participation to downstream users or customers. Finally, there are specific "field of use" restrictions on the circumstances in which the cross-licensed patents can be used:

“Licensed Field of Use” means (and is limited to) encoding, decoding, transcoding, and/or playing VP8 Video. For the avoidance of doubt, any other function that might be performed, used or enabled by a Licensed Product (such as encoding, decoding, transcoding or playing video in any format other than the VP8 Format) is beyond the Licensed Field of Use; accordingly, Exploitation of a Licensed Product would fall outside of the Licensed Field of Use (and therefore would not be covered by any of the licenses or releases granted under this VP8 Patent Cross-License Agreement) to the extent of such other function.

Fine print

Phipps wrote about the draft agreement at InfoWorld, highlighting the field of use restrictions, lack of sublicensability, and the need to sign and return the agreement to Google as problems that make the draft run contrary to the Open Source Definition, not to mention causing practical problems for open source projects. On the field-of-use restriction, he pointed out that multi-purpose code falls outside of the licensed uses, as does extending VP8 beyond the codec's "normal" functionality. "For open source, that chills innovation and leads to uncertainty over whether a specific program is covered."

But the more significant issue, Phipps said, is the requirement that each party participating in the agreement must sign it. The legal indemnity the agreement grants cannot be inherited by derivative works, which could be problematic for open source projects, but there are also many open source projects without a legal entity that can sign the agreement on behalf of the contributor community. Thus, he said, "Google is erecting a barrier that at the very least will provoke suspicion from open source projects."

Beyond such suspicion, Phipps also argued that the terms of the draft agreement would be incompatible with a license meeting OSI's Open Source Definition (OSD). "Specifically, the license appears to conflict with OSD 6 (no field-of-use restrictions), OSD 7 (no registrations), and OSD 5 (no discrimination against persons unwilling to be identified). It's arguable it also conflicts with OSD 3 since it is not sublicensable." By not meeting the terms of the OSD, he said, the agreement is likely to be negatively received.

Moot or not?

Then again, whether or not signing the agreement is necessary to implement VP8 is still less than crystal clear. Google has long asserted that the existing VP8 IP Rights Grant is all that is needed to implement VP8 (a position it reinforces in the cross-license agreement FAQ, among other places). In fact, the company has never agreed that MPEG LA or anyone else even has patents essential to VP8. If that is true, then no one needs to sign the cross-license agreement, ever, and its precise wording makes no difference whatsoever. But no one really knows whether that is the case; the agreement with MPEG LA and its cohorts did not disclose any patents essential to VP8, nor does the draft cross-license agreement. But a big part of the patent playbook is to not disclose a patent until one's competitor releases a tempting target for litigation. Perhaps the cross-license agreement would suddenly become important in such circumstances, but they are by their very nature impossible to foresee. For commercial parties implementing VP8 support, at least the agreement would take the patents of MPEG LA and its eleven partners out of the equation.

Phipps argued that the fog of uncertainty about the need for the patent grants in the cross-license agreement is its own problem. It creates a disruption by muddying the waters, just when Google is pushing for wider VP8 adoption in web standards like WebRTC. Perhaps further revision of the terms (and public discussion of the topic) will un-muddy the waters a bit in the coming weeks and months. In the meantime, however, there is at least one concrete benefit to the newly published agreement: the world finally gets a look at the eleven companies joining MPEG LA in the agreement. They are the usual suspects in codec patents, for the most part—Mitsubishi, Siemens, Fujitsu, etc.—but at least knowing the names brings the overall story into sharper focus. They are not small firms with small patent arsenals, so having them participate in the agreement could make it a more appealing option for a company interested in VP8, but with its own patent portfolio to worry about. Whether that offers anything meaningful to open source projects, however, may have to wait until the next episode.

Comments (21 posted)

Moodle 2.5 brings Badges and Bootstrap

By Nathan Willis
May 22, 2013

Version 2.5 of the Moodle open source courseware suite has been released. In some ways, the update brings with it many of the same sort of feature enhancements one might expect of any content management system (CMS), but there are some improvements tailored to the education environment as well.

They're the dot in .edu

The release announcement was posted on May 15 in the Moodle discussion forum. The first feature mentioned is support for badges: criteria-based awards given to students, based on guidelines established by the teachers or site administrators. Moodle's badge implementation is compatible with the Open Badges specification created by Mozilla, and earned badges can be displayed outside of Moodle itself. That makes them a potential certification tool for a variety of organizations that use Moodle for online training (outside of, say, the traditional school and college deployments). The release notes observe that Moodle is believed to be the first courseware system to implement Open Badges support.
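An Open Badges award is ultimately just a small JSON "assertion" that a verifier can fetch and check. As a rough illustration only—the field names below follow the early Open Badges 1.0 assertion format, and every URL, email address, and salt value is a hypothetical example, not anything from Moodle's actual implementation—an assertion might be built like this:

```python
import hashlib
import json

# Hypothetical recipient data; real deployments publish the email
# as a salted SHA-256 hash so a badge can be displayed publicly
# without revealing the address itself.
email = "student@example.edu"
salt = "deadsea"
hashed_identity = "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest()

assertion = {
    "uid": "course-101-badge-42",
    "recipient": {
        "type": "email",
        "hashed": True,
        "salt": salt,
        "identity": hashed_identity,
    },
    # URL of the badge-class description (name, criteria, issuer).
    "badge": "https://moodle.example.edu/badges/course-101.json",
    # "Hosted" verification: the verifier re-fetches this assertion
    # from the issuer's site and compares it.
    "verify": {
        "type": "hosted",
        "url": "https://moodle.example.edu/assertions/42.json",
    },
    "issuedOn": "2013-05-15",
}

print(json.dumps(assertion, indent=2))
```

Because the assertion lives at a stable URL on the issuer's server, a badge "backpack" or any other display site can verify it independently of Moodle.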

The other new features of interest to educators include expanded settings for assignment "workflow"—specifically, the ability to set whether or not an assignment can be re-submitted by the student, and whether or not it must be re-submitted until it passes. There are also several new options for setting assignment feedback—for example, whether an assignment is recorded in an offline spreadsheet, receives a file in reply (which could be anything from a text file to a video), or simply gets brief comments from the teacher. In quizzes, teachers can now include a "template" for essay questions, prompting what the student needs to write rather than presenting a blank text entry field. That might sound like a trivial change, but consider how valuable such a template is in a bug reporting form; such a brief outline can prevent students from accidentally leaving out a vital portion of their answer, and makes asking for multi-part responses simpler. It is also now possible to search the database of students enrolled in a course, which may not be a common occurrence in classroom situations, but could prove useful in online training courses with their potential for unlimited enrollment.

The usables

Several other changes in this release have been classified as usability or accessibility improvements in the release notes, which noted that these are a continuation of "our incremental usability improvement made in every version, and is difficult but very important work that we will continue to focus on in every release." Several of these usability improvements amount to the ability to hide or switch off portions of Moodle's sometimes-dizzying assortment of options. The majority of web forms in the interface have now been reconfigured to hide the less-frequently-used options under a toggle-able "Advanced" switch. Similarly, teachers can switch off the rich-text editor component for quizzes and forms where simple text will suffice.

But there are other usability improvements as well. Folders containing course content can now be displayed in several ways, including a new tree-style navigator akin to the one found in almost all OS file managers. Teachers can now drag and drop content directly into course pages. A returning feature is the addition of a "Jump to section" drop-down list for reading course pages. The bug report suggests this feature disappeared sometime between Moodle 2.2 and 2.3. Without it, the only ways to get to a specific section in course materials were either to click through the sections one by one with "Next" or to back up to the table of contents.

They're also the M in CMS

As Moodle has evolved, it has found plenty of uses outside of traditional educational institutions, from certification programs at government offices to training for open source applications like Blender. As a result, it has grown into a somewhat more general-purpose CMS; there are plugins and add-ons that enhance functionality and themes that allow administrators to fit Moodle into the established look-and-feel of a site.

Both of these aspects have been improved in 2.5. First, add-ons and plugins can be both installed and upgraded from within the Moodle web interface, rather than requiring shell access to the Moodle server. Second, Moodle now supports creating themes with the open source Bootstrap framework. The release notes caution that version 2.5 does not yet "fully take advantage of" Bootstrap, but say it is already usable for creating themes that automatically work well on desktops and on mobile devices (both with small displays and with touch-screens). Neither update alters Moodle's fundamental feature set, but both show its progress as a project—WordPress adopted similar features when it was growing out of the traditional "blog" use case and into wider CMS adoption as well.

There is also an updated Moodle app for iOS, which allows users to upload photos and videos directly from a mobile device to the course (there are similar apps for Android, but they are unofficial work from third-party developers). There are additional management features, too: teachers can switch off new self-enrollments for a course (thus allowing self-enrollments for a time, but letting teachers decide when the enrollment period is over), there is a new "scheduled maintenance" mode, and support has been restored for enrolling in a paid course with a PayPal payment.

Moodle has had its share of security vulnerabilities in the past (not an uncommon affliction for PHP applications), and version 2.5 also closes several important holes. LDAP password expiration is fixed, information leaks in the built-in gradebook and in site registration have been closed, and passwords are now saved as bcrypt hashes rather than as MD5 hashes. The site can also be configured to lock out an account if too many failed login attempts are made (with an adjustable threshold value). Finally, this release rolls in all of the security fixes made in point releases of Moodle 2.4 as well.
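The move away from MD5 matters because a plain, unsalted hash is cheap to brute-force offline. bcrypt itself is not in the Python standard library, so the sketch below—not Moodle's actual code—uses PBKDF2 from `hashlib` to illustrate the same two ideas: a per-user random salt and a deliberately slow, tunable hash. The lockout class is likewise a simplified stand-in for Moodle's adjustable failed-login threshold.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 50_000) -> tuple:
    """Return (salt, digest) from a slow, salted key-derivation function.
    The iteration count is the tunable work factor (bcrypt's 'cost')."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes,
                   iterations: int = 50_000) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

class LoginGuard:
    """Simplified lockout: refuse further attempts for a user after
    `threshold` consecutive failures."""
    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.failures = {}

    def attempt(self, user: str, password_ok: bool) -> str:
        if self.failures.get(user, 0) >= self.threshold:
            return "locked"
        if password_ok:
            self.failures[user] = 0
            return "ok"
        self.failures[user] = self.failures.get(user, 0) + 1
        return "denied"
```

The salt defeats precomputed (rainbow-table) attacks, while the iteration count lets an administrator raise the cost of each guess as hardware gets faster—exactly the property MD5 lacks.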

Moodle is one of those interesting projects that has a large and vibrant user community, but a community that is comparatively self-contained. That is to say, people who run large Moodle installations often do so because they are motivated by the subject matter that they teach, so they may not be particularly interested in open source software as a movement. Still, it is encouraging to see how much momentum the online courseware community has, Moodle in particular, and it looks like version 2.5 is poised to refresh the package in ways that help student and teacher alike.

Comments (none posted)

Next week's edition will be published on May 31

May 27 is the Memorial Day holiday in the US. Normally, LWN staff would take that day off for the usual festivities, most of which involve barbecues and beer. This year, instead, some of us will be in Japan for the 2013 LinuxCon Japan event. Either way, the normal holiday schedule applies: publication of the LWN Weekly Edition will be delayed one day to May 31. We will return to our usual Thursday schedule the following week.

Comments (1 posted)

Page editor: Jonathan Corbet


DeadDrop and Strongbox

By Nathan Willis
May 22, 2013

In mid-May, US news magazine The New Yorker unveiled Strongbox, a service that lets whistleblowers and other potential news sources contact its reporters securely and anonymously. The security model is ambitious; it is reachable only on Tor, hides source identities (while preserving continuity across repeat visits), encrypts messages and uploaded files, and allows a monitoring station to watch for possible security breaches. It is also open source, and is the work of the late programmer and activist Aaron Swartz.

Wired's Kevin Poulsen wrote about the system's origin in a blog post timed to coincide with the debut of the service. The goal, he explained, was to create a modern system through which journalists could safely communicate with anonymous sources, since the old methods had not kept up with the pace of technology—including both legal access methods and "outright hacking." Poulsen had met Swartz in 2006 when Condé Nast (parent company to both Wired and The New Yorker) purchased news site Reddit (where Swartz worked). Poulsen asked Swartz to build the secure communications gateway in 2011, after seeing Swartz's work on Tor2Web.

The security model designed by the two incorporated several stringent requirements, such as ensuring that it was impossible for the news organization to know the origin of files or messages (thus it could not be compelled to disclose that origin), but it should also be able to run on servers at the news organization's offices (to reduce the likelihood of tampering). Swartz insisted that the project should be released as open source software, and they settled on the name DeadDrop. Outside security researchers audited the architecture of the system as well as the code. By December 2012, DeadDrop was reasonably stable and a launch was planned. But those plans came to a sudden halt with Swartz's suicide in January 2013.

Strongbox debuts

In the subsequent months, Poulsen consulted with Swartz's friends and family before eventually deciding to proceed with the launch, which finally happened on May 15. The public front-end of the service runs as a "hidden service" on Tor, accessible only through a .onion pseudo-domain. The front end itself is almost deceptively simple; when a user (the would-be whistleblower or source) logs in for the first time, the system creates a unique code name composed of four English words. In the future, the same user can visit the service from any location and log on again using the four-word code (hopefully without writing it down). When logged in, the user can type a message into a web form and optionally attach files to upload. Journalists can leave secure messages for the user in reply, which the user is encouraged to delete after reading.
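The article does not say exactly how DeadDrop picks its four words, but the general technique is well known: draw words uniformly at random from a fixed list using a cryptographically secure random number generator, so that a list of N words yields N to the fourth power possible code names. The sketch below is a hypothetical illustration—the word list is a tiny placeholder, where a real deployment would use a dictionary of thousands of words.

```python
import secrets

# Tiny placeholder word list for illustration only; with N words,
# a four-word code name has N**4 possibilities, so real systems
# use lists of several thousand words.
WORDLIST = [
    "harbor", "violet", "meadow", "copper", "lantern", "sparrow",
    "granite", "willow", "ember", "quarry", "saffron", "timber",
]

def generate_code_name(words: int = 4) -> str:
    """Pick `words` words with a CSPRNG. Note the use of `secrets`
    rather than `random`, whose output is predictable."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(words))

print(generate_code_name())
```

The appeal of word-based codes is that they are far easier for a source to memorize than a random string of equivalent entropy—important for a user who is discouraged from ever writing the code down.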

Obviously the system cannot provide absolute security; among other things, if the user discloses personally identifiable information in his or her messages, the anonymity is gone. If the user writes down the unique code word identifier, someone else could copy it and spoof the user's identity. But if the instructions are followed correctly, the framework does offer a fairly strong guarantee of anonymity and protection against eavesdropping—certainly a far better offer than PGP encryption or a TLS-protected web form alone. Instructions and a privacy pledge are posted on the Strongbox home page; hopefully anyone concerned enough about security and anonymity to use the system will take the time needed to learn how to use it correctly.

Drop it

The Strongbox site does not go into much detail about how the system works, but there is a good deal more to examine on the DeadDrop project site at GitHub. The threat model document provides an overview of the system from the outside source's perspective as well as the journalist's and administrator's. The same application server provides the front end seen by sources over Tor (which we have already described) and the front end seen by journalists inside the news organization on its internal network. The other two pieces of the puzzle are the "secure viewing station" (SVS) where journalists decrypt and read messages and the auditing system.

But security starts on the application server. The four-word code name generated randomly for the source is never stored in the clear, but it is put to use in several ways. The SHA256 hash of the code name is stored on the server as the directory name for the user. Every message sent by the source includes the code name in the POST request, and the application checks its hash against the stored hash to authorize the request. The application also creates a GPG v2 key pair for each new user, and uses the code name as the secret key passphrase (which, again, is not stored in the clear). Replies from the journalist are encrypted with the public key from this pair, so that only the intended source can decrypt them, but the decryption happens automatically using the code name in the HTTP request, so the source is not required to store or memorize a separate passphrase.
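The hash-based part of that scheme is simple enough to sketch. This is an illustration of the described technique, not DeadDrop's actual code (which is available on GitHub and may differ in detail): the server stores only the SHA-256 hash of the code name, which doubles as the source's directory name, and each incoming POST is authorized by re-hashing the submitted code name and comparing.

```python
import hashlib
import hmac

def source_directory(code_name: str) -> str:
    """The server stores only this hash, which also serves as the
    per-source directory name; the code name itself is never written
    to disk."""
    return hashlib.sha256(code_name.encode()).hexdigest()

def authorize(submitted_code_name: str, stored_hash: str) -> bool:
    """Re-hash the code name carried in the POST request and compare
    in constant time against the stored hash."""
    candidate = source_directory(submitted_code_name)
    return hmac.compare_digest(candidate, stored_hash)

# On the source's first visit the server records the hash...
stored = source_directory("harbor violet meadow copper")
# ...and later requests are authorized by recomputing it.
assert authorize("harbor violet meadow copper", stored)
assert not authorize("wrong code name", stored)
```

The nice property is that a compromise of the server's storage reveals only hashes; an attacker would still have to guess the four-word code name itself to impersonate a source or decrypt the journalist's replies.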

Every message or file uploaded by a source is also encrypted, but with a different key pair, for which the application server only stores the public key. The journalist interface to the application is only available through a VPN requiring two-factor authentication. The application uses private SSL certificates distributed to the journalists' computers, with the certificates and revocation lists generated offline. When a journalist logs in to the system, he or she is presented with a list of new messages, and must download them in encrypted form and take them to a separate machine (the SVS) to read. The SVS is intended to be a diskless workstation booted from a live CD and not attached to the network. The private key for the application is stored on the SVS live CD, so the journalist can read messages and files. As an additional measure of security, the application presents a different set of code names in the journalist interface, so that the source's code name and hash are never exposed to the journalist.

Finally, the application server and the SVS are kept on the news organization's premises, under lock and key and monitored for unauthorized access. The application server has a hardware entropy source attached to generate strong cryptographic keys, and journalists are instructed to re-encrypt any files they take from the SVS to their personal workstations (using yet another GPG key pair). There is a separate machine running the OSSEC intrusion detection system and logging events from Tor, AppArmor, the firewall, and grsecurity. Just as the system provides guidance to sources using the service (including potentially unfamiliar pieces like Tor hidden services), there is a set of guidelines for security on the journalists' side of the system: GPG key type and key length, VPN settings, and browser certificates. There is an installation guide that walks the reader through the entire process, starting with installing the operating system with full-disk encryption.

Follow the money code

How much will Strongbox or any other DeadDrop installation ever get used? Those of us on the outside may never know for sure. Despite the designers' best efforts to make DeadDrop easy to use, it should be clear to anyone who reads the threat model that balancing high security and ease of use remains a tall order. There are some very nice features in DeadDrop. The code name feature is quite clever; it allows both source and journalist to be confident that the same person is accessing any particular user account, while the use of separate code names on the source and journalist front ends preserves anonymity. Using the source's code name as both the POST authentication method and the GPG passphrase is a compromise, but it also reduces the burden of complexity placed on the source—who, in the real world, may already be taking on an enormous risk by talking to a reporter.

The downside is that some of DeadDrop's security stems from parameters that are hard or impossible for small organizations to implement, such as the constantly-guarded SVS and on-premises application server. Those features mandate dedicating a facility in a brick-and-mortar location (and ideally one that is distinct from the rest of the organization's network infrastructure). But physical security is a vital part of maintaining the overall integrity of the system; without it, someone could copy information from the server, install a keylogging device, or attempt any number of other attacks.

It is interesting to compare the design of DeadDrop to similar whistleblowing applications, like GlobaLeaks (which is arguably the best known). GlobaLeaks also allows sources to submit information to a news organization (or to any other entity running the service, of course), including a Tor hidden service front end. But GlobaLeaks does not implement the persistent identity across repeat visits feature (although it does allow a whistleblower to return to a specific "tip" by saving or remembering a ten-digit "receipt" string). GlobaLeaks also does not offer end-to-end encryption of uploaded messages and files, nor does it utilize DeadDrop's rather complex offline SVS scheme to ensure that communications are not monitored or intercepted.

Of course, to some budding journalists and whistleblowers, GlobaLeaks' lack of complexity may well be a plus. DeadDrop's architecture is set up to protect against a wider assortment of attacks, but along with the usual administrative overhead, a more complex system brings with it the increased chances of human error. That is almost always the case with security; the better password is harder to remember, the safer authentication method is easier to mess up—and in either case, the temptation to write down a secret for fear of forgetting it is an obstacle to which there is no technological solution. Then again, whistleblowing fundamentally requires some leaps of faith; a source must trust the news organization to begin with, and no software can manufacture that trust. But DeadDrop remains a valuable object of study for those interested in developing secure communication systems, regardless of whether or not their plans involve contacting The New Yorker.

Comments (17 posted)

Brief items

Security quotes of the week

In the longer term, the Internet of Things means ubiquitous surveillance. If an object "knows" you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days -- and nights -- with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will all be saved, correlated, and studied.
Bruce Schneier

It is no secret that Hollywood is trying to take down as many pirated movies as they can, but their targeting of a Creative Commons Pirate Bay documentary is something new. Viacom, Paramount, Fox and Lionsgate have all asked Google to take down links pointing to the Pirate Bay documentary TPB-AFK. But is it a secret plot to silence the voices of the Pirate Bay’s founders, or just another screw up of automated DMCA takedowns?

I also work for the FBI on Tuesdays at 1pm in memphis, tn. They allow me to continue this business and have full access. The FBI also use the site so that they can [monitor] the [activities] of online users. They even added a nice IP logger that logs the users IP when they login.
Justin Poland, operator of DDoS service (as quoted by Brian Krebs)

Comments (none posted)

Strongbox and Aaron Swartz (The New Yorker)

The New Yorker magazine has started a service called Strongbox that allows anonymous information to be sent to the magazine. It is based on the DeadDrop free software project that was created by the late Aaron Swartz, which uses the Tor network to preserve anonymity. The magazine also has an article by Kevin Poulsen, who organized the project, about its history. "In New York, a computer-security expert named James Dolan persuaded a trio of his industry colleagues to meet with Aaron to review the architecture and, later, the code. We wanted to be reasonably confident that the system wouldn't be compromised, and that sources would be able to submit documents anonymously—so that even the media outlets receiving the materials wouldn't be able to tell the government where they came from."

Comments (32 posted)

New vulnerabilities

gallery3: cross-site scripting

Package(s):gallery3 CVE #(s):CVE-2013-2087
Created:May 22, 2013 Updated:May 22, 2013
Description: From the Gallery advisories [1, 2]:

Gallery contains a flaw that allows a remote cross-site scripting (XSS) attack. This flaw exists because the application does not validate input passed via movie titles before returning it to the user. This may allow an attacker to create a specially crafted request that would execute arbitrary script code in a user's browser within the trust relationship between their browser and the server.

Gallery contains a flaw that allows a reflected cross-site scripting (XSS) attack. This flaw exists because the application does not validate input passed via the Error page before returning it to the user. This may allow an attacker to create a specially crafted request that would execute arbitrary script code in a user's browser within the trust relationship between their browser and the server.

Fedora FEDORA-2013-8065 gallery3 2013-05-22
Fedora FEDORA-2013-8060 gallery3 2013-05-22

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):linux CVE #(s):CVE-2013-3227 CVE-2013-3301
Created:May 16, 2013 Updated:July 18, 2013

From the Debian advisory:

CVE-2013-3227: Mathias Krause discovered an issue in the Communication CPU to Application CPU Interface (CAIF). Local users can gain access to sensitive kernel memory.

CVE-2013-3301: Namhyung Kim reported an issue in the tracing subsystem. A privileged local user could cause a denial of service (system crash). This vulnerability is not applicable to Debian systems by default.

openSUSE openSUSE-SU-2013:1971-1 kernel 2013-12-30
Oracle ELSA-2013-1645 kernel 2013-11-26
SUSE SUSE-SU-2013:1473-1 Linux kernel 2013-09-21
Oracle ELSA-2013-2538 kernel 2013-07-18
Oracle ELSA-2013-2538 kernel 2013-07-18
Scientific Linux SL-kern-20130717 kernel 2013-07-17
Oracle ELSA-2013-1051 kernel 2013-07-16
CentOS CESA-2013:1051 kernel 2013-07-17
Red Hat RHSA-2013:1080-01 kernel 2013-07-16
Red Hat RHSA-2013:1051-01 kernel 2013-07-16
Mandriva MDVSA-2013:194 kernel 2013-07-11
SUSE SUSE-SU-2013:1182-2 Linux kernel 2013-07-12
Red Hat RHSA-2013:1264-01 kernel-rt 2013-09-16
Mandriva MDVSA-2013:176 kernel 2013-06-24
SUSE SUSE-SU-2013:1022-3 Linux kernel 2013-06-18
SUSE SUSE-SU-2013:1022-2 Linux kernel 2013-06-17
SUSE SUSE-SU-2013:1022-1 kernel 2013-06-17
Ubuntu USN-1882-1 linux-ti-omap4 2013-06-14
Ubuntu USN-1881-1 linux 2013-06-14
Ubuntu USN-1880-1 linux-lts-quantal 2013-06-14
Ubuntu USN-1879-1 linux-ti-omap4 2013-06-14
Ubuntu USN-1878-1 linux 2013-06-14
Ubuntu USN-1839-1 linux-ti-omap4 2013-05-28
Ubuntu USN-1836-1 linux-ti-omap4 2013-05-24
Ubuntu USN-1834-1 linux-lts-quantal 2013-05-24
Ubuntu USN-1833-1 linux 2013-05-24
Ubuntu USN-1835-1 linux 2013-05-24
Ubuntu USN-1837-1 linux 2013-05-24
Mageia MGASA-2013-01451 kernel-vserver 2013-05-17
Mageia MGASA-2013-0150 kernel-rt 2013-05-17
Mageia MGASA-2013-0149 kernel-tmb 2013-05-17
Mageia MGASA-2013-0148 kernel-linus 2013-05-17
Mageia MGASA-2013-0147 kernel 2013-05-17
Debian DSA-2669-1 linux 2013-05-15

Comments (none posted)

kernel: privilege escalation

Package(s):linux CVE #(s):CVE-2013-2094
Created:May 16, 2013 Updated:June 14, 2013

From the Debian advisory:

CVE-2013-2094: Tommie Rantala discovered an issue in the perf subsystem. An out-of-bounds access vulnerability allows local users to gain elevated privileges.

More information available here.

Oracle ELSA-2013-1645 kernel 2013-11-26
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Mandriva MDVSA-2013:176 kernel 2013-06-24
openSUSE openSUSE-SU-2013:1042-1 kernel 2013-06-19
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Oracle ELSA-2013-2525 kernel 2013-06-13
Oracle ELSA-2013-2525 kernel 2013-06-13
Oracle ELSA-2013-0911 kernel 2013-06-11
openSUSE openSUSE-SU-2013:0925-1 kernel 2013-06-10
openSUSE openSUSE-SU-2013:0847-1 kernel 2013-05-31
Ubuntu USN-1849-1 linux-lts-raring 2013-05-30
Ubuntu USN-1838-1 linux-ti-omap4 2013-05-30
Ubuntu USN-1839-1 linux-ti-omap4 2013-05-28
Ubuntu USN-1836-1 linux-ti-omap4 2013-05-24
SUSE SUSE-SU-2013:0819-2 Linux kernel 2013-05-24
SUSE SUSE-SU-2013:0819-1 the Linux Kernel (x86) 2013-05-22
Slackware SSA:2013-140-01 linux 2013-05-20
Red Hat RHSA-2013:0841-01 kernel 2013-05-20
Red Hat RHSA-2013:0840-01 kernel 2013-05-20
Red Hat RHSA-2013:0829-01 kernel-rt 2013-05-20
Mageia MGASA-2013-01451 kernel-vserver 2013-05-17
Mageia MGASA-2013-0150 kernel-rt 2013-05-17
Mageia MGASA-2013-0149 kernel-tmb 2013-05-17
Mageia MGASA-2013-0148 kernel-linus 2013-05-17
Mageia MGASA-2013-0147 kernel 2013-05-17
Scientific Linux SL-kern-20130516 kernel 2013-05-16
Oracle ELSA-2013-0830 kernel 2013-05-16
Oracle ELSA-2013-2524 kernel 2013-05-16
CentOS CESA-2013:0830 kernel 2013-05-17
Red Hat RHSA-2013:0832-01 kernel 2013-05-17
Red Hat RHSA-2013:0830-01 kernel 2013-05-16
Ubuntu USN-1828-1 linux-lts-quantal 2013-05-15
Ubuntu USN-1827-1 linux 2013-05-15
Ubuntu USN-1826-1 linux 2013-05-15
Debian DSA-2669-1 linux 2013-05-15
Ubuntu USN-1825-1 linux 2013-05-15
openSUSE openSUSE-SU-2013:0951-1 kernel 2013-06-10

Comments (none posted)

kernel: information disclosure

Package(s):kernel CVE #(s):CVE-2013-2636
Created:May 20, 2013 Updated:May 22, 2013
Description: From the CVE entry:

net/bridge/br_mdb.c in the Linux kernel before 3.8.4 does not initialize certain structures, which allows local users to obtain sensitive information from kernel memory via a crafted application.
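The fix pattern for this class of leak is to zero the whole structure before filling in its fields, so that padding bytes and skipped members never carry stale kernel stack data out to user space. A minimal user-space sketch of the idiom (the names are illustrative, not taken from br_mdb.c):

```c
#include <string.h>

/* A structure with implicit padding: on common ABIs, three padding
 * bytes follow 'type'.  Assigning the fields one by one leaves that
 * padding holding whatever was previously on the stack -- the kind
 * of data that leaks in bugs like this one. */
struct reply {
    char type;
    int  value;
};

/* The fix: memset() the whole object first, then fill in the fields,
 * so no byte is ever copied out uninitialized. */
void fill_reply(struct reply *r, char type, int value)
{
    memset(r, 0, sizeof(*r));
    r->type = type;
    r->value = value;
}
```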

Mageia MGASA-2013-01451 kernel-vserver 2013-05-17
Mageia MGASA-2013-0150 kernel-rt 2013-05-17
Mageia MGASA-2013-0149 kernel-tmb 2013-05-17
Mageia MGASA-2013-0148 kernel-linus 2013-05-17
Mageia MGASA-2013-0147 kernel 2013-05-17

Comments (none posted)

krb5: UDP ping-pong flaw in kpasswd

Package(s):krb5 CVE #(s):CVE-2002-2443
Created:May 21, 2013 Updated:July 2, 2013
Description: From the Red Hat bugzilla:

A flaw in certain programs that handle UDP traffic was discovered and assigned the name CVE-1999-0103 (that CVE specifically mentions echo and chargen as vulnerable). In 2002, a Nessus plugin was included [1] that referenced this CVE name, but was for the kpasswd service. Until recently, this issue had not been reported upstream. It has since been reported upstream [2] and is now fixed [3].

If a malicious remote user were to spoof their IP address to that of another server running kadmind with the password change port (kpasswd, port 464), or to the target server's own IP address, kpasswd would pass UDP packets to the spoofed address and reply each time. This can be used to consume bandwidth and CPU on the affected servers running kadmind.

This should be fixed in the krb5-1.11.3 release.


Ubuntu USN-2810-1 krb5 2015-11-12
Gentoo 201312-12 mit-krb5 2013-12-16
openSUSE openSUSE-SU-2013:1122-1 krb5 2013-07-02
openSUSE openSUSE-SU-2013:1119-1 kpasswd 2013-07-02
Scientific Linux SL-krb5-20130613 krb5 2013-06-13
Oracle ELSA-2013-0942 krb5 2013-06-12
Oracle ELSA-2013-0942 krb5 2013-06-12
CentOS CESA-2013:0942 krb5 2013-06-13
CentOS CESA-2013:0942 krb5 2013-06-13
Red Hat RHSA-2013:0942-01 krb5 2013-06-12
Mageia MGASA-2013-0161 krb5 2013-06-06
Debian DSA-2701-1 krb5 2013-06-02
Fedora FEDORA-2013-8219 krb5 2013-05-23
Mandriva MDVSA-2013:166 krb5 2013-05-21
Fedora FEDORA-2013-8212 krb5 2013-05-21

Comments (none posted)

libvirt: denial of service

Package(s):libvirt CVE #(s):CVE-2013-1962
Created:May 17, 2013 Updated:July 3, 2013

From the Red Hat advisory:

It was found that libvirtd leaked file descriptors when listing all volumes for a particular pool. A remote attacker able to establish a read-only connection to libvirtd could use this flaw to cause libvirtd to consume all available file descriptors, preventing other users from using libvirtd services (such as starting a new guest) until libvirtd is restarted.

Gentoo 201309-18 libvirt 2013-09-25
Ubuntu USN-1895-1 libvirt 2013-07-02
Mageia MGASA-2013-0166 libvirt 2013-06-06
Fedora FEDORA-2013-8681 libvirt 2013-05-29
Scientific Linux SL-libv-20130516 libvirt 2013-05-16
Oracle ELSA-2013-0831 libvirt 2013-05-16
CentOS CESA-2013:0831 libvirt 2013-05-17
Red Hat RHSA-2013:0831-01 libvirt 2013-05-16
openSUSE openSUSE-SU-2013:0885-1 libvirt 2013-06-10

Comments (none posted)

mediawiki: multiple vulnerabilities

Package(s):mediawiki CVE #(s):CVE-2013-2031 CVE-2013-2032
Created:May 20, 2013 Updated:May 22, 2013
Description: From the Red Hat bugzilla:

Two flaws were corrected in the recently-released MediaWiki 1.20.5 and 1.19.6 releases:

* Jan Schejbal reported that SVG script filtering could be bypassed for Chrome and Firefox clients by using an encoding that MediaWiki understood but that these browsers interpreted as UTF-8. [1]

* Internal review discovered that extensions were not given the opportunity to disable a password reset, which could lead to circumvention of two-factor authentication. [2]


Debian DSA-2891-3 mediawiki 2014-04-04
Debian DSA-2891-2 mediawiki 2014-03-31
Debian DSA-2891-1 mediawiki 2014-03-30
Gentoo 201310-21 mediawiki 2013-10-28
Fedora FEDORA-2013-7701 mediawiki 2013-05-19
Fedora FEDORA-2013-7714 mediawiki 2013-05-19

Comments (none posted)

openstack-keystone: delayed token invalidation

Package(s):keystone CVE #(s):CVE-2013-2059
Created:May 17, 2013 Updated:June 11, 2013

From the Ubuntu advisory:

Sam Stoelinga discovered that Keystone would not immediately invalidate tokens when deleting users via the v2 API. A deleted user would be able to continue to use resources until the token lifetime expired.

openSUSE openSUSE-SU-2013:0949-1 openstack-keystone 2013-06-10
Fedora FEDORA-2013-8048 openstack-keystone 2013-05-22
Ubuntu USN-1830-1 keystone 2013-05-16

Comments (none posted)

openstack-keystone: insecure signing directory

Package(s):openstack-keystone CVE #(s):CVE-2013-2030
Created:May 22, 2013 Updated:June 27, 2013
Description: From the Openwall advisory:

Grant Murphy from Red Hat and Anton Lundin both independently reported a vulnerability in Nova's default location for the Keystone middleware signing directory (signing_dir). By previously setting up a malicious directory structure, an attacker with local shell access on the Nova node could potentially issue forged tokens that would be accepted by the middleware. Only setups that use the default value for signing_dir are affected. Note that future versions of the Keystone middleware will issue a warning if an insecure signing directory is used.

openSUSE openSUSE-SU-2013:1087-1 openstack-nova 2013-06-27
Fedora FEDORA-2013-8048 openstack-keystone 2013-05-22

Comments (none posted)

openstack-nova: denial of service

Package(s):nova CVE #(s):CVE-2013-2096
Created:May 17, 2013 Updated:July 29, 2013

From the Ubuntu advisory:

Loganathan Parthipan discovered that Nova did not verify the size of QCOW2 instance storage. An authenticated attacker could exploit this to cause a denial of service by creating an image with a large virtual size with little data, then filling the virtual disk.

Fedora FEDORA-2013-22693 openstack-nova 2013-12-12
Fedora FEDORA-2013-13244 openstack-nova 2013-07-29
Fedora FEDORA-2013-13244 novnc 2013-07-29
Ubuntu USN-1831-2 nova 2013-05-28
Ubuntu USN-1831-1 nova 2013-05-16

Comments (none posted)

openswan: code execution

Package(s):openswan CVE #(s):CVE-2013-2053
Created:May 16, 2013 Updated:January 20, 2014

From the Red Hat advisory:

A buffer overflow flaw was found in Openswan. If Opportunistic Encryption were enabled ("oe=yes" in "/etc/ipsec.conf") and an RSA key configured, an attacker able to cause a system to perform a DNS lookup for an attacker-controlled domain containing malicious records (such as by sending an email that triggers a DKIM or SPF DNS record lookup) could cause Openswan's pluto IKE daemon to crash or, potentially, execute arbitrary code with root privileges. With "oe=yes" but no RSA key configured, the issue can only be triggered by attackers on the local network who can control the reverse DNS entry of the target system. Opportunistic Encryption is disabled by default. (CVE-2013-2053)

Debian DSA-2893-1 openswan 2014-03-31
Oracle ELSA-2014-0185 openswan 2014-02-18
Gentoo 201401-09 openswan 2014-01-18
SUSE SUSE-SU-2013:1150-1 openswan 2013-07-05
Mandriva MDVSA-2013:231 openswan 2013-09-12
Mageia MGASA-2013-0157 openswan 2013-05-25
Oracle ELSA-2013-0827 openswan 2013-05-15
Oracle ELSA-2013-0827 openswan 2013-05-15
CentOS CESA-2013:0827 openswan 2013-05-16
CentOS CESA-2013:0827 openswan 2013-05-15
Red Hat RHSA-2013:0827-01 openswan 2013-05-15

Comments (none posted)

openvpn: possible plaintext recovery

Package(s):openvpn CVE #(s):CVE-2013-2061
Created:May 16, 2013 Updated:October 3, 2014

From the OpenVPN advisory:

OpenVPN 2.3.0 and earlier running in UDP mode are subject to chosen ciphertext injection due to a non-constant-time HMAC comparison function. Plaintext recovery may be possible using a padding oracle attack on the CBC mode cipher implementation of the crypto library, optimistically at a rate of about one character per 3 hours. PolarSSL seems vulnerable to such an attack; the vulnerability of OpenSSL has not been verified or tested.

OpenVPN servers are typically configured to silently drop packets with the wrong HMAC. For this reason measuring the processing time of the packets is not trivial without a MITM position. In practice, the attack likely needs some target-specific information to be effective.

The severity of this vulnerability can be considered low. Only if OpenVPN is configured to use a null cipher can arbitrary plaintext be injected, which would completely open up this attack vector.
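The root cause is an HMAC check that bails out at the first mismatched byte, so the comparison time reveals how much of a forged tag is correct. The standard fix is a constant-time compare that always inspects every byte; a minimal sketch of the idiom (not OpenVPN's actual code):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: the loop never exits early, so the run
 * time does not depend on where (or whether) the buffers differ, and
 * a padding-oracle attacker learns nothing from timing. */
int consttime_memcmp(const void *a, const void *b, size_t n)
{
    const uint8_t *pa = a;
    const uint8_t *pb = b;
    uint8_t diff = 0;
    size_t i;

    for (i = 0; i < n; i++)
        diff |= pa[i] ^ pb[i];   /* accumulate differences, never branch */
    return diff;                 /* 0 only if every byte matched */
}
```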

Ubuntu USN-2368-1 openvpn 2014-10-02
Gentoo 201311-13 openvpn 2013-11-20
openSUSE openSUSE-SU-2013:1649-1 openvpn 2013-11-09
openSUSE openSUSE-SU-2013:1645-1 openvpn 2013-11-09
Mandriva MDVSA-2013:167 openvpn 2013-05-27
Mageia MGASA-2013-0153 openvpn 2013-05-25
Fedora FEDORA-2013-7531 openvpn 2013-05-16
Fedora FEDORA-2013-7552 openvpn 2013-05-16

Comments (none posted)

ruby: object taint bypassing

Package(s):ruby CVE #(s):CVE-2013-2065
Created:May 17, 2013 Updated:May 30, 2013

From the Ruby advisory:

There is a vulnerability in DL and Fiddle in Ruby where tainted strings can be used by system calls regardless of the $SAFE level set in Ruby.

Debian-LTS DLA-235-1 ruby1.9.1 2015-05-30
Ubuntu USN-2035-1 ruby1.8, ruby1.9.1 2013-11-27
openSUSE openSUSE-SU-2013:1611-1 ruby19 2013-10-30
Fedora FEDORA-2013-8411 ruby 2013-05-30
Fedora FEDORA-2013-8375 ruby 2013-05-30
Mageia MGASA-2013-0155 ruby 2013-05-25
Slackware SSA:2013-136-02 ruby 2013-05-16

Comments (none posted)

thunderbird: multiple vulnerabilities

Package(s):thunderbird CVE #(s):CVE-2013-0801 CVE-2013-1670 CVE-2013-1672 CVE-2013-1674 CVE-2013-1675 CVE-2013-1676 CVE-2013-1677 CVE-2013-1678 CVE-2013-1679 CVE-2013-1680 CVE-2013-1681
Created:May 17, 2013 Updated:June 28, 2013

From the Mozilla release notes:

Security researcher Abhishek Arya (Inferno) of the Google Chrome Security Team used the Address Sanitizer tool to discover a series of use-after-free, out-of-bounds read, and invalid write problems, rated moderate to critical as security issues in shipped software. Some of these issues are potentially exploitable, allowing for remote code execution. We would also like to thank Abhishek for reporting additional use-after-free flaws in dir=auto code introduced during Firefox development. These were fixed before general release. (CVE-2013-1676, CVE-2013-1677, CVE-2013-1678, CVE-2013-1679, CVE-2013-1680, CVE-2013-1681)

Mozilla community member Ms2ger discovered that some DOMSVGZoomEvent functions are used without being properly initialized, causing uninitialized memory to be used when they are called by web content. This could lead to information leakage to sites, depending on the contents of this uninitialized memory. (CVE-2013-1675)

Security researcher Nils reported a use-after-free when resizing video while playing. This could allow for arbitrary code execution. (CVE-2013-1674)

Security researcher Seb Patane reported an issue with the Mozilla Maintenance Service on Windows. The issue allows unprivileged users to escalate their privileges locally through the system privileges used by the service when it interacts with local malicious software, bypassing integrity checks. Local file system access is necessary for this issue to be exploitable; it cannot be triggered through web content. (CVE-2013-1672)

Security researcher Cody Crews reported a method of calling a content-level constructor that allows this constructor to have chrome-privileged access. This affects chrome object wrappers (COW) and allows write actions on objects when only read actions should be allowed, which can lead to cross-site scripting (XSS) attacks. (CVE-2013-1670)

Mozilla developers identified and fixed several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code. (CVE-2013-0801)

openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201309-23 firefox 2013-09-27
SUSE SUSE-SU-2013:1152-1 Mozilla Firefox 2013-07-05
Fedora FEDORA-2013-11776 thunderbird 2013-06-28
Fedora FEDORA-2013-11799 thunderbird 2013-06-28
openSUSE openSUSE-SU-2013:0929-1 xulrunner 2013-06-10
Fedora FEDORA-2013-8284 thunderbird 2013-06-02
openSUSE openSUSE-SU-2013:0825-1 MozillaFirefox 2013-05-24
Slackware SSA:2013-136-01 thunderbird 2013-05-16
Fedora FEDORA-2013-8298 thunderbird 2013-05-17
openSUSE openSUSE-SU-2013:0946-1 MozillaFirefox 2013-06-10
openSUSE openSUSE-SU-2013:0896-1 Mozilla 2013-06-10

Comments (none posted)

tomcat: information disclosure

Package(s):tomcat CVE #(s):CVE-2013-2071
Created:May 21, 2013 Updated:July 2, 2013
Description: From the Red Hat bugzilla:

An information disclosure flaw was found in the way the asynchronous context implementation of Apache Tomcat, an Apache Servlet/JSP engine, performed request information management in certain circumstances (certain elements of a previous request might have been exposed to the current request). If an application used AsyncListeners that threw RuntimeExceptions, a remote attacker could use this flaw to possibly obtain sensitive information.

Gentoo 201412-29 tomcat 2014-12-14
Debian DSA-2897-1 tomcat7 2014-04-08
openSUSE openSUSE-SU-2013:1306-1 tomcat 2013-08-07
Mageia MGASA-2013-0191 tomcat7 2013-07-01
Ubuntu USN-1841-1 tomcat6, tomcat7 2013-05-28
Fedora FEDORA-2013-7999 tomcat 2013-05-21
Fedora FEDORA-2013-7993 tomcat 2013-05-21

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.10-rc2, released on May 20. Linus says: "For being an -rc2, it's not unreasonably sized, but I did take a few pulls that I wouldn't have taken later in the rc series. So it's not exactly small either. We've got arch updates (PPC, MIPS, PA-RISC), we've got driver fixes (net, gpu, target, xen), and we've got filesystem updates (btrfs, ext4 and ceph - rbd)."

Stable updates: 3.9.3, 3.4.46, and 3.0.79 were released on May 19; another stable update came out on May 20.

Comments (none posted)

Ktap 0.1 released

A new kernel tracing tool called "ktap" has made its first release. "KTAP have different design principles from Linux mainstream dynamic tracing language in that it's based on bytecode, so it doesn't depend upon GCC, doesn't require compiling a kernel module, safe to use in production environment, fulfilling the embedded ecosystem's tracing needs." It's in an early state; the project is looking for testers and contributors.

Comments (10 posted)

Merging zswap

By Jonathan Corbet
May 22, 2013
As reported in our Linux Storage, Filesystem, and Memory Management Summit coverage, the decision was made to merge the zswap compressed swap cache subsystem while holding off on the rather more complex "zcache" subsystem. But conference decisions can often run into difficulties during the implementation process; that has proved to be the case here.

Zswap developer Seth Jennings duly submitted the code for consideration for the 3.11 development cycle. He quickly ran into opposition from zcache developer Dan Magenheimer; Dan had agreed with the merging of zswap in principle, but he expressed concerns that zswap may perform poorly in some situations. According to Dan, it would be better to fix these problems before merging the code:

I think the real challenge of zswap (or zcache) and the value to distros and end users requires us to get this right BEFORE users start filing bugs about performance weirdness. After which most users and distros will simply default to 0% (i.e. turn zswap off) because zswap unpredictably sometimes sucks.

The discussion went around in circles the way that in-kernel compression discussions often do. In the end, though, the consensus among memory management developers (but not Dan) was probably best summarized by Mel Gorman:

I think there is a lot of ugly in there and potential for weird performance bugs. I ran out of beans complaining about different parts during the review but fixing it out of tree or in staging like it's been happening to date has clearly not worked out at all.

So the end result is likely to be that zswap will be merged for 3.11, but with a number of warnings attached to it. Then, with luck, the increased visibility of the code will motivate developers to prepare patches and improve the code to a point where it is production-ready.

Comments (2 posted)

Kernel development news

Ktap — yet another kernel tracer

By Jonathan Corbet
May 22, 2013
Once upon a time, usable tracing tools for Linux were few and far between. Now, instead, there is a wealth of choices, including the in-kernel ftrace facility, SystemTap, and the LTTng suite; Oracle also has a port of DTrace for its distribution, available to its paying customers. On May 21, another alternative showed up in the form of the ktap 0.1 release. Ktap does not offer any major features that are not available from the other tracing tools, but there may still be a place for it in the tracing ecosystem.

Ktap appears to be strongly oriented toward the needs of embedded users; that has affected a number of the design decisions that have been made. At the top of the list was the decision to embed a byte-code interpreter into the kernel and compile tracing scripts for that interpreter. That is a big difference from SystemTap, which, in its current implementation, compiles a tracing script into a separate module that must be loaded into the kernel. This difference matters because an embedded target often will not have a full compiler toolchain installed on it; even if the tools are available, compiling and linking a module can be a slow process. Compiling a ktap script, instead, requires a simple utility to produce byte code for the ktap kernel module.

That compiler implements a language that is based on Lua. It is C-like, but it is dynamically typed, has a dictionary-like "table" type, and lacks arrays and pointers. There is a simple function definition mechanism which can be used like this:

    function eventfun (e) {
	printf("%d %d\t%s\t%s", cpu(), pid(), execname(), e.tostring())
    }

The resulting function will, when called, output the current CPU number, process ID, executing program name, and the string representation of the passed-in event e. There is a probe-placement function, so ktap could arrange to call the above function on system call entry with:

    kdebug.probe("tp:syscalls", eventfun)

A quick run on your editor's system produced a bunch of output like:

    3 2745	Xorg	sys_setitimer(which: 0, value: 7fff05967ec0, ovalue: 0)
    3 2745	Xorg	sys_setitimer -> 0x0
    2 27467	as	sys_mmap(addr: 0, len: 81000, prot: 3, flags: 22, fd: ffffffff, off: 0)
    2 27467	as	sys_mmap -> 0x2aaaab67c000
    2 3402	gnome-shell	sys_mmap(addr: 0, len: 97b, prot: 1, flags: 2, fd: 21, off: 0)
    2 3402	gnome-shell	sys_mmap -> 0x7f4ec4bfb000

There are various utility functions for generating timer requests, creating histograms, and so on. So, for example, this script:

    hist = {}

    function eventfun (e) {
	if (e.sc_is_enter) {
	    hist[e.name] += 1	# tally each syscall-entry event
				# (field name and aggregation semantics
				# assumed; the body was elided here)
	}
    }

    kdebug.probe("tp:syscalls", eventfun)

    kdebug.probe_end(function () {
	histogram(hist)		# print the accumulated table on exit
    })
is sufficient to generate a histogram of system calls over the period of time from when it starts until when the user interrupts it. Your editor ran it with a kernel build running and got output looking like this:

                value ------------- Distribution ------------- count
        sys_enter_open |@@@@@@@@                               587779    
       sys_enter_close |@@@@                                   343728    
    sys_enter_newfstat |@@@@                                   331459    
        sys_enter_read |@@@                                    283217    
        sys_enter_mmap |@@@                                    243458    
       sys_enter_ioctl |@@                                     219364    
      sys_enter_munmap |@@                                     165006    
       sys_enter_write |@                                      128003    
        sys_enter_poll |@                                      77311     
    sys_enter_recvfrom |                                       52898     

The syntax for setting probe points closely matches that used by perf; probes can be set on specific functions or tracepoints, for example. It is possible to hook into the perf events mechanism to get other types of hardware or software events, and memory breakpoints are supported. The (sparse) documentation packaged with the code also suggests that ktap is able to set user-space probes, but none of the example scripts packaged with the tool demonstrate that capability.

Ktap scripts can manipulate the return value of probed functions within the kernel. There does not currently appear to be a way to manipulate kernel-space data directly, but that could presumably be added (along with lots of other features) in the future. What's there now is a proof of concept as much as anything; it is a quick way to get some data out of the kernel but does not offer a whole lot that is not available using the existing ftrace interface.

For those who want to play with it, the first step is a simple:

    git clone

From there, building the code and running the sample scripts is a matter of a few minutes of relatively painless work. There is the ktapvm module, which must, naturally, be loaded into the kernel. That module creates a special virtual file (ktap/ktapvm under the debugfs root) that is used by the ktap binary to load and run compiled scripts.

Ktap in its current form is limited, without a lot of exciting new functionality. Even so, it seems to have generated a certain amount of interest in the development community. Getting started with most tracing tools usually seems to involve a fair amount of up-front learning; ktap, perhaps, is a more approachable solution for a number of users. The whole thing is about 10,000 lines of code; it shouldn't be hard for others to run with and extend. If developers start to take the bait, interesting things could happen with this project.

Comments (4 posted)

Low-latency Ethernet device polling

By Jonathan Corbet
May 21, 2013
Linux is generally considered to have one of the most fully featured and fast networking stacks available. But there are always users who are not happy with what's available and who want to replace it with something more closely tuned for their specific needs. One such group consists of people with extreme low latency requirements, where each incoming packet must be responded to as quickly as possible. High-frequency trading systems fall into this category, but there are others as well. This class of user is sometimes tempted to short out the kernel's networking stack altogether in favor of a purely user-space (or purely hardware-based) implementation, but that has problems of its own. A relatively small patch to the networking subsystem might just be able to remove that temptation for at least some of these users.

Network interfaces, like most reasonable peripheral devices, are capable of interrupting the CPU whenever a packet arrives. But even a moderately busy interface can handle hundreds or thousands of packets per second; per-packet interrupts would quickly overwhelm the processor with interrupt-handling work, leaving little time for getting useful tasks done. So most interface drivers will disable the per-packet interrupt when the traffic level is high enough and, with cooperation from the core networking stack, occasionally poll the device for new packets. There are a number of advantages to doing things this way: vast numbers of interrupts can be avoided, incoming packets can be more efficiently processed in batches, and, if packets must be dropped in response to load, they can be discarded in the interface before they ever hit the network stack. Polling is thus a win for almost all situations where there is any significant amount of traffic at all.

Extreme low-latency users see things differently, though. The time between a packet's arrival and the next poll is just the sort of latency that they are trying to avoid. Re-enabling interrupts is not a workable solution, though; interrupts, too, are a source of latency. Thus the drive for user-space solutions where an application can simply poll the interface for new packets whenever it is prepared to handle new messages.

Eliezer Tamir has posted an alternative solution in the form of the low-latency Ethernet device polling patch set. With this patch, an application can enable polling for new packets directly in the device driver, with the result that those packets will quickly find their way into the network stack.

The patch adds a new member to the net_device_ops structure:

    int (*ndo_ll_poll)(struct napi_struct *dev);

This function should cause the driver to check the interface for new packets and flush them into the network stack if they exist; it should not block. The return value is the number of packets it pushed into the stack, or zero if no packets were available. Other return values include LL_FLUSH_BUSY, indicating that ongoing activity prevented the processing of packets (the inability to take a lock would be an example) or LL_FLUSH_FAILED, indicating some sort of error. The latter value will cause polling to stop; LL_FLUSH_BUSY, instead, appears to be entirely ignored.

Within the networking stack, the ndo_ll_poll() function will be called whenever polling the interface seems like the right thing to do. One obvious case is in response to the poll() system call. Sockets marked as non-blocking will only poll once; otherwise polling will continue until some packets destined for the relevant socket find their way into the networking stack, up until the maximum time controlled by the ip_low_latency_poll sysctl knob. The default value for that knob is zero (meaning that the interface will only be polled once), but the "recommended value" is 50µs. The end result is that, if unprocessed packets exist when poll() is called (or arrive shortly thereafter), they will be flushed into the stack and made available immediately, with no need to wait for the stack itself to get around to polling the interface.
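Assuming the knob appears under the usual IPv4 sysctl tree (the exact path is a guess based on the name given in the patch set), enabling the recommended busy-poll budget would look like:

```shell
# Let poll()/read() busy-poll the interface for up to 50 microseconds;
# the default of 0 means the device is polled only once per call.
sysctl -w net.ipv4.ip_low_latency_poll=50
```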

Another patch in the series adds another call site in the TCP code. If a read() is issued on an established TCP connection and no data is ready for return to user space, the driver will be polled to see if some data can be pushed into the system. So there is no need for a separate poll() call to get polling on a TCP socket.

This patch set makes polling easy to use by applications; once it is configured into the kernel, no application changes are needed at all. On the other hand, the lack of application control means that every poll() or TCP read() will go into the polling code and, potentially, busy-wait for as long as the ip_low_latency_poll knob allows. It is not hard to imagine that, on many latency-sensitive systems, the hard response-time requirements really only apply to some connections, while others have no such requirements. Polling on those less-stringent sockets could, conceivably, create new latency problems on the sockets that the user really cares about. So, while no reviewer has called for it yet, it would not be surprising to see the addition of a setsockopt() operation to enable or disable polling for specific sockets before this code is merged.

It almost certainly will be merged at some point; networking maintainer Dave Miller responded to an earlier posting with "I just wanted to say that I like this work a lot." There are still details to be worked out and, presumably, a few more rounds of review to be done, so low-latency sockets may not be ready for the 3.11 merge window. But it would be surprising if this work took much longer than that to get into the mainline kernel.

Comments (7 posted)

An unexpected perf feature

By Jake Edge
May 21, 2013

Local privilege escalations seem to be regularly found in the Linux kernel these days, but they usually aren't quite so old—more than two years since the release of 2.6.37—or backported into even earlier kernels. But CVE-2013-2094 is just that kind of bug, with a now-public exploit that apparently dates back to 2010. It (ab)uses the perf_event_open() system call, and the bug was backported to the 2.6.32 kernel used by Red Hat Enterprise Linux (and its clones: CentOS, Oracle, and Scientific Linux). While local privilege escalations are generally considered less worrisome on systems without untrusted users, it is easy to forget that UIDs used by network-exposed services should also qualify as untrusted—compromising a service, then using a local privilege escalation, leads directly to root.

The bug was found by Tommi Rantala when running the Trinity fuzz tester and was fixed in mid-April. At that time, it was not recognized as a security problem; the release of an exploit in mid-May certainly changed that. The exploit is dated 2010 and contains some possibly "not safe for work" strings. Its author expressed surprise that it wasn't seen as a security problem when it was fixed. That alone is an indication (if one was needed) that people in various colored hats are scrutinizing kernel commits—often in ways that the kernel developers are not.

The bug itself was introduced in 2010, and made its first appearance in the 2.6.37 kernel in January 2011. It treated the 64-bit perf event ID differently in an initialization routine (perf_swevent_init() where the ID was sanity checked) and in the cleanup routine (sw_perf_event_destroy()). In the former, it was treated as a signed 32-bit integer, while in the latter as an unsigned 64-bit integer. The difference may not seem hugely significant, but, as it turns out, it can be used to effect a full compromise of the system by privilege escalation to root.

The key piece of the puzzle is that the event ID is used as an array index in the kernel. It is a value that is controlled by user space, as it is passed in via the struct perf_event_attr argument to perf_event_open(). Because it is sanity checked as an int, the upper 32 bits of event_id can be anything the attacker wants, so long as the lower 32 bits are considered valid. Because event_id is used as a signed value, the test:

    if (event_id >= PERF_COUNT_SW_MAX)
            return -ENOENT;
doesn't exclude negative IDs, so anything with bit 31 set (i.e. 0x80000000) will be considered valid.
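A minimal Python sketch of that flawed comparison may make it clearer. The constant value and the 32-bit truncation model here are assumptions based on the description above, not the actual kernel code:

```python
import ctypes

PERF_COUNT_SW_MAX = 9  # illustrative; the real value comes from the kernel's enum

def passes_sanity_check(event_id):
    """Model the buggy check: a 64-bit user-supplied ID compared as a signed 32-bit int."""
    as_int32 = ctypes.c_int32(event_id & 0xFFFFFFFF).value
    return as_int32 < PERF_COUNT_SW_MAX  # negative values always "pass"

print(passes_sanity_check(0x80000000))           # bit 31 set: negative, so it passes
print(passes_sanity_check(0xFFFFFFFF00000005))   # upper 32 bits are simply ignored
print(passes_sanity_check(10))                   # a genuinely out-of-range ID is rejected
```

The full 64-bit value is used later as the array index, so an attacker who passes the check still controls where the kernel writes.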

The exploit code itself is rather terse, obfuscated, and hard to follow, but Brad Spengler has provided a detailed description of the exploit on Reddit. Essentially, it uses a negative value for the event ID to cause the kernel to change user-space memory. The exploit uses mmap() to map an area of user-space memory that will be targeted when the negative event ID is passed. It sets the mapped area to zeroes, then calls perf_event_open(), immediately followed by a close() on the returned file descriptor. That triggers:

    static_key_slow_dec(&perf_swevent_enabled[event_id]);

in the sw_perf_event_destroy() function. The code then looks for non-zero values in the mapped area, which can be used (along with the event ID value and the size of the array elements) to calculate the base address of the perf_swevent_enabled array.

But that value is just a steppingstone toward the real goal. The exploit gets the base address of the interrupt descriptor table (IDT) by using the sidt assembly language instruction. From that, it targets the overflow interrupt vector (0x4), using the increment in perf_swevent_init():

    static_key_slow_inc(&perf_swevent_enabled[event_id]);

By setting event_id appropriately, it can turn the address of the overflow interrupt handler into a user-space address.

The exploit arranges to mmap() the range of memory where the clobbered interrupt handler will point and fills it with a NOP sled followed by shellcode that accomplishes its real task: finding the UID/GIDs and capabilities in the credentials of the current process so that it can modify them to be UID and GID 0 with full capabilities. At that point, in what almost feels like an afterthought, it spawns a shell—a root shell.

The exploit's dependence on a number of architecture- or kernel-build-specific features (not least x86 assembly) makes it rather fragile. It also contains bugs, according to Spengler. It doesn't work on 32-bit x86 systems because it uses a hard-coded system call number (298) passed to syscall(), which is different (336) for 32-bit x86 kernels. It also won't work on Ubuntu systems because the size of the perf_swevent_enabled array elements is different. The following will thwart the existing exploit:

    echo 2 > /proc/sys/kernel/perf_event_paranoid
But a minor change to the flags passed to perf_event_open() will still allow the privilege escalation. None of these measures is a real defense against the vulnerability itself, though they do defend against this specific exploit. Spengler's analysis has more details, both on the existing exploit and on ways to change it to work around its fragility.

The exploit uses syscall(), presumably because perf_event_open() is not (yet?) available in the GNU C library, but calling syscall() directly could also serve to evade any argument checks done in the library. Any sanity checking done by the library must also be done in the kernel, because using syscall() bypasses the library wrappers entirely. Kernels configured without support for perf events (i.e. CONFIG_PERF_EVENTS not set) are unaffected by the bug, as they lack the system call entirely.

There are several kernel hardening techniques that would help to keep this kind of bug from leading to system compromise. The grsecurity UDEREF mechanism would prevent the kernel from dereferencing the user-space addresses, so the perf_swevent_enabled base address could not be calculated. The PaX/grsecurity KERNEXEC technique would prevent the user-space shellcode from executing. While these techniques can inhibit this kind of bug from allowing privilege escalation, they impose costs (e.g. performance) that have made them unattractive to the mainline developers. Suitably configured kernels on hardware that supports it would be protected by supervisor mode access prevention (SMAP) and supervisor mode execution protection (SMEP): the former would prevent access to the user-space addresses, much like UDEREF, while the latter would prevent execution of user-space code, as KERNEXEC does.

This is a fairly nasty hole in the kernel, in part because it has existed for so long (and apparently been known to some, at least, for most of that time). Local privilege escalations tend to be somewhat downplayed because they require an untrusted local user, but web applications (in particular) can often provide just such a user. Dave Jones's Trinity has clearly shown its worth over the last few years, though he was not terribly pleased with how long it took for fuzzing to find this bug.

Jones suspects there may be "more fruit on that branch somewhere", so more and better fuzzing of the perf system calls (and the kernel as a whole) is indicated. In addition, the exploit author has at least suggested that he has more exploits waiting in the wings (not necessarily in the perf subsystem); it is quite likely that others do as well. Finding and fixing these security holes is an important task; auditing the commit stream to help ensure that such problems aren't introduced in the first place would be quite useful as well. One hopes that companies using Linux find a way to fund more work in this area.

Comments (56 posted)

Patches and updates

Kernel trees

  • Sebastian Andrzej Siewior: 3.8.13-rt9 (May 21, 2013)


Page editor: Jonathan Corbet


Empty symlinks and full POSIX compliance

By Jonathan Corbet
May 22, 2013
Symbolic links are a mechanism to make one pathname be an alias for another. One would think that there would be little value in an empty symbolic link — one where the destination pathname is the empty string — but that doesn't keep people from trying to create such links, an act that the Linux virtual filesystem layer does not allow. It turns out that its refusal to allow the creation of empty symbolic links puts Linux out of compliance with the POSIX standard. The real question, though, might be: how much does that really matter?

Pointing to the void

It turns out that many Unix-based systems will happily allow a command like:

    ln -s "" link-to-nothing

On a Linux system, though, that command will fail with a "No such file or directory" error. This is, as was pointed out in a bug report last January, a somewhat confusing message. If the empty string is replaced by the name of a nonexistent file, no such error results. In other words, for most cases, the lack of an existing file is not a concern. So it seems strange that Linux would gripe about "no such file" in the empty string case.
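The same refusal is visible from Python; this is just a quick demonstration and assumes it is run on a Linux system:

```python
import errno
import os
import tempfile

d = tempfile.mkdtemp()
try:
    os.symlink("", os.path.join(d, "link-to-nothing"))
    err = None
except OSError as e:
    err = e.errno

# On Linux, the kernel rejects the empty target outright with ENOENT
print(err == errno.ENOENT)
```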

As part of the ensuing discussion, it turned out that the POSIX standard was not consistent with how empty symbolic links (that already exist in the filesystem) are handled in existing systems. Solaris systems will, when such a link is dereferenced, treat it as a link to the current directory; essentially, an empty link is treated as if it were "." instead. BSD systems respond differently: they take the position that no such file can exist and duly return the "no such file" error. Neither of those responses was compliant with POSIX, a problem which was only fixed in early May, when the standard was updated to allow either the Solaris or the BSD behavior. The result is a standard that explicitly says one cannot know how the system will resolve an empty symbolic link; they might work, or they might not.

How Linux handles an attempt to resolve an empty symbolic link that already exists within a filesystem is not well defined. Some of the work of link resolution is pushed down into filesystem-specific code, so the behavior may depend on which filesystem type is in use. It is hard to test because, as mentioned above, Linux does not allow the creation of empty symbolic links, so they can only come by way of a filesystem from another system. But, in general, an attempt to resolve an empty symbolic link can be expected to return a "No such file or directory" response.

The refusal to create an empty symbolic link, as it turns out, is contrary to how POSIX thinks the symlink() system call should work. The standard text says explicitly that the target string "shall be treated only as a character string and shall not be validated as a pathname." Empty strings are valid character strings, and the implementation is not allowed to care that they cannot be the name of a real file, so, by the standard, the creation of such a symbolic link should be allowed.

Back in January, Pádraig Brady posted a patch enabling the creation of empty symbolic links. It did not generate much interest at that time. He followed up in May, after the standard had been updated; this time Al Viro expressed his feelings on the matter:

Functionality in question is utterly pointless, seeing that semantics of such symlinks is OS-dependent anyway *and* that blanket refusal to traverse such beasts is a legitimate option. What's the point in allowing to create them in the first place?

And that is pretty much where the discussion stopped.

Linux and POSIX compliance

That said, it would not be entirely surprising if such a patch were to make it into the kernel at some point. The cost of enabling the creation of empty symbolic links is essentially zero, and adding that capability would bring Linux a little closer to POSIX compliance. But true POSIX compliance, which is a function of both the kernel and the low-level libraries that sit above it, still seems like a distant goal for Linux distributions as a whole.

As a Unix-like system, Linux is not that far removed from compliance with the POSIX standard. Linux developers normally try to adhere to such standards when it makes sense to do so, but they generally feel no need to apply changes that, in their opinion, do not make technical sense just because a standard document calls for it. The reaction to the creation of empty symbolic links is a case in point. The value of closer adherence to POSIX is not seen as being high enough to justify the addition of a "feature" that seems nonsensical.

The value question is an interesting one. Getting certified to the point where one can use the POSIX trademark is a matter of passing the verification test suite, applying for certification, and handing over a relatively small amount of money as described in the fee schedule [PDF]. An enterprise Linux distributor wishing to claim POSIX compliance could almost certainly attain this certification in a relatively short period of time with an investment that would be far smaller than was required for, say, Common Criteria security certification. Carrying some non-mainline patches to the kernel and C library would likely be necessary, but enterprise distributors have generally shown little reluctance to do that when it suits their interests.

But there are no POSIX-certified Linux distributions on the market now. As far as your editor can tell, the only time a distribution has achieved that certification was when Linux-FT claimed it in 1995. That work was (or was not, depending on which side of the argument you listen to) acquired by Caldera shortly thereafter; Caldera, too, intended to achieve POSIX certification. That certification does not appear to have happened, and Caldera, of course, followed its own unhappy path to its doom. Since then, corporate interest in POSIX certification for Linux has been subdued, to say the least.

One can only conclude that the commercial value of a 100% certified POSIX-compliant distribution is not enough to justify even a relatively small level of effort. If distributors were losing business due to the lack of certification, they would be doing something about it. But, it seems, "almost POSIX" is good enough for users, especially in an era where Linux is the preferred platform for many applications.

POSIX still has its place; it sets the expectations for the low-level interface provided by the operating system and helps to ensure compatibility. But, increasingly, most current development work is outside of the scope of POSIX. The standard cannot hope to keep up with the changes being made to Linux at the kernel level and above. We live in a fast-changing world where, in many cases, "what does Linux do?" is the real standard. The developers who are busily pushing Linux forward have little time or patience for working toward complete POSIX compatibility when the interesting problems are elsewhere, so a fully POSIX-compliant distribution seems unlikely to show up in the near future.

Comments (27 posted)

Brief items

Distribution quotes of the week

I admit to everything. I am merely an artificial creature, designed by Lennart and sent here by the GNOME cabal, to end Debian as it is and turn it into a useless system that is not the UNIX way™.
-- Josselin Mouette

When I finally got my new fangled EGA screen, it was *configurable*: you could use a button to choose to have it in orange, green, or with actual colours. Back in the good old days when you were actually given a choice.

Nowadays, if I type my login name all uppercase, I don't even get an uppercase "PASSWORD:" prompt anymore :(

-- Enrico Zini

Arrows move the cursor, enter follows links, '/' searches. And don't dare touch anything else because nobody knows what could happen!
-- Michał Górny

Comments (7 posted)

Debian GNU/Hurd 2013 released

While it is not an official Debian release, the Debian GNU/Hurd team has announced the release of Debian GNU/Hurd 2013. GNU Hurd is a Unix-style kernel based on the Mach microkernel and Debian GNU/Hurd makes much of the Debian system available atop that kernel.

Debian GNU/Hurd is currently available for the i386 architecture with more than 10.000 software packages available (more than 75% of the Debian archive, and more to come!).

Please make sure to read the configuration information, the FAQ, and the translator primer to get a grasp of the great features of GNU/Hurd.

Due to the very small number of developers, our progress of the project has not been as fast as other successful operating systems, but we believe to have reached a very decent state, even with our limited resources.

Comments (35 posted)

Mageia 3 released

The much-delayed Mageia 3 release is out. "We dedicate this release to the memory of Eugeni Dodonov, our friend, our colleague and a great inspiration to those he left behind. We miss his brilliance, his courtesy and his dedication." Changes include an RPM upgrade, the 3.8 kernel, availability of GRUB2 (but GRUB is still the default bootloader), and more. See the release notes for lots of details.

Comments (6 posted)

NetBSD 6.1

The NetBSD Project has announced NetBSD 6.1, the first feature update of the NetBSD 6 release branch. "It represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and enhancements." See the changelog for details.

Comments (41 posted)

Pidora 18

Pidora is a Fedora remix for the Raspberry Pi. Pidora 18 has been released. "It is based on a brand new build of Fedora for the ARMv6 architecture with greater speed and includes packages from the Fedora 18 package set."

Full Story (comments: none)

Distribution News

Debian GNU/Linux

"Bits from Debian" editors delegation

DPL Lucas Nussbaum has appointed Ana Beatriz Guerrero López and Francesca Ciceri as Bits from Debian editors. "The Bits from Debian Editors oversee and maintain the official blog of the Debian project, "Bits from Debian"".

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

OpenMandriva Picks Name, Releases Alpha (OStatic)

OStatic covers recent news from OpenMandriva. "While the rest of Linuxdom was reading of the Debian 7.0 and Mageia 3 releases, the OpenMandriva gang have been hard at it trying to get their new distribution some attention. The OpenMandriva name was made official and an alpha was released into the wild."

Comments (none posted)

Tails 0.18 can install packages on the fly (The H)

The H takes a look at Tails, The Amnesiac Incognito Live System. "Existing Tails users are also strongly urged to upgrade to Tails 0.18; the team lists security vulnerabilities discovered in the previous 0.17.2 release to reinforce that recommendation. Another new feature in the Debian-based distribution is support of obfs3 bridges when using the Tor network. Obfuscation bridges make it harder for the ISP to know a user is using the Tor network by disguising the protocol in use; obfs3 is the latest version of the protocol replacing obfs2 which no longer works in China."

Comments (none posted)

Page editor: Rebecca Sobol


An "enum" for Python 3

By Jake Edge
May 22, 2013

Designing an enumeration type (i.e. "enum") for a language may seem like a straightforward exercise, but the recently "completed" discussions over Python's PEP 435 show that it has a few wrinkles. The discussion spanned several long threads in two mailing lists (python-ideas, python-dev) going back to January in this particular iteration, but the idea is far older than that. A different approach was suggested in PEP 354, which was proposed in 2005 but rejected at that time, largely due to lack of widespread interest. A 2010 discussion also led nowhere (at least in terms of the standard library), but the most recent discussions finally bore fruit: Guido van Rossum accepted PEP 435 on May 9.

The basic idea is to have a class that implements an enum, which, in Python, might look a lot like:

    from enum import Enum

    class Color(Enum):
        red = 1
        green = 2
        blue = 3
That would allow using Color.red (and the others) as a constant, effectively. Not only would Color.blue have a value (3), but it would also have a name ('blue') and an order (based on the declaration order). Enums can also be iterated over, so that:

    for color in Color:
        print(, color.value)

gives:

    red 1
    green 2
    blue 3

Along the way, there were several different enum proposals made. Ethan Furman offered one that incorporated multiple types of enum, including ones for bit flags, string-valued enums, and automatically numbered sequences. Alex Stewart came up with a different syntax for defining enums to avoid the requirement to specify each numeric value. Neither made it to the PEP stage, though pieces of both were adopted into the first draft of PEP 435, which was authored by Eli Bendersky and Barry Warsaw.

There are a couple of fairly obvious motivations for adding enums, which were laid out in the PEP. An immutable set of related, constant values is a useful construct. Making them their own type, rather than just using sequences of some other basic type (like integer or string), means that error checking can be done (i.e. no duplicates) and that nonsensical operations can raise errors (e.g. Color.red * 42). Finally, it is convenient to be able to declare enum members once but to still be able to get a string representation of the member name (i.e. without some kind of overt assignment like = 'green').

Some of the use cases mentioned early in the discussion of the PEP are for values like stdin and stdout, the flags for socket() or seek() calls, HTTP error codes, opcodes from the dis (Python bytecode disassembly) module, and so forth. One of the questions that was immediately raised about the original version of the PEP was its insistence that "Enums are not integers!", so ordered comparisons like:

    Color.red < Color.blue

would raise an exception, though equality tests would not:

    print(Color.green == 2)

To some, that seemed to run directly counter to the whole idea of an enum type, but allowing ordered comparisons has some unexpected consequences, as Warsaw described. Two different enums could be compared, with potentially nonsensical results; a member of Color could, for example, be tested for equality against a member of some entirely unrelated enum.
In general, the belief is that "named integers" are a small subset of the use cases for enums, and that most uses do not need ordered comparisons. But the final accepted PEP does have an IntEnum variant that provides the ordering desired by some. IntEnum is also a subclass of int, so its members can be used to replace user-facing constants in the standard library that are already treated as integers (e.g. HTTP error codes, socket() and seek() flags, etc.).
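A short sketch of the difference between the two variants (the HttpStatus enum here is made up for illustration):

```python
from enum import Enum, IntEnum

class Color(Enum):
    red = 1
    green = 2

class HttpStatus(IntEnum):   # hypothetical example class
    OK = 200
    NOT_FOUND = 404

print(Color.red == 1)        # False: equality with a plain int simply fails quietly

try:
    Color.red < Color.green  # ordered comparison of Enum members raises TypeError
except TypeError:
    print("Enum members are unordered")

try:
    Color.red * 42           # nonsensical arithmetic raises as well
except TypeError:
    print("no arithmetic on Enum members")

print(HttpStatus.OK == 200)                  # True: IntEnum members really are ints
print(HttpStatus.OK < HttpStatus.NOT_FOUND)  # True: so ordering works too
```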

A second revision of the PEP was posted in April, after lengthy discussion in both python-dev and python-ideas. Furman offered up another proposal, this time as an unnumbered PEP with four separate classes for different types of enums. Two different views of enums arose in the discussion, as Furman summarized:

There seems to be two basic camps: those that think an enum should be valueless, and have nothing to do with an integer besides using it to select the appropriate enumerator [...] and those for whom the integer is an integral part of the enumeration, whether for sorting, comparing, selecting an index, or whatever.

The critical aspect of using or not using an integer as the base type is: what happens when an enumerator from one class is compared to an enumerator from another class? If the base type is int and they both have the same value, they'll be equal -- so much for type safety; if the base type is object, they won't be equal, but then you lose your easy to use int aspect, your sorting, etc.

Worse, if you have the base type be an int, but check for enumeration membership such that Enum1.member == 1 == Enum2.member, but Enum1.member != Enum2.member, you open a big can of worms because you just broke equality transitivity (or whatever it's called). We don't want that.

Furman's proposal looked overly complex to Bendersky and others commenting on a fairly short python-ideas thread. Meanwhile, in python-dev, another monster thread was spinning up. The first objection to the revised PEP concerned its choice of raising NotImplementedError for ordered comparisons of enum members; that was quickly dispatched with the recognition that TypeError made far more sense. Other issues, such as the ordered comparison question that was handled with IntEnum in the final version, did not resolve quite as quickly.

One question, originally raised by Antoine Pitrou, concerned the type of the enum members. The early PEP revisions considered Color.red not to be an instance of the Color class, and Warsaw strongly defended that view. At some level, that makes sense (since the members are actually attributes of the class), but it is confusing in other ways. In a sub-thread, Van Rossum, Warsaw, and others looked at the pros and cons of the types of enum members, as well as implementation details of various options. In the end, Van Rossum made some pronouncements on various features, including the question of member type, so:

    isinstance(Color.red, Color)

is now an official part of the specification.

As Python's benevolent dictator for life (BDFL), which is Van Rossum's only-semi-joking title, he can put an end to arguments and/or "bikeshedding" about language features. In the same thread, he made some further pronouncements (along with a plea for a halt to the bikeshedding). It is a privilege that he exercises infrequently, but it is clearly useful to the project to have someone in that role. Much like Linus Torvalds for the kernel, it can be quite helpful to have someone who can stop a seemingly endless thread.

Van Rossum's edicts came after Furman summarized the outstanding issues (after a summary request from Georg Brandl). That is a fairly common occurrence in long-running Python threads: someone will try to boil down the differences into a concise list of outstanding issues. Another nice feature of Python discussions is their tone, which is generally respectful and flame-free. Participants certainly disagree, sometimes strenuously, but the tone is refreshingly different from many other projects' mailing lists.

Not everyone is happy with the end result for enums, however. Andrew Cooke is particularly sad about the outcome. He points out that several expected behaviors for enums are not present in PEP 435:

    class Color(Enum):
        red = 1
        green = 1
is not an error; Color.green is simply an alias for Color.red (a dubious "feature", he noted with a bit of venom). In addition, there is a way to avoid having to assign values for each enum member (auto-numbering, essentially), but its syntax is clunky:
    Color = Enum('Color', 'red green blue')
Beyond having to repeat the class name as a string (which violates the "don't repeat yourself" (DRY) principle), it starts the numbering from one, rather than zero. Nick Coghlan responded to Cooke's complaints by more or less agreeing with the criticism. There is still room for improvement in Python enums, but PEP 435 represents a solid step forward, according to Coghlan.
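Both behaviors are easy to demonstrate; a minimal sketch:

```python
from enum import Enum

class Color(Enum):
    red = 1
    green = 1   # duplicate value: not an error, green becomes an alias
    blue = 2

print(Color.green is Color.red)    # True: the alias refers to the same member
print([m.name for m in Color])     # ['red', 'blue']: aliases are skipped in iteration

# The functional (auto-numbering) API: values start at 1, not 0
Shade = Enum('Shade', 'light medium dark')
print(Shade.light.value, Shade.dark.value)
```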

It is instructive to watch the design of a language feature play out in public as they do for Python (and other languages). Enums are something that the developers will have to live with for a long time, so it is not surprising that there would be lots of participation and examination of the feature from many different angles. While PEP 435 probably didn't completely satisfy anyone's full set of requirements, there is still room for more features, both in the standard library and elsewhere, as Coghlan pointed out. The story of enums in Python likely does not end here.

Comments (23 posted)

Python and implicit string concatenation

By Jake Edge
May 22, 2013

In a posting with a title hearkening back to a famous letter of old ("Implicit string literal concatenation considered harmful?"), Guido van Rossum opened up a python-ideas discussion about a possible feature deprecation. In particular, he had been recently bitten by a hard-to-find bug in his code because of Python's implicit string concatenation. He wondered if it was perhaps time to consider deprecating the feature, which was added for "flimsy" reasons, he said. As the discussion shows, however, getting rid of dubious language features is a tricky task at best.

The problem stems from Python's behavior when faced with two adjacent string literals: it concatenates the two strings. Van Rossum ran into difficulties and got an argument count exception because he forgot a comma. He wrote:

    foo('a' 'b')
when he really wanted:
    foo('a', 'b')
The former passes one argument (the string "ab"), while the latter passes two. This implicit concatenation can cause similar (but potentially even harder to spot) problems in things like lists:
    [ 'a', 'b',
      'c', 'd'
      'e', 'f' ]
which creates a five-element list with "de" as the fourth element. The reason Van Rossum added the feature to Python is, by his own admission, questionable: "I copied it from C but the reason why it's needed there doesn't really apply to Python, as it is mostly useful inside macros". Beyond that, the string concatenation operator ("+") is evaluated at compile time, so there is no runtime penalty to constructing long strings that way. Given all of that, should the feature be dumped in some future version of Python?
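Both the list pitfall and the compile-time folding are easy to verify:

```python
items = ['a', 'b',
         'c', 'd'
         'e', 'f']        # missing comma after 'd': juxtaposition silently merges

print(len(items))         # 5, not 6
print(items[3])           # 'de'

# An explicit "+" between literals is folded by CPython at compile time,
# so it costs nothing at run time either
print('a' + 'b' == 'ab')  # True
```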

There were some who quickly jumped on the deprecation bandwagon. It is, evidently, a common problem—one that is rather irritating to hit. But other commenters noted some problems in blindly requiring the + operator. Perhaps the biggest problem is that the "%" interpolation operator has higher precedence than +, which means:

    print("long string %d " +
          "another long string %s" % (2, 'foo'))
would not work at all using the "+" operator, but would work fine when omitting it. In the thread, it was mentioned that the % operator may be slated for eventual deprecation itself, but the same problem exists with its replacement: the format() string method. So, a simple substitution that adds + simply won't work in all cases and additional parentheses will be required.
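The precedence trap can be seen directly; the strings here are just illustrative:

```python
# "%" binds tighter than "+", so only the second literal gets formatted;
# this parses as: "count: %d " + ("name: %s" % "foo")
s = "count: %d " + "name: %s" % "foo"
print(s)   # 'count: %d name: foo' -- the %d survives unformatted

# With juxtaposition, the literals merge first, then "%" sees both specifiers
t = "count: %d " "name: %s" % (2, "foo")
print(t)   # 'count: 2 name: foo'
```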

Another alternative, using triple-quoted strings, has its own set of problems, mostly related to handling indentation inside those strings. Adding some kind of "dedent" (i.e. un-indent) operation or syntax into the language might help. Currently:

    '''This string
       will have "extra"
       spaces'''

will result in a string with multiple spaces before "will" and "spaces".
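The standard library's textwrap.dedent() already handles the common case, though only at run time rather than in the language syntax:

```python
import textwrap

# The backslash after ''' suppresses the leading newline; dedent() then
# strips the whitespace that is common to all lines
s = textwrap.dedent('''\
    This string
    will have no extra
    spaces''')

print(s.splitlines())   # ['This string', 'will have no extra', 'spaces']
```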

There was also a suggestion to add a new "..." operator that would indicate a continued string, but Stephen J. Turnbull (and others) didn't think that justified adding a new operator (the ellipsis), especially given that the symbol is already used for custom list slice operations.

Additional suggestions included a new string modifier (prefacing the string literal with "m" for example) to indicate a continued string:

    a = [m'abc'
         'def']

Another "farfetched" idea was to allow compile-time string processors to be specified for string literals; a "merge" processor, for example, would join adjacent literals, creating "abcdef".

There was no end to the suggested alternatives to implicit concatenation—there's a reason the mailing list is called python-ideas after all—but participants were fairly evenly split on whether to deprecate the feature. There were enough complaints about doing so that it seems unlikely that concatenation by juxtaposition will be deprecated any time soon. As Van Rossum noted, though, it is a feature that would almost certainly never pass muster if it were proposed today. Furthermore:

I do realize that this will break a lot of code, and that's the only reason why we may end up punting on this, possibly until Python 4, or forever. But I don't think the feature is defensible from a language usability POV. It's just about backward compatibility at this point.

Van Rossum's lament should be a helpful reminder for language designers. It is very difficult to get rid of a feature once it has been added. That is especially true for a low-level syntax element like literal strings—just ask the developer of make about the tab requirement for makefiles. For Python, concatenation by juxtaposition seems likely to be around for a long time to come—perhaps forever.

Comments (9 posted)

Brief items

Quotes of the week

So the current state of the art is just to copy & paste ScreenInit and friends from another driver, because the documentation wouldn't actually be any shorter than the hundreds of lines of code.
-- Daniel Stone (thanks to Arthur Huillet)

Ask a programmer to review 10 lines of code, he'll find 10 issues. Ask him to do 500 lines and he'll say it looks good.
-- Giray Özil (from a retweet by Randall Arnold)

Comments (none posted)

QEMU 1.5.0 released

Version 1.5.0 of the QEMU hardware emulator is out. "This release was developed in a little more than 90 days by over 130 unique authors averaging 20 commits a day. This represents a year-to-year growth of over 38 percent making it the most active release in QEMU history." Some of the new features include KVM-on-ARM support, a native GTK+ user interface, and lots of hardware support and performance improvements. See the change log for lots of details.

Full Story (comments: 9)

Perl 5.18.0 released

The Perl 5.18.0 release is out. "Perl v5.18.0 represents approximately 12 months of development since Perl v5.16.0 and contains approximately 400,000 lines of changes across 2,100 files from 113 authors." See this perldelta page for details on what has changed.

Full Story (comments: 1)

New Python releases

Several point releases of Python are now available. Benjamin Peterson announced the release of 2.7.5 on May 15, and Georg Brandl announced 3.2.5 and 3.3.2 on May 16. The primary focus of the releases is regression fixes, so users are encouraged to upgrade.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Blender dives into 3D printing industry (Libre Graphics World)

Libre Graphics World looks at the use of Blender in 3D printing; the recent 2.67 release includes a "3D printing toolbox." "While Blender cannot help with making actual devices easier to use, it definitely could improve designing printable objects. And that's exactly what happened last week, when Blender 2.67 was released."

Comments (3 posted)

Page editor: Nathan Willis


Brief items

Ten years of Groklaw

Groklaw is celebrating its tenth anniversary. "Thank you for sticking to the job for ten years without giving out, and for funding the necessary activities that make Groklaw Groklaw. We made a difference in this old world. It's an achievement we can tell our grandchildren about some day. Not everyone can say that, but we actually made a difference. And nobody can take that away from us."

Comments (none posted)

Sony opens up the Xperia Tablet Z

Sony has announced the availability of an Android Open Source Project distribution for its Xperia Tablet Z device. "For all you developers out there, of course this means you can now access the software and contribute to this project. And this is all before the tablet is even available in the US. A special thanks to our Sony Mobile team for helping us create the package early and a huge thanks to the Android developer community for all your support. We can’t wait to see what you’ll do with the code." Source is available on GitHub.

Comments (26 posted)

EFF: Vermont Is Mad as Hell at Patent Trolls

The Electronic Frontier Foundation has sent out a release about how the US state of Vermont is going on the offensive against patent trolls. "Not content to strike back against a single troll, Vermont is also poised to pass a bill dealing with the problem as a whole. The Vermont House and Senate recently passed a bill to combat 'bad faith assertions of patent infringement'. And the latest word is that Vermont's governor is about to sign it into law."

Comments (11 posted)

Google Code to deprecate downloads

Google has announced that it will be phasing out the file download feature for projects hosted on Google Code. "Downloads were implemented by Project Hosting on Google Code to enable open source projects to make their files available for public download. Unfortunately, downloads have become a source of abuse with a significant increase in incidents recently. Due to this increasing misuse of the service and a desire to keep our community safe and secure, we are deprecating downloads."

Comments (41 posted)

Articles of interest

How Google plans to rule the computing world through Chrome (GigaOM)

GigaOM asserts that Google will be taking over the desktop (regardless of the underlying operating system) with its Chrome browser. "For many Chrome is just a browser. For others who use a Chromebox or Chromebook, like myself, it’s my full-time operating system. The general consensus is that Chrome OS, the platform used on these devices, can only browse the web and run either extensions and web apps; something any browser can do. Simply put, the general consensus is wrong and the signs are everywhere."

Comments (21 posted)

Calls for Presentations

DebConf 13: Call for Papers

DebConf 13 will take place near Vaumarcus, Switzerland August 11-18. The call for papers and presentations is open until June 1, 2013. "This year submission of a formal written paper for the conference proceedings is again optional, though encouraged. Providing a written paper in advance means that interested people can attend your session ready with ideas for discussion, and especially helps those who find it hard to follow rapid English speech."

Full Story (comments: none)

EuroPython 2014/2015 Conference Team - Call for Proposals

The EuroPython Society is looking for volunteers to organize EuroPython conferences in 2014 and 2015. This call for proposals closes June 14, 2013.

Full Story (comments: none)

Call for Papers - PostgreSQL Conference Europe 2013

PostgreSQL Conference Europe 2013 will be held on October 29-November 1 in Dublin, Ireland. The call for proposals closes July 22.

Full Story (comments: none)

Upcoming Events

GNU Hackers Meeting 2013 in Paris, France

The GNU Hackers Meeting will take place August 22-25, 2013 in Paris, France. "As we are still in the process of defining the program, we welcome your talk proposals. We are particularly interested in new GNU programs and recent developments in existing packages, plus any related topic. In our experience the audience tends to be technically competent, so feel free to propose very technical topics as well; we will try to schedule all such talks together in the same morning or afternoon session, for the public’s sake."

Full Story (comments: none)

Events: May 23, 2013 to July 22, 2013

The following event listing is taken from the Calendar.

May 22-23: Open IT Summit (Berlin, Germany)
May 22-24: Tizen Developer Conference (San Francisco, CA, USA)
May 22-25: LinuxTag 2013 (Berlin, Germany)
May 23-24: PGCon 2013 (Ottawa, Canada)
May 24-25: GNOME.Asia Summit 2013 (Seoul, Korea)
May 27-28: Automotive Linux Summit (Tokyo, Japan)
May 28-29: Solutions Linux, Libres et Open Source (Paris, France)
May 29-31: Linuxcon Japan 2013 (Tokyo, Japan)
May 30: Prague PostgreSQL Developers Day (Prague, Czech Republic)
May 31-June 1: Texas Linux Festival 2013 (Austin, TX, USA)
June 1-2: Debian/Ubuntu Community Conference Italia 2013 (Fermo, Italy)
June 1-4: European Lisp Symposium (Madrid, Spain)
June 3-5: Yet Another Perl Conference: North America (Austin, TX, USA)
June 4: Magnolia CMS Lunch & Learn (Toronto, ON, Canada)
June 6-9: Nordic Ruby (Stockholm, Sweden)
June 7-8: CloudConf (Paris, France)
June 7-9: SouthEast LinuxFest (Charlotte, NC, USA)
June 8-9: AdaCamp (San Francisco, CA, USA)
June 9: OpenShift Origin Community Day (Boston, MA, USA)
June 10-14: Red Hat Summit 2013 (Boston, MA, USA)
June 13-15: PyCon Singapore 2013 (Singapore, Republic of Singapore)
June 17-18: Droidcon Paris (Paris, France)
June 18-20: Velocity Conference (Santa Clara, CA, USA)
June 18-21: Open Source Bridge: The conference for open source citizens (Portland, Oregon, USA)
June 20-21: 7th Conferenza Italiana sul Software Libero (Como, Italy)
June 22-23: RubyConf India (Pune, India)
June 26-28: USENIX Annual Technical Conference (San Jose, CA, USA)
June 27-30: Linux Vacation / Eastern Europe 2013 (Grodno, Belarus)
June 29-July 3: Workshop on Essential Abstractions in GCC, 2013 (Bombay, India)
July 1-5: Workshop on Dynamic Languages and Applications (Montpellier, France)
July 1-7: EuroPython 2013 (Florence, Italy)
July 2-4: OSSConf 2013 (Žilina, Slovakia)
July 3-6: FISL 14 (Porto Alegre, Brazil)
July 5-7: PyCon Australia 2013 (Hobart, Tasmania)
July 6-11: Libre Software Meeting (Brussels, Belgium)
July 8-12: Linaro Connect Europe 2013 (Dublin, Ireland)
July 12: PGDay UK 2013 (near Milton Keynes, England, UK)
July 12-14: GNU Tools Cauldron 2013 (Mountain View, CA, USA)
July 12-14: 5th Encuentro Centroamerica de Software Libre (San Ignacio, Cayo, Belize)
July 13-19: Akademy 2013 (Bilbao, Spain)
July 15-16: QtCS 2013 (Bilbao, Spain)
July 18-22: openSUSE Conference 2013 (Thessaloniki, Greece)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds