LWN.net Weekly Edition for July 19, 2012
Oracle takes aim at CentOS
The nature of Red Hat's business model nearly guarantees that its flagship Red Hat Enterprise Linux (RHEL) distribution will be shadowed by clones offering the same software for no charge. It is not uncommon for people to wonder whether these RHEL clones, including CentOS, Scientific Linux, and Oracle Linux, are ultimately helpful or harmful to Red Hat; a free enterprise Linux can serve either as an entry point for future paying customers or as an alternative to becoming one. Less attention has been paid to how the RHEL clones might affect each other; that may be about to change as a result of Oracle's new marketing initiative aimed directly at CentOS.

CentOS is the most popular of the free RHEL clones; it is widely offered to customers by hosting providers and has become the default option for anybody wanting to run a RHEL-like system without actually paying for it. There can be no doubt that some sites would decide to pay for a real RHEL subscription if a system like CentOS were not available. At the same time, there must certainly be a steady stream of customers who started with CentOS, only to decide that Red Hat's support would be a worthwhile upgrade.
Oracle clearly has its eyes on that stream of customers. The plan seems to be to make it easy for CentOS users to switch a running system over to Oracle's distribution. And easy it is, if Oracle's instructions are to be believed; one need only download a shell script from Oracle's server and feed it, unread, to a root shell. The script will tweak some repository pointers and install a few packages, but it leaves most of the existing CentOS (or Scientific Linux) system as-is until the next update.
Why would CentOS users, who are benefiting from the efforts of a free software project, want to switch to Oracle's offering? Oracle is clearly trying to take advantage of the security update difficulties experienced by CentOS in 2011. The page reads:
Things have improved in the CentOS camp since the 2011 difficulties. The project has changed its workflow and found the sponsorship to hire a couple of developers; the recent CentOS 6.3 release surprised almost everybody with its promptness. But CentOS remains a project with limited resources and a lot of tedious work to do; it's always possible that things could fall behind again. CentOS users who were left without security updates in 2011—at least, those who are concerned about the security of their systems—cannot entirely eliminate that fear from the backs of their minds, even if things look better now.
So it is possible that Oracle is on to something here. Some CentOS users may well jump at the chance to switch to a free RHEL clone with big-company support behind it. And, when some of those users decide that paid support is worth their while, Oracle will naturally be the first provider to come to mind. This little initiative might well translate into some extra revenue for Oracle.
Of course, there could be some costs. The CentOS project is unlikely to be strengthened by having some of its users defect to Oracle. In the worst (presumably unlikely) case, CentOS could be fundamentally damaged if vast numbers of users were to vote with their feet and leave. That would leave the community with one less free enterprise distribution project. There have been a lot of complaints that CentOS is far from a truly open, community-oriented project. But anybody concerned about those issues is unlikely to find Oracle's distribution more to their liking. Oracle does make some good contributions, but community-oriented development is not, in general, among the company's greatest strengths.
Also worth keeping in mind is the fact that Oracle is making no promises that it will provide this free service for any period of time. If this effort fails to provide the desired financial results, Oracle could pull the plug on it at any time—as it did with OpenSolaris. That would leave ex-CentOS users with the choice of somehow migrating back to CentOS (assuming CentOS is still there and healthy) or becoming paid Oracle customers in a hurry. One could argue that any free (beer) distribution poses such a hazard, but a corporate-controlled distribution can only be doubly hazardous.
So this initiative by Oracle looks like it could be either a positive or a negative thing. It could increase the choices for users looking for a well-supported, highly stable, free-of-charge distribution and increase competition in the enterprise distribution space in general. Or it could just be a cynical attempt by a large corporation to profit from a free software project's success and deprive its main competitor of a potential revenue stream. Enterprise distribution users will have to make their own choice as to where their best interests lie.
Akademy: Freedom and the internet
Mathias Klang opened this year's Akademy with a keynote look at freedom and the internet. It was something of a cautionary tale that outlined the promises that technology brings, while noting that the dangers are often being overlooked. Klang comes from an academic and legal background—he is currently a researcher and senior lecturer at the University of Göteborg in Sweden—which gives him something of a different perspective on technology issues.
Klang's talk was titled "Expressions in Code and Freedom", but he came up with a different title the night before the talk: The TiVo-ization of everyday life. That title is "silly", but it does reflect some of the dangers he sees. He noted that he is not a programmer, but is surrounded by them, and they "put up with my stupidity". His background in the law means that he "likes reading licenses" and thinks everyone should. His current research is looking into social media, particularly in the area of control by the providers.
There have been multiple revolutions in communication over the years, with writing only coming about 6000 years ago or so. Punctuation did not arise until 200 BC and putting spaces between words is only 1000 years old. Gutenberg (whom Klang called "The Steve Jobs of his day") revolutionized writing once again with the printing press, but the digitization of information was arguably the biggest revolution.
Once information has been digitized we can start connecting up the devices that store that data, which leads to the internet. The internet is not a bad thing, per se, but it is set up for control. The promise of the open web ("so wonderfully open, so wonderfully free") is great, but that openness invites people to come in and start closing it down in various ways.
The web started as an open platform, but that "wild web" is becoming an endangered species. For example, he said, we don't actually publish our own links anymore, instead we use various social media services to send each other links. That leaves us more and more dependent on the people who collect and store our data. It is becoming rare for people to create their own permanent web sites to store their data as it is largely being stored under the control of social media service providers.
"What would newspapers write about if we didn't have Facebook?", Klang asked. Perhaps they would write about the euro crisis instead, he joked. More seriously, social change is happening and much of it is being brought about by technology.
For example, he noted that online Scrabble games are all the rage right now. Two years ago, you wouldn't go to the pub and brag about playing Scrabble. But in Sweden (at least), people are constantly posting their high scores and such to Facebook.
Social media is set up to "create a performance lifestyle", he said. The whole idea behind it is to have an audience, but the tools used to reach that audience are controlled by the providers. Another example is Klang's Facebook post of a picture of his morning coffee as "my amazing coffee". He gets comments from people all over the world who are "lurking around my digital life". It is a bit creepy, overall. The things he routinely does online today would have been considered stalking ten years ago, but "now I'm Facebooking".
The walled gardens and information silos that typify many internet services are a threat. The service providers ensure that they "keep us entertained so we will supply them more data", he said. But, without access to the underlying code and data, we are totally at their mercy.
Klang gave more examples of how technology, social media in particular, is worming its way into everyday life. "People say that if you want to start a revolution, use Facebook", he said, and they generally point to the recent events in Egypt as an example. In Sweden, educators are asking "should we be teaching Facebook in school?" and "how do I use Facebook as an educational tool?"
Beyond that, even police departments are going online. The Swedish police now have a Facebook presence and have even had crimes reported to them via that mechanism. There was recently a "wonderful or sad" twitter message (i.e. "tweet") about a man lying unconscious in Göteborg. Klang does not think that's a good way to report such things, "but the police think it is and that's sad".
Certainly social media sites increase our ability to talk to one another, which is good. But much of that communication is being forced into these walled gardens, he said, "and that's scary".
"It's only technology" is something that is heard a lot, but that's something of a slippery slope. As an example, he pointed to tubular anti-homeless benches in Tokyo. Instead of passing a law against sleeping in parks or putting up a sign, the benches make it almost impossible to sleep on them. This is an example of TiVo-ization in real life, he said. If we create a law against sleeping on benches, there will be complaints about human rights, but creating a technological measure avoids those problems. "Design choices have consequences", Klang said.
"The more technology we embed into our lives, the less freedom we have", he said. We should all "love technology", but recognize that every piece of it has an effect on our lives. That includes all kinds of technology, not just gadgets and web sites, but things like chairs, desks, and carpets as well.
One of the problems is that the educational system teaches students how to use technology, "but we don't teach them code", Klang said. Sweden, for example, has been focusing on the use of various gadgets in schools, but you don't have to be "vaguely technical to use an iPhone or iPad". Educators are asking how to use the iPad in the classroom, rather than asking whether they should use the device.
He referenced Douglas Adams's notion of "digital natives", that those under a certain age (15, say) natively understand technology changes while those over a certain age (e.g. 35) will always be immigrants and lack that understanding. Klang would like to see everyone become a digital native so that the understanding of technology and the consequences of technological change become widespread.
He had several suggestions toward that goal. To begin with, we should all try to "hack society for openness". Our infrastructure remains open, so far, but much of what runs atop it isn't. Richard Stallman was "not being friendly" when he started the free software movement; "he was being right", Klang said.
"Be that guy", he suggested, and tell people what their information habits are doing to their (and other people's) lives. He likened it to getting a PhD, where you do research that "you and four other people in the world care about", but when people ask, you explain what it is and why it's important. In this case, it is necessary to make people aware of the problems that arise when "going from an information deficit to an information circus", which is what we have seen over the last decade or more. He also said that we should read all of the end-user license agreements (EULAs) and terms of service that are presented to us, but "I know you won't".
He closed with the idea that developers should at least think about what their code does, and "how you are affecting other people". All of the different gadgets out there "manipulate lives", but who decides how they do that, he asked. Everything we do with technology has effects on others, so he encouraged developers to think about those effects. He was clear that he wasn't advocating not building new devices and technologies, only asking that developers think about how those technologies might be used—or abused.
[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]
Akademy: Contour and Plasma Active
Contour was a project, started in 2010, to create a mobile, touch-friendly user experience atop Qt and KDE. It eventually became part of KDE's Plasma Active effort, so Eva Brucherseifer came to Akademy to recount how and why that came about. She also discussed how the Contour design process worked, along with the process of integrating Plasma Active with various device platforms.
Brucherseifer is a longtime KDE community member, going back to the 1990s. She was elected to the KDE e.V. board in 2002 and served as its president for three years, which is what happens "if you talk too much about what should be done". She started the embedded services company basysKom in 2003 and serves as its managing director.
Contour's history and goals
In the (northern hemisphere) summer of 2010, there were lots of ideas floating around that factored into the ideas behind Contour, she said. Using context and semantic data to provide recommendations for users was one idea. There was also a lot of talk about the mobile space, and back then people were anticipating numerous Linux devices to ship with Qt because of MeeGo. The idea of "daily use" devices and the need for an improved user interface to support that use case was another consideration.
People at basysKom were thinking about these ideas, which led to Contour. Later in 2010 the project was formed based on Qt and KDE. Contour project members wrote a proposal and the project was granted funding by the German Bundesministerium für Wirtschaft und Technologie (Federal Ministry of Economics and Technology). So, basysKom and the German government each funded half of the project.
The core idea behind Contour is to be "information-centric" rather than "app-centric" as the iPhone is. The goal was to create a new user paradigm where context and usage information are used to adapt the device's behavior to the user and their needs. A "learning system" would be used to try to derive patterns of use so that the device could anticipate and facilitate the user's task. There were ideas on how to do that, she said, but it was unclear if they would work.
For Contour, using KDE's "activities" made sense as a way to group the information around specific tasks that users do. By using contexts, such as what time of day it is and whether the user is at home or work, the interface can have an idea of the "things you do at those times". Deriving the usage patterns will help make it easier for the user, she said.
Contour is a user experience (UX) layer on top of the Plasma shell. Other UXes are possible, including ones targeted at set-top boxes or automobiles.
Designing the interface
The Contour interface evolved as the project tried out various ideas. Brucherseifer put up images of different parts of the interface (e.g. activity switcher, activity screen) as they changed from the initial prototype to the final design (which can be seen in her slides [PDF]). The activity switcher started out as a stacked set of activities, with a slider to activate the switch. That evolved into a rotating "wheel" of activities that retained the slider, and then into the final version, which kept the wheel idea but made the activity thumbnails completely visible and eliminated the slider. The three versions from her slides are shown at right.
Similarly, the activity screen that shows the files, applications, and other information associated with an activity went through a number of iterations. It moved from a tree-like structure to something more like a standard desktop. When she shows the interface to customers as an example of what can be done with Qt and QML, "they love it".
There are still more things to do, including an application launcher and a task switcher. WebKit integration is still lacking, but is needed because HTML 5 will be important, she said. Private, password-protected activities are another feature that will be added. Some of these features will be needed for a real product, but the project couldn't get to all of them.
In addition to a tablet UX, basysKom created an in-vehicle infotainment (IVI) UX as an internal project. It took two person-months to implement, she said, and it still needs some polish. It was done as an experiment to see how long it would take.
Designing the UXes went through several phases. It started with sketches, which were then turned into wireframes using Photoshop. Promising versions were then implemented using QML. Multiple iterations back and forth between those steps were required, she said.
Getting the code onto devices was challenging. The project started basing itself on top of MeeGo, because it believed that the two big companies behind the distribution would make for a stable platform. That didn't work out, of course, so a switch to Mer was eventually made.
The project "learned a lot" in getting its code running on two different devices: the WeTab/ExoPC and the Archos G9. There was a need to create binary RPMs for all of the packages as well as a single binary image that could be installed onto the devices. Some of those things are not necessarily easy to accomplish using volunteers in an open community.
Plasma Active
In March 2011, Plasma Active was announced. A workshop was held in Darmstadt, Germany where basysKom joined forces with Sebastian Kügler of open-slx and Aaron Seigo of Coherent Theory to create Plasma Active. Contour was adopted as the "activities and recommendations" piece of Plasma Active. Several coding sprints were held and two releases of Plasma Active have been made so far.
There were multiple reasons behind basysKom's decision to contribute Contour to KDE. The company could have held on to Contour and sold it to customers, but if it wanted to develop the technology upstream the code needed to be free. Trying to set up a "joint process" between the community and companies was another consideration. Plasma Active is a chance for KDE to succeed in the mobile space, which also factored into the decision. She (and, by extension, basysKom) cares about KDE, so it made sense to contribute Contour to the effort.
There were a few challenges that Plasma Active has faced over the last year or so. The cooperation between the community and companies can be hard at times. Volunteers often work all day then work on Plasma Active at night, which can make it hard to hit deadlines. The embedded development process is "not there yet", she said, and encouraged anyone interested to help out with that. In addition, desktop technologies are "too large and slow" for embedded devices; Plasma Active is still too large for many devices.
On the other hand, a lot of things have gone well. There were lots of KDE frameworks that could be reused, which made it relatively easy and quick to get something working. It surprised her how quickly things could come together. The project also produced highly motivated people; there were basysKom employees who wanted to continue working on Plasma Active even after the company needed to scale back its commitment, for example.
Plasma Active is not done yet; what has been produced so far is a "starting point", Brucherseifer said. There is currently no way to manage multiple open applications, for example, which is "probably very solvable", but needs to be done. More automation is needed in creating images for devices and there is a need for release management and QA as well. Those who are interested should get an Archos G9 and an image from basysKom to get started. Support for the WeTab/ExoPC is still available, but those devices are no longer on the market, so the project is focused on the G9 for now.
An audience member asked about how to get the design process used by Contour and Plasma Active into KDE. Brucherseifer said that the most important part is that developers need to be open to suggestions. basysKom hired two UX people to work on the project, but they "didn't really enjoy it". The developers had very different opinions from the designers, which made things difficult at times. There is a need for a different culture between developers and designers, "if both could give a little bit, it would be helpful", she said.
At the end of the talk, Brucherseifer demonstrated the interface, including showing how the recommendation system worked. The UX looks quite usable at this point, though there are still things to do as she noted. It will be interesting to see real devices shipped with Plasma Active (such as the Vivaldi tablet) down the road.
[ The author would like to thank KDE e.V. for travel assistance to Tallinn for Akademy. ]
Security
The ups and downs of strlcpy()
Adding the strlcpy() function (and the related strlcat() function) has been a perennial request (1, 2, 3) to the GNU C library (glibc) maintainers, commonly supported by a statement that strlcpy() is superior to the existing alternatives. Perhaps the earliest request to add these BSD-derived functions to glibc took the form of a patch submitted in 2000 by a fresh-faced Christoph Hellwig.
Christoph's request was rejected, and subsequent requests have similarly been rejected (or ignored). It's instructive to consider the reasons why strlcpy() has so far been rejected, and why it may well not make its way into glibc in the future.
A little prehistory
In the days before programmers considered that someone else might want to deliberately subvert their code, the C library provided just:
char *strcpy(char *dst, const char *src);
with the simple purpose of copying the bytes from the string pointed to by src (up to and including the terminating null byte) to the buffer pointed to by dst.
Naturally, when calling strcpy(), the programmer must take care that the bytes being copied don't overrun the space available in the buffer pointed to by dst. The effect of such buffer overruns is to overwrite other parts of a process's memory, such as neighboring variables, with the most common result being to corrupt data or to crash the program.
If the programmer can with 100% certainty predict at compile time the size of the src string, then it's possible (if unwise) to preallocate a suitably sized dst buffer and omit any argument checks before calling strcpy(). In all other cases, the call should be guarded with a suitable if statement to check the size of its argument. However, strings (in the form of input text) are one of the ways that humans interact with computers, and thus quite commonly the size of the src string is controlled by the user of a program, not the program's creator. At that point, of course, it becomes essential for every call to strcpy() to be guarded by a suitable if statement:
char dst[DST_SIZE];
...
if (strlen(src) < DST_SIZE)
    strcpy(dst, src);
(The use of < rather than <= ensures that there's at least one extra byte available for the null terminator.)
But it was easy for programmers to omit such checks if they were forgetful, inattentive, or cowboys. And later, other more attentive programmers realized that by carefully controlling what was written into the overflowed buffer, and overrunning into more exotic places such as function call return addresses stored on the stack, they could do much more interesting things with buffer overruns than simply crashing the program. (And because code tends to live a long time, and the individual programmers creating it can be slow to learn about the sharp edges of the tools they use, even today buffer overruns remain one of the most commonly reported vulnerabilities in applications.)
Improving on strcpy()
Prechecking the arguments of each call to strcpy() is burdensome. A seemingly obvious way to relieve the programmer of that task was to add an API that allowed the caller to inform the library function of the size of the target buffer:
char *strncpy(char *dst, const char *src, size_t n);
The strncpy() function is like strcpy(), but copies at most n bytes from src to dst. As long as n does not exceed the space allocated in dst, a buffer overrun can never occur.
Although choosing a suitable value for n ensures that strncpy() will never overrun dst, it turns out that strncpy() has problems of its own. Most notably, if there is no null terminator in the first n bytes of src, then strncpy() does not place a null terminator after the bytes copied to dst. If the programmer does not check for this event, and subsequent operations expect a null terminator to be present, then the program is once more vulnerable to attack. The vulnerability may be more difficult to exploit than a buffer overflow, but the security implications can be just as severe.
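As an illustration (a sketch added here, not part of the original article), the usual defensive idiom is to bound the copy and then terminate the buffer explicitly, accepting silent truncation of src:

char dst[DST_SIZE];
...
/* Copy at most DST_SIZE - 1 bytes, then force null termination;
   anything in src beyond that point is silently dropped. */
strncpy(dst, src, DST_SIZE - 1);
dst[DST_SIZE - 1] = '\0';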
One iteration of API design didn't solve the problems, but perhaps a further one can… Enter, strlcpy():
size_t strlcpy(char *dst, const char *src, size_t size);
strlcpy() is similar to strncpy() but copies at most size-1 bytes from src to dst, and always adds a null terminator following the bytes copied to dst.
Problems solved?
strlcpy() avoids buffer overruns and ensures that the output string is null terminated. So why have the glibc maintainers obstinately refused to accept it?
The essence of the argument against strlcpy() is that it fixes one problem—sometimes failing to terminate dst in the case of strncpy(), buffer overruns in the case of strcpy()—while leaving another: the loss of data that occurs when the string copied from src to dst is truncated because it exceeds size. (In addition, there is still an unusual corner case where the unwary programmer can find that strlcat(), the analogous function for string concatenation, leaves dst without a null terminator.)
At the very least, (silent) data loss is undesirable to the user of the program. At the worst, truncated data can lead to security issues that may be as problematic as buffer overruns, albeit probably harder to exploit. (One of the nicer features of strlcpy() and strlcat() is that their return values do at least facilitate the detection of truncation—if the programmer checks the return values.)
All of which brings us full circle: to avoid unhappy users and security exploits, in the general case even a call to strlcpy() (or strlcat()) must be guarded by an if statement checking the arguments, if the state of the arguments can't be predicted with certainty in advance of the call.
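By way of illustration (again a sketch, not from the original article), such a guard can use strlcpy()'s return value, which is the length of the string the function tried to create:

char dst[DST_SIZE];
...
/* strlcpy() returns strlen(src); a result >= sizeof(dst) means the
   copy was truncated and must be handled rather than ignored. */
if (strlcpy(dst, src, sizeof(dst)) >= sizeof(dst)) {
    /* report an error, retry with a larger buffer, or reject the input */
}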
Where are we now?
Today, strlcpy() and strlcat() are present on many versions of UNIX (at least Solaris, the BSDs, Mac OS X, and IRIX), but not all of them (e.g., HP-UX and AIX). There are even implementations of these functions in the Linux kernel for internal use by the kernel code. Meanwhile, these functions are not present in glibc, and were rejected for inclusion in the POSIX.1-2008 standard, apparently for similar reasons to their rejection from glibc.
Reactions among core glibc contributors on the topic of including strlcpy() and strlcat() have been varied over the years. Christoph Hellwig's early patch was rejected in the then-primary maintainer's inimitable style (1 and 2). But reactions from other glibc developers have been more nuanced, indicating, for example, some willingness to accept the functions. Perhaps most insightfully, Paul Eggert notes that even when these functions are provided (as an add-on packaged with the application), projects such as OpenSSH, where security is of paramount concern, still manage to either misuse the functions (silently truncating data) or use them unnecessarily (i.e., the traditional strcpy() and strcat() could equally have been used without harm); such a state of affairs does not constitute a strong argument for including the functions in glibc.
The appearance of an embryonic entry on this topic in the glibc FAQ, with a brief rationale for why these functions are currently excluded, and a note that "gcc -D_FORTIFY_SOURCE" can catch many of the errors that strlcpy() and strlcat() were designed to catch, would appear to be something of a final word on the topic. Those that still feel that these functions should be in glibc will have to make do with the implementations provided in libbsd for now.
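For reference, a minimal implementation in the spirit of the BSD original might look like the sketch below; the actual OpenBSD and libbsd versions differ in their details, so treat this as illustrative only:

#include <string.h>

size_t strlcpy(char *dst, const char *src, size_t size)
{
    size_t srclen = strlen(src);    /* length the caller wanted to copy */

    if (size != 0) {
        size_t copylen = (srclen >= size) ? size - 1 : srclen;
        memcpy(dst, src, copylen);
        dst[copylen] = '\0';        /* dst is always null-terminated */
    }
    return srclen;                  /* a value >= size signals truncation */
}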
Finally, in case it isn't obvious by now, it should of course be noted that the root of this problem lies in the C language itself. C's native strings are not managed strings of the style natively provided in more modern languages such as Java, Go, and D. In other words, C's strings have no notion of bounds checking (or dynamically adjusting a string's boundary) built into the type itself. Thus, when using C's native string type, the programmer can never entirely avoid the task of checking string sizes when strings are manipulated, and no replacements for strcpy() and strcat() will ever remove that need. One might even wonder if the original C library implementers were clever enough to realize from the start that strcpy() and strcat() were sufficient—if it weren't for the fact that they also gave us gets().
Brief items
Security quotes of the week
Systemd gets seccomp filter support
Lennart Poettering has informed the world that the systemd init daemon now has initial support for the seccomp filter mechanism found in the 3.5 kernel. The end result is that processes can be easily configured to be run in a sandboxed environment. "It's actually really cool, and dead simple to use. A Cheers! for security!"
Android Security Overview
The Android project has published an Android Security Overview that provides information about Android security at the system and kernel level as well as application security and more. "This document outlines the goals of the Android security program, describes the fundamentals of the Android security architecture, and answers the most pertinent questions for system architects and security analysts. This document focuses on the security features of Android's core platform and does not discuss security issues that are unique to specific applications, such as those related to the browser or SMS application. Recommended best practices for building Android devices, deploying Android devices, or developing applications for Android are not the goal of this document and are provided elsewhere."
New vulnerabilities
automake: code execution
Package(s): automake    CVE #(s): CVE-2012-3386
Created: July 12, 2012    Updated: September 30, 2014
Description: From the Mandriva advisory:
A vulnerability has been discovered and corrected in automake: A race condition in automake (lib/am/distdir.am) could allow a local attacker to run arbitrary code with the privileges of the user running make distcheck (CVE-2012-3386).
exif: information leak
Package(s): exif    CVE #(s): CVE-2012-2845
Created: July 13, 2012    Updated: April 5, 2013
Description: From the Mandriva advisory:
An integer overflow in the function jpeg_data_load_data in the exif program could cause a data read beyond the end of a buffer, causing an application crash or leakage of potentially sensitive information when parsing a crafted JPEG file.
extplorer: cross-site request forgery
Package(s): extplorer    CVE #(s): CVE-2012-3362
Created: July 13, 2012    Updated: July 18, 2012
Description: From the Debian advisory:
John Leitch has discovered a vulnerability in eXtplorer, a very feature rich web server file manager, which can be exploited by malicious people to conduct cross-site request forgery attacks. The vulnerability allows users to perform certain actions via HTTP requests without performing any validity checks to verify the request. This can be exploited for example, to create an administrative user account by tricking an logged administrator to visiting an attacker-defined web link.
glibc: multiple vulnerabilities
Package(s): glibc    CVE #(s): CVE-2012-3404 CVE-2012-3405 CVE-2012-3406
Created: July 18, 2012    Updated: August 16, 2012
Description: From the Red Hat advisory:
Multiple errors in glibc's formatted printing functionality could allow an attacker to bypass FORTIFY_SOURCE protections and execute arbitrary code using a format string flaw in an application, even though these protections are expected to limit the impact of such flaws to an application abort.
gypsy: multiple vulnerabilities
Package(s): gypsy    CVE #(s): CVE-2011-0523 CVE-2011-0524
Created: July 17, 2012    Updated: May 29, 2013
Description: From the openSUSE advisory:
Add gypsy-CVE-2011-0523.patch: add config file to restrict the files that can be read. Add gypsy-CVE-2011-0524.patch: use snprintf() to avoid buffer overflows. Add gnome-common BuildRequires and call to gnome-autogen.sh for gypsy-CVE-2011-0523.patch, since it touches the build system.
libexif: multiple vulnerabilities
Package(s): libexif    CVE #(s): CVE-2012-2812 CVE-2012-2813 CVE-2012-2814 CVE-2012-2836 CVE-2012-2837 CVE-2012-2840 CVE-2012-2841
Created: July 13, 2012    Updated: April 5, 2013
Description: From the Mandriva advisory:
A heap-based out-of-bounds array read in the exif_entry_get_value function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags (CVE-2012-2812).
A heap-based out-of-bounds array read in the exif_convert_utf16_to_utf8 function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags (CVE-2012-2813).
A buffer overflow in the exif_entry_format_value function in libexif/exif-entry.c in libexif 0.6.20 allows remote attackers to cause a denial of service or possibly execute arbitrary code via an image with crafted EXIF tags (CVE-2012-2814).
A heap-based out-of-bounds array read in the exif_data_load_data function in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags (CVE-2012-2836).
A divide-by-zero error in the mnote_olympus_entry_get_value function while formatting EXIF maker note tags in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service via an image with crafted EXIF tags (CVE-2012-2837).
An off-by-one error in the exif_convert_utf16_to_utf8 function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly execute arbitrary code via an image with crafted EXIF tags (CVE-2012-2840).
An integer underflow in the exif_entry_get_value function can cause a heap overflow and potentially arbitrary code execution while formatting an EXIF tag, if the function is called with a buffer size parameter equal to zero or one (CVE-2012-2841).
libxslt: denial of service
Package(s): libxslt, libxslt-python    CVE #(s): CVE-2012-2825
Created: July 17, 2012    Updated: January 17, 2014
Description: From the CVE entry:
The XSL implementation in Google Chrome before 20.0.1132.43 allows remote attackers to cause a denial of service (incorrect read operation) via unspecified vectors.
libytnef: denial of service
Package(s): libytnef    CVE #(s): CVE-2010-5109
Created: July 16, 2012    Updated: December 22, 2014
Description: Fedora has added a patch to libytnef 1.5 that fixes a possible buffer overflow. See this bug in the Red Hat bugzilla.
The CVE was added much later and it says: Off-by-one error in the DecompressRTF function in ytnef.c in Yerase's TNEF Stream Reader allows remote attackers to cause a denial of service (crash) via a crafted TNEF file, which triggers a buffer overflow.
mono: cross-site scripting
Package(s): mono    CVE #(s): CVE-2012-3382
Created: July 13, 2012    Updated: August 23, 2012
Description: From the Debian advisory:
Marcus Meissner discovered that the web server included in Mono performed insufficient sanitising of requests, resulting in cross-site scripting.
mozilla: multiple vulnerabilities
Package(s): firefox, thunderbird, seamonkey
CVE #(s): CVE-2012-1948 CVE-2012-1950 CVE-2012-1951 CVE-2012-1952 CVE-2012-1953 CVE-2012-1954 CVE-2012-1955 CVE-2012-1957 CVE-2012-1958 CVE-2012-1959 CVE-2012-1961 CVE-2012-1962 CVE-2012-1963 CVE-2012-1964 CVE-2012-1965 CVE-2012-1966 CVE-2012-1967 CVE-2012-1949
Created: July 18, 2012    Updated: August 15, 2012
Description: From the Red Hat advisory:
A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-1948, CVE-2012-1951, CVE-2012-1952, CVE-2012-1953, CVE-2012-1954, CVE-2012-1958, CVE-2012-1962, CVE-2012-1967)
A malicious web page could bypass same-compartment security wrappers (SCSW) and execute arbitrary code with chrome privileges. (CVE-2012-1959)
A flaw in the context menu functionality in Firefox could allow a malicious website to bypass intended restrictions and allow a cross-site scripting attack. (CVE-2012-1966)
A page different to that in the address bar could be displayed when dragging and dropping to the address bar, possibly making it easier for a malicious site or user to perform a phishing attack. (CVE-2012-1950)
A flaw in the way Firefox called history.forward and history.back could allow an attacker to conceal a malicious URL, possibly tricking a user into believing they are viewing a trusted site. (CVE-2012-1955)
A flaw in a parser utility class used by Firefox to parse feeds (such as RSS) could allow an attacker to execute arbitrary JavaScript with the privileges of the user running Firefox. This issue could have affected other browser components or add-ons that assume the class returns sanitized input. (CVE-2012-1957)
A flaw in the way Firefox handled X-Frame-Options headers could allow a malicious website to perform a clickjacking attack. (CVE-2012-1961)
A flaw in the way Content Security Policy (CSP) reports were generated by Firefox could allow a malicious web page to steal a victim's OAuth 2.0 access tokens and OpenID credentials. (CVE-2012-1963)
A flaw in the way Firefox handled certificate warnings could allow a man-in-the-middle attacker to create a crafted warning, possibly tricking a user into accepting an arbitrary certificate as trusted. (CVE-2012-1964)
A flaw in the way Firefox handled feed:javascript URLs could allow output filtering to be bypassed, possibly leading to a cross-site scripting attack. (CVE-2012-1965)
mozilla: denial of service
Package(s): firefox, thunderbird    CVE #(s): CVE-2012-1960
Created: July 18, 2012    Updated: August 1, 2012
Description: From the Ubuntu advisory:
Tony Payne discovered an out-of-bounds memory read in Mozilla's color management library (QCMS). If the user were tricked into opening a specially crafted color profile, an attacker could possibly exploit this to cause a denial of service via application crash.
nova: denial of service
Package(s): nova    CVE #(s): CVE-2012-3371
Created: July 12, 2012    Updated: July 18, 2012
Description: From the Ubuntu advisory:
Dan Prince discovered that the Nova scheduler, when using DifferentHostFilter or SameHostFilter, would make repeated database instance lookup calls based on passed scheduler hints. An authenticated attacker could use this to cause a denial of service.
openldap: weaker than expected encryption
Package(s): openldap    CVE #(s): CVE-2012-2668
Created: July 17, 2012    Updated: August 9, 2012
Description: From the CVE entry:
libraries/libldap/tls_m.c in OpenLDAP, possibly 2.4.31 and earlier, when using the Mozilla NSS backend, always uses the default cipher suite even when TLSCipherSuite is set, which might cause OpenLDAP to use weaker ciphers than intended and make it easier for remote attackers to obtain sensitive information.
puppet: multiple vulnerabilities
Package(s): puppet    CVE #(s): CVE-2012-3864 CVE-2012-3865 CVE-2012-3866 CVE-2012-3867
Created: July 13, 2012    Updated: August 13, 2012
Description: From the Debian advisory:
CVE-2012-3864: Authenticated clients could read arbitrary files on the puppet master.
CVE-2012-3865: Authenticated clients could delete arbitrary files on the puppet master.
CVE-2012-3866: The report of the most recent Puppet run was stored with world-readable permissions, resulting in information disclosure.
CVE-2012-3867: Agent hostnames were insufficiently validated.
rhythmbox: code execution
Package(s): rhythmbox    CVE #(s): CVE-2012-3355
Created: July 12, 2012    Updated: August 6, 2012
Description: From the Ubuntu advisory:
Hans Spaans discovered that the Context plugin in Rhythmbox created a temporary directory in an insecure manner. A local attacker could exploit this to execute arbitrary code as the user invoking the program. The Context plugin is disabled by default in Ubuntu.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.5-rc7, released on July 14. "Hey guys, remember how things have been stabilizing and slowing down, and all the kernel developers were off on summer vacation? Yeah, we need to talk about that." He is still hoping this is the last -rc before the final 3.5 release.
Stable updates: the 3.0.37 and 3.4.5 updates were released on July 16; 3.2.23 was released on July 13. The 3.0.38 and 3.4.6 updates are in the review process as of this writing; they can be expected on or after July 19.
Quotes of the week
-#define HV_LINUX_GUEST_ID_HI 0xB16B00B5
+#define HV_LINUX_GUEST_ID_HI 0x0DEFACED
30 Linux Kernel Developers in 30 Weeks: Dave Jones (Linux.com)
Linux.com interviews Dave Jones, Fedora's kernel maintainer, as part of its Kernel Developers series. "I needed to build my own kernel, because none of the distros shipped one that supported something I needed. And the feature I needed was only available in the development tree at the time (which was 2.1.x at the time). I don't recall what it was, but I think it may have been something silly like VFAT. Things weren't always stable, so I got into a habit of updating regularly (by carrying the latest tarball on a zip disk from the university to home). I started sending patches for things wherever I saw something I thought I could improve. I'm struggling to remember my first real accomplishment. It may have been fixing AFFS during the 2.1.x series. There were a whole bunch of really minor things before then."
Kernel development news
Random numbers for embedded devices
Secure communications are dependent on good cryptography, and cryptography, in turn, is dependent on good random numbers. When cryptographic keys are generated from insufficiently-random values, they may turn out to be easily guessable by an attacker, leaving their user open to eavesdropping and man-in-the-middle attacks. For this reason, quite a bit of attention has been put into random number generation, but that does not mean that problems do not still exist. A set of patches intended for merging into the 3.6 kernel highlights some of the current concerns about random number generation in Linux.

Computing systems traditionally do not have sources of true randomness built into them, so they have operated by attempting to extract randomness from the environment in which they operate. The fine differences in timing between a user's keystrokes are one source of randomness, for example. The kernel can also use factors like the current time, interrupt timing, and more. For a typical desktop system, such sources usually provide enough randomness for the system's needs. Randomness gets harder to come by on server systems without a user at the keyboard. But the hardest environment of all may be embedded systems and network routers; these systems may perform important security-related tasks (such as the generation of host keys) before any appreciable randomness has been received from the environment.
As Zakir Durumeric, Nadia Heninger, J. Alex Halderman, and Eric Wustrow have documented, many of the latter class of systems are at risk, mostly as a result of keys generated with insufficient randomness and predictable initial conditions. They write: "We found that 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in an apparently vulnerable manner, due to either insufficient randomness during key generation or device default keys." They were also able to calculate the actual keys used for a rather smaller (but still significant) percentage of hosts. Their site includes a key checker; concerned administrators may point it at their hosts to learn if their keys are vulnerable.
Fixes for this problem almost certainly need to be applied at multiple levels, but kernel-level fixes seem particularly important since the kernel is the source for most random numbers used in cryptography. To that end, Ted Ts'o has put together a set of patches designed to improve the amount of randomness available in the system from when it first boots. Getting there involves making a number of changes.
One of those is to fix the internal add_interrupt_randomness() function, which is used to derive randomness from interrupt timing. Use of this function has been declining in recent years, as a result of both its cost and concerns about the actual randomness of many interrupt sources. Ted's patch set tries to address the cost by batching interrupt-derived randomness on a per-CPU basis and only occasionally mixing it into the system-wide entropy pool. That mixing is also done with a new, lockless algorithm; this algorithm contains some small race conditions, but those could be seen to make the result even more random. An attempt is made to increase the amount of randomness obtained from interrupts by mixing in additional data, including the value of the instruction pointer at the time of the interrupt. After this change, adding randomness from interrupts should be fast and effective, so it is done by default for all interrupts; the IRQF_SAMPLE_RANDOM interrupt flag no longer has any effect.
Next, the patch set adds a new function:
void add_device_randomness(const void *buf, unsigned int size);
The purpose is to allow drivers to mix in device-specific data that, while not necessarily random, is system-specific and unpredictable. Examples include serial, product, and manufacturer information from attached USB devices, the "write counter" from some realtime clock devices, and the MAC address from network devices. Most of this data should be random from the point of view of an attacker; it should help to prevent the situation where multiple, newly-booted devices generate the same keys.
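As a hypothetical illustration of the sort of call a driver might make (the function is as described above, but the surrounding context here is invented), a network driver could mix its MAC address into the pool when the device is probed:

#include <linux/random.h>
#include <linux/etherdevice.h>

/* 'netdev' is assumed to be the driver's struct net_device.  The MAC
   address is not secret, but it is device-specific, which helps keep
   otherwise-identical devices from generating identical keys. */
add_device_randomness(netdev->dev_addr, ETH_ALEN);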
Finally, Ted's patch set also changes the use of the hardware random number generator built into a number of CPUs. Rather than return random numbers directly from the hardware, the code now mixes hardware random data into the kernel's entropy pool and generates random numbers from there. His reasoning is that using hardware random numbers directly requires placing a lot of trust in the manufacturer:
Mixing hardware random data into the entropy pool helps to mitigate that threat. The first time this patch came around, Linus rejected it, saying "It would be a total PR disaster for Intel, so they have huge incentives to be trustworthy." That opinion was not universally shared, though, and the patch remains in the current set. Chances are it will be merged in its current form.
An important part of the plan, though, is to get these patches into the stable updates despite their size. Then, with luck, device manufacturers will pick them up relatively quickly and stop shipping systems with a known weakness. Even better would be, as Ted suggested, to make changes at the user-space levels as well. For example, delaying key generation long enough to let some randomness accumulate should improve the situation even more. But making things better at the kernel level is an important start.
TCP small queues
The "bufferbloat" problem is the result of excessive buffering in the network stack; it leads to long latencies and poor reliability in the network as a whole. Fixing it is a matter of buffering less data in each system between any two endpoints—a task that sounds simple, but proves to be more challenging than one might expect. It turns out that buffering can show up in many surprising places in the networking stack; tracking all of these places down and fixing them is not always easy.A number of bloat-fighting changes have gone into the kernel over the last year. The CoDel queue management algorithm works to prevent packets from building up in router queues over time. At a much lower level, byte queue limits put a cap on the amount of data that can be waiting to go out a specific network interface. Byte queue limits work only at the device queue level, though, while the networking stack has other places—such as the queueing discipline level—where buffering can happen. So there would be value in an implementation that could limit buffering at levels above the device queue.
Eric Dumazet's TCP small queues patch looks like it should be able to fill at least part of that gap. It limits the amount of data that can be queued for transmission by any given socket regardless of where the data is queued, so it shouldn't be fooled by buffers lurking in the queueing, traffic control, or netfilter code. That limit is set by a new sysctl knob found at:
/proc/sys/net/ipv4/tcp_limit_output_bytes
The default value of this limit is 128KB; it could be set lower on systems where latency is the primary concern.
The networking stack already tracks the amount of data waiting to be transmitted through any given socket; that value lives in the sk_wmem_alloc field of struct sock. So applying a limit is relatively easy; tcp_write_xmit() need only look to see if sk_wmem_alloc is above the limit. If that is the case, the socket is marked as being throttled and no more packets are queued.
The harder part is figuring out when some space opens up and it is possible to add more packets to the queue. The time when queue space becomes free is when a queued packet is freed. So Eric's patch overrides the normal struct sk_buff destructor when an output limit is in effect; the new destructor can check to see whether it is time to queue more data for the relevant socket. The only problem is that this destructor can be called from deep within the network stack with important locks already held, so it cannot queue new data directly. So Eric had to add a new tasklet to do the actual job of queuing new packets.
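A rough conceptual sketch of that logic (not the actual patch; only the names mentioned above are taken from it) might read:

/* In tcp_write_xmit(), before queuing another segment: */
if (atomic_read(&sk->sk_wmem_alloc) > limit) {   /* 'limit' derived from tcp_limit_output_bytes */
    /* mark the socket as throttled and stop queuing; the overridden skb
       destructor later schedules a tasklet to resume transmission once
       enough of the queued data has been freed */
    break;
}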
It seems that the patch is having the intended result:
He also ran some tests over a 10Gb link and was able to get full wire speed, even with a relatively small output limit.
There are some outstanding questions, still. For example, Tom Herbert asked about how this mechanism interacts with more complex queuing disciplines; that question will take more time and experimentation to answer. Tom also suggested that the limit could be made dynamic and tied to the lower-level byte queue limits. Still, the patch seems like an obvious win, so it has already been pulled into the net-next tree for the 3.6 kernel. The details can be worked out later, and the feature can always be turned off by default if problems emerge during the 3.6 development cycle.
Kernel configuration for distributions
Configuring a kernel was once a fairly straightforward process, only requiring knowledge of what hardware needs to be supported. Over time, things have gotten more complex in general, but distributions have added their own sets of dependencies on specific kernel features—dependencies that can be difficult for regular users to figure out. That led Linus Torvalds to put out an RFC proposal to add distribution-specific kernel configuration options.
The problem stems from distributions' user space needing certain configuration options enabled in order to function correctly. Things like tmpfs and devtmpfs support, control groups, security options (e.g. SELinux, AppArmor), and even raw netfilter table support were listed by Torvalds as "support infrastructure" options that are required by various distributions. But, in addition to being hard to figure out, those options tend to change over time, so a configuration that worked for Fedora N may not work for Fedora N+1. The resulting problems can be hard to find, as Torvalds pointed out: "There's been several times when I started with my old minimal config, and the resulting kernel would boot, but something wouldn't quite work right, and it can be very subtle indeed."
So, he suggested adding distribution-specific Kconfig files:
There are other ways to get there, of course, but they leave something to be desired, Torvalds said. Copying the distribution config file would work, but would bring along a bunch of extra options that aren't really necessary for the proper operation of the distribution. Using make localmodconfig (which selects all of the options from the running kernel) suffers from much the same problem, he said. The ultimate goal is to have more people able to build kernels:
In general, the idea was met with approval on linux-kernel. There were concerns about how the distribution-specific files would be maintained, and that sometimes they might get out of sync with the distribution's requirements. Dave Jones noted that he sometimes gets blindsided by Fedora kernel requirements (and he is the Fedora kernel maintainer).
Torvalds is pretty explicitly not looking for a perfect solution, however,
just one that is better: "even a 'educated guess' config file is
better than what we have now". In that message, he outlines two requirements that he
sees for the feature. The first is that each configuration option that is
selected for a particular distribution version come with a comment
explaining why it is needed. The second is that the configuration options
be the minimum required to make the system function properly—not that
it "grow to contain all the options just
because somebody decided to just add random things until things worked".
Commenting the options may be difficult even for those who work directly on distribution kernels, though. Ben Hutchings (who maintains the Debian
kernel) pointed out that he sometimes does
not know the reason that a particular option is needed, particularly at
some later point: "just because an option
was requested and enabled to support some bit of userland, doesn't mean
I know what's using or depending on it now".
Other kinds of configuration options are possible, of course. In his original message, Torvalds mentioned configurations for "common platforms", such as a "modern PC laptop" that would choose options typically required for those (USB storage, FAT/VFAT, power management, etc.). He specifically said that platform configuration should be considered an entirely separate feature from the distribution idea.
KVM (and other virtualization) users were also interested in creating an
option that
would select all of the drivers and other options needed for those kernels.
Currently "you need to hunt through 30+ different menus in order to find
what you need to run in a basic KVM virtual machine", as Trond
Myklebust put it. There was a lot of
discussion (and much agreement) on the need for better configuration
options for virtualization, but some of that got rather far afield from
Torvalds's original proposal.
Unsurprisingly, kernel developers started thinking about how they could use the feature. There was concern that choosing a particular distribution and its dependencies would make it harder for kernel developers to further customize the configuration. David Lang had some specific complaints about the approach suggested in the RFC, noting that it would be hard to choose a Fedora kernel without getting SELinux, for example. He was also concerned about the amount of churn these defconfig-like files might cause (referencing the movement to reduce the number of defconfigs in the ARM tree). But Torvalds made it clear that Lang and other kernel hackers are not the target of the feature:
Don't complicate the issue by bringing up some totally unrelated question. Don't derail a useful feature for the 99% because you're not in it.
There may be ways to satisfy both camps—Lang seemed to think so anyway—but until someone actually posts some code, it's hard to say. While there was general agreement that the feature would be useful, so far no one has stepped up to do the work. Whether Torvalds plans to do that or was just floating a trial balloon and hoping someone else would run with it is unclear, but it does seem like a feature worth having.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Left by Rawhide
Your editor has long made an effort to keep a variety of Linux distributions around; this is done both to avoid seeming to endorse any particular distribution and to have a better idea of what the various distributors are up to. The main desktop system, though, has been running Fedora Rawhide for many years. That particular era appears to be coming to an end; it is worthwhile to look at why that is happening and how it reflects on how the Fedora project operates.
Rawhide, as it happens, is older than Fedora; it was originally launched in August, 1998—almost exactly 14 years ago. Its purpose was to give Red Hat Linux users a chance to test out releases ahead of time and report bugs; it could also have been seen as an attempt to attract users who would otherwise lean toward a distribution like Debian unstable. Rawhide was not a continuously updated distribution; it got occasional "releases" on a schedule determined by Red Hat. One could argue that Fedora itself now plays the role that Red Hat had originally envisioned for Rawhide. But Rawhide persists for those who find Fedora releases to be far too stable, boring, and predictable.
The Rawhide distribution does provide occasional surprises, to the point that any rational person should almost certainly not consider running it on a machine that is needed for any sort of real work. But, at its best, Rawhide is an ideal tool for LWN editors, a group that has not often been accused of being overly rational. Running Rawhide provides a front-seat view into what the development community is up to; fresh software shows up there almost every day. And it can be quite fresh; Fedora developers will often drop beta-level software into Rawhide with the idea of helping to stabilize it before it shows up in finished form as part of a future Fedora release. With Rawhide, you can experience future software releases while almost never having to figure out how to build some complex project from source.
Rawhide also helps one keep one's system problem diagnosis and repair skills up to date—usually at times when one would prefer not to need to exercise such skills. But that's just part of the game.
In the early days of Fedora, Rawhide operated in a manner similar to Debian unstable, but with a shorter release cycle. When a given Fedora release hit feature freeze, Rawhide would freeze and the flow of scary new packages into the distribution would stop. Except, of course, when somebody put something badly broken in anyway, just to make sure everybody was still awake. While the Fedora release stabilized, developers would accumulate lots of new stuff for the next release; it would all hit the Rawhide repository shortly after the stable release was made. One quickly learned to be conservative about Rawhide updates during the immediate post-release period; things would often be badly broken. So it seemed to many that Rawhide was a little too raw during parts of the cycle while being too frozen and boring at other times.
Sometime around 2009, the project came up with the "no frozen Rawhide" idea. The concept was simple: rather than stabilize Fedora releases in the Rawhide repository, each stable release would be branched off Rawhide around feature-freeze time. So Rawhide could continue forward in its full rawness while the upcoming release stabilized on a separate track. It was meant to be the best of both worlds: the development distribution could continue to advance at full speed without interfering with (or getting interference from) the upcoming release. It may be exactly that, but this decision has changed the nature of the Rawhide distribution in fundamental ways.
In May, 2011, Matthew Miller asked the fedora-devel list: "is Rawhide supposed to be useful?" He had been struggling with a problem that had bitten your editor as well: the X server would crash on startup, leaving the system without a graphical display. The fact that Rawhide broke in such a fundamental way was not particularly surprising; Rawhide is supposed to break in horrifying ways occasionally. The real problem is that Rawhide stayed broken for a number of weeks; the responsible developer, it seems, had simply forgotten about the problem. Said developer had clearly not been running Rawhide on his systems; this was the sort of problem that tended to make itself hard to forget for people actually trying to use the software.
So your editor asked: could it be that almost nobody is actually running
Rawhide anymore? The fact that it could be unusably broken for weeks
without an uproar suggested that the actual user community was quite small.
One answer that came back read: "In
the week before F15 change freeze, are you really surprised that nobody's
running the F16 dumping ground?" At various times your editor has,
in response to Rawhide bug reports, been told that running Rawhide is a bad
idea (example, another example, yet another example). There seems to be a clear message
that, not only are few people running Rawhide, but nobody is really even
supposed to be running it.
The new scheme shows its effects in other ways as well. Bug fixes can be slow to make it into Rawhide, even after the bug has been fixed in the current release branch. Occasionally, the "stable" branch has significantly newer software than Rawhide does; Rawhide can become a sort of stale backwater at times. It is not surprising that Fedora developers are strongly focused on doing a proper job with the stable release; that bodes well for the project as a whole. But this focus has come at the expense of the Rawhide branch, which is now seen, by some developers at least, as a "dumping ground."
Recently, your editor applied an update that brought about the familiar "GNOME just forgot all your settings" pathology, combined with the apparent loss of the ability to fix those settings. It was necessary to return to xmodmap commands to put the control key where $DEITY (in the form of the DEC VT100 designers) meant it to be, for example. Some time had passed before this problem was discovered, so the obvious first step was to update again, get current, and see if the problem had gone away. Alas, that was just when Rawhide exploded in a fairly spectacular fashion, with an update leaving the system corrupted and unable to boot. Not exactly the fix that had been hoped for. Fortunately, many years of experience have taught the value of exceptionally good backups, but the episode as a whole was not fun.
But what was really not fun was the ensuing discussion. Chuck Forsberg made the reasonable-sounding suggestion that perhaps developers could be bothered to see if their packages actually work before putting them into Rawhide. Adam Williamson responded:
This, in your editor's eyes, is not the description of a distribution that is actually meant to be used by real people.
The interesting thing is that Fedora developers seem to be mostly happy with how Rawhide is working. It gives them a place to stage longer-term changes and see how they play with the rest of the system. Problems can often be found early in the process so that the next Fedora development cycle can start in at least a semi-stable condition. By looking at Rawhide occasionally, developers can get a sense for what their colleagues are up to and what they may have to cope with in the future.
In other words, Rawhide seems to have evolved into a sort of distribution-level equivalent to the kernel's linux-next tree. Developers put future stuff into it freely, stand back, and watch how the monster they have just created behaves for a little while. But it is a rare developer indeed who actually does real work with linux-next kernels or tries to develop against them. Producing kernels that people actually use is not the purpose of linux-next, and, it seems, producing a usable distribution is not Rawhide's purpose.
This article was meant to be a fierce rant on how the Fedora developers should never have had the temerity to produce a development distribution that fails to meet your editor's specific needs. But everybody who read it felt the need to point out that, actually, the Fedora project is not beholden to those needs. If the current form of Rawhide better suits the project's needs and leads to better releases, then changing Rawhide was the right thing for the project to do.
Your editor recognizes that, and would like to express his gratitude for years of fun Rawhide roller coaster rides. But it also seems like time to move on to something else that better suits current needs. What the next distribution will be has yet to be decided, though. One could just follow the Fedora release branches and get something similar to old-style Rawhide with less post-release mess, but perhaps it's time for a return to something Debian-like or to go a little further afield. However things turn out, it should be fun finding a new distribution to get grumpy about.
Brief items
Distribution quote of the week
openSUSE 12.2 RC1 available for testing
The first release candidate for openSUSE's next release, 12.2, is available. This release updates KDE to 4.8.4, but the distribution declined many requests to package 4.9.0, citing stability reasons. Among the other improvements noted, "a lot of systemd fixes came in, including a crash and memleak fix when rotating journals. Many packages now include systemd unit files natively, so these were removed from the systemd package itself". The final 12.2 release is targeted at mid-September.
Distribution News
Debian GNU/Linux
Bits of Debian Med team: report from LSM, Geneva 2012
Andreas Tille reports on various talks at the Libre Software Meeting (LSM) of interest to the Debian Med project. Topics include Medical imaging using Debian, a Debian Med packaging workshop, Integration of VistA into Debian, and more.
DebConf makes in-roads in Central America
Neil McGovern has a report on this year's DebConf in Managua, Nicaragua. "The conference brought together around 200 attendees from 32 countries, and helped many people make their first steps in contributing to Debian, including a large number of enthusiastic new volunteers from countries in Central America."
Gentoo Linux
Gentoo Council Elections Results for term 2012/2013
The results are available for the Gentoo Council Elections. The winners are Donnie Berkholz (dberkholz), Fabian Groffen (grobian), Tony Vroon (chainsaw), Tomas Chvatal (scarabeus), Ulrich Müller (ulm), Petteri Räty (betelgeuse) and William Hubbs (williamh).
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (July 12)
- DistroWatch Weekly, Issue 465 (July 16)
- Maemo Weekly News (July 16)
- Ubuntu Weekly Newsletter, Issue 274 (July 15)
31 Flavors of Linux (OStatic)
Susan Linton reports that entrepreneur Todd Robinson is planning on releasing a complete desktop system every day in August. "'I intend to demonstrate the huge advantages of using open source (shared knowledge) solutions in real-world situations by producing a complete desktop operating system each and every day during the month of August 2012.'" The results of his experiment will be presented at Ohio Linux Fest.
Page editor: Rebecca Sobol
Development
John the Ripper
John the Ripper (JtR) is a password-cracking utility developed at Openwall. The recently-released 1.7.9-jumbo-6 version lands a number of important features, such as the ability to unlock RAR, ODF, and KeePass files, the ability to crack Mozilla master passwords, and the ability to speed up cracking by using GPUs — for some, but not all, tasks.
Get crackin'
Despite the heavy dose of crypto-speak in the documentation, in practice JtR is a straightforward-to-use tool with which you can recover lost passwords, open locked files, or test users' password strength from the command line — recovering a password can be as simple as running:
john thepasswdfile
Of course, there is quite a bit going on behind the scenes in that scenario. For
starters, it is important to remember that JtR is built for recovering
passwords for which the encryption algorithm has not been broken, so
it is in effect a brute-force tool that tries every possibility as
quickly as possible. Such an approach can be hard on one's CPU, and also
on one's time (the FAQ estimates
that a single crack for a weak password could take anywhere from one
second to one day), so JtR employs a variety of techniques to speed up
the guessing process. It can use word lists (with the addition of the
--wordlist=FILE switch), it can search probabilistically
(i.e., trying more likely combinations of characters first, with the
--incremental switch), and it can tailor its guesses
based on information gleaned from the user account in question (for
which one needs superuser access, of course). It auto-saves its state
every ten minutes, and you can interrupt and resume cracking jobs to
better optimize your personal time.
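For instance, a wordlist pass followed by an incremental run, and then a look at what has been found so far, might look like the following; mypasswd is a placeholder file name, password.lst is the small wordlist shipped with JtR, and --show displays any passwords cracked so far:

    john --wordlist=password.lst mypasswd
    john --incremental mypasswd
    john --show mypasswd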
JtR can be used to crack a single password or hash value on its own, or it can be deployed against a file full of passwords, logging its successes. There are switches to automatically skip accounts that have no shell or to filter accounts by group membership, and utilities to perform related tasks such as emailing users with weak passwords. By default, JtR uses its own encryption routines when cracking a password, but it can also call the system's crypt(3) function, which may be helpful for auditing password hash formats not yet supported by the program.
Not that there are a lot of unsupported formats; JtR tackles many different encryption and hashing algorithms — around 30 in this release. But the main program uses the same "batch cracking" methodology regardless of the underlying format being cracked. Most of the new formats are implemented as plugins, and indeed many of the additions in this latest release were contributed by the JtR community. Considerable effort is also expended on optimizing JtR's performance, which naturally involves squeezing every available advantage out of the architecture. As a result, the optimizations available vary depending on the file format and the processor.
New and improved
The latest release is named JtR 1.7.9-jumbo-6; the "jumbo" indicates that it incorporates community-contributed code. For the sake of comparison, the most recent non-jumbo release is 1.7.9, from November 2011. Openwall also sells a "pro" version of JtR for Linux and Mac OS X, which is currently at 1.7.3, and rolls in a few additional hash types, plus binary packages and a hefty multi-lingual wordlist file. From what I can tell, the community-driven jumbo-6 packages now implement most of the additional features and optimizations in the "pro" version, but of course you get no company-provided support. If compiling from source is too much of a headache, there are also community-contributed builds for Linux (32- and 64-bit), Solaris, OS X, and Android.
According to the release announcement, 1.7.9-jumbo-6 adds 40,000 additional lines of code (that is, not counting changed lines) over the previous release. New hash types supported in this version include IBM's Resource Access Control Facility (RACF), GOST, SHA512-crypt, SHA256-crypt, and several SHA-512 or SHA-256 derivatives (such as those used by DragonFly BSD, EPiServer, and Drupal 7). Several other web application password formats are on the list as well, including Django 1.4, the forum package WoltLab Burning Board 3, and the flavor of SHA1 used by LinkedIn (which reminds one of LinkedIn's recent password troubles).
Just as interesting are the "non-hash" functions, which include a number of encrypted file formats and authentication methods — specifically, message authentication codes and challenge-response protocols (which first require capturing the challenge-response packets using Wireshark or another network sniffer). New in this category are several password-storage formats (Mac OS X keychains, KeePass 1.x files, Password Safe files, and Mozilla master-password files) and general file types (OpenDocument files, Microsoft Office 2007/2010 files, and RAR archives encrypted with the -p option, which leaves metadata in plaintext). The authentication methods added include WPA-PSK, VNC, SIP, and various flavors of HMAC.
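For the encrypted file formats, the usual jumbo-tree workflow is to convert the file into a crackable hash with one of the bundled *2john helper utilities and then point john at the result. Assuming the RAR helper carries its usual rar2john name, and with secret.rar as a placeholder, that looks something like:

    rar2john secret.rar > rar.hashes
    john rar.hashes
    john --show rar.hashes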
GPU-assisted cracking
There are other "assorted enhancements" discussed in the announcement, but the most interesting is GPU-based parallel processing. There are two flavors supported: NVIDIA's Compute Unified Device Architecture (CUDA) and the cross-platform OpenCL. Not every hash or algorithm handled by JtR's normal CPU techniques has support for either, and few have support for both. Some of the GPU-assisted cracking code is marked as "inefficient" in the notes, and some have limitations on the specific graphics chips required. The notes also caution that some ATI cards can hang when running recent drivers.
As for whether or not CUDA and OpenCL result in faster password cracking, the answer at the moment is mixed. It depends largely on the hash or algorithm being cracked; for some (such as bcrypt), the project reports that running solely on the GPU is slower than running solely on the CPU. In addition, the benchmarks included in the announcement note that the GPU still loses on price/performance ratio when compared to CPUs. That may not matter if you are interested in using JtR to spy on your corporate foes, but for standard system administration tasks, it is an important factor. Yet even in those CPU-dominated circumstances, piling on the GPU in addition to the CPU should improve cracking times.
JtR already supports parallel processing with OpenMP for a far larger set of hashes and file formats. All of the new non-hash file formats supported in 1.7.9-jumbo-6 support OpenMP. The new release also includes many new SIMD CPU optimizations, for SSE, XOP, AVX, and even MMX. As a result, sorting out which options to use on which task may be a complicated affair; fortunately, when several days of processing time may be required, a few minutes of research is comparatively small potatoes.
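For builds compiled with OpenMP support, the thread count can be controlled through the standard OpenMP environment variable; for example, again with a placeholder file name:

    OMP_NUM_THREADS=4 john mypasswd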
Openwall's Alexander Peslyak (who goes by the moniker "Solar
Designer") wrote in the announcement that the GPU support "is just
a development milestone, albeit a desirable one" for the time
being, and that further optimization in future releases will improve
its performance. GPU support is not a silver bullet, though. Like any
task, password-cracking will always have bottlenecks — in JtR's
case, having the main process generate and distribute the candidate
passwords is frequently a bottleneck that GPUs cannot overcome. But
as Peslyak wrote
in 2011, even the question of whether or not it helps to move the
candidate-generation to the GPU depends largely on the algorithm or
hash. Password cracking "is not so much about cryptography as it is about optimization of algorithms and code", he said.
In that context, then, being able to make use of GPUs that would otherwise sit idle is an advantage worth exploiting, even if it will not reduce the task to triviality.
Since I do not manage multi-user machines, it is difficult to weigh in on JtR's password-auditing features. But for password- or file-cracking, JtR is well documented and simple to get started with, which is about all one could ask for. Password recovery falls into the category of "tools you hope you will never need," and when you find yourself recovering a password, you are not likely to enjoy the process. At least JtR makes it relatively painless — for you, although maybe not for your hardware.
Brief items
Quotes of the week
eGenix PyRun - One file Python Runtime 1.0.0
EGenix has announced the release of its one-file Python interpreter PyRun, which is designed to provide "an almost-complete Python standard library" in a relocatable binary that does not demand system-wide installation. "Compared to a regular Python installation of typically 100MB on disk, this makes eGenix PyRun ideal for applications and scripts that need to be distributed to many target machines, client installations or customers."
Firebug 1.10.0 released
Version 1.10.0 of the Firebug web development tool has been released. Information on new features can be found on this page; they include a new cookie manager, command editor syntax highlighting, autocompletion, CSS style tracing, and more. "Firebug doesn’t slow down Firefox start-up time anymore! It’s loaded as soon as the user actually needs it for the first time. Only the Firebug start-button and menu is loaded at the start up time."
Firefox 14 is now available
Firefox 14 has been released. As usual there are new features and lots of bug fixes, including security bugs, in this release. The release notes have the details. There are also release notes for Firefox mobile.
Lantz: automation and instrumentation in Python
Herman Grecco has released Lantz, a Python library for controlling and automating laboratory instruments (test equipment, sensors, signal generators, etc.). The code contains Qt4 hooks, and is designed to replace "Domain Specific Languages (DSL) like LabVIEW and MatLab Instrumentation Toolbox."
Redphone released as open source
The Redphone encrypted voice-over-IP application for Android has been released under the GPLv3 license. "As with TextSecure, we hope that making RedPhone OSS will enable access to secure communication for even more people around the world, with an even larger number of developers contributing to make it a great product."
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (July 17)
- What's cooking in git.git (July 13)
- Haskell Weekly News (July 11)
- Perl Weekly (July 16)
- PostgreSQL Weekly News (July 15)
- Ruby Weekly (July 12)
- Tahoe-LAFS Weekly News (July 16)
Berkholz: How to recruit open-source contributors
Gentoo's Donnie Berkholz has written a treatise on the methods the distribution uses to turn Google Summer of Code students into regular contributors, claiming an increase from 20% to 65%. "In my view (and therefore Gentoo’s view), the code produced during someone’s initial summer of work tends to serve its best purpose as inculcation to a community and its standards, rather than as useful code in itself. We regard that code as potentially throwaway work that is more of an experimentation than something on Gentoo’s critical path."
Jones: Bugzilla 4.2 – what’s new – searching
On his blog, Byron Jones explains the new features of Bugzilla 4.2, focusing on the revisions to search functionality: "a major change between bugzilla 4.0 and bugzilla 4.2 comes in the form of changes to searching. the searching implementation in 4.2 was rewritten from scratch, removing some seriously convoluted code and adding a large amount of test cases." The changes include search result limiting, changes to relative time operators, and more consistent complex queries.
Page editor: Nathan Willis
Announcements
Brief items
An overview of small Linux systems
Remy van Elst has put together a list of small Linux systems as a sort of reference for those looking for something to play with. "You might have heard of the Raspberry Pi, or the Cotton Candy, or the Snowball. Those are, besides nice pi, candy and snow, also small Linux pc’s. Most of them have an ARM chip, a small amount of memory and run some [form] of Linux. This page will provide an overview of what is on the market, specs, an image, and links to the boards."
OSI Announces Individual Membership
The Open Source Initiative (OSI) has announced that it is accepting applications for Individual Membership. "The new Individual Membership category allows individuals who support the mission and work of the OSI to join discussions about that work, to be represented in the evolving governance of the OSI, and to spin up task-focused Working Groups to tackle open source community needs. Individual Members are asked to make a tax-deductible donation to support the mission of OSI."
Articles of interest
How free is my phone? (The H)
The H examines free software in handsets at the baseband processor level. "There are numerous hurdles that must be overcome before a practical open source baseband firmware is a reality. Perhaps the largest of these is concerned with gaining GSM type approval for handsets using the firmware, without which use with public networks is probably illegal in most parts of the world, or at least is a violation of a network's terms of service. And it's questionable whether a handset would ever gain approval if the baseband firmware can be modified at will."
Hardware Hacks: The Raspberry Pi is everywhere (The H)
The H has a new section on the uses of open source software on open hardware. The first article is about Raspberry Pi. "FishPi – Developer Greg Holloway is building an unmanned, autonomous boat controlled by a Raspberry Pi. The solar powered FishPi is being designed with the goal of being able to cross the Atlantic Ocean all by itself. At the moment, Holloway is building a 20-inch long proof-of-concept vehicle and he has not yet tested it for actual seaworthiness, but the plans are to eventually produce a kit that can be sold to interested parties. Holloway expects the finished product to be able to sustain long-term operations and perform observations and data logging, aided by two-way satellite communication."
Valve: Steam’d Penguins
Valve Software, the company behind the Steam game distribution platform, has formed a Linux team and the team has a new weblog. From the first post: "For some time, Gabe has been interested in the possibility of moving Steam and the Source game engine to Linux. At the time, the company was already using Linux by supporting Linux-based servers for Source-based games and also by maintaining several internal servers (running a 64-bit version of Ubuntu server) for various projects. In 2011, based on the success of those efforts and conversations in the hallway, we decided to take the next step and form a new team. At that time, the team only consisted of a few people whose main purpose was investigating the possibility of moving the Steam client and Left 4 Dead 2 over to Ubuntu." There are plans to support other distributions in the future.
New Books
Ubuntu Made Easy--New from No Starch Press
No Starch Press has released "Ubuntu Made Easy" by Rickford Grant and Phil Bull.
Calls for Presentations
Embedded Linux Conference Call for Presentations
The Embedded Linux Conference (ELC) will take place November 5-7, 2012 in Barcelona, Spain. ELC is co-located with LinuxCon Europe. The CfP deadline is August 1.
Upcoming Events
Speakers set for Texas Linux Fest 2012
The Texas Linux Fest 2012 will take place August 3-4 in San Antonio, Texas. The session speakers have been announced and registration is open. "Topics cover nearly all aspects of free/open source software -- ranging from security to the cloud to tablets to file systems."
Events: July 19, 2012 to September 17, 2012
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| July 16–July 20 | OSCON | Portland, OR, USA |
| July 26–July 29 | GNOME Users And Developers European Conference | A Coruña, Spain |
| August 3–August 4 | Texas Linux Fest | San Antonio, TX, USA |
| August 8–August 10 | 21st USENIX Security Symposium | Bellevue, WA, USA |
| August 18–August 19 | PyCon Australia 2012 | Hobart, Tasmania |
| August 20–August 22 | YAPC::Europe 2012 in Frankfurt am Main | Frankfurt/Main, Germany |
| August 20–August 21 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
| August 25 | Debian Day 2012 Costa Rica | San José, Costa Rica |
| August 27–August 28 | XenSummit North America 2012 | San Diego, CA, USA |
| August 27–August 28 | GStreamer conference | San Diego, CA, USA |
| August 27–August 29 | Kernel Summit | San Diego, CA, USA |
| August 28–August 30 | Ubuntu Developer Week | IRC |
| August 29–August 31 | 2012 Linux Plumbers Conference | San Diego, CA, USA |
| August 29–August 31 | LinuxCon North America | San Diego, CA, USA |
| August 30–August 31 | Linux Security Summit | San Diego, CA, USA |
| August 31–September 2 | Electromagnetic Field | Milton Keynes, UK |
| September 1–September 2 | Kiwi PyCon 2012 | Dunedin, New Zealand |
| September 1–September 2 | VideoLAN Dev Days 2012 | Paris, France |
| September 1 | Panel Discussion Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 3–September 8 | DjangoCon US | Washington, DC, USA |
| September 3–September 4 | Foundations of Open Media Standards and Software | Paris, France |
| September 4–September 5 | Magnolia Conference 2012 | Basel, Switzerland |
| September 8–September 9 | Hardening Server Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 10–September 13 | International Conference on Open Source Systems | Hammamet, Tunisia |
| September 14–September 16 | Debian Bug Squashing Party | Berlin, Germany |
| September 14–September 21 | Debian FTPMaster sprint | Fulda, Germany |
| September 14–September 16 | KPLI Meeting Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 15–September 16 | Bitcoin Conference | London, UK |
| September 15–September 16 | PyTexas 2012 | College Station, TX, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
