Weekly Edition for May 12, 2011

A brief experiment with PyPy

By Jonathan Corbet
May 11, 2011
While one might ordinarily think of the PyPy project as an experiment in implementing the Python runtime in Python itself, there is really more to it than that. PyPy is, in a sense, a toolbox for the creation of just-in-time compilers for dynamic languages; Python is just the start - but it's an interesting start. It has been almost exactly one year since LWN first looked at PyPy and a few weeks since the 1.5 release, so the time seemed right to actually play with this tool a bit. The results were somewhat eye-opening.

LWN uses a lot of tools written in Python; one of them is the gitdm data miner, which is used to generate kernel development statistics. It is a simple program which reads the output of "git log" and generates a big in-memory data structure reflecting the relationships between developers, their employers, and the patches they are somehow associated with. Very little of its work happens outside the Python interpreter itself, and it makes no use of extension modules written in C. These features make gitdm a natural first test for PyPy; there is little to trip things up.

The test was to stash the git log output from the 2.6.36 kernel release through the present - some 31,000 changes - in a file on a local SSD. The file, while large, should still fit in memory with nothing else running; I/O effects should, thus, not figure into the results. Gitdm was run on the file using both the CPython 2.7.1 interpreter and PyPy 1.5.

When switching to an entirely different runtime for a non-trivial program, it is natural to expect at least one glitch. In this case, there were none; gitdm ran without complaint and produced identical output. There was one significant difference, though: while the CPython runs took an average of about 63 seconds, the PyPy runs completed in about 21 seconds. In other words, for the cost of changing the "#!" line at the top of the program, the run time was cut to one third of its previous value. One might conclude that the effort was justified; plans are to run gitdm under PyPy from here on out.

To dig just a little deeper, the perf tool was used to generate a few statistics of the differing runs:

                  CPython     PyPy
    Cycles           124B      42B
    Cache misses      14M      45M
    Syscalls       55,000   28,000

As would be expected from the previous result, running with CPython took about three times as many processor cycles as running with PyPy. On the other hand, CPython reliably incurred fewer than one third as many cache misses; the reason is not obvious. Somehow, the code generated by the PyPy JIT produces more widely spread-out memory references; that may be related to garbage-collection strategy: CPython uses reference counting, which can improve cache locality, while PyPy does not.

One other interesting thing to note is that PyPy only made half as many system calls. That called for some investigation. Since gitdm is just reading data and cranking on it, almost every system call it makes is read(). Sure enough, the CPython runtime was issuing twice as many read() calls. Understanding why would require digging into the code; it could be as simple as PyPy using larger buffers in its file I/O implementation.
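The buffering hypothesis is easy to illustrate. The sketch below is not gitdm's actual I/O code; the file, sizes, and buffer values are invented. With an unbuffered Python file object, each read maps to roughly one read() system call, so larger buffers mean proportionally fewer calls:

```python
import os
import tempfile

# Create a ~1 MB scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1_000_000)
    path = f.name

def count_reads(path, bufsize):
    """Read the whole file in bufsize chunks; return the number of reads.

    With buffering=0, each f.read() is one read() system call on a
    regular file, so this count approximates what strace would report.
    """
    n = 0
    with open(path, "rb", buffering=0) as f:
        while f.read(bufsize):
            n += 1
    return n

small = count_reads(path, 4096)    # many small reads
large = count_reads(path, 65536)   # far fewer large reads
print(small, large)
os.remove(path)
```

Doubling the buffer size roughly halves the number of read() calls for the same input, which would be one plausible explanation for the two-to-one difference in syscall counts between the two runtimes.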

Given results like this, one might well wonder why PyPy is not much more widely used. There may be numerous reasons, including a simple lack of awareness of PyPy among Python developers and users of their programs. But the biggest issue may be extension modules. Most non-trivial Python programs will use one or more modules which have been written in C for performance reasons, or because it's simply not possible to provide the required functionality in pure Python. These modules do not just move over to PyPy the way Python code does. There is a short list of modules supported by PyPy, but it's insufficient for many programs.

Fixing this problem would seem to be one of the most urgent tasks for the PyPy developers if they want to increase their user base. In other ways, PyPy is ready for prime time; it implements the (Python 2.x) language faithfully, and it is fast. With better support for extensions, PyPy could easily become the interpreter of choice for a lot of Python programs. It is a nice piece of work.


Scale Fail (part 1)

May 6, 2011

This article was contributed by Josh Berkus

Let me tell you a secret. I don't fix databases. I fix applications.

Companies hire me to "fix the database" because they think it's the source of their performance and downtime problems. This is very rarely the case. Failure to scale is almost always the result of poor management decisions — often a series of them. In fact, these anti-scaling decisions are so often repeated that they have become anti-patterns.

I did a little talk about these anti-patterns at the last MySQL Conference and Expo. Go watch it and then come on back. Now that you've seen the five-minute version (and hopefully laughed at it), you're ready for some less sarcastic detail which explains how to recognize these anti-patterns and how to avoid them.


Trendiness

"Now, why are you migrating databases? You haven't had a downtime in three months, and we have a plan for the next two years of growth. A migration will cause outages and chaos."

"Well ... our CTO is the only one at the weekly CTO's lunch who uses PostgreSQL. The other CTOs have been teasing him about it."

Does this sound like your CTO? It's a real conversation I had. It also describes more technical executives than I care to think about: more concerned with their personal image and career than they are with whether or not the site stays up or the company stays in business. If you start hearing any of the following words in your infrastructure meetings, you know you're in for some serious overtime: "hip", "hot", "cutting-edge", "latest tech", or "cool kids". References to magazine surveys or industry trends articles are also a bad sign.

Scaling an application is all about management of resources and administrative repeatability. This means using technology which your staff is extremely familiar with and which has been tested and proven to be reliable — and is designed to do the thing you want it to do. Hot new features are less important than consistent uptime without constant attention. Unfortunately, web technology usually makes big news while it's still brand new, which also means poorly documented, unstable, unable to integrate with other components, and full of bugs.

There's also another kind of trendiness to watch out for: the one which says, "If Google or Facebook does it, it must be the right choice." First, what's the right choice for them may not be the right choice for you, unless your applications and platform are very similar to theirs.

Second, not everything that Google and Facebook did with their infrastructures is something they would do again if they had to start over. Like everyone else, the top internet companies make bad decisions and get stuck with technology which is painful to use, but even more painful to migrate away from. So if you're going to copy something "the big boys" do, make sure you ask their staff what they think of that technology first.

No metrics

"Have we actually checked the network latency?"
"I'm sure the problem is HBase."
"Yes, but have we checked?"
"I told you, we don't need to check. The problem is always HBase."
"Humor me."
"Whatever. Hmmmmmm ... oh! I think something's wrong with the network ..."

Scaling an application is an arithmetic exercise. If one user consumes X amount of CPU time on the web server, how many web servers do you need to support 100,000 simultaneous users? If the database is growing at Y per day, and Z% of the data is "active", how long until the active data outgrows RAM?

Clearly, you cannot do any of this kind of estimation without at least approximate values for X, Y, and Z. If you're planning to scale, you should be instrumenting every piece of your application stack, from the storage hardware to the JavaScript. The thing you forget to monitor is the one which will most likely bring down your whole site. Most software these days has some way to monitor its performance, and software that doesn't is software you should probably avoid.
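The arithmetic above can be sketched directly. Every number here is hypothetical, purely to show the shape of the calculation:

```python
import math

# X: CPU-seconds each simultaneous user consumes per second (assumed)
cpu_sec_per_user = 0.015
cores_per_server = 8
users = 100_000

# How many web servers for 100,000 simultaneous users?
web_servers = math.ceil(users * cpu_sec_per_user / cores_per_server)

growth_gb_per_day = 2.0   # Y: database growth per day (assumed)
active_fraction = 0.20    # Z: share of the data that is "active" (assumed)
ram_gb = 64               # RAM available for the active set
db_gb = 100               # current database size

# The active set outgrows RAM once db_gb * Z exceeds ram_gb.
days_left = (ram_gb / active_fraction - db_gb) / growth_gb_per_day

print(web_servers, days_left)   # roughly 188 servers and ~110 days
```

The point is not these particular numbers but that, without measured values for X, Y, and Z, neither result can be computed at all.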

Despite this common-sense idea, a surprising number of our clients were doing nothing more sophisticated than Nagios alerts on their hardware. This meant that when a response-time problem or outage occurred, they had no way to diagnose what caused it, and usually ended up fixing the wrong component.

Worse, if you don't have the math for what resources your application is actually consuming, then you have no idea how many servers, and of what kind, you need in order to scale up your site. That means you will be massively overbuilding some components, while starving others, and spending twice as much money as you need to.

Given how many companies lack metrics, or ignore them, how do they make decisions? Well ...

Barn door decision making

"When I was at Amazon, we used a squid reverse proxy ..."

"Dan, you were an ad sales manager at Amazon."

In the absence of data, staff tend to troubleshoot problems according to their experience, which is usually wrong. Especially when an emergency occurs, there's a tendency to run to fix whatever broke last time. Of course, if they fixed the thing which broke last time, it's unlikely to be the cause of the current outage.

This sort of thinking gets worse when it comes time to plan for growth. I've seen plenty of IT staff purchase equipment, provision servers, configure hardware and software, and lay out networks according to what they did on their last project or even on their previous job. This means that the resources available for the current application are not at all matched to what that application needs, and either you over-provision dramatically or you go down.

Certainly you should learn from your experience. But you should learn appropriate lessons, like "don't depend on VPNs being constantly up". Don't misapply knowledge, like copying the caching strategy from a picture site to an online bank. Learning the wrong lesson is generally heralded by announcements in one or all of the following forms:

  • "when I was at name_of_previous_employer ..."
  • "when we encountered not_very_similar_problem before, we used random_software_or_technique ..."
  • "name_of_very_different_project is using random_software_or_technique, so that's what we should use."

(For non-native English speakers: "barn door" refers to the expression "closing the barn door after the horses have run away".)

Now, it's time to actually get into application design.

Single-threaded programming

"So, if I monkey-patch a common class in Rails, when do the changes affect concurrently running processes?"

"Instantly! It's like magic."

The parallel-processing frame of mind is a challenge for most developers. Here's a story I've seen a hundred times: a developer writes his code single-threaded, tests it with a single user and a single process on his own laptop, then deploys it to 200 servers, and the site goes down.

Single-threading is the enemy of scalability. Any portion of your application which blocks concurrent execution of the same code at the same time is going to limit you to the throughput of a single core on a single machine. I'm not just talking here about application code which takes a mutex, although that can be bad too. I'm talking about designs which block the entire application around waiting on one exclusively locked component.

For example, a popular beginning developer mistake is to put every single asynchronous task in a single non-forwarded queue, limiting the pace of the whole application to the rate at which messages can be pulled off that queue. Other popular mistakes are the frequently updated single-row "status" table, explicit locking of common resources, and total ignorance of which actions in one's programming language, framework, or database require exclusive locks on pages in memory.
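The structural fix for the single-queue mistake is to partition work across several independent queues, for example by hashing a task key, so that no single lock serializes the whole application. A minimal sketch with invented task names (real systems would use separate queue servers, not in-process queues):

```python
import queue
import threading

N_QUEUES = 4
queues = [queue.Queue() for _ in range(N_QUEUES)]
done = [[] for _ in range(N_QUEUES)]   # per-worker results; no shared lock

def enqueue(task_key, payload):
    # Hashing the key keeps related tasks ordered within one queue while
    # spreading unrelated tasks across all of them.
    queues[hash(task_key) % N_QUEUES].put(payload)

def worker(i):
    while True:
        item = queues[i].get()
        if item is None:        # sentinel: shut down
            break
        done[i].append(item)    # stand-in for real processing

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_QUEUES)]
for t in threads:
    t.start()

for n in range(1000):
    enqueue(f"job-{n}", n)
for q in queues:
    q.put(None)
for t in threads:
    t.join()

print(sum(len(d) for d in done))   # all 1000 tasks, spread over 4 consumers
```

Throughput now scales with the number of queues rather than being capped by one exclusively locked structure.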

One application I'm currently working on has a distributed data-processing cloud of 240 servers. However, assignment of chunks of data to servers for processing is done by a single-process daemon running on a single dispatch server, rate-limiting the whole cloud to 4,000 jobs per minute and leaving it 75% idle.

An even worse example was a popular sports web site we worked on. The site would update sports statistics by holding an exclusive lock on transactional database tables while waiting for a remote data service over the internet to respond. The client couldn't understand why adding more application servers to their infrastructure made the timeouts worse instead of better.
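The general rule behind that failure: never hold a database lock across a network round trip. A sketch of the fix, using sqlite3 and a fake remote service as stand-ins (the real site's schema and data service are unknown):

```python
import sqlite3
import time

def fetch_remote_stats():
    # Stand-in for the slow remote data service; imagine seconds, not ms.
    time.sleep(0.01)
    return ("home_team", 3)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (team TEXT PRIMARY KEY, score INTEGER)")

# Anti-pattern (don't do this): the remote call inside the transaction
# keeps the table exclusively locked for the whole network wait:
#
#   with conn:
#       team, score = fetch_remote_stats()   # lock held during the wait
#       conn.execute("INSERT OR REPLACE INTO stats VALUES (?, ?)",
#                    (team, score))

# Fix: do the slow work first, then lock only for the brief local write.
team, score = fetch_remote_stats()           # no lock held here
with conn:                                   # lock held only for the INSERT
    conn.execute("INSERT OR REPLACE INTO stats VALUES (?, ?)", (team, score))

print(conn.execute("SELECT score FROM stats WHERE team = ?",
                   (team,)).fetchone())
```

With the lock window reduced to the local write, adding application servers increases throughput instead of increasing timeouts.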

Any time you design anything for your application which is supposed to scale, ask yourself "how would this work if 100 users were doing it simultaneously? 1000? 1,000,000?" And learn a functional language or map/reduce. They're good training for parallel thinking.

Coming in part 2

I'm sure you recognized at least one of the anti-patterns above in your own company, as most of the audience at the Ignite talk did. In part two of this article, I will cover component scaling, caching, and SPoFs, as well as the problem with The Cloud.

[ Note about the author: to support his habit of hacking on the PostgreSQL database, Josh Berkus is CEO of PostgreSQL Experts Inc., a database and applications consulting company which helps clients make their PostgreSQL applications more scalable, reliable, and secure. ]


Behind the Puppet license change

May 11, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

The release of Puppet 2.7 brought one major change that has nothing to do with its actual feature list — the license was changed from the GPLv2 to the Apache License 2.0. This came as no surprise to the Puppet contributor community, but it seems as if it might be part of a trend towards more permissive licenses by companies working with open source. Luke Kanies, the founder and CEO of Puppet Labs (the company that has grown up around Puppet) says that he has no political axe to grind with the decision — it's simply a matter of reducing "friction" when it comes to Puppet adoption.

In conversations about licensing, Kanies shows little passion for the topic. But, when asked about the actual goals for Puppet, he exhibits much more interest in what Puppet (or something like Puppet) needs to accomplish — the ability to manage large-scale networks without needing to know the particulars of each device in the network.

Puppet is an "enterprise systems management platform" which started as a replacement for Cfengine. Kanies, who was a major contributor to Cfengine before starting Puppet, has a fairly modest goal for Puppet — ubiquity. "If we can get ubiquity, we can accomplish what we're trying to do... profitability is easy. What we're trying to do is [make it so] you don't need to know what OS you're running on."

According to Kanies, there was simply no good reason to remain with the GPL. The license didn't do anything specifically to address his goals for Puppet, and could actually hinder Puppet's ubiquity. Why? Kanies says that "a number of companies," and two in particular, were "quite afraid" of the GPL. One company, he says, avoids even having Puppet in its infrastructure — to the point of having a separate approval process for deploying GPL software. The other company didn't have qualms about the use of GPL software, but did have concerns about mixing GPL code with other code they ship.

It seems odd in 2011 to hear that companies still have "fears" about the GPL, given its widespread adoption and endorsement by such a diverse selection of companies — up to and including a giant like IBM. However, Kanies says that plenty of companies (or perhaps more accurately, their lawyers) have concerns about the GPL — and IBM is perhaps a poor example:

You're right that IBM is comfortable with the GPL, but there aren't many companies that can sue IBM. It's tough to scare IBM, and IBM not being afraid is not a good indicator that everyone else should not be afraid.

So what's the fear? Kanies says that it's the standard argument about the GPL being untested in the courts, along with the fact that there's disagreement and a lack of clarity about what "linking" means with regards to dynamic languages, and whether that linking creates a derivative work. For the record, Kanies points out that he does not share the same fears about the GPL — but he also does not feel particularly strongly about the GPL, and certainly not enough to keep the license if it stands in the way of Puppet adoption.

As a single event, the change of one project's license from GPL to Apache is not particularly important (outside that project, of course). However, if it's part of a larger migration away from the GPL, then it may be worth noting.

Are projects moving away from the GPL? Not in droves, but there does seem to be a tendency for companies or projects without a strong philosophical bent to choose permissive licenses like the Apache, MIT, and BSD licenses. The 451 Group pointed to some evidence last year that companies were favoring more permissive licenses like the LGPL, BSD, Apache, and Eclipse Public Licenses. In January of this year, Stephen O'Grady noted that Black Duck Software's license figures showed a decline for the GPL overall.

The GPL still seems to be the dominant license, however. Black Duck Software tracks the adoption of GPLv3 versus GPLv2. The GPLv2 has dropped to just under half of the projects it tracks (45.42%), with the GPLv3 at nearly 7%. The Apache License 2.0 is at nearly 5%, and the MIT license is at just over 8%, as is the Artistic License. According to O'Grady, this is a 4% decline for GPLv2 since August 2009 (with an increase of only 1.34% for GPLv3) and nearly a 4% increase for the MIT license.

On Contributor License Agreements

The license change should not come as a surprise to anyone in the Puppet community, though it has been greeted with some surprise in wider circles. Kanies says he has been asking the community for about two years, and has talked to "all of the major contributors" about the change. Kanies says that none of the contributors have raised a fuss, though he's gotten "one person that's said they're upset, and a couple who seem like they aren't that happy with the change and say 'I'd like to better understand this decision that you've made.'"

Since late 2009, Puppet has required a Contributor License Agreement (CLA) in order to submit code to the project. Kanies says it's similar to the Apache CLA, which basically provides the right to relicense the software any way the project sees fit.

In the case of Puppet, there seems to be little real cause for concern. Kanies provided ample time for the larger Puppet community to comment on the license change, first raising the issue in April 2009 (when he thought he might go to the Affero GPL), and announcing the planned change to the Apache license five months later. It seems that few in the Puppet community are upset by the change. Users receiving Puppet under the Apache license are essentially in the same place they were before — able to study, modify, use, and distribute Puppet freely. Contributors to Puppet may not receive the same "protections" that the GPL affords, but it seems that the contributor community is not particularly concerned about this.

The Puppet change should serve as a reminder to other developers that CLAs are in place for a reason. When giving a project or organization permission to re-license a work, contributors should assume the organization will exercise that right at some point — perhaps in a way that is inoffensive, perhaps not. Absent a guarantee in the CLA to stick with a certain class of license, it should at least be considered that the program may be re-licensed in a way that is less friendly to its user and contributor community.


Page editor: Jonathan Corbet


Guardian: Better privacy and security for Android

May 11, 2011

This article was contributed by Koen Vervloesem

With more and more of our "computing" happening on mobile devices instead of on traditional computers, securing these devices has become important. Unfortunately, most mobile platforms, including Android, are a step backward when it comes to security, privacy, and anonymity: by default, the user's files on an Android smartphone are not encrypted, instant messaging communication can be sniffed, and web browsing is not anonymous. One project that wants to do something about this — focusing on Android — is The Guardian Project.

The project describes its aim on its home page:

The Guardian Project aims to create easy to use apps, open-source firmware MODs, and customized, commercial mobile phones that can be used and deployed around the world, by any person looking to protect their communications and personal data from unjust intrusion and monitoring.

This is a fairly extensive vision. For now, the Guardian project is in its first phase: enhancing existing applications and developing new secure applications. But the ultimate aim is to customize Android at even lower levels to create a secure mass-market consumer smartphone solution, based on CyanogenMod, a popular alternative Android firmware. This requires enhancements to or replacements for the Android Application Framework, as well as adding new libraries and core security services. The kernel, Android runtime, and Dalvik virtual machine will also have to be secured, and the project is even considering securing or removing hardware drivers. However, until the project is able to create its own Android firmware, the developers recommend using CyanogenMod.

Transparent proxying through Tor


To be able to browse the web, chat, and email without being monitored, Guardian has developed the Orbot application, which brings the power of Tor to Android and is actually the official port of Tor to Android. When first started, Orbot shows a wizard explaining what the user can do with it. If the device is rooted and the firmware is updated to an iptables-capable ROM such as CyanogenMod, Orbot can transparently proxy all web traffic on port 80 (HTTP) and 443 (HTTPS) and all DNS requests, so nothing else has to be configured. The built-in browser, Firefox Mobile, and applications like Gmail, YouTube, and Google Maps use standard web traffic so are routed through Tor transparently. The wizard also allows the user to select individual applications to route their traffic through Tor.

If you don't want to root your Android device, you can only route an application's traffic through Tor if it supports an HTTP or SOCKS proxy. Orbot runs an HTTP proxy on localhost:8118 and a SOCKS 4/5 proxy on localhost:9050. For instant messaging, the Beem application (a Jabber client) supports this, as does Gibberbot. For web browsing there's the Firefox Mobile add-on ProxyMob, which exposes settings for HTTP, SOCKS, and SSL proxies and is configured by default for use with Orbot. For users still on an Android 1.x device, there's the Orweb browser.
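As an illustration of the proxy arrangement only, a Python client could be pointed at Orbot's local proxies like this. The sketch assumes Orbot is running with its default ports; it only builds the configuration and sends no traffic:

```python
import urllib.request

# Orbot's default local proxies, per the description above.
ORBOT_HTTP_PROXY = "http://127.0.0.1:8118"   # local HTTP proxy
ORBOT_SOCKS_PORT = 9050                      # SOCKS 4/5, for clients that support it

proxies = {"http": ORBOT_HTTP_PROXY, "https": ORBOT_HTTP_PROXY}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# A request made through opener.open(...) would then be routed through
# Tor, assuming Orbot is up; no request is made in this sketch.
print(sorted(proxies))
```

Any application that exposes an equivalent proxy setting can be wired up the same way, which is exactly the limitation on non-rooted devices: applications without proxy support cannot be covered.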

After the Orbot wizard is completed, the user can activate Tor by pressing the grey button, which turns green when the Tor connection is set up correctly. Orbot's settings offer a lot of options, such as automatically starting Orbot when the Android device boots, using a Tor bridge to connect to the Tor network, relaying, and enabling hidden services on the Tor network. Concerned users can always confirm that they're browsing via Tor by visiting the Tor Project's check page.

End-to-end encryption


While the Guardian developers previously recommended the Beem project for anonymous instant messaging through Tor, they are now developing their own Jabber instant messenger as a replacement for the Google Talk application: Gibberbot. It's still an alpha release, and the developers warn that there are still bugs and potentially even security flaws, but it seems promising. Gibberbot is based on code from the Google Talk IM application, but adds support for Tor (via Orbot) and end-to-end encryption (using OTR4J, a Java implementation of the Off-the-Record protocol).

The OTR protocol not only encrypts instant messages, but also attests that your correspondents are who you think they are. Moreover, the messages a user sends do not have digital signatures that can be traced to the user afterward, even though, during a conversation, the correspondent is assured that the messages are coming from the user without any modification. Another nice feature is that no previous conversation is compromised if the user's private key is lost. Of course, to chat securely with Gibberbot, a correspondent must also be using an OTR-compatible chat client, such as Gibberbot on a smartphone, or Pidgin or Adium with the OTR plugin on a computer.

The Guardian project recommends some third-party applications, which all offer encryption. The first one is the email client K-9 Mail, which supports OpenPGP encryption using Android Privacy Guard. The project also recommends two projects developed by Whisper Systems, the company of security researcher Moxie Marlinspike: RedPhone, which offers end-to-end VoIP encryption using ZRTP (at the moment it's US-only), and TextSecure, which allows users to send and store SMS messages using an encryption protocol based on OTR but designed with space efficiency in mind. Both Whisper Systems applications only work if both parties use the same application.

Developers, developers, developers

These applications are the most visible deliverable of the Guardian project, but the developers are also building libraries, tools, and source code for other developers, so that they can add security-oriented features to their own Android applications more easily. For instance, there's the Orlib library, a drop-in replacement for Android's standard Socket and HTTPClient libraries that adds Tor support to any application by routing its traffic through the HTTP and SOCKS proxies that Orbot offers.

With respect to encryption, Guardian offers SQLCipher, an extension to the SQLite database that provides transparent 256-bit AES encryption of database files. Developers who want to better protect the privacy of their users can use SQLCipher instead of Android's default SQLite library to encrypt their database files. The Guardian developers are also working on a port of LUKS (Linux Unified Key Setup) to Android. The README emphasizes that it's still alpha software and should be "used with a grain of paranoid salt", with an explanation of why it's so hard to use LUKS on Android:

While the LUKS project itself has been put through the paces on Linux desktops and servers, we are still determining the right conditions for its secure use on Android. With the many combinations of closed hardware, proprietary basebands, multitudes of kernels, firmwares and other mods, it is fairly impossible to guarantee security for any user. That said, we feel this effort is a useful public step forward in providing an increased level of protection for file storage, and exploring the limits of what we can provide as after-market software developers building open-source tools.

The Guardian developers are also collaborating with the human rights organization WITNESS to develop a secure camera application named Camera Obscura. They aim to support some common scenarios for activists, such as an easy way to remove all traces of any captured images when the phone is compromised and automatically blurring the faces of people in the background when recording a video interview of a spokesperson at a protest. There are also collaborations with the developers of the personal firewall DroidWall and with the developers of DroidTracker, an application that shares your location with your friends or colleagues. Other features that the Guardian project is working on are a remote data wipe and better physical access control. All code developed in the Guardian project can be found on GitHub.

Paranoid Androids

The Guardian project is not the only effort to secure Android phones. Apart from the ones we already mentioned because Guardian is collaborating with them or recommending them, there's also WhisperCore, a custom Android ROM created by Whisper Systems. By default, WhisperCore encrypts the phone's entire data partition, and it can optionally encrypt the phone's SD card as well. WhisperCore is closed source software (but free for individual use) and is in an early beta phase. Currently it only supports the Google Nexus One and Nexus S phones.

One of the components of WhisperCore is WhisperMonitor, a personal firewall for Android users. When enabled, it intercepts all outbound network traffic and asks the user whether the application is allowed to connect to a specific server/port combination. This way, WhisperMonitor determines egress filter rules for the firewall, giving the user complete control over what each application is able to send over the network. It also provides an interface to modify or update rules defined per application, as well as a complete connection history of all applications.

Of course there are many other small tools, each of them helping in its own domain to secure Android. For instance, the SSH Tunnel application offers an easy-to-use interface to create an SSH tunnel to the user's server in order to use an encrypted channel on an untrusted network. On a rooted phone, the application can even set up system-wide tunneling.

Much work to do

While the Guardian project and other projects to make Android more secure are still in their infancy, many of their applications are already usable for more technically-inclined people. However, if you look at the use cases they're aiming for, it's clear that there's still much work to do to create a privacy-enhanced mobile phone operating system that is consumer-ready. The core developer team is small, but they are eager to collaborate with partner organizations and they have opportunities for internships and jobs. If you're a security-conscious developer who wants to make a difference, the Guardian project is definitely a project to consider joining. But even if you don't have any developer skills, you could help by joining the project as an alpha tester.


Brief items

Security quote of the week

But it was very interesting to see some of the anti-rootkit tools not showing the dispatch table hooks that are usually pretty straightforward to identify. Also this malware would not allow an external debugger (WinDbg) to break, which was annoying.

The reason for hooks not being reported was that the memory being read by the tools was not the actual memory! The dispatch table as "seen" by the tools appeared not to be hooked—whereas in reality it was hooked. The part that made it interesting was that the memory was being read at the correct address with a mov instruction and not using some system API that could be hooked. We know of some proof-of-concept ways to achieve this, but I had not seen this behavior before from a threat in the wild.

-- Rachit Mathur on a memory forging rootkit


Exim 4.76 fixes a remote security hole

The Exim mail transfer agent suffers from a remotely exploitable format string vulnerability; the 4.76 release contains a fix. "CVE-2011-1764: a format string attack in logging DKIM information from an inbound mail may permit anyone who can send you email to cause code to be executed as the Exim run-time user. No exploit is known to exist, but we do not believe that an experienced attacker would find the exploit hard to construct." Debian has an update available; others are certainly coming.


IronBee, Community and SSL (The H)

The H interviews Ivan Ristić about the IronBee web application firewall. "Going back to my earlier comments, ModSecurity was pretty open, but I think it has a flaw which all GPLv2 programs have, which is that if you have a single entity owning the code and asking people who contribute to assign the IP of their contributions to them, you get a certain asymmetry in the community. [...] So I have good theories on why a community of developers didn't form around ModSecurity; one is the licence and the other is that the program itself is monolithic, so there was a barrier to entry there which stopped people from being able to do something useful. I want to address that too with IronBee; we've made it very modular and we are going to have good documentation, so that if you have an itch to scratch, if you have a particular problem that you need to solve, you don't have to understand the whole thing. "

Comments (10 posted)

Mozilla resists US gov't request to nuke "MafiaaFire" add-on (ars technica)

The US Department of Homeland Security (DHS) has asked Mozilla to remove the MafiaaFire Redirector Firefox add-on, ars technica reports. The article is based on a blog posting from Mozilla lawyer Harvey Anderson, where he says that Mozilla has not complied and instead asked the DHS for a legal justification. The add-on is a simple redirector for domains that were seized by the DHS for alleged copyright violations. "As for the developer of the MafiaaFire Redirector, he says that a Chrome version is coming soon and that his work shouldn't be repressed. 'Now, because my idea, which took less than a week to create—and the Chrome version 2 days—makes them walk around with egg on their face after the millions spent (it cost me less than $100), they went running to Mozilla seeking another favor,' he tells Ars. 'They did not even try to contact us. Hats off to Mozilla for sticking up to them, at first we were afraid if Mozilla would even host it due to its controversial nature but they truly backed up their open source supporting words with actions.'"

Comments (17 posted)

New vulnerabilities

cronie: privilege escalation

Package(s):cronie CVE #(s):
Created:May 6, 2011 Updated:May 11, 2011
Description: From the openSUSE advisory:

Cronie does not drop all privileges before calling sendmail.

openSUSE openSUSE-SU-2011:0452-1 Cronie 2011-05-06

Comments (none posted)

exim4: format string vulnerability

Package(s):exim4 CVE #(s):CVE-2011-1764
Created:May 9, 2011 Updated:May 18, 2011
Description: From the Exim advisory:

A format string attack in logging DKIM information from an inbound mail may permit anyone who can send you email to cause code to be executed as the Exim run-time user. No exploit is known to exist, but we do not believe that an experienced attacker would find the exploit hard to construct.

Gentoo 201401-32 exim 2014-01-27
openSUSE openSUSE-SU-2012:1404-1 exim 2012-10-27
Debian DSA-2232-1 exim4 2011-05-06
Fedora FEDORA-2011-7047 exim 2011-05-17
Fedora FEDORA-2011-7059 exim 2011-05-17
SUSE SUSE-SR:2011:009 mailman, openssl, tgt, rsync, vsftpd, libzip1/libzip-devel, otrs, libtiff, kdelibs4, libwebkit, libpython2_6-1_0, perl, pure-ftpd, collectd, vino, aaa_base, exim 2011-05-17
Ubuntu USN-1130-1 exim4 2011-05-10
openSUSE openSUSE-SU-2011:0456-1 exim 2011-05-09

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2011-1017
Created:May 6, 2011 Updated:August 12, 2011
Description: From the Ubuntu advisory:

Timo Warns discovered that the LDM disk partition handling code did not correctly handle certain values. By inserting a specially crafted disk device, a local attacker could exploit this to gain root privileges.

Oracle ELSA-2011-2037 enterprise kernel 2011-12-15
SUSE SUSE-SU-2011:1058-1 kernel 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
SUSE SUSE-SA:2011:040 kernel 2011-09-20
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
SUSE SUSE-SU-2011:0899-1 kernel 2011-08-12
SUSE SUSE-SA:2011:034 kernel 2011-08-12
Ubuntu USN-1187-1 kernel 2011-08-09
openSUSE openSUSE-SU-2011:0861-1 kernel 2011-08-02
openSUSE openSUSE-SU-2011:0860-1 kernel 2011-08-02
SUSE SUSE-SU-2011:0832-1 kernel 2011-07-25
SUSE SUSE-SA:2011:031 kernel 2011-07-25
Ubuntu USN-1168-1 linux 2011-07-15
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1161-1 linux-ec2 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
SUSE SUSE-SU-2011:0737-1 kernel 2011-07-05
SUSE SUSE-SU-2011:0711-1 kernel 2011-06-29
Ubuntu USN-1160-1 kernel 2011-06-28
Debian DSA-2264-1 linux-2.6 2011-06-18
Ubuntu USN-1146-1 kernel 2011-06-09
SUSE SUSE-SA:2011:026 kernel 2011-05-20
Ubuntu USN-1111-1 linux-source-2.6.15 2011-05-05

Comments (1 posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2011-1494 CVE-2011-1495 CVE-2011-1745 CVE-2011-1746 CVE-2011-1079
Created:May 10, 2011 Updated:September 13, 2011
Description: From the Red Hat bugzilla:

At two points in handling device ioctls via /dev/mpt2ctl, user-supplied length values are used to copy data from userspace into heap buffers without bounds checking, allowing controllable heap corruption and subsequently privilege escalation. (CVE-2011-1494, CVE-2011-1495)

Struct ca is copied from userspace. It is not checked whether the "device" field is NULL terminated. This potentially leads to BUG() inside of alloc_netdev_mqs() and/or information leak by creating a device with a name made of contents of kernel stack. (CVE-2011-1079)

pg_start is copied from userspace on AGPIOC_BIND and AGPIOC_UNBIND ioctl cmds of agp_ioctl() and passed to agpioc_bind_wrap(). As said in the comment, (pg_start + mem->page_count) may wrap in case of AGPIOC_BIND, and it is not checked at all in case of AGPIOC_UNBIND. As a result, user with sufficient privileges (usually "video" group) may generate either local DoS or privilege escalation. (CVE-2011-1745)

page_count is copied from userspace. agp_allocate_memory() tries to check whether this number is too big, but doesn't take into account the wrap case. Also agp_create_user_memory() doesn't check whether alloc_size is calculated from num_agp_pages variable without overflow. This may lead to allocation of too small buffer with following buffer overflow. (CVE-2011-1746)
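The AGP flaws described above are classic unsigned-wrap bugs: an addition in 32-bit C arithmetic silently wraps around before the range check sees it. The Python sketch below simulates 32-bit unsigned arithmetic with a mask to show why a naive check passes after a wrap, and what an overflow-safe form looks like. The function and variable names here are illustrative, not the kernel's actual identifiers.

```python
U32_MASK = 0xFFFFFFFF  # simulate 32-bit unsigned C arithmetic

def naive_check(pg_start, page_count, table_size):
    # What the buggy code effectively does: the sum wraps modulo 2**32
    # before the comparison, so a huge pg_start can sneak through.
    return ((pg_start + page_count) & U32_MASK) <= table_size

def safe_check(pg_start, page_count, table_size):
    # Overflow-safe form: rearranged so that no addition can wrap.
    return pg_start <= table_size and page_count <= table_size - pg_start

table_size = 1024                      # entries in the table (illustrative)
pg_start, page_count = 0xFFFFFFFE, 3   # the 32-bit sum wraps around to 1

print(naive_check(pg_start, page_count, table_size))  # True  (the bug)
print(safe_check(pg_start, page_count, table_size))   # False (rejected)
```

The same rearrangement ("subtract instead of add") is the standard fix for bounds checks of this shape.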

SUSE SUSE-SU-2015:0812-1 kernel 2015-04-30
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
Scientific Linux SL-kern-20111005 kernel 2011-10-05
Red Hat RHSA-2011:1350-01 kernel 2011-10-05
SUSE SUSE-SU-2011:1058-1 kernel 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
SUSE SUSE-SA:2011:040 kernel 2011-09-20
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Red Hat RHSA-2011:1253-01 kernel-rt 2011-09-12
Ubuntu USN-1189-1 kernel 2011-08-19
SUSE SUSE-SU-2011:0899-1 kernel 2011-08-12
SUSE SUSE-SA:2011:034 kernel 2011-08-12
Ubuntu USN-1187-1 kernel 2011-08-09
openSUSE openSUSE-SU-2011:0860-1 kernel 2011-08-02
Scientific Linux SL-kern-20110715 kernel 2011-07-15
SUSE SUSE-SU-2011:0832-1 kernel 2011-07-25
SUSE SUSE-SA:2011:031 kernel 2011-07-25
CentOS CESA-2011:0927 kernel 2011-07-18
Ubuntu USN-1170-1 linux 2011-07-15
Ubuntu USN-1168-1 linux 2011-07-15
Red Hat RHSA-2011:0927-01 kernel 2011-07-15
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1161-1 linux-ec2 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1183-1 kernel 2011-08-03
Ubuntu USN-1160-1 kernel 2011-06-28
Red Hat RHSA-2011:0883-01 kernel 2011-06-21
Fedora FEDORA-2011-6447 kernel 2011-05-04
Debian DSA-2264-1 linux-2.6 2011-06-18
Scientific Linux SL-kern-20110519 kernel 2011-05-19
CentOS CESA-2011:0833 kernel 2011-05-31
Red Hat RHSA-2011:0833-01 kernel 2011-05-31
Debian DSA-2240-1 linux-2.6 2011-05-24
Red Hat RHSA-2011:0500-01 kernel-rt 2011-05-10
Red Hat RHSA-2011:0498-01 kernel 2011-05-10
Red Hat RHSA-2011:0542-01 kernel 2011-05-19
Fedora FEDORA-2011-6541 kernel 2011-05-05

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel-rt CVE #(s):CVE-2011-1078 CVE-2011-1170 CVE-2011-1171 CVE-2011-1172
Created:May 11, 2011 Updated:August 19, 2011

From the Red Hat advisory:

* A missing initialization flaw in sco_sock_getsockopt_old() could allow a local, unprivileged user to cause an information leak. (CVE-2011-1078, Low)

* Missing validations of null-terminated string data structure elements in the do_replace(), compat_do_replace(), do_ipt_get_ctl(), do_ip6t_get_ctl(), and do_arpt_get_ctl() functions could allow a local user who has the CAP_NET_ADMIN capability to cause an information leak. (CVE-2011-1170, CVE-2011-1171, CVE-2011-1172, CVE-2011-1080, Low)

Oracle ELSA-2013-1645 kernel 2013-11-26
Oracle ELSA-2012-1156 kernel 2012-08-15
Scientific Linux SL-kern-20120815 kernel 2012-08-15
CentOS CESA-2012:1156 kernel 2012-08-15
Red Hat RHSA-2012:1156-01 kernel 2012-08-14
openSUSE openSUSE-SU-2012:0236-1 kernel 2012-02-09
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1189-1 kernel 2011-08-19
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1186-1 kernel 2011-08-09
SUSE SUSE-SU-2011:0832-1 kernel 2011-07-25
SUSE SUSE-SA:2011:031 kernel 2011-07-25
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Red Hat RHSA-2011:0883-01 kernel 2011-06-21
Debian DSA-2264-1 linux-2.6 2011-06-18
Scientific Linux SL-kern-20110519 kernel 2011-05-19
CentOS CESA-2011:0833 kernel 2011-05-31
Red Hat RHSA-2011:0833-01 kernel 2011-05-31
Debian DSA-2240-1 linux-2.6 2011-05-24
Red Hat RHSA-2011:0542-01 kernel 2011-05-19
Red Hat RHSA-2011:0500-01 kernel-rt 2011-05-10

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2011-0726 CVE-2011-1019 CVE-2011-1080
Created:May 11, 2011 Updated:August 19, 2011

From the Red Hat advisory:

* The start_code and end_code values in "/proc/[pid]/stat" were not protected. In certain scenarios, this flaw could be used to defeat Address Space Layout Randomization (ASLR). (CVE-2011-0726, Low)

* A flaw in dev_load() could allow a local user who has the CAP_NET_ADMIN capability to load arbitrary modules from "/lib/modules/", instead of only netdev modules. (CVE-2011-1019, Low)

* A missing validation of a null-terminated string data structure element in do_replace() could allow a local user who has the CAP_NET_ADMIN capability to cause an information leak. (CVE-2011-1080, Low)

Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2012:0236-1 kernel 2012-02-09
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
SUSE SUSE-SU-2011:1058-1 kernel 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
SUSE SUSE-SA:2011:040 kernel 2011-09-20
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1189-1 kernel 2011-08-19
SUSE SUSE-SU-2011:0899-1 kernel 2011-08-12
SUSE SUSE-SA:2011:034 kernel 2011-08-12
Ubuntu USN-1187-1 kernel 2011-08-09
SUSE SUSE-SU-2011:0832-1 kernel 2011-07-25
SUSE SUSE-SA:2011:031 kernel 2011-07-25
Ubuntu USN-1170-1 linux 2011-07-15
Ubuntu USN-1167-1 linux 2011-07-13
Ubuntu USN-1159-1 linux-mvl-dove 2011-07-13
Ubuntu USN-1162-1 linux-mvl-dove 2011-06-29
Ubuntu USN-1160-1 kernel 2011-06-28
Debian DSA-2264-1 linux-2.6 2011-06-18
CentOS CESA-2011:0833 kernel 2011-05-31
Ubuntu USN-1141-1 linux, linux-ec2 2011-05-31
Red Hat RHSA-2011:0833-01 kernel 2011-05-31
Debian DSA-2240-1 linux-2.6 2011-05-24
Red Hat RHSA-2011:0500-01 kernel-rt 2011-05-10
Red Hat RHSA-2011:0498-01 kernel 2011-05-10

Comments (none posted)

otrs2: cross-site scripting

Package(s):otrs2 CVE #(s):CVE-2011-1518
Created:May 9, 2011 Updated:May 17, 2011
Description: From the Debian advisory:

Multiple cross-site scripting vulnerabilities were discovered in Open Ticket Request System (OTRS), a trouble-ticket system.

Debian DSA-2231-1 otrs2 2011-06-06
SUSE SUSE-SR:2011:009 mailman, openssl, tgt, rsync, vsftpd, libzip1/libzip-devel, otrs, libtiff, kdelibs4, libwebkit, libpython2_6-1_0, perl, pure-ftpd, collectd, vino, aaa_base, exim 2011-05-17
openSUSE openSUSE-SU-2011:0464-1 otrs 2011-05-10

Comments (1 posted)

postfix: code execution

Package(s):postfix CVE #(s):CVE-2011-1720
Created:May 11, 2011 Updated:June 21, 2011

From the Debian advisory:

A heap-based read-only buffer overflow allows malicious clients to crash the smtpd server process using a crafted SASL authentication request.

Gentoo 201206-33 postfix 2012-06-25
Pardus 2011-84 postfix 2011-06-21
CentOS CESA-2011:0843 postfix 2011-06-01
CentOS CESA-2011:0843 postfix 2011-05-31
Red Hat RHSA-2011:0843-01 postfix 2011-05-31
SUSE SUSE-SR:2011:010 postfix, libthunarx-2-0, rdesktop, python, viewvc, kvm, exim, logrotate, dovecot12/dovecot20, pure-ftpd, kdelibs4 2011-05-31
Fedora FEDORA-2011-6771 postfix 2011-05-09
Fedora FEDORA-2011-6777 postfix 2011-05-09
SUSE SUSE-SA:2011:023 postfix 2011-05-11
Debian DSA-2233-1 postfix 2011-05-10
openSUSE openSUSE-SU-2011:0476-1 postfix 2011-05-11
Mandriva MDVSA-2011:090 postfix 2011-05-17
Ubuntu USN-1131-1 postfix 2011-05-11

Comments (none posted)

python: information disclosure

Package(s):python CVE #(s):CVE-2011-1015
Created:May 6, 2011 Updated:October 18, 2012
Description: From the Red Hat advisory:

An information disclosure flaw was found in the way the Python CGIHTTPServer module processed certain HTTP GET requests. A remote attacker could use a specially-crafted request to obtain the CGI script's source code.

Gentoo 201401-04 python 2014-01-07
Ubuntu USN-1613-1 python2.5 2012-10-17
Ubuntu USN-1613-2 python2.4 2012-10-17
Ubuntu USN-1596-1 python2.6 2012-10-04
CentOS CESA-2011:0492 python 2011-05-05
CentOS CESA-2011:0491 python 2011-05-05
Red Hat RHSA-2011:0491-01 python 2011-05-05
Red Hat RHSA-2011:0492-01 python 2011-05-05
Red Hat RHSA-2011:0554-01 python 2011-05-19
Mandriva MDVSA-2011:096 python 2011-05-22

Comments (none posted)

sssd: access restriction bypass

Package(s):sssd CVE #(s):CVE-2011-1758
Created:May 5, 2011 Updated:May 11, 2011

From the Red Hat Bugzilla entry:

A flaw was introduced in SSSD 1.5.0 that, under certain conditions, would have sssd overwrite a cached password with the filename of the kerberos credential store (defined by krb5_ccache_template in sssd.conf). This could allow an attacker to gain access to an account without knowing the password if they knew the cached-credential string.

Fedora FEDORA-2011-5815 sssd 2011-04-22

Comments (none posted)

widelands: arbitrary file overwrite

Package(s):widelands CVE #(s):
Created:May 5, 2011 Updated:May 11, 2011

From the Red Hat Bugzilla entry:

A Debian bug report noted that a security fix was committed to widelands. The commit log is quite vague, but it looks as though it might be an arbitrary file overwrite vulnerability, judging by the code changes.

Fedora FEDORA-2011-6124 widelands 2011-04-28
Fedora FEDORA-2011-6110 widelands 2011-04-28

Comments (none posted)

wordpress: privilege escalation

Package(s):wordpress CVE #(s):
Created:May 11, 2011 Updated:May 11, 2011

From the WordPress update announcement:

This release addresses a vulnerability that allowed Contributor-level users to improperly publish posts.

Fedora FEDORA-2011-6380 wordpress 2011-05-02
Fedora FEDORA-2011-6363 wordpress 2011-05-02

Comments (none posted)

xen: arbitrary code execution

Package(s):xen CVE #(s):CVE-2011-1583
Created:May 9, 2011 Updated:November 7, 2011
Description: From the Red Hat advisory:

It was found that the xc_try_bzip2_decode() and xc_try_lzma_decode() decode routines did not correctly check for a possible buffer size overflow in the decoding loop. As well, several integer overflow flaws and missing error/range checking were found that could lead to an infinite loop. A privileged guest user could use these flaws to crash the guest or, possibly, execute arbitrary code in the privileged management domain (Dom0).

Debian DSA-2337-1 xen 2011-11-06
openSUSE openSUSE-SU-2011:0578-1 xen 2011-06-01
openSUSE openSUSE-SU-2011:0580-1 xen 2011-06-01
Fedora FEDORA-2011-6914 xen 2011-05-13
Red Hat RHSA-2011:0496-01 xen 2011-05-09
CentOS CESA-2011:0496 xen 2011-05-11

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.39-rc7, released on May 9. Says Linus: "So things have been pretty quiet, and unless something major comes up I believe that this will be the last -rc." Full details can be found in the long-form changelog.

Stable updates: several stable kernel updates were released on May 9; each contains a long list of important fixes.

Comments (none posted)

Quotes of the week

Rebuilding the kernel enables end users to make modifications to their devices that are normally not intended by the device manufacturer, such as theming the device by changing system icons and removing/modifying system components. Please note that Sony Ericsson is not recommending this.
-- But they do tell you how

I can easily handle such people, being a bit bigger than that, and lots of experience being a bouncer at a punk-rock bar for a number of years.
-- The source of Greg Kroah-Hartman's kernel skills

For the life of me I can't understand why you distro guys need to keep patching the kernel when you could just add a line to your initscripts. I'm suspecting that lameness is involved.
-- Andrew Morton

Comments (2 posted)

AMD and Coreboot

Coreboot (formerly LinuxBIOS) is a free BIOS implementation; it offers escape from a long list of woes stemming from poorly written BIOSes, but it has always suffered from limited hardware support. AMD has now announced support for Coreboot on a new set of processors, and more going forward: "Finally, AMD is now committed to support coreboot for all future products on the roadmap starting next with support for the upcoming 'Llano' APU. AMD has come to realize that coreboot is useful in a myriad of applications and markets, even beyond what was originally considered. Consequently, AMD plans to continue building its support of coreboot in both features and roadmap for the foreseeable future."

Comments (29 posted)

Kernel development news

Ftrace, perf, and the tracing ABI

By Jonathan Corbet
May 11, 2011
Arjan van de Ven recently reported that a 2.6.39 change in how tracepoint data is reported by the kernel broke powertop; he requested that the change be partially reverted. The resulting discussion covered the familiar problem of how tracepoints mix with the kernel ABI. But it also revealed some serious disagreements on how tracing data should be provided by the kernel and, perhaps, the direction that this interface will take in the future.

Each tracepoint defined in the kernel includes a number of fields containing values relevant to the specific event being documented. For example, the sched_switch tracepoint, which fires when the scheduler is switching between processes, includes the IDs of both processes, their priorities, and so on. Every tracepoint also has a few "common" fields, including the process ID, its flags, and the value of the preempt_count variable; if trace data is read in binary form, those values will appear at the beginning of the structure read from the kernel.

Prior to the 2.6.32 development cycle, those common fields also included the thread group ID; that value was removed in September 2009. A look at the powertop source shows that the program expects that field to still be there (though it does not use it); its internally-defined structure for trace data includes a tgid field. So this change should have broken powertop, and it would have, except for one other change: on the very same day, Steve Rostedt added the lock_depth common field to report whether the current process held the big kernel lock (BKL). The addition of this field was never meant to be permanent: its whole purpose, after all, was to help with the removal of the BKL from the kernel entirely.
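The coincidence is easy to see by laying the layouts side by side. This sketch uses Python's struct module with simplified field types (an assumption for illustration; the kernel's actual layout is generated by the tracing core): swapping one four-byte field for another of the same size leaves every offset untouched, while deleting the field shifts everything that follows.

```python
import struct

# Simplified little-endian layouts of the common fields (illustrative
# types, not the kernel's authoritative definitions):
#   H = common_type, B = common_flags, B = common_preempt_count,
#   i = common_pid, trailing i = the fourth 4-byte common field
with_tgid       = '<HBBii'   # pre-2.6.32: ended with tgid
with_lock_depth = '<HBBii'   # 2.6.32-2.6.38: tgid replaced by lock_depth
without_either  = '<HBBi'    # 2.6.39: lock_depth removed outright

# Replacing tgid with lock_depth keeps the record the same size, so a
# tool with a hard-coded layout keeps "working" by accident:
print(struct.calcsize(with_tgid), struct.calcsize(with_lock_depth))  # 12 12

# Removing the field shrinks the common area, shifting every
# event-specific field that follows by four bytes:
print(struct.calcsize(without_either))  # 8
```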

For 2.6.39, the lock_depth common field was removed, and powertop broke. Arjan subsequently complained; he also supplied a patch which put a zero-filled padding field where lock_depth used to be. Steve opposed the patch, on the grounds that, had powertop used the tracing ABI properly, it would not have broken. The kernel exports information about each tracepoint; for the above-mentioned sched_switch, that information can be examined from the command line:

    # cat /sys/kernel/debug/tracing/events/sched/sched_switch/format
    name: sched_switch
    ID: 51
    format:
	field:unsigned short common_type; offset:0; size:2; signed:0;
	field:unsigned char common_flags; offset:2; size:1; signed:0;
	field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
	field:int common_pid; offset:4; size:4; signed:1;

	field:char prev_comm[16]; offset:8; size:16; signed:1;
	field:pid_t prev_pid; offset:24; size:4; signed:1;
	field:int prev_prio; offset:28; size:4; signed:1;
	field:long prev_state; offset:32; size:8; signed:1;
	field:char next_comm[16]; offset:40; size:16; signed:1;
	field:pid_t next_pid; offset:56; size:4; signed:1;
	field:int next_prio; offset:60; size:4; signed:1;

A properly-written program, Steve says, should read this file and use the offset values found there to obtain the data it is interested in. Linus seemed to agree that it would have been nice if things worked out that way, but that's not what happened. Instead, at least one program became dependent on the binary format of the trace data exported from the kernel. That is enough to make that format part of the kernel ABI; breaking that program counts as a regression. So Arjan's patch was merged.
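The approach Steve describes can be sketched in a few lines of Python. This is an illustrative parser, not powertop's actual code; the sample text is a shortened copy of the sched_switch format shown above, and a real tool would read the format file from debugfs at run time rather than embed it.

```python
import re
import struct

# A shortened copy of the sched_switch format file; a real tool would
# read /sys/kernel/debug/tracing/events/.../format at run time.
FORMAT_TEXT = """\
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;
field:char prev_comm[16]; offset:8; size:16; signed:1;
field:pid_t prev_pid; offset:24; size:4; signed:1;
"""

FIELD_RE = re.compile(
    r'field:(?P<decl>[^;]+);\s*offset:(?P<offset>\d+);'
    r'\s*size:(?P<size>\d+);\s*signed:(?P<signed>\d+);')

def parse_format(text):
    """Map each field name to its (offset, size, signed) triple."""
    fields = {}
    for m in FIELD_RE.finditer(text):
        # The name is the last token of the declaration, minus any [N].
        name = m.group('decl').split()[-1].split('[')[0]
        fields[name] = (int(m.group('offset')),
                        int(m.group('size')),
                        int(m.group('signed')))
    return fields

def read_field(record, fields, name):
    """Pull one integer field out of a raw binary trace record."""
    offset, size, signed = fields[name]
    return int.from_bytes(record[offset:offset + size],
                          'little', signed=bool(signed))

fields = parse_format(FORMAT_TEXT)
# Build a fake 28-byte record and extract fields from it by offset,
# rather than by a hard-coded structure definition.
record = struct.pack('<HBBi16si', 51, 0, 0, 1234, b'bash', 42)
print(read_field(record, fields, 'common_pid'))  # 1234
print(read_field(record, fields, 'prev_pid'))    # 42
```

A program built this way would have sailed through both the tgid removal and the lock_depth removal, since the offsets it uses always come from the running kernel.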

Steve did not like this outcome; it went against all the effort which had gone into creating a means by which tracepoints could change without breaking applications. The alternative, he said, was to bury the kernel in compatibility cruft:

The reason tracepoints have currently been stable is that kernel design changes do not happen often. But they do happen, and I foresee that in the future, the kernel will have a large number of "legacy tracepoints", and we will be stuck maintaining them forever.

What happens if someone designs a tool that analyzes the XFS filesystem's 200+ tracepoints? Will all those tracepoints now become ABI?

The notion that XFS tracepoints could become part of the ABI was dismissed as "crazy talk" by Dave Chinner, but there is nothing inherently different about those tracepoints. They could, indeed, end up as part of the kernel ABI.

Steve was also concerned about the size of events; removal of lock_depth, beyond eliminating a (now) meaningless bit of data, also served to make each event four bytes smaller. There is always pressure to reduce the overhead of tracing, and reducing the bandwidth of the data copied to user space is part of that; adding the pad field goes against that goal. David Sharp (of Google) chimed in to note that data size matters a lot to them:

The size of events is a *huge* issue for us. Please look at the patches we have been sending out for tracing: A lot of them are about reducing the size of events. Most of the patches we carry internally are about reducing the size of events. Memory is the most scarce resource on our systems, so we *cannot* afford to use large trace buffers.

Steve had hoped to remove some of the other common fields as well (a change that Google has already made internally); that idea has gone by the wayside for now. Tracepoints, it seems, are ABI, even when the information they report no longer makes sense in the kernel.

The remainder of this discussion became a sort of bunfight between Steve and Ingo Molnar as they sought to place the blame for this problem and to determine how things will go in the future. Ingo attacked Steve for resisting the idea of unchanging tracepoints, accused him of maintaining ftrace as a fork of perf in the kernel (despite the fact that ftrace was there first), and said that perf needed to take over:

perf is basically the ftrace UI and APIs done better, cleaner and more robustly. Look at all the tooling that sprang up around that ABI, almost overnight. ftrace evolved through many iterations in the past and perf was simply the next logical step.

He also threatened to stop pulling tracing changes from Steve.

Steve, in return, blamed perf for bolting itself onto the ftrace infrastructure, then exporting ftrace's binary structures directly to user space. He blamed Ingo for blocking changes intended to improve the situation (for example, the creation of a separate directory for stable tracepoints agreed to at the 2010 Kernel Summit) and complained that Ingo was ignoring his attempts to create tracing infrastructure which works for everybody. He also worried, again, that set-in-stone tracepoint formats would impede progress in the kernel.

Despite all of this, Steve is willing to work toward the unification of ftrace and perf, as long as it doesn't mean leaving ftrace behind:

Now that perf has entered the tracing field, I would be happy to bring the two together. But we disagree on how to do that. I will not drop ftrace totally just to work on perf. There's too many users of ftrace that want enhancements, and I will still support that. The reason being is that I honestly do not believe that perf can do what these users want anytime in the near future (if at all). I will not abandon a successful project just because you feel that it is a fork.

So it seems that, while there are clearly disagreements and tension between the developers in this area, there should also be room for a solution that works for everybody. Development emphasis will clearly continue to move toward perf, but, despite Ingo's desire to the contrary, ftrace will likely continue to be improved. We may see efforts to push applications toward libraries that can shield them from tracepoint changes, but, for now, every tracepoint added to the kernel will probably have to be considered to be part of its ABI; given that, developers should probably be reviewing new tracepoints more closely than they have been. And, with luck, instrumentation in Linux - which has improved considerably in the last few years - will continue to get better.

Comments (9 posted)

2.6.39 development statistics

By Jonathan Corbet
May 10, 2011
As of this writing, the 2.6.39-rc7 prepatch has just been released and Linus has announced that it may be the last one before the final release. Being a traditional sort of operation, LWN would not let that release go by without looking at the statistics for this development cycle. It has been a busy cycle, and one with some interesting changes.

There have been just over 10,000 non-merge changesets merged for 2.6.39; with the sole exception of 2.6.37 (11,446 changesets), that's the highest since 2.6.33. Those changes came from 1,236 developers; only 2.6.37 (with 1,276 developers) has ever exceeded that number. Those developers added 670,000 lines of code while deleting 346,000 lines, for a net growth of 324,000 lines. The most active contributors this time around were:

Most active 2.6.39 developers

By changesets:
    Thomas Gleixner              442    4.4%
    David S. Miller              201    2.0%
    Mike McCormack               138    1.4%
    Mark Brown                   127    1.3%
    Tejun Heo                    119    1.2%
    Russell King                  89    0.9%
    Arnaldo Carvalho de Melo      86    0.9%
    Arend van Spriel              77    0.8%
    Al Viro                       73    0.7%
    Aaro Koskinen                 72    0.7%
    Tomas Winkler                 70    0.7%
    Greg Kroah-Hartman            69    0.7%
    Chris Wilson                  65    0.6%
    Joe Perches                   60    0.6%
    Mauro Carvalho Chehab         60    0.6%
    Borislav Petkov               60    0.6%
    Eric Dumazet                  59    0.6%
    Uwe Kleine-König              59    0.6%
    Dan Carpenter                 59    0.6%
    Artem Bityutskiy              58    0.6%

By changed lines:
    Wey-Yi Guy                 45680    5.6%
    Wei Wang                   25224    3.1%
    Alan Cox                   20880    2.6%
    Laurent Pinchart           20459    2.5%
    Guan Xuetao                20167    2.5%
    Larry Finger               14763    1.8%
    Tomas Winkler              14095    1.7%
    Arnd Bergmann              13748    1.7%
    Igor M. Liplianin          13491    1.7%
    Aaro Koskinen              13274    1.6%
    Russell King               12862    1.6%
    Mike McCormack             11582    1.4%
    Jozsef Kadlecsik           10374    1.3%
    Bhanu Gollapudi             9925    1.2%
    Thomas Gleixner             8869    1.1%
    Olivier Grenie              8167    1.0%
    Greg Ungerer                8105    1.0%
    Sakari Ailus                7513    0.9%
    Joe Perches                 7048    0.9%

Thomas Gleixner got to the top of the per-changesets list with a massive reworking of how interrupts are managed in the kernel - a job which required significant changes in almost every architecture. David Miller did a great deal of work cleaning up, reworking, and optimizing the networking stack. Mike McCormack did a lot of cleanup work on the rtl8192e driver in the staging tree, Mark Brown contributed the usual large pile of changes concentrated in the sound driver subsystem, and Tejun Heo improved things all over the tree, primarily in the x86 architecture code.

On the lines-changed side, Wey-Yi Guy reworked some Intel network drivers, Wei Wang worked on the Realtek card reader driver in the staging tree, Alan Cox added the GMA500 driver to staging, Laurent Pinchart did a bunch of Video4Linux work including the addition of the media controller subsystem, and Guan Xuetao added the unicore32 architecture.

There were just over 200 known employers supporting work on 2.6.39, the most active of which were:

Most active 2.6.39 employers

By changesets:
    Red Hat                     1260   12.6%
    Texas Instruments            372    3.7%
    Wolfson Micro                146    1.5%
    ST Ericsson                  116    1.2%

By lines changed:
    Red Hat                    52140    6.4%
    Texas Instruments          39536    4.9%
    Realsil Micro              25370    3.1%
    Peking University          20487    2.5%
    KFKI Research Inst         10430    1.3%
    ST Ericsson                 8611    1.1%

The percentage of changes coming from developers known to be working on their own time is at the lowest level seen since we started generating these statistics. Whether that means that volunteers are slowly losing interest in working with the kernel or that everybody who can do kernel work has been hired is hard to say.

Red Hat, as always, generates large numbers of patches; Texas Instruments continues the steady increase we have seen over the last few years, while Oracle continues to decline. New entries this time around include Realsil (the Realtek card reader work), the Peking University Microprocessor R&D Laboratory (the unicore32 architecture), NetUP (various drivers), and the KFKI Research Institute (ipset).

Occasionally it is interesting to look at the list of non-author signoffs - Signed-off-by tags added by developers who are not the authors of the patches involved. For 2.6.39, that list looks like this:

Developers with the most signoffs (total 8766)
    Greg Kroah-Hartman          1162   13.3%
    David S. Miller              546    6.2%
    John W. Linville             437    5.0%
    Mauro Carvalho Chehab        434    5.0%
    Andrew Morton                317    3.6%
    James Bottomley              220    2.5%
    Ingo Molnar                  186    2.1%
    Mark Brown                   158    1.8%
    Sascha Hauer                 135    1.5%
    Tony Lindgren                129    1.5%
    Takashi Iwai                 124    1.4%
    Samuel Ortiz                 106    1.2%
    Paul Mundt                   100    1.1%
    Matthew Garrett               99    1.1%
    Russell King                  98    1.1%
    Jeff Kirsher                  97    1.1%
    Jiri Kosina                   95    1.1%
    Linus Torvalds                94    1.1%
    Patrick McHardy               90    1.0%
    Konrad Rzeszutek Wilk         89    1.0%

Greg Kroah-Hartman contributed "only" 69 patches to 2.6.39, but another 1,162 - over 13% of the total - passed through his hands on their way into the kernel. The bulk of those changes applied to the staging tree, but they were certainly not limited to staging. Linus Torvalds directly merged only 94 changes from others; everything else came in by way of a subsystem maintainer's tree.
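Numbers like these can be approximated directly from git history by comparing each commit's author with its Signed-off-by tags. The sketch below does that for a small embedded sample of "git log" output; gitdm's real accounting is considerably more elaborate (it maps identities and employers), so treat this as a toy version with invented sample data.

```python
import re

# A tiny embedded sample of "git log" output; a real run would pipe in
# the history of an actual kernel tree.
SAMPLE_LOG = """\
commit 1111111
Author: Alice Developer <alice@example.com>

    fix a thing

    Signed-off-by: Alice Developer <alice@example.com>
    Signed-off-by: Maintainer One <maint@example.com>

commit 2222222
Author: Bob Hacker <bob@example.com>

    another fix

    Signed-off-by: Bob Hacker <bob@example.com>
    Signed-off-by: Maintainer One <maint@example.com>
    Signed-off-by: Maintainer Two <other@example.com>
"""

def non_author_signoffs(log_text):
    """Count Signed-off-by tags whose address differs from the author's."""
    counts = {}
    for chunk in re.split(r'(?m)^commit ', log_text):
        author = re.search(r'Author: .*<(.*)>', chunk)
        if not author:
            continue
        for name, addr in re.findall(r'Signed-off-by: (.*?) <(.*?)>', chunk):
            if addr != author.group(1):
                counts[name] = counts.get(name, 0) + 1
    return counts

print(non_author_signoffs(SAMPLE_LOG))
# {'Maintainer One': 2, 'Maintainer Two': 1}
```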

Despite being one of the more active development cycles in recent years, 2.6.39 has also been one of the smoothest. The number of difficult regressions has been small, and, if Linus's current plan holds, the cycle could complete in just over 60 days, which would make it the shortest development cycle since the beginning of the git era. Kernel development is not without its glitches, but the process would appear to be working quite smoothly.

(As always, thanks are due to Greg Kroah-Hartman for his help in the creation of these statistics.)

Comments (13 posted)

Stable pages

By Jonathan Corbet
May 11, 2011
When a process writes to a file-backed page in memory (either through a memory mapping or with the write() system call), that page is marked dirty and must eventually be written to its backing store. The writeback code, when it gets around to that page, will mark the page read-only, set the "under writeback" page flag, and queue the I/O operation. The write-protection of the page is not there to prevent changes to the page; its purpose is to detect further writes which would require that another writeback be done. Current kernels will, in most situations, allow a process to modify a page while the writeback operation is in progress.

Most of the time, that works just fine. In the worst case, the second write to the page will happen before the first writeback I/O operation begins; in that case, the more recently written data will also be written to disk in the first I/O operation and a second, redundant disk write will be queued later. Either way, the data gets to its backing store, which is the real intent.

There are cases where modifying a page that is under writeback is a bad idea, though. Some devices can perform integrity checking, meaning that the data written to disk is checksummed by the hardware and compared against a pre-write checksum provided by the kernel. If the data changes after the kernel calculates its checksum, that check will fail, causing a spurious write error. Software RAID implementations can be tripped up by changing data as well. As a result of problems like this, developers working in the filesystem area have been convinced for a while that the kernel needs to support "stable pages" which are guaranteed not to change while they are under writeback.

When LWN looked at stable pages in February, Darrick Wong had just posted a patch aimed at solving this problem. In situations where integrity checking was in use, the kernel would make a copy of each page before beginning a writeback operation. Since nobody in user space knew about the copy, it was guaranteed to remain unmolested for the duration of the write operation. This patch solved the problem for the integrity checking case, but all of those copy operations were expensive. Given that providing stable pages in all situations was seen as desirable, that cost was considered to be too high.

So Darrick has come back with a new patch set which takes a different - and simpler - approach. In short, with this patch, any attempt to write to a page which is under writeback will simply wait until the writeback completes. There is no need to copy pages or engage in other tricks, but there may be a cost to this approach as well.

As noted above, a page will be marked read-only when it is written back; there is also a page flag which indicates that writeback is in progress. So all of the pieces are there to trap writes to pages under writeback. To make it even easier, the VFS layer already has a callback (page_mkwrite()) to notify filesystems that a read-only page is being made writable; all Darrick really needed to do was to change how those page_mkwrite() callbacks operate in the presence of writeback.

Some filesystems do not provide page_mkwrite() at all; for those, Darrick created a generic empty_page_mkwrite() function which locks the page, waits for any writeback to complete, then returns the locked page. More complicated filesystems do have page_mkwrite() handlers, though, so Darrick had to add similar functionality for ext2, ext4, and FAT. Btrfs has implemented stable pages internally for some time, so no changes were required there. Ext3 turns out to have some complicated interactions with the journal layer which make a stable page implementation hard; since invasive changes to ext3 are not welcomed at this point, that filesystem may never get stable page support.
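The wait-for-writeback approach can be sketched in the same toy style. In the real patch, empty_page_mkwrite() locks the page and waits on the writeback flag (the kernel primitive for that is wait_on_page_writeback()); this single-threaded stand-in, with the invented name toy_page_mkwrite(), simply drives the I/O to completion instead of blocking.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the stable-pages approach.  All names are illustrative;
 * the real handler locks struct page and calls wait_on_page_writeback()
 * from the filesystem's page_mkwrite() callback. */
struct toy_page {
    bool writeback; /* I/O to backing store in flight */
    bool writable;  /* write permission in the page tables */
    int  waits;     /* how often a writer had to stall */
};

/* Stand-in for the I/O completion that clears the writeback flag. */
static void toy_io_complete(struct toy_page *p)
{
    p->writeback = false;
}

/* Sketch of an empty_page_mkwrite()-style handler: if writeback is in
 * progress, the writing process stalls until it completes; only then
 * is the page made writable again. */
static void toy_page_mkwrite(struct toy_page *p)
{
    if (p->writeback) {
        p->waits++;
        toy_io_complete(p); /* kernel: wait_on_page_writeback(page) */
    }
    p->writable = true;
}
```

The stall counted in waits is the cost discussed below: a process rewriting a page that happens to be under writeback now blocks where it previously would have proceeded immediately.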

There have been concerns expressed that this approach could slow down applications which repeatedly write to the same part of a file. Before this change, writeback would not slow down subsequent writes; afterward, those writes will wait for writeback to complete. Darrick ran some benchmarks to test this case and found a performance degradation of up to 12%. This slowdown is unwelcome, but there also seems to be a consensus that there are very few applications which would actually run into this problem. Repetitively rewriting data is a relatively rare pattern; indeed, the developers involved are saying that they don't even know of a real-world case they can test.

Lack of awareness of applications which would be adversely affected by this change does not mean that they don't exist, of course. This is the kind of change which can create real problems a few years down the line when the code is finally shipped by distributors and deployed by users; by then, it's far too late to go back. If there are applications which would react poorly to this change, it would be good to get the word out now. Otherwise the benefits of stable pages are likely to cause them to be adopted in most settings.

Comments (22 posted)

Page editor: Jonathan Corbet


Ubuntu developer summit

By Jake Edge
May 11, 2011

Ubuntu community manager Jono Bacon opened the "most important event of the Ubuntu cycle", Ubuntu Developer Summit (UDS), which was held May 9-13 in Budapest, Hungary. In addition to Ubuntu, though, there was a large Linaro presence because the Linaro Development Summit (LDS) was going on at the same time. The close relationship between the distribution and the consortium of ARM companies was clearly in evidence. Both summits not only shared conference facilities, but were also closely aligned in terms of how their sessions were run and recorded. Along the way in the first morning's talks, attendees also learned the proper pronunciation for "oneiric".

Ubuntu and Linaro developers were there to plan out their respective development cycles; Ubuntu for 11.10 (aka Oneiric Ocelot) and Linaro more generally for the next six months to a year. Up until now, Linaro has been doing releases in six-month cycles, each just a month after the Ubuntu release that was being tracked. But, as Linaro CEO George Grey announced later in the morning, there would be no Linaro 11.11 release as the organization was moving to a monthly release cycle.

Bacon on the UDS format

[Jono Bacon]

Bacon noted that 11.04 ("Natty Narwhal") was a "tremendously adventurous cycle" that took Ubuntu "to the next level". But UDS is the time to look ahead to the next release and it is a "critical event" for the distribution. It is, he said, not a conference, but rather an interactive event where developers and other members of the community come together to "design, debate, and discuss" the shape of the next release.

Each session at UDS is an hour-long focused discussion, which is based on a blueprint that is in Launchpad. It is an "incredibly dynamic schedule" that is updated with changes to session times and rooms, as well as having new sessions added based on the outcomes of the meetings or additional blueprints being added. There are often fifteen simultaneous meetings taking place, with roughly two-thirds of those being UDS, and the remaining meetings being for Linaro.

In addition, the meetings are well set up for external participation as there is audio streamed from each room, as well as an IRC channel established and displayed on a screen so that those not present can participate in the discussion. Notes are taken in Etherpad for each meeting so that anyone can follow along or review them later.

There is an established structure for the meetings as well, which starts with a goal for the meeting, Bacon said. That goal is discussed, conclusions are drawn, and the outcome and action items are recorded. Each meeting has a leader who is tasked with setting the goal, moderating the discussion, and ensuring that all participants, even those who tend to not say much, get a chance to talk, he said.

But the end result of the meeting is action items. People are "here to do real work", he said, and part of that is identifying the actions that need to be taken in the next six (five really) months to achieve the goal. In addition to action items, though, there need to be people assigned to accomplish them. If people are reluctant to sign up for those action items, "start nominating people", as that works well to flesh out who should be doing what, he said.

The UDS meetings serve as a "valuable piece of face time" that should be used to satisfy the overarching goal, which is to "deliver the most incredible Ubuntu experience we can", he said. Bacon then turned the stage over to Ubuntu founder Mark Shuttleworth.

Shuttleworth on Natty and Ubuntu values

[Mark Shuttleworth]

Shuttleworth congratulated the assembled Ubuntu community on its work on Natty Narwhal, which was a "profoundly challenging" cycle for many reasons, he said. Ubuntu is in the middle of a transition, which makes it normal for there to be questioning and challenging debate around that transition. But the organization achieved "many of the things we set out to do", he said.

Several specific accomplishments from Natty were called out, including work by the documentation team that made major contributions to both GNOME and Unity documentation during the cycle. That team was successful in "spanning that divide [between GNOME and Unity] with grace and eloquence", he said. There were also major strides made on accessibility, which is one of the core values of the Ubuntu community. There is more accessibility work to do, he said, but it will get finished during the Oneiric Ocelot cycle.

With Unity, "we've set a new bar for disciplined design in free software", Shuttleworth said, by testing the assumptions of the design team with real user testing. He noted that the "mission" for the distribution is to have 200 million Ubuntu users within four years. Ubuntu is not targeting "developers' hearts and minds", but rather the "world's hearts and minds". But that shouldn't leave developers behind because they "need all the things that consumers do, and more", he said.

Shuttleworth also spent some time to "restate and reaffirm our values". People start using something new, like Ubuntu, because of the buzz around it, but at some point they may reevaluate that decision, asking themselves "why am I here?". It makes sense for people to participate or to continue to participate in a project like Ubuntu if they share the mission and values of the group.

The governance of Ubuntu is a meritocracy, he said, and not a democracy. Where hard decisions need to be made, he wants to have the best person making them, whether that person is a Canonical employee or not. But, once a person has been given that responsibility, it doesn't make sense to continually second-guess them, he said: "that is how we will be both free software and incredibly effective".

There needs to be accountability to members, contributors, and users, as well. Shuttleworth said that he and other decision-makers should have no problem being questioned or challenged about decisions they have made, "but that can't get in the way" of progress. When you get "stressed" about a particular decision, he said, ask yourself whether the right people are making that decision.

Transparency is also important. There has been a sense of a lack of transparency in some decisions made in the last few years, where those decisions were presented as having already been made. The community can "expect and reasonably demand" discussion of those decisions, he said. But Ubuntu brings together the community and multiple companies to make a single platform, and many of the different groups that come together in Ubuntu have different ideas of what (and how) things should be done. Transparency is a "value that we hold", he said, but it requires respect on all sides.

Contributor agreements

Making a case for the Canonical contributor agreement is an area where Shuttleworth has "failed as a leader", he said. He has "strong views" on what it will take to build a collaboration between the community and various companies, and contributor agreements will play a role. Each side has different goals and different constraints. Those need to be respected by all participants so that they can work together.

When all sides are closely aligned in their goals and constraints, they can work together fairly easily, but that isn't really collaboration so much as it is teamwork, he said. Ideological constraints put up barriers, and "free" is not the only way that companies will produce software. There are "second-class options in vast tracts of free software", he said, and in order for that to change, working with various companies will be important. He noted that Android and iOS have quickly created large amounts of useful software even in the face of the Microsoft monopoly.

Starting "today", Shuttleworth is going to take on the job of making the case for contributor agreements. It will be difficult to do, but he is up to the challenge because of its importance. He noted that at one point Canonical had done some work on some software that had been created by Novell, which "had done a lot of work that we benefited from", while Canonical had done "some work that we were proud of". He initially refused to sign a contributor agreement with Novell for that code, but then couldn't sleep that night and changed his mind in the morning because he realized that he was not being "generous".

Ownership of a project comes with responsibilities, and contributors should be willing to give up some rights to their code if they aren't taking on those responsibilities, he said. If someone gave you a plant for your garden, but asked you to agree not to sell the house if you accepted it, you likely wouldn't agree to that, he said. "It would not be generous on their part". He recognizes that convincing the community about contributor agreements is an uphill battle, but that the "upside in this case is all on my side" because those agreements are not popular in the community.


After a brief farewell (but not goodbye) message from Ubuntu CTO Matt Zimmerman, Shuttleworth noted that this development cycle started with a challenge: how does one pronounce "oneiric" (which means dreamy or dreamlike)? With the help of some community members with improvisational skills, and a prop named "Rick" (Spencer, director of Ubuntu engineering), several possibilities were demonstrated: "annoy-rick", "one-eye-rick", "on-a-rick", and so on, before Shuttleworth settled on the "winner", which was "o-near-rick", though, of course, several other alternatives are being heard throughout the summit.

There are "hundreds of things being decided" during the week of UDS, Shuttleworth said. Though there won't be any major shifts (a la Unity) for this cycle, there are lots of choices being made. One immediate decision point was whether to use Eucalyptus or OpenStack as the default cloud platform, and that decision needed to be made on the first day, he said. That was decided in favor of OpenStack, though Eucalyptus will still be supported.

There are some other changes that may be afoot, including potentially switching to Thunderbird as the default email client, as well as possibly changing from Firefox to Chromium as the default web browser. Other, less visible changes will be decided upon as well. After the week of UDS, it will be time to "get stuff done" to make those decisions, and all the other plans made, come together for Oneiric Ocelot, he said.

Comments (33 posted)

Brief items

Distribution quotes of the week

Being on the board is not about changing fedora. People running for election in order to change "How Fedora is done" are often up for disappointment. For the most part the Board's job is to listen when people disagree and see if we can get people to start listening and not shout past each other. It doesn't always work but that is how things go.
-- Stephen Smoogen

Which brings me to the Fedora/Linux tie in here. Every few months I see a sad tale of someone who tried the Fedora {mailing lists|forum|irc channel} and had a bad first impression, which leads to a "I am never going to use {Linux|Fedora} again!". Please take a few moments to think logically and not judge an entire Linux distribution or Operating system based on one forum post, email or 5 minutes in an IRC channel. Do some research, work on explaining your problem better or in a different way, try a different support channel, or at the very least note that your impression is based on only one single drive by. It's hard to overcome a bad first impression, but do consider giving more than a single chance.
-- Kevin Fenzi

I was joking with a colleague of mine saying the Slackware desktop could be tossed into the road, run over by a few cars, and the thing would still work!
-- Jack Wallen by way of

Comments (7 posted)

BackTrack 5 released

BackTrack 5, a distribution oriented around security research and penetration testing, has been released. "Based on Ubuntu Lucid LTS. Kernel 2.6.38, patched with all relevant wireless injection patches. Fully open source and GPL compliant. Head down to our downloads page to get your copy now!" (LWN reviewed the previous BackTrack release last year).

Comments (6 posted)

CyanogenMod 7.0.3 released

The CyanogenMod 7.0.3 release is out. "This update contains a bug fix for our update notification system, as well as an important security fix. It is recommended that all users running a version of CyanogenMod prior to 7.0.3 update to this release." They don't seem to be talking about what security problem has actually been fixed at this time.

Comments (6 posted)

Filesystem hierarchy standard 3.0 process begins

The filesystem hierarchy standard (FHS) describes how files and directories should be laid out on a compliant distribution. The FHS has not been updated since 2004, and is starting to look dated in places. A new effort to update the FHS and release a version 3.0 by July has been launched and a call for participation has gone out. There's a new repository and a new mailing list; click below for details on how to participate in this process.

Full Story (comments: 37)

Distribution News

Debian GNU/Linux

DebConf11 - Sponsored registration date has been extended

The May 8 deadline for sponsored registration for DebConf11 in Banja Luka has been extended to May 19. "Thanks to a significant increase in sponsorship from one of our main sponsors, we have been permitted and encouraged to welcome more people to the conference. The extension is a one-off occurrence in response to this new development, and to allow any potential attendees time to access more information before registering."

Full Story (comments: none)


Fedora Board and FESCo elections

Nominations are open until May 15 for the Fedora Board and Fedora Engineering Steering Committee. There are three seats open on the Board and five seats open for FESCo. "Additionally the nomination period also serves as the time for the community to present questions to be posed of candidates."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Clem: Linux Mint 11 - Preview

Linux Mint developer Clement Lefebvre takes a look at the upcoming Linux Mint 11 release. "With the upstream changes in Unity and Gnome 3, some of the big Linux Mint projects (netdiscovery, restoration snapshots) were postponed and more time was given to ensure this release would feature a functional yet traditional desktop. In many ways, Linux Mint 11 feels like a more modern and more polished version of Linux Mint 10. In contrast with the many distributions adopting new interfaces, Linux Mint 11 will feature the best Gnome "2" desktop you've ever got to see." Linux Mint 11 RC has been released.

Comments (none posted)

Ubuntu Makes Lubuntu Official Derivative (Linux Pro Magazine)

Linux Magazine reports that Lubuntu will become an official Ubuntu derivative with the release of Ubuntu 11.10. "Lubuntu is mainly geared toward low-memory computers, such as the Pentium II, which packs a mere 128MB of RAM. The LXDE desktop lacks some features, but it does not require much in terms of resources. Lubuntu uses PCManFM as a file manager, the Chromium browser, and the Sylpheed email client."

Comments (1 posted)

Page editor: Rebecca Sobol


LGM: Two Krita talks

May 11, 2011

This article was contributed by Nathan Willis

One of the most interesting scenes found at combination events that assemble open source developers and software users in a single room is when each camp presents a talk about the same code. Libre Graphics Meeting (LGM) frequently offers up these scenes, including, this year, a two-perspective take on the KDE painting application Krita.

The developers' perspective is by its very nature different from the users', of course: for any growing software project, the project leaders have to care about the architecture of the codebase, the processes of reviewing and merging contributions, extending the functionality in a sensible (and manageable) way, and so forth. Those issues rarely conflict with users' needs, although in some cases there is tension between the project and users when users are clamoring to have a particular feature now, but the "right" way to implement it is costly. See support for 16-bit depth colors in GIMP, for example.

On the other hand, it is common enough for a project's developers to be more motivated about adding new features than about fixing "soft" issues like usability bugs. New features are fun to write; tweaking the interface may not be.

The users' perspective on its own often seems to focus more on the application's workflow than its feature set. Clearly, when there is a missing tool or function, that takes center stage — but the rest of the time, it is other issues that dominate the feedback: the ease of finding particular functions, switching between contexts or tasks, customizing or configuring the interface. This task-oriented point-of-view can become unhelpful, too. Taken to its extreme, any individual's usability feedback can represent only one way to use the application — and each user only uses a subset of the application's functionality (sometimes a very small subset).

Exactly how different the user and developer views are varies greatly depending on the application, of course. A compiler's users are not radically different animals than its developers. For creative software like Krita, however, the two groups' perspectives can be very different.

Krita 2.3


Krita developer Lukáš Tvrdý spoke first, highlighting the process and the changes that led to the project's new release, Krita 2.3 — which has the distinction of being the first version officially blessed as being "end user ready."

Much of what went into declaring Krita 2.3 ready for end user artists is under the hood: speed and stability improvements. The painting engine is six times faster than the previous release, Tvrdý said, and other operations are noticeably faster (including a twelve-fold improvement in the speed of the "smudge" tool).

Krita's main tool metaphor is pluggable brush engines, which can be added separately and tweaked individually, rather than a set of discrete tools (e.g., "paintbrush," "airbrush," and "pencil" as commonly found in raster image editors). The basic brush engines emulate physical tools: different types of paintbrush, chalk, spray, etc. — but more advanced brush engines actually incorporate code that draws dynamically for the user. Krita 2.3 introduces several of these advanced brushes, including a hatching brush that shades areas of the canvas by drawing hatch lines, and the sketch brush that shades and fills areas around each cursor stroke. One could imagine naive versions of either tool (such as creating hatch lines simply by pasting a hatch texture), but what makes Krita's implementations worth mentioning is that they actually work in non-trivial ways to simulate natural media.

Since 2010, the Krita team has worked with interaction designer Peter Sikking to focus its development on painting, drawing, and sketching as needed by real digital artists, and a number of changes to the user interface have been made in the 2.3 release as a result. The layout of the toolbars is different from that of many other raster image editors; they are designed for accessibility with a pen-based tablet. Every brush that the user selects can be modified through a drop-down menu exposed in the top toolbar; there is complete custom control available over how pressure, tilt, and speed (if they are reported by the pen hardware) affect the size, color, opacity, shape, and behavior of the brushes. The relationship between the pen hardware and brush behavior can be controlled with response curves (not just fixed values), so that variations on the "light" end of the pressure spectrum can produce more subtle changes than variations on the "hard" end.

The Krita 2.3 interface also makes the color selector widget a far more prominent piece of screen real estate than it is in GIMP or most other editors. GIMP, for example, maintains a small foreground/background palette; to change either color the user must click on the widget to open a modal dialog box. Krita's color selector is adaptable to a variety of shapes, layouts, and color models (think Red-Green-Blue, Hue-Saturation-Value, Hue-Saturation-Lightness, etc). Changing colors is a one-click operation, which is far better for pen tablet users, and the color selector automatically keeps a history of recent colors visible as selectable swatches down the right-hand side to boot.

Finally, Krita 2.3 sports a few functions requested directly by artists. One is "mirror" mode painting, which renders mirror-image brush strokes on the canvas in addition to the strokes actually drawn. Another is the ability to rotate the canvas itself, on screen, without transforming the underlying image. As Tvrdý explained, this allows the artist to move the canvas like a physical piece of paper to approach it from a better angle — while it is easy to hand-draw a straight horizontal line, he said, most people find it difficult to draw a completely straight diagonal or vertical line with a tablet. The rotation feature makes these actions trivial. Last but not least, Krita's interface is built with dockable widgets that the user can reposition at will. The new release allows users to save sets of docker positions and tool options as reusable "workflows," so each user can maintain several sets of UI tweaks built around particular tasks.

Tvrdý commented briefly on Krita development, noting in particular that the application has found great success working with Google's Summer of Code program. Tvrdý himself was a GSOC participant three years ago, and observed that one of the best features of working with the program is that it allows the mentored student to see his or her finished code incorporated into Krita in only a few months' time. Partly that is due to GSOC's time constraints, but it also depends on the Krita team's ability to make projects workable and to keep the code architecture modular enough that GSOC-sized chunks are feasible.

Pen and ink

[Timothée Giet]

The list of features and UI changes in Krita 2.3 might have been a standard-issue update report had Tvrdý not been followed immediately on stage by an artist who uses Krita every day. Timothée Giet is a painter and comic book artist who produces his Wasted Mutants title in Krita. Giet did a live demonstration of how he creates an individual page in Krita, from blank canvas all the way up to full color, lettered product.

Perhaps the first thing a software developer might notice about Giet's workflow (hypothetically, that is; the Krita team is no stranger to him) is that he makes heavy use of only a small set of the application's features — perhaps two or three of the application's dozen or so brush engines. Which subset he uses is style-dependent. Comic books (even those that are drawn digitally) tend to emulate the look of hand-drawn artwork that includes a separate black ink layer, a separate color layer, separate text layers, and separate grid boxes. In the print world, these steps would historically be performed by separate artists; in the one-man digital comic world, they result in numerous stacked and grouped layers. Whereas a photo manipulation might require only three or four layers, Giet ended up with a dozen or more, all for a single panel.

It was also interesting to watch how Giet built up the layers of the drawing with different tools, starting with the aforementioned sketch tool — which he used with a very compact set of brush constraints, a choice that lent his results a distinctly different look from the one most sketch brush demos showcase. He used mirror mode to quickly sketch out a pair of characters that he subsequently inked in using two different methods.

Giet also chose to undo and redo the same areas again and again, rapidly changing the brush settings and adjusting the colors as he did so, until he found exactly the right look. It is sometimes tempting to think of Undo as a function reserved for when one makes a mistake (as it frequently is in a word processor), but rapid, multi-step undo and redo can also be a core part of the creative process. If it was not as fast as it is in Krita 2.3, or if undos were limited to a small number of steps, this process likely would not have worked.

As Tvrdý implied when discussing the project's feedback mechanism with real-world artists, Giet used a number of pre-sets in the interface and tool settings. In addition, during the audience question-and-answer session, another artist in the crowd asked about the saved pre-sets (specifically, how many were available). As with Undo, it is easy to think of something like saved pre-sets as a "minor" function — after all, a new brush engine takes considerably more work to develop, and extends the core functionality of the application. But saved pre-sets garnered more attention from the users even than the highly-unusual sketch brush.

Another artist in the audience asked Giet how he managed the multiple pages needed to publish a comic book issue; he responded that he currently uses a separate file for every page. The Krita developers, however, said that multi-page documents were on tap for the next version of the application.

Giet is a single artist, so his way of working within Krita may have no bearing on the painter or artist sitting in the next seat — particularly with respect to the tools used. What Tvrdý said about the project's UI changes better fitting real-world artists seemed to hold true, though. Giet changed brush settings and colors every few seconds, and the pen-friendly selection widgets were obviously an order of magnitude faster than an interface based on pop-up dialogs would have been. After the talk, Tvrdý mentioned that he found it painful at times during Giet's demo to see the artist struggle with one bit of the UI or another; that could be one of the most useful bits of feedback of the entire process.

Overall, the development teams that have made a habit of attending LGM over the years seem to have reaped tangible benefits. Unfortunately, graphics applications may be one of the few areas of computing that make live feedback sessions and demos easy to perform — it is hard to imagine the same sort of session for a spreadsheet, IDE, or accounting package, for example. But who knows; perhaps it could be easily done, provided both sides of the equation were interested in taking the stage.

Comments (4 posted)

Brief items

Quotes of the week

I certainly agree that perl6 is at least as much a different language from perl5 as Java is a different language from C. I am appalled at how messed up things have become. Even people who should know better, people whom I explain this all to again and again and again and again, will ever a few weeks' time lapse again into the Successionist Heresy.

They once again start thinking of perl6 succeeding perl5 **NOT** in the way that Java has succeeded C, but rather in the way that Windows 98 succeeded Windows 95 or the Intel 586 processor succeeded the 386. It is intensely aggravating to watch, yet who can blame them? Every technical product they're ever used that comes with an ever-increasing numeric suffix is one that is meant to be "the next" version, one that will soon supplant that old dinosaur.

This is a miserable situation that we're now quagmired in. It is harmful to perl, because it is superlatively misleading.

-- Tom Christiansen

  Sometimes related to the development of frogr, and sometimes not, I'd
  like to thank here to some people who helped me in a way or another:

- * My girlfriend, who proved to have infinite patience all the time
+ * My wife, who proved to have infinite patience all the time

  * My son, who was born right at the same time I started this project,
    so they're some kind of "brothers" or the like.
-- Mario Sanchez Prada - congratulations are due

Open source is different from a community-driven project. We're light on community, but everything we do ends up in an open source repository. We make the code open source when the first device is ready. We're building a platform, we're not building an app. When you're building a platform, you evolve and improve APIs, and sometimes APIs are deprecated.

When you're dealing with new APIs community processes typically don't work - it's really hard to tell when you're done, and it's hard to tell when it's a release and when it's beta. And developers need an expectation that the APIs they're using are done. If someone were to look at an early release, they could start using APIs that aren't ready and their software might not work with devices.

-- Android's Andy Rubin

Comments (4 posted)

Some interesting GNOME Shell extensions

Ron Yorston has posted a set of GNOME Shell extensions "for grumpy old stick-in-the-muds." Through some JavaScript magic, these tweaks add application launchers to the panel, restore static workspaces, and more. People who are unhappy with the GNOME 3 interface changes might want to give these extensions a try.

Comments (32 posted)

LyX 2.0.0 released

Version 2.0.0 of the LyX document processing system is out. "With this release, LyX celebrates 15 years of existence, and we start a new family of 2.x releases to acknowledge LyX's passing from a child to an adult. We hope you will celebrate this anniversary with us." There are many new features in this release; see the "what is new" page and LWN's LyX 2.0 review for more information.

Full Story (comments: 5)

Passlib 1.4 released

Passlib is "a comprehensive password hashing library for python, supporting over 20 different hash schemes and an extensive framework for managing existing hashes." The 1.4 release is out; it adds better LDAP support, PBKDF2 hash support, and more; see the release notes for details.
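Passlib wraps scheme selection and hash formatting behind its own API; as a rough illustration of what the newly added PBKDF2 support computes, here is a minimal sketch using only the Python standard library. The `hash_password` and `verify_password` names are hypothetical helpers for this example, not Passlib's interface:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

A library like Passlib adds value on top of this core: it encodes the salt, iteration count, and digest into a single portable hash string and can manage many such schemes side by side.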

Full Story (comments: none)

A Canadian non-profit organization for PostgreSQL assets

A Canadian non-profit company is being set up to take ownership of the PostgreSQL domain names, keys, trademarks, and such which have, to date, been held by Marc Fournier. "For years, we have had the issue that if anything happened to Marc, getting control of these assets could be difficult and cause us weeks of wasted time, and perhaps even result in being offline for days or weeks. Even to date, we've had issues where problems have happened while Marc was away and been unable to resolve them quickly." The organization will be run by a board consisting of Josh Berkus, Marc Fournier, Dave Page, and Chris Browne.

Full Story (comments: none)

Thoughts about Qt 5 from Nokia

Nokia's Qt Labs weblog is carrying a lengthy posting looking forward to the Qt 5 development cycle. "Another major change with Qt 5 will be in the development model. Qt 4 was mainly developed in-house in Trolltech and Nokia and the results were published to the developer community. Qt 5 we plan to develop in the open, as an open source project from the very start. There will not be any differences between developers working on Qt from inside Nokia or contributors from the outside."

Comments (18 posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Introduction to programming in Erlang (developerWorks)

There is an introduction to the Erlang programming language (the first part in a series) on the developerWorks site. "Erlang provides a number of standard features not found in or difficult to manage in other languages. Much of this functionality exists in Erlang because of its telecom roots. For example, Erlang includes a very simple concurrency model, allowing individual blocks of code to be executed multiple times on the same host with relative ease. In addition to this concurrency, Erlang uses an error model that allows failures within these processes to be identified and handled, even by a new process, which makes building highly fault tolerant applications very easy. Finally, Erlang includes built-in distributed processing, allowing components to be run on one machine while being requested from another."
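The process-and-message model described in the article is native to Erlang; a loose Python approximation, using a thread per "process" and a queue as its mailbox, might look like the sketch below. The `spawn` helper is hypothetical, invented for this illustration:

```python
import queue
import threading

def spawn(handler):
    """Start an 'actor': a thread that serves messages from its own mailbox."""
    inbox = queue.Queue()
    def loop():
        while True:
            msg, reply_to = inbox.get()
            if msg is None:        # a None message asks the actor to exit
                return
            reply_to.put(handler(msg))
    threading.Thread(target=loop, daemon=True).start()
    return inbox

# Ask an "echo" actor to process one message and reply
echo = spawn(lambda msg: ("echo", msg))
reply = queue.Queue()
echo.put(("hello", reply))
print(reply.get())                 # -> ('echo', 'hello')
echo.put((None, None))             # shut the actor down
```

This captures only the message-passing shape; Erlang's processes are far lighter than OS threads, and its supervision trees and fault isolation have no direct equivalent here.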

Comments (7 posted)

Pinta turns 1.0 and brings simple image editing to Linux

This article reviews the Pinta image editor. "What you get with Pinta is indeed a subset of what you will find in GIMP. Drawing tools, just not as many of them. Filters and effects, but a smaller collection. Layers and image adjustments, but not every feature. For a lot of people, of course, that is perfectly fine. The only question is how to determine which group you fit into."

Comments (none posted)

Page editor: Jonathan Corbet


Brief items

Apple not providing LGPL WebKit source code

Harald Welte notes that Apple has not been releasing the source code for the company's latest releases of WebKit, which is LGPL-licensed software. "I think it is time that Apple gets their act together and becomes more straight-forward with LGPL compliance. It is not acceptable to delay the source code release for 8 weeks after shipping a LGPL licensed software. Especially not, if you have already demonstrated in the past that you are well aware of the obligations and have a process and a website to release the corresponding source code under the license conditions." (Thanks to Paul Wise)

Update: this situation would appear to be resolved for now.

Comments (2 posted)

Matt Zimmerman leaving Canonical

Matt Zimmerman, CTO at Canonical, has announced that he will be leaving the company next month. "It has been my privilege to have played a part in creating Ubuntu and Canonical. It has been a pleasure to work with so many talented, dedicated and fun people over the years. I am immensely proud of what we have accomplished together: bringing free software to people, places and organizations which have derived so much benefit from it."

Comments (6 posted)

Ixonos Joins Linux Foundation

The Linux Foundation has announced that Ixonos is its newest member. "Ixonos creates solutions for mobile devices and services for wireless technology suppliers and telecommunications companies, as well as mobile device and consumer electronics manufacturers. It has been actively involved in mobile Linux development efforts since 2006 and joins The Linux Foundation today to maximize its investment in the operating system. The company will collaborate with other leading vendors, users and developers to help advance Linux-based mobile platforms, including Android and MeeGo."

Full Story (comments: none)

Articles of interest

Oracle ordered to reduce claims against Google from 132 to 3 (Groklaw)

Groklaw has an order from the court in Oracle v. Google requiring that most of the claims in the case be dropped forever. "Currently, there are 132 claims from seven patents asserted in this action, and there are hundreds of prior art references in play for invalidity defenses. This is too much. The following schedule will ensure that only a triable number of these items - three claims and eight prior art references - are placed before the jury in October, all others to be forsaken. Oracle will surrender all of its present infringement claims against Google based on the 129 asserted claims that will not be tried. Oracle may not renew those infringement claims in a subsequent action except as to new products."

Comments (19 posted)

Red Hat CEO hates patent trolls, but says sometimes you just have to pay up (Network World)

Network World interviewed Red Hat CEO Jim Whitehurst about patents, lawsuits, and settlements as well as Red Hat's corporate culture and future outlook. "Despite some victories -- including one against that same Acacia last year -- Red Hat has elected to settle with what it deems patent trolls in various cases which it cannot disclose, according to Whitehurst. [...] 'When it's so little money, at some point, bluntly, it's better to settle than fight these things out,' Whitehurst said. [...] Red Hat fights when it believes bigger principles are at stake. Red Hat and Novell jointly won a case against an Acacia subsidiary in East Texas last year when a jury ruled that the companies did not infringe on user interface patents. Red Hat also filed an amicus brief on behalf of rival Microsoft in a patent dispute pending before the Supreme Court."

Comments (11 posted)

GNU Mediagoblin Project launches (NetworkWorld)

Joe "Zonker" Brockmeier covers the launch of GNU Mediagoblin. "So what's GNU Mediagoblin? The project is starting with the goal of creating a federated photo sharing site that could stand alongside popular services like Flickr, DeviantArt, Picasa, and Facebook. Eventually, the project hopes to tackle other types of media, but the first target is photo/artwork sharing. Right now? It's very much a work in progress."

Comments (14 posted)

Ubuntu cloud chief beats CTO to exit door (The Register)

The Register reports that Neil Levine has left Canonical. "Levine was with Canonical for just two years but he oversaw the push to turn Ubuntu into the kind of Linux server platform that's capable of letting you easily deploy apps to the cloud. His division handled cloud and server products. That meant he'd work on the integration of Eucalyptus into the Maverick Meerkat release of Ubuntu and was scheduled to work on improving the ability for cloud frameworks such as Hadoop and Cassandra to interoperate in future versions of Ubuntu server."

Comments (37 posted)

New Books

FSF announces publication of two new books by Richard Stallman

The Free Software Foundation has released the second edition of Richard Stallman's selected essays, "Free Software, Free Society", and his semi-autobiography, "Free as in Freedom: Richard Stallman and the Free Software Revolution".

Full Story (comments: none)

Printed Python 3.2 Tutorial and Language Reference Manual

New printed editions of "An Introduction to Python" and the "Python Language Reference Manual" are now available for Python version 3.2.

Full Story (comments: none)

Creating a Website: The Missing Manual, Third Edition--New from O'Reilly Media

O'Reilly Media has announced the release of "Creating a Website: The Missing Manual, Third Edition", by Matthew MacDonald.

Full Story (comments: none)

Upcoming Events

Upcoming L2Ork tour of Europe

Linux Laptop Orchestra (L2Ork) has announced its maiden tour of Europe. Click below for dates and locations.

Full Story (comments: none)

CHAR(11) replication conference open for bookings

CHAR(11), the conference on PostgreSQL Clustering, High Availability and Replication, will be held in Cambridge, UK on July 11-12, 2011. "The schedule includes talks from major contributors Jan Wieck, Greg Smith, Magnus Hagander, Koichi Suzuki, Dimitri Fontaine and Simon Riggs, talks from Heroku and other experienced users, as well as technical talks from vendors Continuent, MGRID and EMC. Topics include scalability, high availability, transaction processing and data warehousing. Many talks cover the latest research as well as future plans."

Full Story (comments: none)

LinuxCon 2011 Keynotes Announced

The Linux Foundation has announced the confirmed keynotes for this year's LinuxCon, taking place in Vancouver BC from August 17 to 19, 2011.

Full Story (comments: none)

PostgreSQL Conference West: 2011 Announced

PostgreSQL Conference West will take place September 27-30, 2011 in San Jose, California. "The 27th will be a training day with half and full day trainings available for separate registration, and the 28th-30th will be 45-90 minute sessions."

Full Story (comments: none)

Events: May 19, 2011 to July 18, 2011

The following event listing is taken from the Calendar.

May 16 - May 19: PGCon - PostgreSQL Conference for Users and Developers, Ottawa, Canada
May 16 - May 19: RailsConf 2011, Baltimore, MD, USA
May 20 - May 21: Linuxwochen Österreich - Eisenstadt, Eisenstadt, Austria
May 21: UKUUG OpenTech 2011, London, United Kingdom
May 23 - May 25: MeeGo Conference San Francisco 2011, San Francisco, USA
June 1 - June 3: Workshop Python for High Performance and Scientific Computing, Tsukuba, Japan
June 1: Informal meeting at IRILL on weaknesses of scripting languages, Paris, France
June 1 - June 3: LinuxCon Japan 2011, Yokohama, Japan
June 3 - June 5: Open Help Conference, Cincinnati, OH, USA
June 6 - June 10: DjangoCon Europe, Amsterdam, Netherlands
June 10 - June 12: Southeast LinuxFest, Spartanburg, SC, USA
June 13 - June 15: Linux Symposium 2011, Ottawa, Canada
June 15 - June 17: 2011 USENIX Annual Technical Conference, Portland, OR, USA
June 20 - June 26: EuroPython 2011, Florence, Italy
June 21 - June 24: Open Source Bridge, Portland, OR, USA
June 27 - June 29: YAPC::NA, Asheville, NC, USA
June 29 - July 2: 12º Fórum Internacional Software Livre, Porto Alegre, Brazil
June 29: Scilab conference 2011, Palaiseau, France
July 9 - July 14: Libre Software Meeting / Rencontres mondiales du logiciel libre, Strasbourg, France
July 11 - July 16: SciPy 2011, Austin, TX, USA
July 11 - July 12: PostgreSQL Clustering, High Availability and Replication, Cambridge, UK
July 11 - July 15: Ubuntu Developer Week, online event
July 15 - July 17: State of the Map Europe 2011, Wien, Austria
July 17 - July 23: DebCamp, Banja Luka, Bosnia

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds