
LWN.net Weekly Edition for July 8, 2011

Companies and contributions

By Jonathan Corbet
July 7, 2011
Recently we have seen a small series of articles (example) in the press suggesting that contributions to open source projects are in decline. If that were truly the case, it would certainly be a cause for concern; our community lives or dies based on contributions of code, artwork, documentation, and more. If the stream of contributions is falling off, our future is in doubt. The good news is that the rumors of the death of our community are somewhat exaggerated.

The source for much of this speculation would appear to be the results of a survey of the Eclipse community published in June. This survey led to a number of interesting conclusions; for the curious, the results are available as a report [PDF] or in an ODS spreadsheet. One can learn a lot about the Eclipse user community there; some of its members, evidently, still develop in RPG. Most of them, though, have moved on to more current technology.

Among the conclusions drawn from this survey were that developers are increasingly using Linux as their desktop system, OpenJDK adoption is on the increase, Git usage is on the increase (to a massive 7%; 58% of respondents still use Subversion), and that open source participation "seems to be stalled." The reasoning behind that last conclusion was:

In the survey, we asked a question about the corporate policies towards open source participation. In 2009 48% claimed they could contribute back to OSS but in 2010 only 35.4% claim they could contribute back. Conversely, 41% in 2010 claimed they use open source software but do not contribute back but in 2009 it was 27.1%. Obviously not a trend any open source community would like to see.

Poking holes in this conclusion is not a particularly difficult task. The survey covered Eclipse users, not the open source community or the corporate space as a whole. It asked about policies, but the respondents were mostly developers and designers who may not know what the company's policies really are. Fewer than 1,500 people responded to the 2009 survey, while nearly 2,000 responded to the 2010 survey, so they are not comparing the same group of people. 41% of the responses came from France and Germany, a proportion which might (or might not) be representative of the Eclipse user base, but which certainly is not representative of the community as a whole. And so on.

Looking at the survey, one could easily conclude that the Eclipse user community is growing; as it grows, it attracts companies that are relatively new to free software. These companies will naturally contribute less than those which have been comfortable with free software for some time. That conclusion is, of course, an exercise in hand waving, but it somehow seems more plausible than the idea of a wave of anti-free-software policies - not seen elsewhere - taking over the corporate world.

As a whole, our community does not appear to be in decline. We have no shortage of healthy projects with active contributor bases. Once upon a time, creating a single full-featured web browser looked like a nearly insurmountable challenge; now we have several of them. One can look at compilers, desktop suites, database managers, or point of sale systems; we have multiple vigorous competing projects in each. Sites like github can spring from nowhere and find themselves hosting millions of repositories in just a few years. In a community with this much activity, coming up with any good understanding of changes in participation rates will be hard. But one does not have to look for long to realize that things can't be in any particularly bad shape.

In this context, it is interesting to look at the last couple of press releases from the Linux Foundation; the LF has announced that LexisNexis and Toyota have become members - Toyota as a Gold member. LexisNexis also released its high-performance clustering system as free software. Neither of these new members can be thought of as a traditional information technology company, but both have found it in their interest to support the development of Linux.

That looks like the future of open source participation. Software companies have, for the most part, been working with our community for some years now. Most computing-related hardware companies are also contributing; there are signs that at least one of the big remaining holdouts will announce a change of policy before the end of the year. Companies like Volkswagen started contributing to Linux as far back as 2007. Increasingly, we are seeing interest from financial service companies, traditional manufacturers, and beyond. Being part of the free software community is not just for software companies anymore, and that is a good thing; it suggests that we'll not see a real decline in participation anytime soon.

Comments (5 posted)

A Firefox user plays with Chromium

By Jonathan Corbet
July 7, 2011
There was a period of time when it seemed that Internet Explorer was set to be the only web browser with any significant presence; Linux users looked to be doomed to a barely-supported web life using niche browsers. The success of Firefox saved us from that fate; for a while, it seemed that Firefox was set to take over. But the situation is more complicated than that; now the press is talking about the rapid rise of Google's "Chrome" browser. Your editor, having not seriously messed with Chrome/Chromium for some time, decided to experiment with using it full-time for a while. The end result: Chromium is a capable tool with only a few annoying glitches.

Discussions of Chrome tend to run into confusion based on the fact that there are actually two related browsers. Chromium is an open-source (BSD-licensed) project, while Chrome is a binary-only program available for free download. Chromium is the upstream for Chrome; they differ in that Google adds a bunch of proprietary stuff (Flash player, PDF viewer, codecs), an automatic update system, and a more colorful logo to Chrome. Both browsers are available for a number of Linux distributions. Anybody wanting a fully-free system will naturally stick to Chromium.

For a user moving to Chromium from Firefox, there is, at the outset, little in the way of culture shock in store. The Chromium developers seem to have put a great deal of work into making that transition easy. Chromium will pick up a lot of information from an existing Firefox installation, including bookmarks, browsing history, passwords, and more. (As an aside, it's worth noting just how easily Chromium can get its hands on the Firefox password store; any other program can do the same). The appearance is quite similar, and many of the keyboard shortcuts are the same. After a while one begins to notice little things that are missing (the combination of shift and the scroll wheel to move through the history is at the top of your editor's list), but it mostly just works.

Firefox makes a huge variety of configuration options available to users; Chromium has a rather smaller set. Most of the important things are there, but, once again, anybody who has made extensive use of Firefox's configurability will run into annoying gaps. At the top of the "pet peeve" list here is the lack of any ability to control animated images. Your editor is an easily distracted type; text is much harder to read when there are images jumping around on the screen. The "animate once" option in Firefox has always seemed like an ideal compromise; it enables viewing of kitten animations sent by one's daughter while filtering out ongoing obnoxiousness. Chromium users have no such feature.

Also missing is any sort of mechanism for associating "helper" programs with content types. There appears to be no way, for example, to tell the browser to pass a PDF file to evince or an m3u file to the user's choice of media player. As a result, Chromium, out of the box, is totally unable to deal with PDF files; one must install an extension to be able to view them at all. (Chrome has a PDF viewer built into it). This behavior seems to be driven by the ChromeOS use case, where the concept of applications outside the browser is deemed suspicious at best. For a full desktop system, though, it is limiting.

Extensions for Chromium are not in short supply. AdBlock is there, for those who want it. On the other hand, the lack of NoScript hurts; the "NotScript" extension tries to fill that gap, but it's not the same. NotScript setup is bizarre, requiring the user to hand-edit a file named

    ~/.config/chromium/Default/Extensions/\
	odjhifogjcknibkahlpidmdajjpkkcfn/0.9.6_0/CHANGE__PASSWORD__HERE.js

and insert a password which, seemingly, is never used again. NotScript seems to break more sites than NoScript does; the Red Hat bugzilla site, for example, simply refreshes forever with scripts disabled. NotScript also breaks Chrome's PDF viewer unless scripts are enabled for the site hosting the PDF file. There is (it must be said) no direct equivalent to the Firemacs extension providing Emacs keybindings; a similar extension failed to work. Many of these features are apparently harder to implement in Chromium than they are in Firefox; it seems likely that Chromium's emphasis on sandboxing and security, along with an attempt to make extensions portable across releases, may be to blame here.

Various glitches notwithstanding, Chromium is a capable and full-featured browser. It does appear to be quite fast, though Firefox's speed has rarely been a problem in recent times. Having done the work to switch over to this browser and integrate it into his workflow, your editor does not feel any immediate need to switch back to Firefox.

Chromium is promoted as an open source project, but the community has learned that Google often sees "open source" in its own unique way. It would appear, though, that Chromium is actually run like a real open-source development project. The project's code repository contains commits from some 759 developers, most of whom have been active in the last year. Developers tend to use @chromium.org email addresses, making it hard to tell how many of them come from outside Google. The project does give commit privileges to outside developers, though - they are not limited to the submission of patches. Google must certainly maintain a certain degree of control over the direction of the project, but Chromium does truly seem to have a development community of its own.

Despite its free license and growing adoption, Chromium tends to be supported reluctantly by many distributors. The project's release cycles are unclear at best, and its practice of forking and bundling libraries does not sit well with distributors; see this posting from Tom Callaway for a long discussion on the disconnect between Chromium and distributors. Chromium has an open bug tracker entry on making the project more distributor-friendly, but it seems to have more cobwebs than contributions. For reasons that have been extensively discussed over the years, web browser projects seem to have a hard time fitting into the distributor ecosystem.

Even so, there are repositories for a number of common distributions. Some work better than others; the Fedora repository does not support Rawhide, for example. But just about anybody wanting to run Chromium without building it (a daunting process which requires a 64-bit machine just to have the address space to do the link) on Linux can do so. That said, it's probably a fair guess that an awful lot of Linux users are running the proprietary Chrome releases. One should never underestimate the allure of a working YouTube. For those who would like to take that path, there are a number of "release channels" with varying distances from the bleeding edge.

To conclude: Chromium is a capable tool which has brought an interesting new level of competition to the browser space. The project's emphasis on speed and security are certainly welcome, as is the relatively open (for Google) nature of the project itself. On the down side, one might well wonder whether it is wise to put yet another piece of web infrastructure into a single company's hands. Google's intentions seem to be good now, but, as we've often seen, companies can change alignment overnight. So while Chromium is a welcome option to have, it might be best if it does not take over. The continued existence and success of strong competitors in the free software community can only be a good thing.

Comments (126 posted)

Prey: Open source theft recovery

July 6, 2011

This article was contributed by Nathan Willis

Chances are, you know someone who has lost a laptop or smartphone, or had one stolen outright. Although preventing the theft itself would be the most satisfying outcome, there is little a software developer can do on that front (although one might consider proximity-monitoring over Bluetooth to be a step in the right direction). Full-disk encryption and password-protection for the BIOS are common strategies, but another approach — device tracking — has never enjoyed a high level of popularity in open source. The Prey project offers one open source device tracking solution, with a small range of options — including self-hosting or using the company's online monitoring service.

Device tracking options

All software-based device tracking systems share one common element: they periodically record data about the machine and its surroundings, including location, IP address, and network settings, or perhaps even a photo taken with an on-board camera. Where they differ is in how and when that information is transmitted to the device owner, and how the application detects whether or not it has been lost or stolen. Some upload their periodically-collected data sets to a remote server on every run; others simply "phone home" to perform a status check, and only record and report information if they get the appropriate signal.

Proprietary solutions like GadgetTrak or LoJack for Laptops tend to rely on a central server run by the software provider, who waits for the device owner to report the loss or theft. The first widely-deployed open source device tracker was Adeona, which took a different approach: it uploaded its reports to a remote storage location at every snapshot period. That eliminated the need to maintain a central database of tracked devices, which is what the proprietary companies charged their recurring fees to maintain. Sadly, Adeona is currently offline, as the storage service used, OpenDHT, has been deactivated.

Prey operates more like the proprietary services. The default behavior is for the Prey script to wake up periodically and make an HTTP request to a preconfigured URL. The response code sent tells the script what to do next: a 200 (the "OK" status code) will shut the script down normally. Only a 404 "Not Found" code (which requires making a successful connection to the web server) will trigger the script's theft response. Prey can be configured only to send reports about its status and whereabouts, or to trigger local actions, such as an audible alarm, flashing an alert notification on screen, or locking the system.
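The check-in logic described above can be sketched in a few lines. This is an illustration of the mechanism only, not Prey's actual code (which is a set of Bash scripts); the check URL is a made-up example.

```python
# Sketch of Prey's heartbeat-and-respond cycle, as described in the text.
# CHECK_URL is hypothetical; a real install would use the Control Panel
# URL or a user-supplied one.
from urllib.request import urlopen
from urllib.error import HTTPError

CHECK_URL = "http://example.com/prey/device-check"  # hypothetical

def heartbeat_status(url=CHECK_URL):
    """Make the periodic HTTP request and return the response code."""
    try:
        return urlopen(url).status
    except HTTPError as e:
        # A 404 arrives as an HTTPError, but it still means the web
        # server was reached successfully.
        return e.code

def decide(status):
    """Map the server's response code onto the script's next action."""
    if status == 200:      # "OK": device has not been reported missing
        return "sleep"     # exit quietly until the next cron run
    if status == 404:      # "Not Found": owner marked the device missing
        return "report"    # gather data, sound alarms, lock, send report
    return "sleep"         # anything else: fail safe and do nothing
```

The asymmetry is the interesting design choice: the theft response requires a *successful* connection that returns 404, so a laptop that is merely offline does not start sounding alarms.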

[Prey control panel]

Fork Ltd., the company behind Prey's development, offers two administration options. Users can configure Prey to use the "Prey Control Panel" site as the pre-shared URL, or to run in standalone mode. Control Panel users can configure device-reporting settings and actions through the web site, as well as mark a device as missing (which removes the URL requested by the script's heartbeat-checking routine) and view any reports it sends. Standalone users must manually edit the Prey configuration file, supplying their own heartbeat URL, and provide valid SMTP and email configuration details — in standalone mode, the script emails its reports rather than posting them to the Control Panel server. Fork recently started offering paid "Pro" accounts, which allow simultaneous tracking of more devices and additional configuration options, but says it will always offer free accounts.

Stalking Prey

[Prey configuration]

The current release of Prey is version 0.5.3, and is available for Linux, Mac OS X, Windows, and Android. An iPhone/iPad version is presently in private beta testing. On Linux, the application itself is a collection of Bash scripts and support files (such as the audio file played as an alarm). The source bundle can be installed on any modern distribution, requiring only that a cron job be set up to run the main script at the desired interval. It depends on cURL to make its HTTP requests, the streamer utility from xawtv to capture webcam images, and Python to run a small GUI configuration tool. There is also a Debian package available on the download page, which is compatible with Debian and Ubuntu distributions.

The actual setup process is simple, but the Prey project's web site suffers from a severe documentation drought. There are no step-by-step guides suitable for new users (much less for configuring Prey in standalone mode): the necessary information is split between the README file in the code bundle, the FAQ page, the "Knowledgebase," and the mailing list. If all you are interested in doing is signing up for the free Control Panel service, you will probably muddle through without too many hiccups, but I found it difficult to track down answers to many of the questions I wanted settled before installing it.

For example, the site's splash page, FAQ, and README all discuss adding a "valid web URL" as the means of enabling Prey in standalone mode, but the example they give is a URL on one's own domain or blog. In the Prey configuration file, there are no client-side settings to control the actions (alarm, lock, etc.) performed or the data gathered when Prey sends a report — all of which are settings exposed on the Prey-hosted Control Panel site. If you read through the mailing list archives, though, you discover that the URL Prey checks for is intended not merely to return a valid HTTP response code, but to serve up an XML configuration file. There is still no documentation for the syntax and options of the file, but evidently if one first sets up an account with the Prey service, one can download and save the file to then use later in standalone mode.

I would also like to see more thorough disclosure of what machine information Prey sends to the Control Panel server. Because the Prey script makes an HTTP request, presumably a fair portion of the information could simply be gathered from the request headers, but any time an application "phones home" I prefer to see what it is going to say before I install it the first time. After initially adding a device, the Control Panel has a detailed hardware information section disclosing more about the system configuration than I generally like strangers to be able to access. For non-Pro accounts, the connection to the Control Panel site is not sent over SSL, which makes it vulnerable to eavesdropping. There are scattered references to Prey using Amazon S3 to store its webcam report snapshots, rather than sending them directly to the Prey site, but here again the documentation is sparse.

From a user standpoint, though, Prey works quite well. On Linux, the Bash scripts run with root privileges, which might concern those of a high-security mindset. Prey can gather an impressive set of information about your machine for the reports it sends back if the machine goes missing. In addition to webcam snapshots and the standard geolocation and WiFi-triangulation options, it can complete a traceroute, capture a desktop screenshot, log any recently-modified files, and log running applications.

The Control Panel interface is easy to understand, with newbie-friendly explanations of the various settings and options. Even the setup tool does a good job of outlining the difference between Control Panel and standalone mode. I am not sold on the value of playing an MP3 alarm file, but the ability to lock down the currently-running session (with password-protection) is a necessity when dealing with theft. The Prey team also deserves kudos for explaining the value of encrypting the hard disk and password-protecting the BIOS; those measures may be the only means to prevent attackers from wiping a machine entirely.

If you are not patient enough to wait until someone steals your laptop, you can dry-run Prey by running the /usr/share/prey/prey.sh script with the --check option. This will test the network connection, the crontab entry, and the validity of the API key and device ID for your hardware. Testing actual report generation is harder; for non-Pro accounts the only option appears to be marking your device as "missing" in the Control Panel interface.

Prey forward technologies

Taking a look at the configuration file (which on Linux systems is installed at /usr/share/prey/config), there are some tantalizing options that do not yet seem to be active in the 0.5.3 code, including SSH tunneling and SCP/SFTP file transfer, both with support for RSA key-based authentication instead of passwords. There are also some experimental options for modifying the HTTP request in order to get around firewalls and other attempts to block Prey. Here, too, my hope is that we will see further details documented on the site, rather than have them buried in the comments of the configuration file alone.

One final concern for potential Prey users is whether the three-device limit imposed on non-Pro accounts is genuinely enough. Even a single individual is likely to hit that limit these days, and although each member of a family could set up his or her own Control Panel account, I suspect that in most families a single member will end up bearing the burden of keeping tabs on the portable electronics. The limit will, of course, help sell Pro subscriptions — or push users into setting up their own standalone server.

To me, that makes the need for better documentation of standalone mode configuration all the more important. While an enterprising developer could craft a homemade equivalent to the Prey Control Panel and serve up a valid XML file to clients, all indications are that the Prey project wants to operate as a true open source citizen — the source code is available on GitHub and the mailing list is public. It just has a little bit further to go.

If you are looking for an open source alternative to the proprietary device tracking products on the market, Prey is worth examining. Because it is based on standard Linux system tools, it ought to run on any distribution — perhaps even including Maemo and MeeGo handsets, although they do not have Bash installed by default. You just might have some hoops to jump through if you are intent on running your own device-tracking server.

Comments (22 posted)

Page editor: Jonathan Corbet

Security

A decline in email spam?

By Jake Edge
July 7, 2011

One of the biggest internet irritants over the last decade or two clearly has to be email spam. It has collectively taken billions of hours of users' time to deal with, consumed countless terabytes of wasted disk space, burned bandwidth better spent on kitten videos, and used up vast quantities of developer time spent devising new filtering methods and other technological fixes. So, recent reports that email spam is in decline are certainly welcome, if true, but even with the 90% decline over the last year that is being reported, the amount of spam being sent is still staggering—and likely to be with us for a long time to come.

I haven't heard friends and colleagues extolling a reduction in the amount of spam they receive but, as they say, the plural of anecdote is not data. One would think that such a precipitous drop would be noticed by email users, however. In any case, Cisco, Symantec, and others are reporting numbers like 34 billion spam emails per day for April, down from 300 billion in mid-2010. That's an enormous drop in the volume, even if 34 billion a day is still huge. Without any hard data to the contrary, some significant drop-off in spam volume is a reasonable conclusion—and one worth exploring a little bit.

Spam has always been driven by its economics. In the early days, it cost almost nothing to send out huge volumes of email, and the chances of getting caught and meaningfully punished were quite small. That led to various "spam kings" who made outrageous amounts of money by spamming the world. If sending spam is, for all intents and purposes, free, you don't need a very high response rate to the pitch in order to bring in substantial sums. But that led to a backlash.

Users quickly tired of digging through email that was 90-100% spam, ISPs got smarter about not allowing their systems to be used for spam transmission, and, eventually, governments decided to ramp up the punishment side of the equation. Spam filtering became ubiquitous, blacklists that identified sites sending spam started to pop up, prosecutions of those sending spam were successful to some extent, and so on. The cost of sending spam has risen substantially over the years.

That's not to say that there aren't some folks still making lots of money sending spam, but these days there are bigger phish (so to speak) to fry. The most lucrative schemes today are more targeted and don't rely on sending enormous volumes of email.

It would be nice to think that users are getting a bit more sophisticated—or just running out of body parts to enlarge. It's hard to say whether that's true or not, but, even with the growth in new internet users, one might hope that the negative publicity about internet scams is making users more wary. Unfortunately, one doesn't have to search very far to find a news item about someone taken in by email claiming to be from a foreigner who wants to send them "EIGHT BILLION DOLLARS". So, it's probably overoptimistic to attribute much of the spam volume drop to users being less likely to respond to the pitch.

Filtering has certainly gotten better over the years, and moved from something users had to fiddle with to "the cloud" (or at least their ISP). Spammers have routinely run their emails through tools like SpamAssassin to try to evade filters, but there are limits to that approach, especially when individual Bayesian filters are factored in. It's difficult for even gullible users to respond to a spam pitch they don't see, so filtering has likely done much to reduce the effectiveness of spam.
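The individual Bayesian filters mentioned above are worth a brief sketch: each user's filter learns word statistics from that user's own mail, so a spammer cannot tune a message against every recipient's filter at once. The following is an illustrative toy, not how SpamAssassin or any real filter actually works (production filters combine many tests beyond token probabilities).

```python
# Toy per-user naive Bayes spam filter (illustrative only).
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam). Returns per-class token counts
    and per-class message totals."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, is_spam in messages:
        label = "spam" if is_spam else "ham"
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that the message is spam, with add-one smoothing.
    Positive leans spam, negative leans ham."""
    score = math.log((totals["spam"] + 1) / (totals["ham"] + 1))
    n_spam = sum(counts["spam"].values())
    n_ham = sum(counts["ham"].values())
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / (n_spam + 2)
        p_ham = (counts["ham"][word] + 1) / (n_ham + 2)
        score += math.log(p_spam / p_ham)
    return score
```

Because the learned token statistics differ from mailbox to mailbox, a message crafted to slip past one user's filter can still be caught by another's — which is the limit on the spammers' SpamAssassin-evasion tactic described above.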

Another factor that may be at play here is that many folks have moved beyond email for much or all of their communication. Text messages, instant messaging, and the services provided by various walled gardens (e.g. Facebook, Twitter) have replaced email for a lot of people, especially those darn kids, these days. Spam has, of course, evolved to assail those media as well. That kind of spam is not reflected in these recent statistics, however.

So, while it is somewhat heartening to hear that some folks are probably receiving less email spam, it's unlikely that it's really going to change things for most people. Users will still need filtering, ISPs and governments will still need to be vigilant, and clicking on links in dodgy email will still be a bad idea. While likely mind-numbing, seven days of reading all the email you receive might also prove somewhat eye-opening.

Like it or not, spam has become part of our culture. From the origin of the "spam" name to the various terms for different kinds of spam (419 spam, phishing, etc.), spam has used and been used by internet culture. Over the years, various folks have imagined horrible demises for spammers—e.g. Rule 34—usually involving the products they pitch in some bizarre fashion. So, at least we can get a chuckle from spam now and again, even as it is an extremely annoying—sometimes dangerous—phenomenon. In fact, it would be nice if junk (snail) mail filters were even half as good as email filters are these days.

Comments (42 posted)

Brief items

Security quotes of the week

The researchers tallied the losses from fake AV [anti-virus] victims of the three operations: One firm's victims lost $11 million; the second, $5 million; and the third, $116.9 million. That meant about $45 million per year in income for AV1, $3.8 million for AV2, and $48.4 million for AV3. The AV operators charged their victims $49.95 to $69.95 for six-month licenses, and $79.95 to $89.95 for lifetime licenses.
-- Dark Reading on a report [PDF] on fake anti-virus companies

You are supposed to be protecting us, but at this point you are ... terrorizing us. You have arbitrary and capricious rules that you apply without the aid of common sense. I mean where did TSA officials get their training, Abu Ghraib?
-- Elie Mystal

People get USB sticks all the time. The problem isn't that people are idiots, that they should know that a USB stick found on the street is automatically bad and a USB stick given away at a trade show is automatically good. The problem is that the OS trusts random USB sticks. The problem is that the OS will automatically run a program that can install malware from a USB stick. The problem is that it isn't safe to plug a USB stick into a computer.

Quit blaming the victim. They're just trying to get by.

-- Bruce Schneier

The ongoing WikiLeaks fight is a wake-up call for anyone who's been blithely relying on the cloud. It only took a few days for WikiLeaks to become a digital refugee, slogging from one service provider to the next, trying to find someone with enough backbone to keep it online in the face of legal threats, political intervention, and mysterious traffic-floods from persons or governments unknown.
-- Cory Doctorow

Comments (5 posted)

Vsftpd backdoor discovered in source code (The H)

The H reports that the vsftpd download site has been compromised and version 2.3.4 contains a back door. "The bad tarball included a backdoor in the code which would respond to a user logging in with a user name ':)' by listening on port 6200 for a connection and launching a shell when someone connects." Anybody who downloaded and installed that version should be looking to replace it quickly.

Comments (37 posted)

Top 25 Most Dangerous Software Errors

Each year, the SANS Institute and MITRE's Common Weakness Enumeration (CWE) project team up to create a list of the most dangerous software errors. The 2011 edition has just been released with SQL injection followed by OS command injection topping the list. "The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software."

Comments (none posted)

New vulnerabilities

bind: denial of service

Package(s):bind9 bind CVE #(s):CVE-2011-2464 CVE-2011-2465
Created:July 6, 2011 Updated:November 18, 2011
Description: Multiple versions of the bind9 name server are affected by two remote denial of service vulnerabilities. See the ISC advisories for CVE-2011-2464 and CVE-2011-2465 for more information.
Alerts:
Gentoo 201206-01 bind 2012-06-02
Oracle ELSA-2011-1458 bind 2011-11-18
Slackware SSA:2011-224-01 bind 2011-08-15
Fedora FEDORA-2011-9127 bind 2011-07-08
Mandriva MDVSA-2011:115 bind 2011-07-20
SUSE SUSE-SU-2011:0759-1 bind 2011-07-19
openSUSE openSUSE-SU-2011:0788-1 bind 2011-07-19
Fedora FEDORA-2011-9146 bind 2011-07-08
Scientific Linux SL-bind-20110707 bind 2011-07-07
Slackware SSA:2011-189-01 bind 2011-07-11
SUSE SUSE-SA:2011:029 bind 2011-07-08
CentOS CESA-2011:0926 bind 2011-07-07
Red Hat RHSA-2011:0926-01 bind 2011-07-07
Debian DSA-2272-1 bind9 2011-07-05
Ubuntu USN-1163-1 bind9 2011-07-05
Pardus 2011-100 bind bind-tools 2011-08-03

Comments (none posted)

dokuwiki: cross-site scripting

Package(s):dokuwiki CVE #(s):CVE-2011-2510
Created:July 7, 2011 Updated:October 10, 2011
Description:

From the Red Hat Bugzilla entry:

It was found that DokuWiki's RSS embedding mechanism did not properly escape user-provided links. An attacker could use this flaw to conduct cross-site scripting (XSS) attacks, potentially leading to arbitrary JavaScript code execution.

Alerts:
Gentoo 201301-07 dokuwiki 2013-01-09
Debian DSA-2320-1 dokuwiki 2011-10-08
Fedora FEDORA-2011-8831 dokuwiki 2011-06-28
Fedora FEDORA-2011-8816 dokuwiki 2011-06-28

Comments (none posted)

feh: arbitrary file overwrite

Package(s):feh CVE #(s):CVE-2011-0702
Created:July 5, 2011 Updated:October 14, 2011
Description: From the Red Hat bugzilla:

A Debian bug report indicated that feh is vulnerable to an arbitrary file overwrite flaw. If a user could guess the PID of the feh process and create a symlink in /tmp, they could cause wget to overwrite any file that the user running feh has write access to.

Alerts:
Gentoo 201110-08 feh 2011-10-13
Fedora FEDORA-2011-8747 feh 2011-06-26
Fedora FEDORA-2011-8750 feh 2011-06-26

Comments (none posted)

krb5-appl: privilege escalation

Package(s):krb5-appl CVE #(s):CVE-2011-1526
Created:July 5, 2011 Updated:March 22, 2012
Description: From the Red Hat advisory:

It was found that gssftp, a Kerberos-aware FTP server, did not properly drop privileges. A remote FTP user could use this flaw to gain unauthorized read or write access to files that are owned by the root group.

Alerts:
Scientific Linux SL-krb5-20120321 krb5 2012-03-21
Oracle ELSA-2012-0306 krb5 2012-03-07
Red Hat RHSA-2012:0306-03 krb5 2012-02-21
Gentoo 201201-14 mit-krb5-appl 2012-01-23
SUSE SUSE-SU-2012:0042-1 krb5 2012-01-05
SUSE SUSE-SU-2012:0010-1 krb5 2012-01-05
openSUSE openSUSE-SU-2012:0019-1 krb5-appl 2012-01-05
openSUSE openSUSE-SU-2011:1169-1 krb5 2011-10-24
Debian DSA-2283-1 krb5-appl 2011-07-25
Mandriva MDVSA-2011:117 krb5-appl 2011-07-22
Fedora FEDORA-2011-9109 krb5-appl 2011-07-06
Fedora FEDORA-2011-9080 krb5-appl 2011-07-06
Scientific Linux SL-krb5-20110705 krb5-appl 2011-07-05
Red Hat RHSA-2011:0920-01 krb5-appl 2011-07-05

Comments (none posted)

lftp: man-in-the-middle vulnerability

Package(s):lftp CVE #(s):
Created:July 7, 2011 Updated:July 7, 2011
Description:

From the Pardus advisory:

lftp up to and including version 4.1.3 has an option "ssl:verify-certificate" which unfortunately defaults to "no", i.e. no certificate checks. Moreover, when compiled with openssl rather than gnutls, lftp does not turn off SSLv2 (bad for openssl pre-1.0) and lacks code to actually verify the hostname, i.e. it is prone to MITM.

Alerts:
Pardus 2011-91 lftp 2011-07-06

Comments (none posted)

libvoikko: denial of service

Package(s):libvoikko CVE #(s):
Created:July 1, 2011 Updated:July 7, 2011
Description: From the Fedora advisory:

Backport a security fix from version 3.2.1: Fix handling of embedded null characters in input strings entered through the Python interface. The bug could be used to cause denial of service conditions and possibly other problems. Users of these interfaces are recommended to upgrade to this release. Applications that use the native C++ library directly (this includes all well known desktop applications) are not affected by this bug and no changes to the native library have been made in this release.

Alerts:
Fedora FEDORA-2011-8232 libvoikko 2011-06-14
Fedora FEDORA-2011-8227 libvoikko 2011-06-14

Comments (none posted)

NetworkManager: privilege escalation

Package(s):NetworkManager CVE #(s):CVE-2011-2176
Created:July 7, 2011 Updated:November 23, 2011
Description:

From the Red Hat Bugzilla entry:

It was found that NetworkManager, a network devices and connections manager, did not properly enforce the PolicyKit 'auth_admin' action element settings (did not require authentication by an administrative user), when the 'auth_admin' element was specified in the org.freedesktop.network-manager-settings.system.wifi.share.open (connection sharing via an open WiFi network) action. A local attacker could use this flaw to set up an insecure (passwordless) ad-hoc wireless network.

Alerts:
openSUSE openSUSE-SU-2011:1273-1 NetworkManager 2011-11-23
Mandriva MDVSA-2011:171 networkmanager 2011-11-11
Fedora FEDORA-2011-8612 NetworkManager 2011-06-24
Scientific Linux SL-Netw-20110712 NetworkManager 2011-07-12
Red Hat RHSA-2011:0930-01 NetworkManager 2011-07-12
Fedora FEDORA-2011-9005 NetworkManager 2011-07-03

Comments (none posted)

nfs-utils: authentication bypass

Package(s):nfs-utils CVE #(s):CVE-2011-2500
Created:July 7, 2011 Updated:July 19, 2011
Description:

From the Red Hat Bugzilla entry:

A security flaw was found in the way nfs-utils performed authentication of an incoming request, when an IP-based authentication mechanism was used and certain file systems were exported either to a netgroup or a wildcard (e.g. *.my.domain), and some file systems (either the same or different from the first set) were exported to specific hosts, IP addresses, or a subnet. A remote attacker able to create global DNS entries could use this flaw to access the exported file systems listed above.

Alerts:
Scientific Linux SL-nfs--20111206 nfs-utils 2011-12-06
Red Hat RHSA-2011:1534-03 nfs-utils 2011-12-06
openSUSE openSUSE-SU-2011:0747-1 nfs-utils 2011-07-19
Fedora FEDORA-2011-8934 nfs-utils 2011-07-01

Comments (none posted)

OpenSSH: private key disclosure

Package(s):openssh CVE #(s):
Created:July 7, 2011 Updated:July 8, 2011
Description:

From the OpenSSH advisory:

ssh-keysign is a setuid helper program that is used to mediate access to the host's private host keys during host-based authentication. It would use its elevated privilege to open the keys and then immediately drop privileges to complete its cryptographic signing operations.

After privilege was dropped, ssh-keysign would ensure that the OpenSSL random number generator that it depends upon was adequately prepared. On configurations that lacked a built-in source of entropy in OpenSSL, ssh-keysign would execute the ssh-rand-helper program to attempt to retrieve some from the system environment.

However, the file descriptors to the host private key files were not closed prior to executing ssh-rand-helper. Since this process was "born unprivileged" and inherited the sensitive file descriptors, there was no protection against an attacker using ptrace(2) to attach to it and instructing it to read out the private keys.

Alerts:
Pardus 2011-89 openssh 2011-07-06

Comments (1 posted)

packagekit: incorrect package signature check

Package(s):PackageKit CVE #(s):CVE-2011-2515
Created:July 5, 2011 Updated:July 7, 2011
Description: From the Red Hat bugzilla:

the basic problem here is that yum changed what PackageKit was relying on, and the end result is that a user can install an unsigned package without a GPG check, but be told by PackageKit that it is in fact signed (and trusted). It still requires a user to download said unsigned package manually (or from a rogue repo that is already setup) and also requires authentication to install the package.

Alerts:
Fedora FEDORA-2011-8943 PackageKit 2011-07-01

Comments (none posted)

php: arbitrary file creation/overwrite

Package(s):php5 CVE #(s):CVE-2011-2202
Created:June 30, 2011 Updated:April 13, 2012
Description:

From the Debian advisory:

CVE-2011-2202: Path names in form based file uploads (RFC 1867) were incorrectly validated.

Alerts:
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
Oracle ELSA-2012-1046 php 2012-06-30
Mandriva MDVSA-2012:071 php 2012-05-10
SUSE SUSE-SU-2012:0496-1 PHP5 2012-04-12
Scientific Linux SL-php-20120130 php 2012-01-30
Oracle ELSA-2012-0071 php 2012-01-31
CentOS CESA-2012:0071 php 2012-01-30
Red Hat RHSA-2012:0071-01 php 2012-01-30
Scientific Linux SL-php-20120119 php 2012-01-19
Oracle ELSA-2012-0033 php 2012-01-18
CentOS CESA-2012:0033 php 2012-01-18
Red Hat RHSA-2012:0033-01 php 2012-01-18
Oracle ELSA-2011-1423 php53/php 2011-11-03
Oracle ELSA-2011-1423 php53/php 2011-11-03
Scientific Linux SL-NotF-20111102 php53/php 2011-11-02
Mandriva MDVSA-2011:165 php 2011-11-03
CentOS CESA-2011:1423 php53 2011-11-03
Red Hat RHSA-2011:1423-01 php53/php 2011-11-02
Ubuntu USN-1231-1 php5 2011-10-18
openSUSE openSUSE-SU-2011:1138-1 php5 2011-10-17
openSUSE openSUSE-SU-2011:1137-1 php5 2011-10-17
Gentoo 201110-06 php 2011-10-10
Fedora FEDORA-2011-11537 maniadrive 2011-08-26
Fedora FEDORA-2011-11528 maniadrive 2011-08-26
Fedora FEDORA-2011-11537 php-eaccelerator 2011-08-26
Fedora FEDORA-2011-11528 php-eaccelerator 2011-08-26
Fedora FEDORA-2011-11537 php 2011-08-26
Fedora FEDORA-2011-11528 php 2011-08-26
Slackware SSA:2011-237-01 php 2011-08-25
Debian DSA-2266-1 php5 2011-06-29

Comments (none posted)

qemu-kvm: arbitrary code execution

Package(s):qemu-kvm CVE #(s):CVE-2011-2512
Created:July 5, 2011 Updated:July 19, 2011
Description: From the Debian advisory:

It was discovered that incorrect sanitising of virtio queue commands in KVM, a solution for full virtualization on x86 hardware, could lead to denial of service or the execution of arbitrary code.

Alerts:
Gentoo 201210-04 qemu-kvm 2012-10-18
SUSE SUSE-SU-2011:0806-1 kvm 2011-07-19
openSUSE openSUSE-SU-2011:0803-1 kvm 2011-07-19
Ubuntu USN-1165-1 qemu-kvm 2011-07-06
Scientific Linux SL-qemu-20110705 qemu-kvm 2011-07-05
Red Hat RHSA-2011:0919-01 qemu-kvm 2011-07-05
Debian DSA-2270-1 qemu-kvm 2011-07-01

Comments (none posted)

qemu-kvm: privilege escalation

Package(s):qemu-kvm CVE #(s):CVE-2011-2212
Created:July 6, 2011 Updated:July 25, 2011
Description: The virtio subsystem in qemu-kvm suffers from a buffer overflow which can be exploited to crash the guest or execute arbitrary code.
Alerts:
Gentoo 201210-04 qemu-kvm 2012-10-18
Debian DSA-2282-1 qemu-kvm 2011-07-25
SUSE SUSE-SU-2011:0806-1 kvm 2011-07-19
openSUSE openSUSE-SU-2011:0803-1 kvm 2011-07-19
Ubuntu USN-1165-1 qemu-kvm 2011-07-06
Scientific Linux SL-qemu-20110705 qemu-kvm 2011-07-05
Red Hat RHSA-2011:0919-01 qemu-kvm 2011-07-05

Comments (none posted)

rubygem-activesupport: cross-site scripting

Package(s):rubygem-activesupport CVE #(s):CVE-2011-2197
Created:June 30, 2011 Updated:September 7, 2011
Description:

From the Red Hat Bugzilla entry:

A cross-site scripting (XSS) flaw was found in the way Ruby on Rails performed management of safe buffers (certain methods could append unsafe strings to buffers already containing strings marked as safe, without marking the resulting buffer as unsafe). A remote attacker could use this flaw to conduct XSS attacks by tricking a local user into visiting a specially-crafted web page.

Alerts:
Fedora FEDORA-2011-8580 rubygem-actionpack 2011-06-24
Fedora FEDORA-2011-8494 rubygem-activesupport 2011-06-21

Comments (none posted)

syslog-ng: denial of service

Package(s):syslog-ng CVE #(s):CVE-2011-1951
Created:June 30, 2011 Updated:July 7, 2011
Description:

From the Red Hat Bugzilla entry:

A denial of service flaw was found in the way syslog-ng processed certain log patterns when the 'global' flag was specified and the PCRE backend was used for matching. A remote attacker could use this flaw to cause excessive memory use by the syslog-ng process via a specially-crafted pattern.

Alerts:
Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Fedora FEDORA-2011-8405 syslog-ng 2011-06-21

Comments (none posted)

tftp: buffer overflow

Package(s):tftp CVE #(s):CVE-2011-2199
Created:July 5, 2011 Updated:July 10, 2012
Description: From the openSUSE advisory:

Malicious clients could overflow a buffer in tftpd by specifying a large value for the utimeout option.

Alerts:
Mageia MGASA-2012-0147 tftp 2012-07-09
Gentoo 201206-12 tftp-hpa 2012-06-21
openSUSE openSUSE-SU-2011:0734-1 tftp 2011-07-05

Comments (none posted)

weechat: man-in-the-middle attack

Package(s):weechat CVE #(s):CVE-2011-1428
Created:July 5, 2011 Updated:July 7, 2011
Description: From the CVE entry:

Wee Enhanced Environment for Chat (aka WeeChat) 0.3.4 and earlier does not properly verify that the server hostname matches the domain name of the subject of an X.509 certificate, which allows man-in-the-middle attackers to spoof an SSL chat server via an arbitrary certificate, related to incorrect use of the GnuTLS API.

Alerts:
Debian DSA-2598-1 weechat 2013-01-05
Fedora FEDORA-2011-7839 weechat 2011-06-03
Fedora FEDORA-2011-7843 weechat 2011-06-03

Comments (none posted)

wordpress: privilege escalation

Package(s):wordpress CVE #(s):
Created:July 7, 2011 Updated:July 12, 2011
Description:

From the WordPress advisory:

This release fixes an issue that could allow a malicious Editor-level user to gain further access to the site.

Alerts:
Fedora FEDORA-2011-8908 wordpress 2011-06-30
Fedora FEDORA-2011-8877 wordpress 2011-06-30

Comments (none posted)

xen: privilege escalation

Package(s):xen CVE #(s):CVE-2011-1898
Created:June 30, 2011 Updated:November 7, 2011
Description:

From the Xen advisory:

Intel VT-d chipsets without interrupt remapping do not prevent a guest which owns a PCI device from using DMA to generate MSI interrupts by writing to the interrupt injection registers. This can be exploited to inject traps and gain control of the host.

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Red Hat RHSA-2012:0358-01 kernel 2012-03-06
Scientific Linux SL-kern-20111129 kernel 2011-11-29
CentOS CESA-2011:1479 kernel 2011-11-30
Oracle ELSA-2011-1479 kernel 2011-11-30
Red Hat RHSA-2011:1479-01 kernel 2011-11-29
Debian DSA-2337-1 xen 2011-11-06
SUSE SUSE-SU-2011:0942-1 Xen 2011-08-25
Scientific Linux SL-kern-20110823 kernel 2011-08-23
openSUSE openSUSE-SU-2011:0941-1 xen 2011-08-25
Red Hat RHSA-2011:1189-01 kernel 2011-08-23
SUSE SUSE-SU-2011:0925-1 Xen 2011-08-19
Fedora FEDORA-2011-8421 xen 2011-06-21
Fedora FEDORA-2011-8403 xen 2011-06-21

Comments (4 posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.0-rc6, released on July 4. It has a new Intel isci driver, which adds a significant chunk of code, but otherwise it's basic fixes. "It's getting to the point where I'm thinking I should just release 3.0, because it's been pretty quiet, and the fixes haven't been earth-shakingly exciting." See the full changelog for all the details.

Stable updates: no stable updates have been released in the last week, and none are in the review process as of this writing.

Comments (3 posted)

Quotes of the week

The number of times I have to explain to industrial and business customers that Linux doesn't suck but the defaults are stupid is astounding, and they then wonder why either the authors or their vendor is a complete and utter moron.
-- Alan Cox

I have to say, I look over these patches and my mind wants to turn to things like puppies. And ice cream.
-- Andrew Morton

Your changelog fails the basic test by mentioning "corner case" simply because the whole futex code consists only of corner cases.
-- Thomas Gleixner

I realize that it's annoying to spend a lot of time on a specific implementation and then see competing code get merged. Unfortunately, this happens all the time, and the code we merge is often not the one that has had the most effort spent on it, but the one that looks most promising at the time when it gets merged.
-- Arnd Bergmann

And quite frankly, Christoph Hellwig has now _twice_ said good things about that driver, which is pretty unusual. It might mean that the driver is great. Of course, it's way more likely that space aliens are secretly testing their happy drugs on Christoph. Or maybe he's just naturally mellowing.
-- Linus Torvalds

Comments (none posted)

What are they polling for?

By Jonathan Corbet
July 7, 2011
The poll(), select(), and epoll_wait() system calls all allow an application to ask the kernel whether I/O on any of a list of file descriptors would block and, optionally, to wait until one or more descriptors become ready for I/O. Internally, they are all implemented with the poll() method in the file_operations structure:

    unsigned int (*poll) (struct file *filp, struct poll_table_struct *pt);

This function returns a value indicating whether non-blocking I/O is currently possible; it is also expected to add a wait queue to the "poll table" (pt) passed in. If no file descriptors are ready for I/O, the calling process will block on all of the accumulated wait queues.

poll() has long implemented an optimization: if an early poll() function indicates that I/O is possible, the kernel knows that it will not be blocking the calling process. So it stops accumulating wait queues; this state is indicated by passing a null pointer for pt. That all works well except in one case: what if a driver needs access to some of the information stored in the poll table?

In particular, the driver might want to know whether the caller is interested in readiness for read or write access, or whether it is looking for exceptional events. For example, if the application wants to read from the descriptor, the driver may need to fire up some device machinery to make that possible. This situation has not come up very often, but it does tend to affect Video4Linux drivers. In response, Hans Verkuil has posted a patch slightly changing the way poll() works.

With the patch, the poll table is never passed as null; instead, the "we will not be blocking" case is marked internally. So the set of events requested by the application is always available; Hans has provided a helper function to access that information:

    unsigned long poll_requested_events(const poll_table *p);

There has been little discussion of the patch; it doesn't seem like there is any real reason for it not to go in for 3.1.

Comments (none posted)

Kernel development news

Seccomp filters: No clear path

By Jake Edge
July 7, 2011

Patches to expand the functionality of seccomp ("secure computing") have been floating around for two years or more without making any real progress into the mainline. There are a number of projects that are interested in using an expanded seccomp, but the patches themselves seem to have run into a "catch-22" situation. There are conflicting visions of how the feature should be added, without a clear sense that any of the options will be acceptable to all of the maintainers involved. That leaves a useful feature without a clear path into the kernel, which is undoubtedly frustrating to some.

We first looked at seccomp sandboxing a little over two years ago, when Adam Langley posted patches that would provide a way for a process to restrict the system calls that it (and its children) could make. The idea is to allow processes to sandbox themselves by choosing which system calls are available, rather than being restricted to just the four hard-coded system calls that the existing seccomp implementation allows (read(), write(), exit(), and sigreturn()). The impetus behind Langley's patches was to provide an easier mechanism for sandboxing processes in the Chromium web browser—and to eventually remove the somewhat convoluted sandbox that Chromium currently uses on Linux.

At the time of that proposal, Ingo Molnar suggested that Ftrace-style filtering would make the expanded seccomp much more useful. That idea wasn't universally hailed at the time, and the seccomp feature went mostly dormant until it was restarted by Will Drewry back in April. Drewry took Molnar's suggestions and implemented a version of seccomp that would allow system calls to be enabled, disabled, or filtered with simple boolean expressions (e.g. sys_read: (fd == 0)).

While Molnar was pleased with the progress, he didn't think it went far enough and suggested that a perf-like interface be used instead of prctl(), which is used by the existing seccomp. He had some fairly wide-ranging ideas that using perf events in a more active way could lead to better kernel security solutions than the existing Linux Security Modules (LSM) approach provides. Once again, this idea was not universally popular. The LSM developers, in particular, were not enamored by that idea.

Nevertheless, Drewry implemented a proof of concept along the lines of what Molnar had suggested. That led to complaints from a somewhat surprising direction, as both Peter Zijlstra and Thomas Gleixner strongly objected to perf being used in an active role. Their responses didn't leave room for any middle ground, with Zijlstra, who is one of the perf maintainers along with Molnar, saying that he and Gleixner would NAK "any and all patches that extend perf/ftrace beyond the passive observing role".

All of which led Drewry, who must be feeling a bit whipsawed at this point, to return to the patchset that seemed to have the most support: using Ftrace/perf-style filters, but maintaining the prctl() interface that is currently used by seccomp. Linus Torvalds had expressed some skepticism that the feature would have any real users, but Drewry outlined how it would be used by Chromium, and several other developers spoke up in favor of expanding seccomp, saying that QEMU, Linux containers (LXC), and others would use the feature. Those endorsements, along with the resolution of some other technical concerns, were enough for Torvalds to remove his objection to the feature. But, as might be guessed, Molnar is still not satisfied with the approach.

When Drewry reposted the patchset toward the end of June, and asked what the next steps were, Molnar noted that his concerns were not being addressed: "You are pushing the 'filter engine' approach currently, not the (much) more unified 'event filters' approach." But Drewry is trying to find a balance between the needs of the potential users, other maintainers, and Molnar's requests, which is somewhere between difficult and impossible:

Based on the support from potential API consumers, I believe there is interest in this patch series, and I worry that just like with the last two attempts in the last two years, this series will be relegated to the lwn archives in anticipation of a future solution that uses infrastructure that isn't quite ready. I'm trying to approach a problem that can be addressed today in a flexible, future-friendly way, rather than try to open up a larger cross-kernel impacting patch series that I'm unsure of exactly how to integrate sanely and don't know that I can commit to doing.

But Molnar is adamant that the "filter engine" approach is short-sighted, citing the diffstats of the various implementations as evidence:

Not doing it right because "it's too much work", especially as the trivial 'proof of concept' prototype already gave us something very promising that worked to a fair degree:
       bitmask (2009):  6 files changed,  194 insertions(+), 22 deletions(-)
 filter engine (2010): 18 files changed, 1100 insertions(+), 21 deletions(-)
 event filters (2011):  5 files changed,   82 insertions(+), 16 deletions(-)
are pretty hollow arguments to me. That diffstat sums up my argument of proper structure pretty well.

But, as Drewry points out, there is still a lot of work to be done to get beyond the proof-of-concept and to a fully fleshed-out solution. Given that the approach has already received several NAKs, doing all of that work has a very uncertain future. Drewry would like to see the feature be available soon, and is concerned that working on the larger problem is likely to delay that significantly, if it can ever get beyond the objections: "If all the other work is a prerequisite for system call restriction, I'll be very lucky to see anything this calendar year assuming I can even write the patches in that time."

Molnar is undeterred, however, suggesting that there is a path into the kernel through the tree that he co-maintains:

Do it properly generalized - as shown by the prototype patch. I can give you all help that is needed for that: we can host intermediate stages in -tip and we can push upstream step by step. You won't have to maintain some large in-limbo set of patches. 95% of the work you've identified will be warmly welcome by everyone and will be utilized well beyond sandboxing! That's not a bad starting position to get something controversial upstream: most of the crazy trees are 95% crazy.

The problem, of course, is that the 5% is the piece that Drewry and others are most interested in seeing (i.e. the system call restrictions for sandboxing) in the kernel. So, what Molnar seems to be offering is a fairly sizable chunk of work that could, in the end, still leave the "interesting" part out in the cold. Molnar may be confident that he can overcome the objections from Zijlstra and Gleixner, but Drewry can hardly be as sanguine. He describes the problem as he sees it:

It seems like a catch-22. There's not a perfectly clear path forward, and anything that looks like the perf-style proof of concept will be NACK'd by other maintainers. While I believe we could lift perf up off its foundation and create a shared location for storing perf events and ftrace events so that they will be inherited the same way (currently nack'd by linus) and walked the same way (kinda), the syscall interface couldn't currently be shared (also nack'd by perf), and creating a new one is possible modeled on the perf one, but it's also unclear what the ABI should be for a generic filtering system.

Both Zijlstra and Gleixner have been absent from the most recent discussion, so it's a little hard to guess what their thoughts are. In the absence of any kind of posting softening their stances, though, it would be a bad idea to believe that they have changed their minds.

It's a problem that we have seen before, where a new feature is, to some extent, held hostage to requests that a larger problem be solved. The issue was discussed at the 2009 Kernel Summit, where there was agreement that such requests should be advisory in nature, rather than demands. In this case, Molnar is not really demanding that the bigger task be done; he is simply unwilling to take the code via the -tip tree unless it solves the larger problem.

It is unclear where things go from here. Drewry said that he would look at trying to do things Molnar's way ("but if my only chance of any form of this being ACK'd is to write it such that it shares code with perf and has a shiny new ABI, then I'll queue up the work for when I can start trying to tackle it"), but it may be a ways off. In the meantime, there are various projects interested in using the feature.

If falling back to the bitmask version of the feature solves enough of the problem for those projects, there is the possibility of trying to get that into the kernel via another tree (e.g. the security tree). There would undoubtedly be objections from Molnar, but if enough users lined up behind it, that might be a reasonable approach. It would create an ABI that would need to be maintained going forward, which is one of Molnar's objections, but it would solve problems for Chromium and others.

Steven Rostedt suggested adding the seccomp expansion as a discussion item for the Kernel Summit in October, which might provide a path forward. It's likely that most or all of the interested parties will be there (unlike the Linux Security Summit that will be held with Plumbers in September, which was suggested as an alternative). While a face-to-face discussion could be helpful, it might be a stretch to believe that the disagreement over active versus passive perf could be resolved that way. On the other hand, it could lead to some kind of decree from Torvalds about the proper direction. That could go a long way toward resolving the issue.

Comments (1 posted)

CMA and ARM

By Jonathan Corbet
July 5, 2011
LWN recently looked (again) at the contiguous memory allocator (CMA) patch set; CMA is intended to provide large, contiguous DMA buffers to drivers without requiring that memory be set aside for that exclusive purpose. CMA was recently reposted with the idea that it is nearly ready for merging. There is a clear desire to see this code get at least into the -mm tree, even if it is not yet quite ready for the mainline. Most reviewers are pleased with CMA; it would seem that there are very few roadblocks remaining. Except that, as it turns out, one big obstacle remains.

Over the years, LWN has also looked at ARM's special memory management challenges. Recent ARM CPUs are, like those implementing other architectures, becoming more complex in order to improve performance. So ARM processors can now do speculative prefetching of memory contents in surprising ways. This prefetching works well on cached memory, but should not be used on memory that has been marked as uncached. An additional complication comes from the fact that virtual memory systems can have more than one mapping for a given range of memory, and caching is a feature of the mapping, not the memory itself. So one might well wonder what happens if different mappings have different caching attributes. On recent ARM processor designs, what happens is officially undefined; in practice, it can mean problems like corrupted memory, machine checks, or simple hangs. As it happens, kernel developers normally go out of their way to avoid that kind of behavior.

The current CMA mechanism is used as an allocator behind dma_alloc_coherent(), which creates a cache-coherent DMA buffer. In the absence of bus-snooping hardware that is able to notice when a DMA transfer changes memory, "cache-coherent" is likely to mean simply "uncached." So CMA must, on such systems, create an uncached range of memory to hand back to the requesting driver. That is easily done, and all should be well...at least, unless there happens to be another mapping to the same memory with different caching attributes.

Unfortunately, conflicting mappings can come about easily on a Linux system. One of the first things the kernel does as it boots is to create a "linear mapping" which provides kernel-space virtual addresses for most or all of the memory present in the system. The kernel cannot manipulate memory directly without such a mapping; putting as much of memory as possible into a persistent mapping thus makes sense. On a 32-bit system, just under 1GB of memory can be mapped this way (64-bit systems can always map all of memory and will be able to do so for quite some time yet). This kernel-mapped memory is called "low memory"; almost all allocations of memory for the kernel's use come from the low memory area. Naturally, low memory is mapped with caching enabled; to do otherwise would destroy the performance of the system. If a region of low memory is turned into a DMA buffer with an uncached mapping, the system will have two conflicting mappings for the same memory and will have moved into "undefined behavior" territory.

These conflicting mappings are the reason behind ARM maintainer Russell King's strong opposition to the merging of CMA in its current form. He believes that the code is unsafe on ARM systems; it should not, he says, be merged until the mapping problem has been solved. The interesting thing is that the existing DMA API has the same problem on ARM; dma_alloc_coherent() uses vanilla alloc_pages() to obtain a buffer, then changes the caching attributes before giving the buffer back to the caller. The addition of CMA does not make ARM's DMA API any more or less safe than it was before; it just perpetuates an existing problem.

Russell has a patch pending for 3.1 which addresses this problem by setting aside a chunk of memory which is never mapped into the kernel's address space. With this memory pool available, coherent DMA mappings can be set up without endangering the operation of the system. The whole reason CMA exists, though, is to provide large, contiguous buffers without the need to set aside memory; Russell's approach thus defeats the entire purpose. The pressures which have led to the creation of CMA will not go away anytime soon, so it seems that another solution is needed. Arnd Bergmann has outlined two possibilities, neither of which is entirely pleasant:

  • CMA could be changed to only allocate from the high memory zone. High memory is (by definition) not in the kernel's linear mapping, so no other mappings should exist. The problem with this approach is that it forces the use of high memory on all systems; ARM-based systems are reaching the point where some of them need high memory anyway, but that need is not, yet, universal. Getting enough memory into the high memory zone to be useful could require moving the boundary and shrinking low memory; that is not desirable because low memory is often a limiting resource already. Even if that obstacle can be overcome, the ARM architecture poses unique challenges which would make a high memory implementation hard.

  • Memory that has been turned into a coherent DMA buffer could simply be removed from the kernel's linear mapping until the buffer is no longer needed. This approach seems simple until one remembers that the kernel uses huge pages for the linear mapping. Splitting those huge pages into smaller pages would increase translation lookaside buffer (TLB) contention, reducing the performance of the system as a whole.

Compared to these alternatives, simply setting aside a chunk of memory at boot time might not look like such a bad idea after all. CMA developer Marek Szyprowski's plan appears to be to go with the second of those two alternatives; he thinks that it can be done without significantly hurting performance.

In truth, the best tradeoff will almost certainly differ from one platform to the next. In some situations, memory will be tight enough that a significant runtime penalty to avoid making static DMA buffers seems worthwhile; on others, setting aside a bit of memory may not be a real problem. So what may come of all this is a set of choices to be made when configuring a kernel. There does not appear to be a single solution which just works for everybody on the horizon at this time.

Comments (1 posted)

Deferred driver probing

By Jonathan Corbet
July 7, 2011
The developers working on the initial OLPC laptop ran into an interesting problem: the camera driver would fail to initialize if it was built into the kernel, but it worked just fine if built as a module. That problem still exists; it is a symptom of an issue which comes up frequently in contemporary systems: there is no way to know at build time what dependencies exist between different hardware units, so there is no way to ensure that drivers are loaded in the right order. A new patch from Grant Likely tries to solve that problem in a simple sort of way; it will probably improve the situation, but a complete solution is still lacking.

The problem with the camera driver is a result of the fact that the "camera" is, in reality, three devices working in concert: a DMA bridge, a sensor, and an I2C bus connecting the two. The bridge (which plays the role of the overall "camera driver") must locate and identify the sensor as part of its setup routine; if the sensor does not exist, initialization will fail. But the sensor will not exist until its driver and the I2C bus driver have been loaded into the system. If all of the drivers are built into the kernel, the bridge driver's probe() function may be called first; there will be no sensor, so everything fails.

Contemporary systems - especially those of the mobile variety - are increasingly built this way. Grant gave another example:

A "sound card" typically consists of multiple devices; one or more codecs (often i2c or spi attached), a sound bus (often i2s), a dma controller, and a lump of machine/platform specific code that ties them all together. Right now the ASoC code is going through all kinds of gymnastics make each component register with the ASoC layer and the 'tie together' driver has to wait for each of them to show up.

The key point to understand is that the various components that make up a "device" may appear to be entirely unrelated at the hardware level. They can be on different buses; some of them may be subcomponents of entirely different devices. A general-purpose kernel has no real way to know what the real dependencies between devices are until all of the pieces are present and have started to recognize each other.

Grant's patch takes a simple approach to solving this problem: drivers which are unable to initialize their devices as the result of missing resources can request that the operation be retried at some point in the future. That request is a simple matter of returning -EAGAIN from the probe() function. The driver core maintains a simple linked list of drivers that have requested this sort of deferral; when the time seems right, the deferred probe() invocations are retried to see if things work any better.

One of the concerns raised with regard to this patch had to do with the determination of the right time. How might the kernel know when a failed initialization might work? The event which may change the situation is the successful addition of a new device to the system, so the current patch retries all of the deferred calls every time a new device shows up. The mechanism used for the retries (a workqueue) will tend to coalesce these attempts when a lot of devices are being registered (during system boot, for example), but it still strikes some reviewers as being inefficient. Grant has promised a revision of the patch which improves the situation.
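The mechanism is simple enough to model outside the kernel. The sketch below is a hypothetical user-space simulation — the structure fields, helper names, and list handling are all invented for illustration and are not the driver core's actual code — but it captures the two moving parts: a probe() that returns -EAGAIN when a needed resource is missing, and a retry of all deferred probes whenever a new device appears.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Toy model of deferred probing: drivers whose probe() returns
 * -EAGAIN are parked on a list and retried whenever a new device
 * is registered.  All names here are hypothetical. */

struct driver {
	const char *name;
	const char *needs;		/* device this driver must find first */
	int bound;
	struct driver *next_deferred;
};

static const char *devices[16];
static int ndevices;
static struct driver *deferred;		/* drivers awaiting a retry */

static int device_present(const char *name)
{
	for (int i = 0; i < ndevices; i++)
		if (strcmp(devices[i], name) == 0)
			return 1;
	return 0;
}

static int probe(struct driver *drv)
{
	if (drv->needs && !device_present(drv->needs))
		return -EAGAIN;		/* missing resource: ask for a retry */
	drv->bound = 1;
	return 0;
}

static void driver_register(struct driver *drv)
{
	if (probe(drv) == -EAGAIN) {
		drv->next_deferred = deferred;
		deferred = drv;
	}
}

/* Registering a device retries every deferred probe; the real patch
 * coalesces these retries through a workqueue. */
static void device_add(const char *name)
{
	struct driver *list, *drv;

	devices[ndevices++] = name;
	list = deferred;
	deferred = NULL;
	while ((drv = list) != NULL) {
		list = drv->next_deferred;
		driver_register(drv);	/* may defer again */
	}
}
```

Registering a camera-bridge driver that needs a "sensor" device parks it on the deferred list; a later device_add("sensor") retries the probe, which then succeeds.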

A related question is: when can the kernel conclude that there is no longer any point in retrying a specific driver's probe() function? In today's dynamic hardware environment, there never comes a point where one can say that no more devices will show up. This question has no real answer; it could be that, in a poorly configured or broken system, the process will never terminate. The cost of a driver stuck in the deferred state should be small, though.

Others have questioned the need for this mechanism at all, but the responses have made it clear that something needs to be done to address this kind of hardware. A proper solution in the driver core seems like a better answer than a bunch of one-off hacks in specific drivers. So something will probably go in.

Someday perhaps we will see a more elegant and efficient mechanism. One could imagine an API allowing a driver to specify which resources it is looking for; that driver's probe() function would then be put on hold until those resources become available. The driver core already generates events when new devices become available; some code matching those events to waiting drivers could be the last piece. But there would be a need to come up with a language by which a driver could express a need like "a device at address 42 on this I2C bus"; getting that right could take some work.

Meanwhile, Grant's patch offers a "good enough" solution which appears capable of solving the problem most of the time. Accepting "good enough" when it's truly good enough is a key part of pragmatic programming. So chances are we'll have deferred driver initialization in the kernel sometime soon; fancier mechanisms may be rather longer in coming.

Comments (4 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

DoudouLinux: You know, for kids

July 7, 2011

This article was contributed by Nathan Willis

The more Linux distributions that pop up, the more skeptical one can become of those that seek to "tailor" the platform for a specific subset of users because, too often, that tailoring process amounts to little more than applying the project leader's personal set of cosmetic preferences and default application choices. That is not the case for projects that attempt to craft a distribution for an entirely different demographic, though, such as children. Most "Linux for kids" distributions seem to target more or less the same space, but one, DoudouLinux, is taking a markedly different approach.

What Linux?

[Activities]

For starters, the distribution's name is derived from doudou, a French idiomatic word that roughly means "child's security blanket." It may still elicit giggles from English speakers, though, particularly from the extremely young users it sets out to serve. If you can get past that, however, DoudouLinux does offer a different take on the children's distribution niche.

There are several active kids' Linux projects today — Edubuntu, Qimo, OLPC Sugar, Foresight Kids, etc. — but most of them focus either on classroom usage or on school-age children, meaning those old enough to be comfortable using the computer to write or do general-computing tasks. There is a gray area, of course, but think mid-to-late primary school at the youngest. DoudouLinux, on the other hand, is tailored toward children in the two-to-seven-year-old age range, for use at home. The project's site explains its purpose as being a home-friendly, more educational alternative to a game console plugged into the television.

One can feel the difference in at least three ways. First, classroom-oriented distributions usually have to cope with remote management and administration issues, often using remote desktop software, and they expect the computer to be directly connected to a LAN, perhaps with access to a storage-area network. Some are even traditional "thin-client" designs. DoudouLinux dispenses with all of that. It is designed to run from live CD or live USB storage, offers minimal system configuration or management options, and the majority of its software will run without a live Internet connection.

The second difference is that "school age" kids have different application requirements. As they get older, the office suite becomes more important, and what qualifies as an "educational" application becomes more specialized, focusing more on math or science and less on games. DoudouLinux's application offerings do not include a word processor or spreadsheet at all, but incorporate plenty of interactive and tactile-experience learning tools.

Third, extremely young children like those on the early end of DoudouLinux's audience simply do not have the fine motor skills to use a mouse or keyboard with standard-size icons, and those who are not reading yet might be lost entirely. Those requirements affect the choice of applications, but also require changes to the window system as well as the keyboard and mouse settings. DoudouLinux provides parents with a set of tools to configure the system settings to support a range of ages and mouse-finesse levels.

Course catalog

DoudouLinux's latest stable release is code-named Gondwana, from June of 2011, which you can download in CD or USB-oriented images from the project's web site. There are images in 16 languages (including two in Serbian, for both the Latin and the Cyrillic alphabets) from the stable series, and six additional languages in the "awaiting" section, which may have outstanding translation issues. There is also a PDF "Quickstart" manual in each of the stable languages. Downloading can take a while, even on a fast network connection, because the project does not have mirrors set up.

The images are built for 32-bit Intel-compatible processors only; to its credit, the project does an admirable job of explaining which generations of Apple hardware this includes, as well as giving a rough overview of Linux and free software concepts in general. The principal difference between the CD and USB images is that the USB images are pre-configured to use part of the flash drive as persistent storage. DoudouLinux allows a small amount of system customization, which is helpful to preserve between sessions.

At the moment, DoudouLinux is intended primarily to run from such removable storage devices, although there is documentation for installing it to a hard drive as well. The roadmap indicates that future releases hope to add support for booting within Windows, a la Ubuntu's Wubi.

DoudouLinux is based on a minimal Debian "Lenny" (5.x) system, with a custom, tab-based user interface. At boot time, there is an "Activities Menu" displayed that offers seven session options: the full DoudouLinux system, a "Mini DoudouLinux" session with a reduced set of applications offered, and single-application sessions for several of the applications: Gamine, Pysycache, Childsplay, TuxPaint, and GCompris.

[Learn tab]

Gamine and Pysycache are both mouse-training games, aimed at different levels of experience. They hail from DoudouLinux's educational games section, which also includes language and geography titles, plus the general-purpose educational suites Childsplay and GCompris, each of which offers a variety of activities.

The other application categories include "Work" tools (a category that includes the web browser, IM software, dictionary, and calculator), "Multimedia" (which includes several music-creation applications and the Stopmotion animation tool), and general games (under "Enjoy"). The general games category includes standard puzzle and non-violent arcade fare, plus common card and board games. Each of the categories has its own tab in the DoudouLinux application launcher, which responds to single-clicks for better ease-of-use. Likewise, every application or game runs in full-screen mode, as the developers felt that window management was asking for trouble with children.

Few if any of the application offerings in DoudouLinux Gondwana are not available in other Debian-derived distributions. The real customization work involved building the activities chooser and the various configuration tools, all of which are available under the GPLv3. The configuration tools are found under the "Tune" tab in the full DoudouLinux session. They allow sound, monitor, printer, and data persistence control, as well as the separate administration tools for GCompris and Pysycache, configuration of the Activity chooser, and a bug reporting tool.

[Mouse configuration]

Although the configuration tools are intended for use by parents, they are simple to use — offering access to presets with one click. Mouse and monitor settings are usually foolproof on a modern Linux system, but there is documentation online for dealing with troublesome printers.

While it is not configurable from the live system, DoudouLinux runs the DansGuardian web-filtering system in the background. DansGuardian is a keyword-based content filter, not a site blacklist, so it is advertised as doing a better job of insulating children from accidental adult content delivered through search results and other side-bands. The filter is certainly restrictive in the form offered by DoudouLinux, even blocking Google Image Search results for many innocent common words to a baffling extent (for example, "cow" is permissible, but "puppy" is not). In theory, young children will probably not spend too much time in web searches, but without parent-accessible configuration tools, DansGuardian is a distinctly "when in doubt, block it" tool.

Report card time

Overall, DoudouLinux "Gondwana" is a worthy choice for families with very young children. The content that it offers is not much different than you would find in another "educational" distribution, but there is far less configuration to do, and because the system is limited in scope, it runs fast. The choice to offer it as a live-CD image only also strikes me as wise, given the inherent risks that sitting a toddler in front of a keyboard can entail. DoudouLinux cannot protect your hardware from spilled milk or the sudden disconnection of the flash drive, but then again no piece of software can.

As for the applications themselves, I am probably not a good judge of young children's learning tools, since I do not have kids myself. I am a bit skeptical that the "bright colors, cartoon characters everywhere" motif is really necessary, because I see all too many kids who are capable users of their parents' smartphones. But I do think it is commendable that the DoudouLinux team included some less-common choices in its application suite, such as the music creation tools. Too many distributions' "music" category includes only an audio player; DoudouLinux gives the kids a piano keyboard and the Hydrogen drum machine. Which probably makes the volume control tool all the more critical.

That said, I was a bit surprised not to find a typing tutor like Tux Typing in the menu, nor to find any webcam tools like GNOME's Cheese. The Empathy instant messenger client is included, which presumably would need to be configured by a parent. Video chat with grandparents is probably the intended usage, but it does stand out as needing a distinctly more complicated setup process.

The DoudouLinux project is not resting on its laurels. The roadmap includes several technical challenges: unifying the CD and USB images into a single file with user-selectable persistence, a more flexible activity chooser, and an online tool to customize the CD image. The project's issue tracker also indicates that it plans to migrate to Debian "Squeeze" (6.x), as well as the ever-present need to add more languages and more complete translations. At least one new feature is being discussed: a parental control to limit use of the computer to specific times of day. Given the project's goal of supplanting vegetative game-console and television time, this sounds like an even more useful control than DansGuardian.

DoudouLinux's origins page says that much of the user experience design is the fruit of lead developer Jean-Michel Philippe's iterative attempts to build an interface that his own children could understand and enjoy. In that sense, it does reflect one developer's personal touch, but one based on testing and user-centric design. Better still, since the project's founding, the list of contributors has grown considerably, including two teams of university-based developers from Russia. Ultimately, the only important test of a "Linux for kids" distribution is whether or not the kids in your own house will use it, but DoudouLinux is worth a close examination for those whose children are too young to take advantage of the school-oriented builds offered by most of the competition.

Comments (4 posted)

Brief items

Distribution quotes of the week

However, times change, people change, and I realized that time has come for me to sail to new horizons and look for new challenges. Many things changed in Mandriva during those 2.5 years, and it is a completely different company from what it once was. Many people had left, and many new faces has arrived as well during this time. But nonetheless, albeit all the challenges, problems, disappointments and achievements, we managed to stay alive and kicking even in cases where nobody was believing that Mandriva would survive. But it did.
-- Eugeni Dodonov is leaving Mandriva

Eugeni now is more or less tied with Warly in my hall of fame list for past release engineers++, and without doubt the most successful managing to do the previously thought-to-be impossible task of scaling sufficiently for it impressively well!
-- Per Øyvind Karlson

In May 2011, the Beefy Miracle Fedora spin was started. The Beefy Miracle Fedora spin itself was free and open source from the start, but members of the Gnaw project were concerned with Beefy Miracle's dependence on the (then) non-GPL bun warmer, owned by a guy in a hot dog cart down the street. In July 2011, a project was started in response to this: KFCE, a different desktop not using bun warmers, but instead built on chicken fryers under the Gnaw Lesser General Public License (LGPL), a free cookery license that allows GPL-incompatible kitchen appliances to link to it.
-- Máirín Duffy

Comments (none posted)

ConnochaetOS 0.9 RC1 released

The first release candidate for ConnochaetOS 0.9 is available for testing. "The upcoming ConnochaetOS 0.9 reached the RC1 release and should be ready for testing. All open bugs were fixed in this release and it should run pretty stable by now." ConnochaetOS is a desktop oriented distribution that aims to put modern free/libre software onto old hardware.

Full Story (comments: none)

Ubuntu Oneiric Ocelot Alpha 2 Released

Oneiric Ocelot Alpha 2 (11.10) is available for testing. Images are also available for Ubuntu Server Cloud, Xubuntu, and Edubuntu.

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian at several conferences

The Debian Project has announced that it will be present at several events in the coming weeks, ranging from developer-oriented conferences to workshops for users and wannabe developers. "The Debian Project invites all interested persons to attend these events, ask questions, take a look at Debian 6.0 "Squeeze", exchange GPG fingerprints to boost the web of trust, and get to know the members and the community behind the Debian Project."

Full Story (comments: none)

delegation for the backports team

DPL Stefano Zacchiroli has appointed Alexander Wirt and Gerfried Fuchs to the Backports Team. "The Backports Team oversee and maintain the well-being of Debian's backport service that allow Debian Developers to prepare backported packages for Debian users."

Full Story (comments: none)

Fedora

Announcing FUDCon APAC 2011 in Pune

Jared Smith has announced that FUDCon (Fedora Users and Developers Conference) in the Asia-Pacific region (APAC) will be held in Pune, India November 4-6, 2011. Bidding is open for FUDCon APAC 2012 and FUDCon LATAM (Latin America) 2012. "FUDCon APAC 2012 will be held between March 1st and May 31st, and FUDCon LATAM will be held between June 1st - August 31st."

Full Story (comments: none)

Gentoo Linux

Gentoo Council Election Results 2011 to 2012

The Gentoo Council election results are in and dberkholz, jmbsvicetto, grobian, chainsaw, ulm, betelgeuse, and hwoarang are the winners.

Full Story (comments: none)

Ubuntu family

Upcoming Ubuntu Classroom Events

Ubuntu Developer Week will be happening July 11-15, 2011, followed by Ubuntu Community Week, July 18-22, both on IRC. Join others in the classroom.

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Duffy: Anaconda Language & Keyboard Layout Selection

On her blog, Máirín Duffy looks into the confusion of choosing a language and keyboard layout in the Anaconda installer used by Fedora. In the posting, she mocks up various ways that the selection screens could be improved to make it easier for non-English users to find their language and keyboard. It is Anaconda-specific, but describes problems that plague many language/keyboard choosing programs. "While Anaconda's manually-maintained lang table ties together some useful information across the languages (a default locale, keyboard layout, etc.), at least for keyboard layout selection, it could be a bit smarter providing not only a sane default, but also 'adjacent' alternatives that are more likely than most of the others. These are long lists where only a fraction of the data in the list is relevant to the user. If I choose English as my installation language, and I'm (for example) in the U.K., there’s an annoying amount of difference between the two keyboards, yet as you can see in the screenshots above, the Ukranian layout is between the default and the (likely) more relevant U.K. layout. That the U.K. layout is near the U.S. one is sadly a coincidence based on the first few letters of each country’s name (thankfully we’re both united!)"

Comments (16 posted)

Hertzog: How to start contributing to Debian?

Raphaël Hertzog has an introductory article on getting started as a Debian contributor. "The Debian website has a page explaining how to help Debian. While it provides no less than 10 suggestions in a daunting text-only list, it's difficult to know what to do next once you picked up something that you could do. I will try to fix this by providing concrete information for each cases in upcoming articles but in the mean time I propose you another approach to start with." (Thanks to Paul Wise)

Comments (none posted)

Introducing Update Packs in Linux Mint Debian

The Linux Mint Blog introduces a new Update Manager for the Linux Mint Debian Edition (LMDE), which uses the "rolling release" model.

As an LMDE user you're supposed to point to 2 distinct repositories:
  1. The Linux Mint repository (deb http://packages.linuxmint.com/ debian main upstream import)
  2. A Debian Testing repository... so here you have a choice and you can pick 1 of the following repositories:
    1. You can play it safe by pointing to the "Latest" update pack repo (deb http://debian.linuxmint.com/latest testing main contrib non-free)
    2. You can act as a guinea pig for others and help the team with the testing by pointing to the "Incoming" update pack repo (deb http://debian.linuxmint.com/incoming testing main contrib non-free)
    3. You can point straight to the Debian Testing repositories themselves and do without all this (deb http://ftp.debian.org/debian testing main contrib non-free)

Comments (1 posted)

Wolf: DebConf13 at home? Why not?

DebConf 11 will start soon in Banja Luka and that means it's time to start thinking about hosting DebConf 13. Gunnar Wolf looks at the bidding process for hosting DebConf. "I have organized DebConf in my country. It was hellish, but at the same time, it's one of my most cherished experiences. And I'm sure the same will be said by the leaders of each successive bid — It is one of the most rewarding experiences you can imagine. Next year, DebConf will be held in tropical Managua, Nicaragua. But, where will we meet in 2013? Well, that depends on you, my dear reader! Do you want to work your ass off for Debian and have utter fun? Do you want to show and share your country with this huge family of developers? Start thinking about pushing for a DebConf13 bid!" (Thanks to Paul Wise)

Comments (none posted)

Page editor: Rebecca Sobol

Development

A look at Gawk 4.0.0

July 7, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

GNU Awk (Gawk) is one of those workhorse utilities that usually doesn't make news. The 4.0.0 release, however, deserves a look. Announced on June 30th, the latest iteration of Gawk brings the first Gawk debugger, a sandbox mode for running less trusted scripts, revised internals, a number of changes to regular expressions, and IPv6 compatibility.

Gawk is one implementation of Awk. Named for the last names of its inventors (Alfred Aho, Peter Weinberger, and Brian Kernighan), Awk is a scripting language that's standard across UNIX platforms — and by standard I mean that Awk has been part of a standard UNIX (or UNIX-like) system since the beginning, as well as part of the POSIX specification from the Open Group. Even though Gawk is one of many Awks, it stands out as being one of the most widely used. What is it used for? Gawk is best-known for data extraction and reporting, though it's been used to write IRC bots, a YouTube downloader, and for AI programming.

New in 4.0.0

Gawk 4.0.0 is a fairly hefty update from the 3.1.8 release. According to the release announcement, Gawk not only packs a number of new features, bugfixes, and some updates to comply with POSIX 2008, it also has "revamped internals."

To find out what was revamped, and why, I asked Gawk maintainer Arnold Robbins by email. It turns out that the revamp has a lengthy history. Robbins says that "some years ago" John Haque took on rewriting Gawk's internals using a byte-code-style engine — and to implement a debugger in the process. Unfortunately, that work wasn't integrated and Haque moved on. In early 2010, Robbins started trying to bring Haque's code up to date. According to Robbins, the rewrite doesn't provide a huge performance boost, but it does bring a major useful feature:

The performance is about the same (or slightly better) than the original internals, and I have not yet found a case where it's worse. But the really big gain, and why I wanted to have the change, is that gawk now provides an awk-level debugger (similar to GDB).

Right now, dgawk is usable, but still limited. It doesn't report what an error is, but will only report "syntax error" when there's a problem. The debugger will also only work when running a program on the command line — it cannot be attached to a running Awk program. It is unlikely that the Gawk developers will focus on adding that functionality, since the Gawk manual notes that limiting debugging to programs started from within dgawk "seems reasonable for a language which is used mainly for quickly executing, short programs."

Gawk's regular expressions have undergone some changes in 4.0.0. Interval expressions are now part of the default syntax for Gawk, and no longer require the -W or --posix options. Interval expressions — where one or two numbers inside braces (such as {n} or {n,m}) tell Gawk to match the preceding expression exactly n times, or n through m times — were not part of the original Awk specification. Also, \s and \S have been added to match any whitespace character, or any character that is not whitespace, respectively.

While Gawk tries to be POSIX-compliant, it does have features above and beyond POSIX — and 4.0.0 introduces a few more. Gawk now supports two new patterns, BEGINFILE and ENDFILE, that can be used to perform actions before reading a file and after (respectively). These are similar to BEGIN/END rules, but are applied before and after reading individual files (since Gawk may process two or more files while running any given script). For example, Gawk programs can now test to see if a file is readable before trying to process it. In prior versions of Gawk, this was not possible — so a script would fail with a fatal error if a file passed to Gawk was not readable.

Gawk has long had the ability to work over a network connection. With the 4.0.0 release, Gawk supports IPv6 using the /inet6 special file, or /inet4 to force IPv4.

The Internet is awash with Awk/Gawk scripts that users might want to run, but worry that the scripts will do more than what's advertised. To address this, Gawk 4.0.0 has a sandbox option (--sandbox), which restricts Gawk to operating on the input data that's been specified. It does this by disabling Gawk's system() function, input redirection using getline, and output redirection using the print and printf functions.
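A short shell session shows the effect, again assuming gawk 4.0 is installed as gawk:

```shell
# Without --sandbox, a script can run arbitrary commands:
gawk 'BEGIN { system("echo hello from system()") }'

# With --sandbox, the same script dies with a fatal error before
# the command ever runs:
gawk --sandbox 'BEGIN { system("echo this never prints") }'
echo "exit status: $?"
```

The second invocation produces a fatal "system() not allowed in sandbox mode"-style error and a non-zero exit status rather than running the command.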

However, Robbins cautions against being overconfident in the security it would convey.

It was contributed by a user who felt a need for it, IIRC, for use in Web CGI scripts where you don't want someone to send in malicious data that can trick the script into writing in your filesystem. It makes a certain amount of sense to have an option like that. It is most definitely *not* intended to make any promises of security.

The sandbox mode is not on by default, says Robbins, because it would break "an untold number of existing awk scripts." In short, this option may be useful, but the features disabled by the sandbox option may not be the only way a malicious script could harm a user's system.

The 4.0.0 release is the end of the line for some options and several old and unsupported operating systems. The redundant --compat, --copyleft, and --usage options are gone. The option for raw sockets has been removed, as it was not implemented. If you're still on Amiga, BeOS, Cray, NeXT, SunOS 3.x, MIPS RiscOS, or a handful of others, Gawk 3.1.8 is the final supported release. That the Gawk team has dropped those platforms is no surprise — that they've been carried so long past their expiration date is. It would be challenging indeed to find new proprietary software that's supported on BeOS or Amiga.

With the Gawk 4.0.0 release out of the way, Robbins says that the "big ticket" items for upcoming releases are to merge Gawk's three executables (gawk, pgawk for profiling, and dgawk for debugging) into one to reduce the installation footprint. He also says that Haque "has some other plans related to performance, but that's about all I can say about it in public." Robbins also says that there are plans to merge in some of the XMLgawk extensions. (XMLgawk is an extension of Gawk that has an XML parsing library based on the Expat XML parser.)

Robbins also has a few ideas listed in the Gawk roadmap on his site, which include support for multiple-precision floating-point arithmetic via the GNU MPFR library, so that gawk can work with arbitrary-precision numbers. He notes that it will be a "big job" and has yet to decide whether MPFR support would be on by default. Gawk is released when it's ready, so there are no dates specified as to when the features can be expected.

The Gawk team is not large, but it's got a healthy set of core contributors. Robbins says that Gawk has six people who maintain ports to different systems, one who handles testing on "a zillion different Unix systems," one who helps with documentation, and "various other people such as the xmlgawk developers, and several people from different GNU/Linux distributions." Naturally, this also includes Robbins and Haque.

Though Awk is not a particularly "sexy" language these days, it's still a go-to for system administrators and developers. It's good to see that the GNU Project is not only maintaining Gawk, but adding interesting new features that help keep it relevant.

Comments (16 posted)

Brief items

Quotes of the week

If we take the version number away and update people silently, the users don't have to think and won't think about any of it.
-- Asa Dotzler

<fellow> Maybe Perl isn't given to over-magic line-noisy crap. I hear there's even a new version. What'd it get us?

<japh> ~~, for smart matching, with 27-way recursive runtime dispatch by operand type!

<fellow> ...

-- Ricardo Signes

Comments (2 posted)

CERN's Open Hardware License v1.1

[OHR logo] CERN has announced the release of version 1.1 of its Open Hardware License. "'For us, the drive towards open hardware was largely motivated by well-intentioned envy of our colleagues who develop Linux device-drivers,' said Javier Serrano, an engineer at CERN's Beams Department and the founder of the OHR. 'They are part of a very large community of designers who share their knowledge and time in order to come up with the best possible operating system. We felt that there was no intrinsic reason why hardware development should be any different.'" CERN also maintains the Open Hardware Repository as a collecting point for free hardware designs.

Comments (none posted)

Mercurial 1.9 released

Version 1.9 of the Mercurial distributed source code management system is out. New features include a functional file set matching language, a new command server mode, and more; see the release notes for details.

Comments (12 posted)

notmuch 0.6 released

After a long hiatus, the notmuch mail indexing project has put out the 0.6 release. New features include folder-based search, PGP/MIME support, some new automatic tags, a number of performance improvements, an initial set of Go bindings, and a lot more. (LWN looked at notmuch in March, 2010).

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Systemd for Developers II

Lennart Poettering has written a second installment in his "systemd for developers" series. "This time we'll focus on adding socket activation support to real-life software, more specifically the CUPS printing server. Most current Linux desktops run CUPS by default these days, since printing is so basic that it's a must have, and must just work when the user needs it. However, most desktop CUPS installations probably don't actually see more than a handful of print jobs each month... That all together makes CUPS a perfect candidate for lazy activation: instead of starting it unconditionally at boot we just start it on-demand, when it is needed. That way we can save resources, at boot and at runtime."
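In unit-file terms, socket activation means splitting the listener out of the daemon: systemd binds the sockets itself and only starts the service when the first connection arrives. A minimal sketch of what that looks like for a CUPS-style service (illustrative unit fragments and paths, not the actual files from Poettering's article or from any distribution):

```
# cups.socket -- systemd binds these addresses at boot and starts the
# matching service when the first connection arrives.
[Socket]
ListenStream=631                    # IPP over TCP
ListenStream=/run/cups/cups.sock    # local domain socket

[Install]
WantedBy=sockets.target

# cups.service -- started on demand; a daemon with sd_listen_fds()
# support picks up the already-listening sockets rather than binding
# its own.
[Service]
ExecStart=/usr/sbin/cupsd -f
```

The daemon-side change Poettering describes is correspondingly small: at startup, check for sockets passed in by systemd and use them if present, falling back to normal binding otherwise.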

Comments (7 posted)

Zeuthen: Writing a C library, part 3

The third part of David Zeuthen's guide to writing low-level libraries looks at modularity, error handling, and object-oriented design. "Even with a library doing proper parameter validation (to catch programmer errors early on), if you pass garbage to a function you usually end up with undefined behavior and undefined behavior can mean anything including formatting your hard disk or evaporating all booze in a five-mile radius (oh noz). That's why some libraries simply calls abort() instead of carrying on pretending nothing happened."

Comments (none posted)

Zeuthen: Writing a C library, parts 4 and 5

David Zeuthen continues to crank out updates to his "Writing a C library" series faster than we can point to them; part 4 (helpers, daemons, and testing) and part 5 (API design, documentation, and versioning) are now out. "A C library is, almost by definition, something that offers an API that is used in applications. Often an API can't be changed in incompatible ways (it can, however, be extended) so it is usually important to get right the first time because if you don't, you and your users will have to live with your mistakes for a long time."

Comments (none posted)

Not much in new Thunderbird 5, but roadmap looks promising (ars technica)

The Thunderbird mail client gets a review of its version 5 release over at ars technica. "In addition to moving to the Gecko 5 engine, Thunderbird also brings some other improvements. Thunderbird 5 has gained Firefox's slick new tab-hosted add-on management user interface. Startup time has noticeably improved in the new version, allowing the user to start working with the application sooner after startup."

Comments (18 posted)

FabFi: An open source wireless network built with trash (opensource.com)

Opensource.com has a report on FabFi, which is an effort to build low-cost wireless network infrastructure that can operate independently from governments and wireless data companies. "And the main components can be built out of trash. Some boards, wires, plastic tubs, and cans can build you a FabFi node. The design of the node purposefully uses things that are widely available wherever the project takes place. Users in Afghanistan discovered that instead of requiring specialty made reflectors, they could use the metal from USAID vegetable oil cans because it turns out to be the right malleability and size for these reflectors."

Comments (4 posted)

Interview with Lennart Poettering (LinuxFR.org)

LinuxFR talks with Lennart Poettering about his work on Avahi, PulseAudio, and systemd. "You should never forget that in the whole industry there are about 3.5 people paid full-time for doing generic maintainance work of the Linux audio stack (which I consider consisting primarily of ALSA and PulseAudio and a few things around it). With this little manpower I can only say that what has been achieved is pretty good. While we still can't fully match competing audio stacks like CoreAudio, we are a lot closer than we ever were. I do hope that the folks who kept constantly complaining would be a lot more appreciative if they understood that."

Comments (76 posted)

PiTiVi Video Editor Now Kitten-Friendly (Linux.com)

Linux.com has posted a review of PiTiVi 0.14. "PiTiVi is a GStreamer-based non-linear video editor (NLE) developed by members of the GStreamer project itself. That means it is often the first project to showcase new features, and last month's new release is no exception. The major new feature is support for audio and video filter 'effects' but there are usability and speed improvements worth examining, too."

Comments (9 posted)

Paley: Why are the Freedoms guaranteed for Free Software not guaranteed for Free Culture?

Here is a "rantifesto" from Nina Paley, who is frustrated that the freedoms guaranteed by free software licenses aren't always present in other types of works. "Cultural works released by the Free Software Foundation come with 'No Derivatives' restrictions... The problem with this is that it is dead wrong. You do not know what purposes your works might serve others. You do not know how works might be found 'practical' by others. To claim to understand the limits of 'utility' of cultural works betrays an irrational bias toward software and against all other creative work. It is anti-Art, valuing software above the rest of culture. It says coders alone are entitled to Freedom, but everyone else can suck it. Use of -ND restrictions is an unjustifiable infringement on the freedom of others." (Thanks to Davide Del Vento).

Comments (76 posted)

Brazilian government signs up to develop OpenOffice and LibreOffice (The H)

The H reports on an announcement at the FISL conference. "The Brazilian government has signed a letter of intent to work with both The Document Foundation and the Apache OpenOffice.org community to develop the Office Suite platforms maintained by both communities. The letter asserts that the ODF standard is already a guarantee of interoperability within the government. As Brazil is one of the biggest users of both LibreOffice and OpenOffice with an estimated million public computers running the free/open source office suites, the [government] aims to make the national contribution to the projects more effective."

Comments (8 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

Project Harmony 1.0 and its discontents

The Harmony Project (an effort to create a standardized set of contributor agreements last covered here in April) has launched version 1.0 of its agreements. There is a cute selection tool allowing projects to pick the agreement which best suits their wishes. It's not clear how the agreements have changed since the first public disclosure in April.

Harmony remains controversial; see these responses by Bradley Kuhn, Richard Fontana, and Dave Neary. Quoting Richard: "Despite my admiration, respect and affection for those who have been driving Harmony, I cannot endorse the product of their work. I believe Harmony is unnecessary, confusing, and potentially hazardous to open source and free software development."

Comments (11 posted)

Nortel's patent pile sold

Nortel has announced that it has sold its pile of patents for $4.5 billion. "The sale includes more than 6,000 patents and patent applications spanning wireless, wireless 4G, data networking, optical, voice, internet, service provider, semiconductors and other patents. The extensive patent portfolio touches nearly every aspect of telecommunications and additional markets as well, including Internet search and social networking." Google's attempt to buy these patents failed; they have gone to a consortium made up of Apple, EMC, Ericsson, Microsoft, Research In Motion, and Sony. It's not hard to imagine unpleasant things resulting from that.

Comments (40 posted)

ESA Summer of Code in Space 2011

The European Space Agency (ESA) has announced a Summer of Code program. "ESA Summer of Code in Space (SOCIS) is a pilot program run by the Advanced Concepts Team of the European Space Agency that offers student developers stipends to write code for various space-related open source software projects. Through SOCIS, accepted student applicants are paired with a mentor or mentors from the participating projects, thus gaining exposure to real-world software development scenarios. In turn, the participating projects are able to more easily identify and bring in new developers." Mentoring organizations can apply before July 15.

Comments (1 posted)

The 2011 survey of software usage in neuroscience research

The results of an online survey in which neuroscientists were asked to share some details about their computing environments are available. A paper (PDF) by Michael Hanke and Yaroslav O. Halchenko shows that GNU/Linux is prevalent in neuroscience computing. "GNU/Linux is often perceived as a huge heterogeneous family of distributions that is impossible to support as a whole. However, our data show that the vast majority of all GNU/Linux-based neuroscientists use only two flavors of this platform: Red Hat-based, and Debian-based GNU/Linux distributions, with a preference for Debian-based systems in the personal environment." (Thanks to Adrian M. Whatley)

Comments (none posted)

Articles of interest

FSFE Newsletter - July 2011

The July edition of the Free Software Foundation Europe Newsletter is out.

Full Story (comments: none)

Linux IT to underwrite open-source adoption (CRN)

CRN reports that the UK company Linux IT is offering to underwrite any community-based open-source software that meets the requirements of its verification process. "Michael Breeze, marketing director at open-source software distributor Interactive Ideas, backed Linux IT's strategy. "We are seeing many companies and public sector organisations that are now actively looking for open source software alternatives, but having supported software is critical," he said. "The new programme from Linux IT now provides those companies with the option of using more open source software in a structured, supported environment.""

Comments (none posted)

New Books

Designed for Use--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Designed for Use" by Lukas Mathis.

Full Story (comments: none)

Metasploit: The Penetration Tester's Guide--New from No Starch Press

No Starch Press has released "Metasploit: The Penetration Tester's Guide" by David Kennedy, Jim O'Gorman, Devon Kearns, and Mati Aharoni.

Full Story (comments: none)

The Book of Ruby--New from No Starch Press

No Starch Press has released "The Book of Ruby" by Huw Collingbourne.

Full Story (comments: none)

Calls for Presentations

linux.conf.au 2012 CFP open

The 2012 iteration of linux.conf.au (Ballarat, January 16-20) is now accepting proposals for talks; the deadline is July 29. "Though there are many elements needed to run a great conference, it is the speakers that truly make linux.conf.au such an amazing event. Being an international conference, but one with a uniquely Australian flavour, we are working to bring a terrific mix of both local and global speakers from different backgrounds to Ballarat in January."

Full Story (comments: none)

Upcoming Events

Denver 2011 PG DAY (Date Moved)

PG Day 2011 in Denver, CO, originally scheduled for September, has been moved to October 21. The call for papers is open until August 31 and free registration is open until July 31.

Full Story (comments: none)

GR Conference 2011

The GNU Radio conference is open for registration. The conference takes place September 14-16, 2011 in Philadelphia, PA. "Ettus Research, LLC will cover registration fees for any student who comes and will give a presentation on their work with GNU Radio."

Full Story (comments: none)

Events: July 14, 2011 to September 12, 2011

The following event listing is taken from the LWN.net Calendar.

July 9 - July 14: Libre Software Meeting / Rencontres mondiales du logiciel libre (Strasbourg, France)
July 11 - July 16: SciPy 2011 (Austin, TX, USA)
July 11 - July 15: Ubuntu Developer Week (online event)
July 15 - July 17: State of the Map Europe 2011 (Wien, Austria)
July 17 - July 23: DebCamp (Banja Luka, Bosnia)
July 19: Getting Started with C++ Unit Testing in Linux
July 24 - July 30: DebConf11 (Banja Luka, Bosnia)
July 25 - July 29: OSCON 2011 (Portland, OR, USA)
July 30 - July 31: PyOhio 2011 (Columbus, OH, USA)
July 30 - August 6: Linux Beer Hike (LinuxBierWanderung) (Lanersbach, Tux, Austria)
August 4 - August 7: Wikimania 2011 (Haifa, Israel)
August 6 - August 12: Desktop Summit (Berlin, Germany)
August 10 - August 12: USENIX Security '11: 20th USENIX Security Symposium (San Francisco, CA, USA)
August 10 - August 14: Chaos Communication Camp 2011 (Finowfurt, Germany)
August 13 - August 14: OggCamp 11 (Farnham, UK)
August 15 - August 16: KVM Forum 2011 (Vancouver, BC, Canada)
August 15 - August 17: YAPC::Europe 2011 "Modern Perl" (Riga, Latvia)
August 17 - August 19: LinuxCon North America 2011 (Vancouver, Canada)
August 20 - August 21: PyCon Australia (Sydney, Australia)
August 20 - August 21: Conference for Open Source Coders, Users and Promoters (Taipei, Taiwan)
August 22 - August 26: 8th Netfilter Workshop (Freiburg, Germany)
August 23: Government Open Source Conference (Washington, DC, USA)
August 25 - August 28: EuroSciPy (Paris, France)
August 25 - August 28: GNU Hackers Meeting (Paris, France)
August 26: Dynamic Language Conference 2011 (Edinburgh, United Kingdom)
August 27 - August 28: Kiwi PyCon 2011 (Wellington, New Zealand)
August 27: PyCon Japan 2011 (Tokyo, Japan)
August 27: SC2011 - Software Developers Haven (Ottawa, ON, Canada)
August 30 - September 1: Military Open Source Software (MIL-OSS) WG3 Conference (Atlanta, GA, USA)
September 6 - September 8: Conference on Domain-Specific Languages (Bordeaux, France)
September 7 - September 9: Linux Plumbers' Conference (Santa Rosa, CA, USA)
September 8: Linux Security Summit 2011 (Santa Rosa, CA, USA)
September 8 - September 9: Italian Perl Workshop 2011 (Turin, Italy)
September 8 - September 9: Lua Workshop 2011 (Frick, Switzerland)
September 9 - September 11: State of the Map 2011 (Denver, Colorado, USA)
September 9 - September 11: Ohio LinuxFest 2011 (Columbus, OH, USA)
September 10 - September 11: PyTexas 2011 (College Station, Texas, USA)
September 10 - September 11: SugarCamp Paris 2011 - "Fix Sugar Documentation!" (Paris, France)
September 11 - September 14: openSUSE Conference (Nuremberg, Germany)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds