
LWN.net Weekly Edition for September 9, 2010

Debian squeezes out Chromium

By Jake Edge
September 8, 2010

Web browsers tend to be fast-moving targets. They are frequently updated both for security holes and to add new functionality. Two of the most popular free software browsers, Firefox and Chromium, also have fairly short lifecycles, which often requires distributions to backport security fixes into releases that upstream no longer supports. Both Mozilla and Google are more focused on the Windows versions of their browsers—where application updates with bundled libraries are the norm—which makes life difficult for Linux distributions. So difficult that it appears Debian will be dropping Chromium from the upcoming 6.0 ("Squeeze") release.

On September 1, Giuseppe Iuculano posted to debian-release asking that the release team allow him to replace Chromium version 5, which was in the testing repository, with version 6. Version 5 will no longer receive updates from Google, and Iuculano was concerned that it would be too difficult to backport patches that fix some "security issues" with SVG in WebKit because of major refactoring in that codebase. Roughly a week later, he uploaded Chromium 6 and asked that the team either unblock it so that it could migrate from unstable into testing, or remove Chromium 5 from testing. The team opted for the latter.

One of the big problems is that Chromium uses a WebKit version that is bundled into the browser source, rather than using a particular released version of the library. Each time Google updates the browser, a new WebKit comes with it, and the old browser goes into an unsupported state. In order to keep using the v5 browser, any security fixes from the new browser and bundled libraries would have to be backported into the older code—not a small task by any means.

In addition, Chromium versions come fast and furious: v5 was only supported for roughly two months, so the release team was worried that the same would be true of v6. Meanwhile, Squeeze has been frozen, which means that new features are not being added. For a release that prides itself on stability, there really was no choice but to drop Chromium.

In response to Debian project leader Stefano Zacchiroli's request for information to better understand the decision, release assistant Julien Cristau put it this way:

We were given a choice between removing chromium-browser from testing, or accepting a diff of
22161 files changed, 8494803 insertions(+), 1202268 deletions(-)
That didn't seem like much of a choice. I don't have any reason to believe the new version won't have the same problem 2 months (or a year) from now, and as far as I know neither the security team nor the stable release managers usually accept that kind of changes in stable.

Zacchiroli noted that he is a Chromium user and wants to have a clear story for why the browser is not in Squeeze, both for users and for upstream. While it won't be available in the Squeeze repository (testing right now, but stable once it is released), Chromium will likely still be available in the newly official backports repository. That led Michael Gilbert to suggest a slightly different interpretation of what "supported in stable" might mean:

I think that this need is justification to declare backports "officially supported by the debian project". Thus when asked this question, you can point to the fact that chromium is indeed supported on stable, just via a different model than folks are used to. That is of course assuming someone is willing to support the backport. I may do that if Giuseppe isn't interested.

Having chromium not present in stable proper helps the security team immensely.

While it may help the security team, there are still some things to be worked out for users of the backports repository. Currently, packages that come from backports do not get automatically updated when doing an apt-get upgrade (or its GUI equivalent). That would mean that users would have to remember to go grab the latest Chromium whenever a security update came out. Since backports has become an official part of Debian, there is some sentiment that changing the behavior to pick up updates from there would make sense.
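
Nothing requires waiting for that change, though; apt can already be told to treat a single backported package as upgradeable. A minimal sketch (the package name is illustrative; the pin syntax is standard apt) might look like this:

    # /etc/apt/sources.list: add the official backports archive
    deb http://backports.debian.org/debian-backports squeeze-backports main

    # /etc/apt/preferences: backports are marked "NotAutomatic" and thus
    # default to priority 1; raising the priority for one package lets
    # apt-get upgrade pick up its security updates from backports
    Package: chromium-browser
    Pin: release a=squeeze-backports
    Pin-Priority: 500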

It's not just Chromium that is affected, however. Squeeze will ship with Iceweasel—Debian's rebranded Firefox—in its repository, but there seems to be a belief that over time, as the shipping Iceweasel version falls further and further behind, it too will be coming from backports. That would give further reason to make backports updates automatic, and that seems to be the consensus on the debian-release list.

This is a problem that we haven't heard the last of. For any distribution with a long-lasting support window (like Ubuntu LTS, Debian, or any of the enterprise distributions), it is going to be very difficult to keep supporting older browsers. The alternative, which is the direction that most are taking, is to update to the latest upstream version throughout the distribution's support window.

New browser versions often require newer system libraries, though, which may conflict with other applications' requirements. Either the browser can be backported to use the libraries that shipped with the distribution, or the newer libraries can be bundled with the browser. Ubuntu has taken the latter approach, choosing to bundle some libraries for 10.04 LTS.

Eventually, it may be more than just web browsers that adopt a fast release schedule with fairly short support windows. If that happens, it seems likely that distributions will need to pool their resources to backport fixes into the older releases or just follow the upstream releases. The problem will be especially prevalent for cross-platform software, where the Windows or Mac OS X versions are the most popular. Bundling libraries is the usual path taken by applications on those platforms, so they don't suffer under the same constraints that Linux systems do.

Keeping up with what seems to be an ever-increasing pace of development, while still maintaining stability for users, is a tricky problem to manage. Distributions may find that it is one they can't manage alone—or at all. If the latter, distributions will increasingly have to rely on stability coming from the upstream projects, but there is tension there as well.

Fast-moving projects are likely to make changes to fundamental components, changing both the program's behavior and its interface. While that doesn't necessarily make the application unstable in the usual sense of that term, it does change things in ways that stability-oriented distributions try to avoid. It's a difficult balancing act and we'll have to see how it plays out.

Comments (47 posted)

Simplifying application installation

By Jake Edge
September 8, 2010

As part of the discussion that is currently raging in the Fedora community about its target audience, specifically with regard to its update policy, Máirín Duffy came up with some user archetypes to put faces to the different kinds of users Fedora might target. But that much-commented-on blog posting also had a "bonus" section that pointed out how confusing the update interface can be for users who aren't technically savvy (i.e. "Caroline Casual-User"). She suggested one way that Fedora could potentially reduce the confusion and, as it turns out, Richard Hughes had already been working on a similar idea called app-install for the past few months. App-install could help alleviate much of the confusion that casual users have when faced with updates—no matter how many or few there are.

The problems that Duffy identified mostly centered around what gets presented to users in the PackageKit GUI. She has an annotated version of the interface that points out all of the confusing information that is displayed. Even folks who are relatively new to Linux, but have some amount of technical background, will probably find her notations to be somewhat silly ("A streamer? Are we having a party? With 'G's'?" for GStreamer, for example). But that is part of her point—there are many things that technical users quickly recognize and at least partially understand that will completely go over the heads of casual users.

One could argue that targeting these casual users does not make sense for Fedora—many do—but Duffy's vision is that Fedora should reduce the complexity to a "platform update", at least for some users. That update would contain all of the different pieces that might normally be distributed on a package-by-package basis. That way, a casual user sees just one update, presumably fairly infrequently, that encompasses all of the underlying package thrash that seems to occur frequently with Fedora.

As she readily admits in the posting, it may not be quite that simple. In fact, she identified three groups of packages ("base platform", "core desktop", and "applications") that could have bundled updates. Each of those bundles might have updates released on different schedules, and each would be rolled into the monthly platform update.

There are some other considerations as well, though, particularly security fixes. Security updates come rather frequently for Fedora and most distributions that keep up with upstream vulnerabilities. Her plan for those is a bit vague, but the central idea is to produce less churn, less frequently, so that system breakage (i.e. base platform and to a lesser extent core desktop) is rare. That would also help reduce one problem that casual users suffer from: they are forced to make decisions about updates based on long lists of strange package names with largely incomprehensible reasons for the update.

While app-install is not a complete solution to what Duffy is proposing, it is a step down that path. The basic idea is that casual users want "applications" rather than packages. An "application" is defined as a program that ships with a desktop file and includes an icon—probably exactly what less-technical users expect. In addition, applications are listed by the name users are used to, rather than by a package name that can be different for each distribution (i.e. Firefox vs. mozilla-firefox)—and sometimes changes between releases of the same distribution.

It is not a coincidence that app-install seeks to standardize these names across distributions. It is specifically geared to be a cross-distribution tool, and Fedora's Hughes was joined by two other distribution hackers, SUSE's Roderick Greening and Sebastian Heinlein of Ubuntu, to create the specification and tools. At its core, app-install is a data format that contains the information needed to display applications in a friendlier, localized way. The data gets stored in a SQLite database file that can be distributed along with a tarball of the application icons.

[Install demo]

Hughes has created two demonstration applications, an installer and an updater, that use the app-install data. This allows applications to be listed with their icons and descriptions extracted from the desktop files. Version 2 of the app-install schema, which is currently under development, will add the ability to display application ratings and screen shots to help users have a better understanding of what the application is and does before installing it. It is a vision that is clearly influenced by things like the Android Market and Ubuntu Software Center.

[Update demo]

The app-install data is extracted from the desktop files of the applications, which, as the README file notes, have "nice translations gently massaged and QA'd by upstream". By extracting the data into a SQLite file, and pulling out just the icons needed, much less disk space is used while still providing icons and descriptions for all of the applications that a distribution ships. There are currently tools available to extract the needed metadata from Ubuntu and Fedora packages, and it should be fairly easy to add the ability to extract it from other distributions.
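
For the curious, the raw material looks something like this minimal desktop file (the format follows the freedesktop.org specification; the values here are invented for illustration):

    [Desktop Entry]
    Type=Application
    Name=Firefox
    # Localized Name and Comment entries are the "nice translations"
    # that app-install extracts into its database
    Name[fr]=Firefox
    Comment=Browse the World Wide Web
    Comment[fr]=Naviguer sur le Web
    Icon=firefox
    Exec=firefox %u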

Hughes announced app-install in a fedora-devel posting on September 7 and there were immediate questions about the benefits to Fedora from a cross-distribution implementation. Hughes had a ready answer to that:

Fedora is *not* a big enough ecosystem to drive fully localized and feature rich user experiences. Working with other distros mean we can work as one big team and share the burden of translation, bug-fixes and writing new common code. I certainly don't want to write software for Fedora, but rather write software for Linux, and then write the small amount of Fedora interface code.

There were also questions about using Hughes's definition of an application, but the clear focus of the app-install project is for casual users: "If you know what an epoch is, it's probably not for you", he said in the announcement. But by the same token, PackageKit (and yum or apt-get) will continue to exist, "so panic not". The idea behind app-install should be seen as a potential broadening of the reach for distributions, so that they can attract more "consumer"-style users.

One could certainly imagine integrating some of Duffy's ideas into this framework, with an app-install-based installer possibly replacing the applications bundle she described, while adding functionality to support the core desktop and base platform bundles. It is far from clear that Fedora will adopt an update strategy along the lines that she described, but it is clear that there are likely to be changes to Fedora's update policies. The ideas described by Duffy, as well as Jon Masters and others, along with the code from the app-install folks may just find a place as part of those changes.

It would also seem likely that, unlike some other cross-distribution initiatives, the app-install code will actually be used in multiple places. Hughes notes that Heinlein is driving many of the changes in the Ubuntu Software Center, so that as app-install matures it will find its way into that code. One would guess that having Greening involved means that openSUSE is looking at using it as well.

Because it doesn't really change anything for existing users—they'll still have the tools they've always used—but adds a "simplicity layer" for new users, there should be relatively little resistance to the app-install idea. There is a burden for distributions to maintain the app-install data, but it's something that could be automated pretty easily. Since it arguably brings Linux into the world of "Apps", which is certainly a popular paradigm at the moment, its adoption by at least some consumer-oriented distributions is not terribly far-fetched.

Comments (19 posted)

LC Brazil: Consumers, experts, or admins?

By Jonathan Corbet
September 7, 2010
Your editor had the good fortune to be able to attend the first LinuxCon Brazil event, held in São Paulo. There were a number of interesting talks to be seen, presented by speakers from Brazil and far beyond. This article will cover three in particular which were interesting as a result of the very different views they gave on how Linux users work with their systems.

Consumers

Jane Silber is the (relatively) new CEO at Canonical; she went to Brazil to deliver a keynote on the "consumerization of IT" and, in particular, its implications for open source. What she was really there to talk about, of course, was the interesting stuff that is being done with the Ubuntu distribution. Linux serves the needs of expert users very well, but, according to Jane, the future of Linux is very much in the hands of "consumers," so we need to shift our focus toward that user base. There are a number of things being done in the Ubuntu context to make that happen.

At the top of the list is "fit and finish," which she described as "the sprinkling of fresh parsley" that makes the whole meal seem complete. There have been incredible advances in this area, she says, but our software still has a lot of rough edges to it. Consumer-type users can usually figure things out, but having to do so does not build confidence in the software. To make things better, Canonical is sponsoring a lot of usability research and working with upstream projects to improve their usability. Results are posted on design.canonical.com. Projects like 100 papercuts have also helped to improve the usability of free software.

Another area of interest is the distribution pipeline; the key here is "speed and freshness." At one point, that idea was typified by the six-month release cycle which, according to Jane, was innovative when Ubuntu started using it. But now a six-month cycle is far too slow; the example we need to be emulating now is, instead, the Apple App Store. Apple's distribution mechanism would be rather less popular if it only got new software every six months. The Ubuntu Software Store has been set up in an attempt to create a similar mechanism for Ubuntu users. It will provide much quicker application updates and - of course - the ability for users to purchase software.

The end result of all this work is intended to be a "fit, finished, fresh pipeline" of software, creating "a wave of code that consumers can surf on top of." This may not be quite the way that most LWN readers think about their use of Linux, but it may yet prove to be an approach which brings Linux to a whole new community of users.

Experts

Vinod Kutty represents a very different type of consumer: the Chicago Mercantile Exchange, whose Linux-based trading platform moves unimaginable amounts of money around every day. His talk covered the Exchange's move to Linux, which began in 2003. There was a fair amount of discussion of performance benefits and cost savings, but the interesting part of the talk had to do with how Linux changed the way that the Exchange deals with its software and its suppliers. According to Vinod, we have all become used to buying things that we don't understand, but open source changes that.

He described an episode where a proprietary Unix system was showing unwanted latencies under certain workloads. With the Unix system, the only recourse was to file a support request with the vendor, then wait until it got escalated to a level where somebody might just be able to fix it. With an enterprise Linux distribution, the support request is still made, but the waiting time is no longer idle. Instead, they can be doing their own investigation of the problem, looking for reports of similar issues on the net or going directly into the source. Chances are good that the problem can be nailed down before the vendor gets back with an answer.

A related issue is that of quality assurance and finding bugs. According to Vinod, we are all routinely performing QA work for our vendors. The difference with Linux is that any time spent chasing down problems benefits the community as a whole; it also benefits CME when the fix comes back in future releases.

Linux, Vinod said, has become the "Wikipedia of operating systems"; it is a store of knowledge on how systems should be built. Taking full advantage of that knowledge requires building up a certain amount of in-house expertise. But having that expertise greatly reduces the risk of catastrophic problems; depending on outside vendors, instead, increases that risk. The value of open source is that it allows us to move beyond being consumers and know enough about our systems to take responsibility for keeping them working.

Administrators

Once upon a time, it seemed like it was simply not possible to run a Linux-related conference without Jon 'Maddog' Hall in attendance. Unfortunately, we don't see as much of Maddog as we used to; one reason for that is that he has been busy working on schemes in Brazil. One of those is Project Cauã. Maddog cannot be faulted for lack of ambition: he hopes to use Linux in a plan to create millions of system administration jobs, reduce energy use, and increase self-sufficiency in Brazilian cities.

Maddog has long been critical of the One Laptop Per Child project which, he says, is targeting children who are too young, too poor, and too far from good network access. He sees a group which can benefit more from direct help: children living in large cities in countries like Brazil. These kids live in a dense environment where network access is possible. They also live in an environment where many people and businesses have computers, but they generally lack the expertise to keep those computers working smoothly. The result is a lot of frustration and lost time.

The idea behind Project Cauã is to put those kids to work building a better computing infrastructure and supporting its ongoing operation. In essence, Maddog would like to create an array of small, independent Internet service provider businesses, each serving one high-rise building. The first step is training the people - older children - who will build and run those businesses. They are to be trained in Linux system administration skills, of course, but also in business management skills: finding customers, borrowing money, etc. The training will mostly be delivered electronically, over the net or, if necessary, via DVD.

These new businesspeople will then go out to deliver computing services to their areas. There is a whole network architecture being designed to support these services, starting with thin client systems to put into homes or businesses. The idea behind these systems is that they have no moving parts, so they are quiet and reliable. They can be left on all the time, making them far more useful. Maddog is working with a São Paulo university to design these thin clients, with the plan of releasing the designs so that any of a number of local businesses can manufacture them. Despite equipping these systems with some nonstandard features - a digital TV tuner and a femtocell cellular modem, for example - Maddog thinks they can be built for less than $100 each.

The project envisions that each building would have an incoming network link from a wholesale ISP and at least one local server system. Three sizes of servers are planned, with the smallest one being made of two thin clients tied together. These servers will run a local wireless network and will be tied into a city-wide mesh network as well. Applications will generally run in virtual machines - on either the client or the server - and will all use encrypted communications and storage.

Project Cauã trainees will be able to buy the equipment using loans underwritten by the project itself; they then should be able to sell network access and support services to the tenants of the buildings they cover. If all goes according to plan, this business should generate enough money to pay off the loans and provide a nice income. Unlike the OLPC program which, according to Maddog, has a ten-year payoff time at best, Project Cauã will be able to turn a Brazilian city kid into a successful businessperson within two years.

A project like this requires some funding to get off the ground. It seems that Brazil has a law requiring technical companies to direct 4% of their revenue toward research and development projects. Project Cauã qualifies, and, evidently, a number of (unnamed) companies have agreed to send at least some of their donations in that direction. With funds obtained in this way, Maddog is able to say that the project is being launched with no government money at all.

This project is still in an early state; the computer designs and training materials do not yet exist. Some people have expressed doubts as to whether the whole thing is really feasible. But one cannot deny that it is an interesting vision of the use of free software to make life better for Brazilian city dwellers while creating many thousands of small businesspeople who understand the technology and are able to manage it locally. This is not "cloud computing," where resources are pulled back to distant data centers. It is local computing, with the people who understand and run it in the same building.

Linux is being pushed in a lot of different directions for a wide variety of users. Beyond doubt, there will be people out there who want to deal with Linux-based systems in a pure "consumer" mode. For them, Linux differs little from any other operating system. Others, though, want to dig more deeply into the system and understand it better so that they can fix their own problems or run it in the way that suits them best - or empower others to do the same. Linux, it seems, is well placed to serve all of these different classes of users.

Comments (22 posted)

Page editor: Jonathan Corbet

Security

Where are the non-root X servers?

By Jake Edge
September 8, 2010

One of the heralded features that was supposed to come with moving the graphics modesetting code into the kernel (i.e. kernel modesetting or KMS) was that it would—finally—allow systems to rid themselves of an enormous body of code running as root: the X server. KMS has made its way into distributions now, but for the most part there has been no switch to running the X server as a non-privileged user. Progress is being made, but there is another missing piece, at least for multi-user systems: some way for processes to enforce exclusive access to files they want to open.

One need only look at the recent kernel hole that was exposed by root-privileged X servers for a good reason to want an unprivileged X. It is a complicated chunk of code that is exposed to all manner of attacks, both local and, potentially, from across the network. It has been the source of vulnerabilities in the past and almost certainly will be again in the future. Reducing its privileges and possibly running it as a separate user will make any attacks against it less potent or completely ineffective—the recent exploit would have been stopped cold, for example.

Prior to KMS, the X server had to do all manner of poking at the hardware to get its job done, and that required root privileges. Once that code moved into the kernel, the X server just needed to be able to access the device nodes that the kernel provides. The graphics device driver enforces exclusive access so that other processes on the same machine cannot intercept—or interfere with—graphics commands, but there is another set of devices, /dev/input/*, that is more problematic.

In current systems, where X runs as root, it owns the files in /dev/input and the permissions only allow root to access them. If X were to run as either the logged-in user or some other separate user, overly restrictive permissions could not be used. For multi-user systems, regular users could end up with a "dangling" reference—in the form of an open file descriptor—to an input device. Once another user started X, that reference could be used for keystroke logging.

One possible solution would be to add a revoke() system call to Linux. That call would disconnect all processes from a file and allow the caller to have exclusive access. Unfortunately, no one has found an acceptable way to add revoke() capabilities to the kernel. There have been several attempts over the years (we most recently looked at one in 2007), but it is a hard problem to solve, mostly due to things like mmap()-ed files and the private copy-on-write mappings that are generated by fork().
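
To make the idea concrete, here is a rough sketch of how an X server might grab an input device with such a call, assuming a BSD-style revoke(path) interface—which, to be clear, Linux does not provide:

    /* Hypothetical code: Linux has no revoke() system call; the
     * semantics sketched here follow BSD's revoke(2). */
    #include <fcntl.h>
    #include <unistd.h>

    int grab_input_device(const char *path)
    {
        /* Invalidate every existing descriptor for the device (the
         * hard part for a kernel implementation being mmap()ed files
         * and copy-on-write references), then open a fresh descriptor
         * that is known to be exclusive. */
        if (revoke(path) < 0)
            return -1;
        return open(path, O_RDONLY);
    }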

With a working revoke(), the X server could just ensure that it is the only process that has access to the input stream. An alternative would be to have the X server run as a system user that lives in a specific group with access to the input devices, but that has flaws of its own. An exploit against the server would potentially give an attacker a means to access all users that are logged into X sessions, so a malicious local user or some remote exploit of a vulnerable X program might be able to affect all users of the system.

Keystroke logging can obviously lead to root compromise if someone types in the root password, but there are other things that users type that they don't want exposed, of course. Passwords for other systems (e.g. ssh, web applications) and all kinds of sensitive information (e.g. financial data for GnuCash or KMyMoney) are input into X sessions. While it would be nice to get away from running X as root, the benefit needs to outweigh the cost—easy keystroke logging does not pass that bar.

The Moblin mobile distribution pioneered "non-root-X" and its descendant MeeGo has continued down that path, but neither of those distributions allows for multiple users. If there are no other users that could get access to the input devices, it is fairly straightforward to run the X server as the logged-in user, which is what Moblin/MeeGo do.

Ubuntu has been looking at the problem as well. There is a blueprint for the feature that is targeted for Ubuntu 10.10, but it carries a "Low" priority and has not made an appearance in the recently released Beta. Unlike MeeGo, Ubuntu and other distributions will need to deal with the multi-user use case, which seems to be the sticking point.

Fedora also recently discussed a non-root X server, after Mike McGrath asked about it on fedora-devel. That led to Matthew Garrett's security quote-of-the-week pointing out the problem with input devices and no revoke(). While some thought that was a good argument for PackageKit's ability to perform root-privileged actions without a password being typed in, Gregory Maxwell was quick to point out the flaw in that thinking:

This is an improvement because if Fedora removes "the need to ever type a root password" by simply allowing packagekit to give the user all the root abilities the user needs then the attacker doesn't need to wait around for the user to do something privileged, they can just ask packagekit as the user to do it for them. I'm sure this will save a lot of time.

So, at least for multi-user systems, we are still a ways out from seeing X servers running as a non-root user. The hardware access issues have been resolved—for those graphics cards that have KMS drivers—but there are still underlying plumbing issues that haven't been. For older hardware without KMS drivers, or those with proprietary-only drivers, X is always going to have to run as root.

It would be nice to limit the damage an exploit can do to only the user that got exploited, rather than the entire system or all logged-in X users. But that will require revoke(), which doesn't seem to be in the pipeline. Conceptually, revoke() is a completely reasonable addition to the kernel, and it really isn't clear why we don't have it yet. It is certainly something the security community could work on in order to remove a barrier to a more secure X server.

Starting out by running X as a system user, with various udev permission-switching rules and some kind of arbiter like ConsoleKit, as Ubuntu is attempting, might be the right approach. It definitely seems like Ubuntu has made the most visible progress toward the goal. Other distributions may be taking a wait-and-see approach in the interim.
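
Such a setup might involve udev rules along these lines (a sketch of the general idea, not Ubuntu's actual configuration; the group name is invented):

    # Hand input devices to an "xserver" system group instead of
    # leaving them readable by root only
    SUBSYSTEM=="input", KERNEL=="event*", GROUP="xserver", MODE="0660"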

Comments (29 posted)

Brief items

Security quotes of the week

Suddenly banka.com is free of fraud. Snakeoil works, they find! They happily let the Snakeoil salesman use them as a use case. So our Snakeoil salesman goes across the street to bankb.com. Bankb.com has seen a two fold increase in fraud over the last few months (all of banka.com's fraud plus their own), strangely and they're desperate to do something about it. Snakeoil salesman is happy to show them how much banka.com has decreased their fraud just by buying their shoddy product. Bankb.com is desperate so they say fine and hand over the cash.
-- Robert Hansen (aka RSnake) on the success of snake oil

The United Arab Emirates continues to wrestle with Research in Motion over government access to BlackBerry messages, threatening to ban the company's services if it doesn't severely weaken the anti-snooping protections on its smartphones. But years before the RIM battle boiled over, other Western companies handed the country a far greater power: the capability to infiltrate the secure system used by most banking, mail, and financing sites, making the most protected data on the Web available to the prying eyes of the emirates' government-connected telecommunications giant.
-- Danny O'Brien on certificate authorities in Slate

Comments (4 posted)

MWR Labs: Assessing the Tux Strength

The MWR Labs group at MWR Info Security is running a series of articles comparing Linux distributions from a security point of view. Part 1: user space memory protection looks at protection against memory corruption attacks, while Part 2 - into the kernel examines kernel security settings. "The notable exceptions in the results are Fedora and Ubuntu. Both distributions do not allow the ability to write code to a certain memory region and then execute it. This can be observed from the results of the first five tests. Fedora goes one step further and also prevents the bss, data and heap sections from being marked as executable using the 'mprotect' system call. It should be noted that there would still be numerous other memory regions where an attacker could upload their code and then use the 'mprotect' function to mark it as executable."
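
The kind of test involved is easy to reproduce; a minimal sketch along the general lines described (an assumption about the shape of the tests, not MWR's actual code) might be:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Allocate one page-aligned page on the heap... */
        long page = sysconf(_SC_PAGESIZE);
        void *buf;
        if (posix_memalign(&buf, page, page) != 0)
            return 1;
        /* ...and try to mark it executable; on a kernel with the
         * restrictions described above, the call fails. */
        if (mprotect(buf, page, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
            perror("mprotect");
        else
            puts("heap page is now executable");
        return 0;
    }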

Comments (22 posted)

New vulnerabilities

barnowl: denial of service

Package(s): barnowl
CVE #(s): CVE-2010-2725
Created: September 3, 2010
Updated: September 8, 2010
Description: From the Debian advisory:

It has been discovered that in barnowl, a curses-based instant-messaging client, the return codes of calls to the ZPending and ZReceiveNotice functions in libzephyr were not checked, allowing attackers to cause a denial of service (crash of the application), and possibly execute arbitrary code.

Alerts:
Debian DSA-2102-1 barnowl 2010-09-03

Comments (none posted)

freetype: denial of service

Package(s): freetype
CVE #(s): CVE-2010-3053
Created: September 8, 2010
Updated: January 20, 2011
Description: The freetype library can be forced to crash via a maliciously-crafted BDF font file.
Alerts:
SUSE SUSE-SU-2012:0553-1 freetype2 2012-04-23
Gentoo 201201-09 freetype 2012-01-23
MeeGo MeeGo-SA-10:31 freetype 2010-10-09
Debian DSA-2105-1 freetype 2010-09-07
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25
openSUSE openSUSE-SU-2010:0726-1 freetype2 2010-10-15

Comments (none posted)

kernel: privilege escalation

Package(s): kernel
CVE #(s): CVE-2010-3110
Created: September 8, 2010
Updated: September 8, 2010
Description: The ioctl() implementation for Novell's "novfs" /proc interface is missing several bounds checks, enabling unprivileged local users to crash the kernel or possibly execute arbitrary code in kernel mode.
Alerts:
SUSE SUSE-SA:2010:039 kernel 2010-09-08
openSUSE openSUSE-SU-2010:0592-1 kernel 2010-09-08

Comments (none posted)

Mozilla products: multiple vulnerabilities

Package(s): firefox seamonkey thunderbird xulrunner
CVE #(s): CVE-2010-2760 CVE-2010-2762 CVE-2010-2764 CVE-2010-2765 CVE-2010-2766 CVE-2010-2767 CVE-2010-2768 CVE-2010-2769 CVE-2010-3166 CVE-2010-3167 CVE-2010-3168 CVE-2010-3169 CVE-2010-2763
Created: September 8, 2010
Updated: June 27, 2011
Description: Firefox 3.6.9, Firefox 3.5.12, and SeaMonkey 2.0.7 have been released; they fix another long list of security issues.
Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
MeeGo MeeGo-SA-10:39 firefox 2010-10-09
SUSE SUSE-SA:2010:056 MozillaFirefox,seamonkey,MozillaThunderbird 2010-11-08
Debian DSA-2124-1 xulrunner 2010-11-01
openSUSE openSUSE-SU-2010:0906-1 seamonkey thunderbird 2010-10-28
Fedora FEDORA-2010-15070 xulrunner 2010-09-22
Fedora FEDORA-2010-15070 perl-Gtk2-MozEmbed 2010-09-22
Fedora FEDORA-2010-15070 mozvoikko 2010-09-22
Fedora FEDORA-2010-15070 gnome-web-photo 2010-09-22
Fedora FEDORA-2010-15070 gnome-python2-extras 2010-09-22
Fedora FEDORA-2010-15070 galeon 2010-09-22
Fedora FEDORA-2010-15070 firefox 2010-09-22
openSUSE openSUSE-SU-2010:0632-2 seamonkey 2010-09-20
Debian DSA-2106-2 xulrunner 2010-09-19
Ubuntu USN-978-2 thunderbird 2010-09-16
Ubuntu USN-975-2 firefox 2010-09-16
openSUSE openSUSE-SU-2010:0632-1 MozillaFirefox 2010-09-17
Mandriva MDVSA-2010:173 firefox 2010-09-11
CentOS CESA-2010:0682 thunderbird 2010-09-12
CentOS CESA-2010:0681 firefox 2010-09-12
Slackware SSA:2010-253-02 thunderbird 2010-09-10
Slackware SSA:2010-253-01 firefox 2010-09-10
Slackware SSA:2010-253-03 seamonkey 2010-09-10
Fedora FEDORA-2010-14362 perl-Gtk2-MozEmbed 2010-09-09
Fedora FEDORA-2010-14362 mozvoikko 2010-09-09
Fedora FEDORA-2010-14362 gnome-web-photo 2010-09-09
Fedora FEDORA-2010-14362 gnome-python2-extras 2010-09-09
Fedora FEDORA-2010-14362 galeon 2010-09-09
Fedora FEDORA-2010-14362 xulrunner 2010-09-09
Fedora FEDORA-2010-14362 firefox 2010-09-09
Fedora FEDORA-2010-14351 sunbird 2010-09-09
Fedora FEDORA-2010-14352 sunbird 2010-09-09
Fedora FEDORA-2010-14351 thunderbird 2010-09-09
Fedora FEDORA-2010-14352 thunderbird 2010-09-09
Ubuntu USN-978-1 thunderbird 2010-09-08
Ubuntu USN-975-1 firefox, firefox-3.0, firefox-3.5, xulrunner-1.9.1, xulrunner-1.9.2 2010-09-08
CentOS CESA-2010:0682 thunderbird 2010-09-09
CentOS CESA-2010:0681 firefox 2010-09-09
CentOS CESA-2010:0680 seamonkey 2010-09-09
CentOS CESA-2010:0680 seamonkey 2010-09-08
Red Hat RHSA-2010:0682-01 thunderbird 2010-09-07
Red Hat RHSA-2010:0680-01 seamonkey 2010-09-07
Debian DSA-2106-1 xulrunner 2010-09-08
Red Hat RHSA-2010:0681-01 firefox 2010-09-07
openSUSE openSUSE-SU-2010:0632-3 mozilla-xulrunner191 2010-10-11
SUSE SUSE-SA:2010:049 MozillaFirefox,MozillaThunderbird,seamonkey 2010-10-12
Fedora FEDORA-2010-15184 seamonkey 2010-09-24
Fedora FEDORA-2010-15115 seamonkey 2010-09-23

Comments (none posted)

mysql: multiple vulnerabilities

Package(s): mysql-server
CVE #(s):
Created: September 6, 2010
Updated: September 8, 2010
Description: From the Pardus advisory:

1) An error within the handling of DDL statements after having changed the "innodb_file_per_table" or "innodb_file_format" configuration parameters can be exploited to crash the server.

2) An error when handling joins involving a unique "SET" column can be exploited to crash the server.

3) An error when handling NULL arguments passed to "IN()" or "CASE" operations can be exploited to crash the server.

4) An error when processing certain malformed arguments passed to the "BINLOG" statement can be exploited to crash the server.

5) An error when processing "TEMPORARY" InnoDB tables featuring nullable columns can be exploited to crash the server.

6) An error when performing alternating reads from two indexes on tables using the "HANDLER" interface can be exploited to crash the server.

7) An error when handling "EXPLAIN" statements on certain queries can be exploited to crash the server.

8) An error when handling "LOAD DATA INFILE" statements can lead to the return of an "OK" packet although errors have been encountered.

Alerts:
Pardus 2010-122 mysql-server 2010-09-06

Comments (none posted)

quagga: denial of service

Package(s): quagga
CVE #(s): CVE-2010-2948 CVE-2010-2949
Created: September 7, 2010
Updated: December 8, 2010
Description: From the Debian advisory:

When processing a crafted Route Refresh message received from a configured, authenticated BGP neighbor, Quagga may crash, leading to a denial of service. (CVE-2010-2948)

When processing certain crafted AS paths, Quagga would crash with a NULL pointer dereference, leading to a denial of service. In some configurations, such crafted AS paths could be relayed by intermediate BGP routers. (CVE-2010-2949)

Alerts:
Oracle ELSA-2012-1259 quagga 2012-09-13
Oracle ELSA-2012-1258 quagga 2012-09-13
Gentoo 201202-02 quagga 2012-02-21
SUSE SUSE-SU-2011:1316-1 quagga 2011-12-12
Ubuntu USN-1027-1 quagga 2010-12-07
Red Hat RHSA-2010:0945-01 quagga 2010-12-06
SUSE SUSE-SR:2010:022 gdm, openssl, poppler, quagga 2010-11-30
openSUSE openSUSE-SU-2010:0984-1 quagga 2010-11-29
Red Hat RHSA-2010:0785-01 quagga 2010-10-20
Mandriva MDVSA-2010:174 quagga 2010-09-11
Fedora FEDORA-2010-14002 quagga 2010-09-02
Fedora FEDORA-2010-14009 quagga 2010-09-02
Debian DSA-2104-1 quagga 2010-09-06
CentOS CESA-2010:0785 quagga 2010-10-25
CentOS CESA-2010:0785 quagga 2010-10-20

Comments (none posted)

sblim-sfcb: arbitrary code execution

Package(s): sblim-sfcb
CVE #(s): CVE-2010-1937 CVE-2010-2054
Created: September 6, 2010
Updated: September 9, 2010
Description: From the Red Hat bugzilla:

Heap-based buffer overflow in httpAdapter.c in httpAdapter in SBLIM SFCB before 1.3.8 might allow remote attackers to execute arbitrary code via a Content-Length HTTP header that specifies a value too small for the amount of POST data, aka bug #3001896. (CVE-2010-1937)

Integer overflow in httpAdapter.c in httpAdapter in SBLIM SFCB 1.3.4 through 1.3.7, when the configuration sets httpMaxContentLength to a zero value, allows remote attackers to cause a denial of service (heap memory corruption) or possibly execute arbitrary code via a large integer in the Content-Length HTTP header, aka bug #3001915. NOTE: some of these details are obtained from third party information. (CVE-2010-2054)

Alerts:
Fedora FEDORA-2010-10323 sblim-sfcb 2010-06-24
Fedora FEDORA-2010-12847 sblim-sfcb 2010-08-17

Comments (none posted)

smbind: sql injection

Package(s): smbind
CVE #(s):
Created: September 6, 2010
Updated: September 8, 2010
Description: From the Debian advisory:

It was discovered that smbind, a PHP-based tool for managing DNS zones for BIND, does not properly validate input. An unauthenticated remote attacker could execute arbitrary SQL commands or gain access to the admin account.

Alerts:
Debian DSA-2103-1 smbind 2010-09-05

Comments (none posted)

sssd: authentication bypass

Package(s): sssd
CVE #(s): CVE-2010-2940
Created: September 3, 2010
Updated: January 24, 2011
Description: From the CVE entry:

The auth_send function in providers/ldap/ldap_auth.c in System Security Services Daemon (SSSD) 1.3.0, when LDAP authentication and anonymous bind are enabled, allows remote attackers to bypass the authentication requirements of pam_authenticate via an empty password.

Alerts:
Fedora FEDORA-2010-13557 sssd 2010-08-26
Fedora FEDORA-2010-13549 sssd 2010-08-26

Comments (none posted)

sudo: privilege escalation

Package(s): sudo
CVE #(s): CVE-2010-2956
Created: September 7, 2010
Updated: October 27, 2010
Description: From the Gentoo advisory:

Markus Wuethrich of Swiss Post reported that sudo fails to restrict access when using Runas groups and the group (-g) command line option.

Alerts:
rPath rPSA-2010-0075-1 sudo 2010-10-27
Fedora FEDORA-2010-14996 sudo 2010-09-21
SUSE SUSE-SR:2010:017 java-1_4_2-ibm, sudo, libpng, php5, tgt, iscsitarget, aria2, pcsc-lite, tomcat5, tomcat6, lvm2, libvirt, rpm, libtiff, dovecot12 2010-09-21
Slackware SSA:2010-258-03 sudo 2010-09-15
Slackware SSA:2010-257-02 sudo 2010-09-15
Mandriva MDVSA-2010:175 sudo 2010-09-12
CentOS CESA-2010:0675 sudo 2010-09-12
Fedora FEDORA-2010-14355 sudo 2010-09-09
openSUSE openSUSE-SU-2010:0591-1 sudo 2010-09-08
Red Hat RHSA-2010:0675-01 sudo 2010-09-07
Ubuntu USN-983-1 sudo 2010-09-07
Gentoo 201009-03 sudo 2010-09-07

Comments (none posted)

wireshark: denial of service

Package(s): wireshark
CVE #(s): CVE-2010-2992 CVE-2010-2993
Created: September 3, 2010
Updated: April 19, 2011
Description: From the CVE entries:

packet-gsm_a_rr.c in the GSM A RR dissector in Wireshark 1.2.2 through 1.2.9 allows remote attackers to cause a denial of service (crash) via unknown vectors that trigger a NULL pointer dereference. (CVE-2010-2992)

The IPMI dissector in Wireshark 1.2.0 through 1.2.9 allows remote attackers to cause a denial of service (infinite loop) via unknown vectors. (CVE-2010-2993)

Alerts:
Gentoo 201110-02 wireshark 2011-10-09
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
openSUSE openSUSE-SU-2011:0010-2 wireshark 2011-01-12
SUSE SUSE-SR:2011:001 finch/pidgin, libmoon-devel/moonlight-plugin, libsmi, openssl, perl-CGI-Simple, supportutils, wireshark 2011-01-11
SUSE SUSE-SR:2011:002 ed, evince, hplip, libopensc2/opensc, libsmi, libwebkit, perl, python, sssd, sudo, wireshark 2011-01-25
openSUSE openSUSE-SU-2011:0010-1 wireshark 2011-01-04
Fedora FEDORA-2010-13427 wireshark 2010-08-24
Fedora FEDORA-2010-13416 wireshark 2010-08-24

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 2.6.36-rc3; no new prepatches have been released over the last week. Linus has returned from his trip to Brazil and resumed merging changes, so 2.6.36-rc4 can probably be expected in the near future.

Stable updates: there have been no 2.6 stable updates released over the last week.

Comments (none posted)

Stable kernel 2.4.37.10

The 2.4 kernel lives - for a little while longer, at least. Willy Tarreau has just released the 2.4.37.10 update, with a small set of important fixes. This might just be the last update in this series, unless some sort of important fix comes in. "If nothing happens before September 2011, it's possible that there won't be any 2.4.37.11 at all. By that time, the 2.6 kernel will have been available for almost 8 years, this should have been enough for anyone to have a look at it. Users now have one year to migrate or to report critical bugs. I think that's an honest deal." See the announcement for the full description of his planned policy.

Comments (4 posted)

Quotes of the week

A lot of my conversations about union mounts with Al [Viro] go like this:

Al: "Rewrite it this way."
Val: "But then how do we get the nameidata?"
Al: "Arrrrrrrrrrrrrggggh."

-- Valerie Aurora

Well damn, good detective work. I wonder how many of those nasty random bad-page-state bug reports just got fixed. I dub thee September's "Hero of the Linux kernel"!
-- Andrew Morton (to Jiri Slaby)

Kernel developers are paid to work on features, yes. They are not paid to fix bugs for random folks who want run the latest stable kernel.
-- Ted Ts'o

Comments (none posted)

Prefetching considered harmful

By Jonathan Corbet
September 8, 2010
As anybody who has read What every programmer should know about memory knows, performance on contemporary systems is often dominated by cache behavior. A single cache miss can cause a processor stall lasting for hundreds of cycles. The kernel employs many tricks and techniques to optimize cache behavior, but, as is often the case with low-level optimization, it turns out that some of those tricks are not as helpful as had been thought.

The kernel's linked list macros include a set of operators for iterating through a list. At the top of a list-processing loop, the macros will issue a prefetch operation for the next entry in the list. The hope is that, by the time one entry has been processed, the CPU will have fetched the following entry into its cache, avoiding a stall at the beginning of the next trip through the loop. It seems like the sort of micro-optimization which can only help, and nobody has looked closely at these prefetch operations for a long time - until now. Andi Kleen has just posted a patch removing most of those prefetches.
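
For reference, the list-iteration macro in <linux/list.h> before this patch looked roughly like the following; it is the prefetch() call in the loop condition that is being removed:

    #define list_for_each(pos, head) \
            for (pos = (head)->next; prefetch(pos->next), pos != (head); \
                 pos = pos->next)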

Andi's contention is that, on contemporary processors, the prefetch operations are actually making things worse. These processors already prefetch everything they can get their hands on, so the explicit prefetch is unlikely to help. Even if that prefetch does start a memory cycle earlier than it would have otherwise happened, list processing loops tend to be so short that the amount of additional parallelism gained is quite small. Meanwhile the prefetch operations bloat the kernel image, increase register use, and cause the compiler to generate worse code. So, he says, we are better off without them.

With the prefetch operations removed, Andi's kernel image ends up being 10KB smaller. It also shows no performance regressions over mainline kernels. Unless somebody else gets different results, that seems like enough to justify putting this patch into the mainline.

Comments (11 posted)

Too many Cc's

By Jonathan Corbet
September 8, 2010
In kernel-related email communications, the general rule is to err on the side of sending copies to too many recipients rather than too few. The volume on the mailing lists is such that one can never assume that interested people will see any specific message there, so it's customary to copy people explicitly. Recently, though, a number of kernel developers have started to complain that they are getting copies of patches that they have no interest in. Often, the selection of recipients seems entirely random.

The culprit is the get_maintainer.pl script shipped with the kernel source. This script is actually a useful tool; it will look in the MAINTAINERS file to find the people who might be interested in a specific patch. Potentially less usefully, it will also dig through the repository history and list other developers who have made changes to the files modified by a given patch. So anybody who has tweaked a given file in the recent past - possibly making only trivial changes - will be listed among the people to copy on any other patches to the same file.

Looking at the file revision history can, indeed, be a useful way to find the "real" maintainers; the information in MAINTAINERS, instead, can be incomplete or outdated. But, clearly, one needs to look at what a developer has actually done in a given area; fixing a file for an API change does not mean that the developer is actively working on that code. Many developers don't perform that check and, instead, just send mail to everybody listed by the script.

The level of grumpiness caused by widely-broadcast patches seems to be on the rise. Developers who don't want to receive an irritated response to their postings might want to take a little care or, at least, use the --nogit option to get_maintainer.pl.
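
In concrete terms, an invocation like this (the patch file name is illustrative) errs on the conservative side by consulting only the MAINTAINERS file:

    $ scripts/get_maintainer.pl --nogit my-fix.patch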

Comments (6 posted)

Kernel development news

Further notes on stable kernels

By Jonathan Corbet
September 8, 2010
Last week's article on stable kernels drew a number of comments, both public and private. Those comments suggested a couple of other ways of looking at how the stable tree works and how patches get into it. That, in turn, has inspired this follow-up look at the stable kernel process.

A certain amount of unhappiness was expressed regarding the tables of the most active stable contributors. Those tables attribute stable contributions to the individuals who write the patches. Things were done this way for two reasons: (1) the patch author is, indeed, the person who fixed the bug, and (2) that is the information which is available in the stable kernel repository. It made sense, your editor thought, to assign credit - for the fix, but, also, potentially, for the bug which required the fix - in this way.

It turns out that a number of people see stable contributions in a different way. The real credit, they say, belongs to the person who notices that a patch fixes a bug in stable kernels and ensures that the fix gets directed to the stable kernel maintainer. There are people, often those working on maintaining distributor kernels, who spend a lot of time watching the patch stream and looking for just this kind of fix. It is a lot of work, and the people who do that work certainly deserve credit for the service they are performing for the community.

Your editor would be delighted to be able to produce a table crediting this type of stable contributor. Unfortunately, the electronic trail needed to create this table simply does not exist. One could try to play games by looking at how the patch tags differ between the mainline and stable versions of the fix; there will often be an extra signoff or Cc: tag naming the person who forwarded the patch to the stable tree. But such schemes will be approximate and error-prone. If we really want to track and credit developers who flag patches for the stable tree, we almost certainly need to add a new patch tag making that credit explicit in every patch.
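
For illustration, the tags on a stable-bound fix currently look something like the following (the names are invented); nothing in them records who actually flagged the patch for stable:

    Signed-off-by: Ann Author <ann@example.com>
    Cc: stable@kernel.org
    Signed-off-by: Subsystem Maintainer <maint@example.com>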

A related complaint came, via private mail, from a subsystem maintainer; his point of view was that the subsystem maintainers are the people doing the real legwork to get important fixes into the stable tree. A diligent maintainer will be evaluating all patches as they are merged into the subsystem tree, catching those which have stable kernel implications and directing them accordingly. He suggested a study to evaluate the percentage of stable patches coming out of each subsystem tree as a way to identify which maintainers are on top of things.

Your editor, intrigued by that idea, ran a quick study. The table below shows numbers for some selected subsystems for the 2.6.32 stable series. Since 2.6.32 is still under maintenance, it will have received patches from all of the mainline releases from 2.6.33 to the present. For each subsystem, we can look at how many patches have gone into the mainline (through 2.6.36-rc3) and how many of those went into the stable series. The results look like this:

Subsystem        Patches              Pct
                 (mainline)  (stable)
fs/ext4              216         90   42%
fs/btrfs             155         42   27%
drivers/usb         1003        112   11%
arch/x86            1877        176    9%
drivers/acpi         291         24    8%
mm                   602         48    8%
kernel              1471         96    7%
sound               1369         88    6%
fs/ext3               58          3    5%
drivers/scsi        1054         51    5%
net                 2324         98    4%
drivers/input        381         13    3%
arch/powerpc         917         18    2%
drivers/media       1705         26    2%
block                182          3    2%
arch/arm            3221         19   <1%
tools                873          3   <1%

At the upper end of the table, it is unsurprising to find the ext4 and btrfs filesystems showing a high percentage of stable patches. Both of those filesystems are undergoing heavy stabilization work at present, so it makes sense that the bulk of the changes merged will be important fixes. The relatively small percentage of ext3 changes going into the stable tree was interesting; a quick check shows that many of the ext3 changes which did not go to stable reflect API changes in the VFS and disk quota code. That said, it also appears that a small number of fixes might have fallen through the cracks.

It's hard to draw conclusions from much of the rest of the table; different subsystems will naturally vary in the ratio of fixes to new features, so they will never have the same percentage of patches going into the stable tree. That said, there do seem to be some real variations in how many fixes are being directed to stable by the subsystem maintainers. One might, for example, wonder if a few more than 19 of the 3221 changes to the ARM architecture could have qualified for the stable tree.

This maintainer also pointed out one other aspect of the problem: the maintainer's real job is often to say "no" in the same way as with mainline patches. It seems that some developers have an expansive view of which changes are suitable for the stable tree, so they flag patches which are too large and invasive, or which do not actually fix serious bugs. In these cases, the maintainer must remove the stable tag and keep the patch from going in that direction. Needless to say, this kind of activity is even harder to track, so there will be no "stable rejections" table.

In any case, maintainers needing to turn away marginal stable patches seems like the right kind of problem to have. Bugs are annoying in the best of times, but they are doubly annoying when a fix exists but is not distributed to people who need it. The stable tree seems to be doing a good job of getting those fixes out; that makes Linux better for all of us.

Comments (none posted)

Working on workqueues

By Jonathan Corbet
September 7, 2010
One of the biggest internal changes in 2.6.36 will be the adoption of concurrency-managed workqueues. The short-term goal of this work is to reduce the number of kernel threads running on the system while simultaneously increasing the concurrency of tasks submitted to workqueues. To that end, the per-workqueue kernel threads are gone, replaced by a central set of threads with names like [kworker/0:0]; workqueue tasks are then dispatched to the threads via an algorithm which tries to keep exactly one task running on each CPU at all times. The result should be better use of the CPU for workqueue tasks and less memory tied up by the workqueue machinery.

That is a worthwhile result in its own right, but it's really only a beginning. The 2.6.36 workqueue patches were deliberately designed to minimize the impact on the rest of the kernel, so they preserved the existing workqueue API. But the new code is intended to do more than replace workqueues with a cleverer implementation; it is really meant to be a general-purpose task management system for the kernel. Making full use of that capability will require changes in the calling code - and in code which does not yet use workqueues at all.

In kernels prior to 2.6.36, workqueues are created with create_workqueue() and a couple of variants. That function will, among other things, start up one or more kernel threads to handle tasks submitted to that workqueue. In 2.6.36, that interface has been preserved, but the workqueue it creates is a different beast: it has no dedicated threads and really just serves as a context for the submission of tasks. The API is considered deprecated; the proper way to create a workqueue now is with:

    struct workqueue_struct *alloc_workqueue(char *name, unsigned int flags, int max_active);

The name parameter names the queue, but, unlike in the older implementation, it does not create threads using that name. The flags parameter selects among a number of relatively complex options on how work submitted to the queue will be executed; its value can include:

  • WQ_NON_REENTRANT: "classic" workqueues guaranteed that no task would be run by two threads simultaneously on the same CPU, but made no such guarantee across multiple CPUs. If it was necessary to ensure that a task could not be run simultaneously anywhere in the system, a single-threaded workqueue had to be used, possibly limiting concurrency more than desired. With this flag, the workqueue code will provide that systemwide guarantee while still allowing different tasks to run concurrently.

  • WQ_UNBOUND: workqueues were designed to run tasks on the CPU where they were submitted in the hope that better memory cache behavior would result. This flag turns off that behavior, allowing submitted tasks to be run on any CPU in the system. It is intended for situations where the tasks can run for a long time, to the point that it's better to let the scheduler manage their location. Currently the only user is the object processing code in the FS-Cache subsystem.

  • WQ_FREEZEABLE: this workqueue will be frozen when the system is suspended. Clearly, workqueues which can run tasks as part of the suspend/resume process should not have this flag set.

  • WQ_RESCUER: this flag marks workqueues which may be involved in memory reclaim; the workqueue code responds by ensuring that there is always a thread available to run tasks on this queue. It is used, for example, in the ATA driver code, which always needs to be able to run its I/O completion routines to be sure it can free memory.

  • WQ_HIGHPRI: tasks submitted to this workqueue will be put at the head of the queue and run (almost) immediately. Unlike ordinary tasks, high-priority tasks do not wait for the CPU to become available; they will be run right away. That means that multiple tasks submitted to a high-priority queue may contend with each other for the processor.

  • WQ_CPU_INTENSIVE: tasks on this workqueue can be expected to use a fair amount of CPU time. To keep those tasks from delaying the execution of other workqueue tasks, they will not be taken into account when the workqueue code determines whether the CPU is available or not. CPU-intensive tasks will still be delayed themselves, though, if other tasks are already making use of the CPU.

The combination of the WQ_HIGHPRI and WQ_CPU_INTENSIVE flags takes this workqueue out of the concurrency management regime entirely. Any tasks submitted to such a workqueue will simply run as soon as the CPU is available.

The final argument to alloc_workqueue() is max_active. This parameter limits the number of tasks which can be executing simultaneously from this workqueue on any given CPU. The default value (used if max_active is passed as zero) is 256, but the actual maximum is likely to be far lower, given that the workqueue code really only wants one task using the CPU at any given time. Code which requires that workqueue tasks be executed in the order in which they are submitted can use a WQ_UNBOUND workqueue with max_active set to one.
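As a concrete illustration, here is a minimal sketch of how a driver might allocate and use one of the new workqueues; the "mydrv" names, the flag choice, and the max_active value are hypothetical, not taken from any real driver:

    #include <linux/module.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *mydrv_wq;
    static struct work_struct mydrv_work;

    static void mydrv_work_fn(struct work_struct *work)
    {
        pr_info("mydrv: doing deferred work\n");
    }

    static int __init mydrv_init(void)
    {
        /* No dedicated threads are created here; the queue is just a
           submission context.  Up to two tasks from this queue may be
           active on each CPU, and no single task will run concurrently
           with itself anywhere in the system. */
        mydrv_wq = alloc_workqueue("mydrv", WQ_NON_REENTRANT, 2);
        if (!mydrv_wq)
            return -ENOMEM;

        INIT_WORK(&mydrv_work, mydrv_work_fn);
        queue_work(mydrv_wq, &mydrv_work);
        return 0;
    }

    static void __exit mydrv_exit(void)
    {
        /* Wait for anything still queued, then free the queue. */
        flush_workqueue(mydrv_wq);
        destroy_workqueue(mydrv_wq);
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");

A queue with no special requirements could pass zero for the flags; WQ_NON_REENTRANT here simply requests the systemwide non-reentrancy guarantee described above.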

(Incidentally, much of the above was cribbed from Tejun Heo's in-progress document on workqueue usage).

The long-term plan, it seems, is to convert all create_workqueue() users over to an appropriate alloc_workqueue() call; eventually create_workqueue() will be removed. That task may take a little while, though; a quick grep turns up nearly 300 call sites.
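For a sense of what such a conversion might look like, consider a driver whose classic queue depended on always having a thread available; the change could be as small as the following (the "mydrv" name is again hypothetical, and the flag mapping is a plausible reading of the new API rather than a fixed rule):

    /* Before: classic API, with dedicated kernel threads. */
    wq = create_workqueue("mydrv");

    /* After: no dedicated threads.  WQ_RESCUER keeps a worker available
       for memory-reclaim paths, and max_active = 1 approximates the old
       one-task-at-a-time-per-CPU behavior. */
    wq = alloc_workqueue("mydrv", WQ_RESCUER, 1);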

An even longer-term plan is to merge a number of other kernel threads into the new workqueue mechanism. For example, the block layer maintains a set of threads with names like flush-8:0 and bdi-default; they are charged with getting data written out to block devices. Tejun recently posted a patch to replace those threads with workqueues. This patch has made some developers a little nervous - problems with writeback could create no end of trouble when the system is under memory pressure. So it may be slow to get into the mainline, but it will probably get there eventually unless regressions turn up.

After that, there is no end of special-purpose kernel threads elsewhere in the system. Not all of them will be amenable to conversion to workqueues, but quite a few of them should be. Over time, that should translate to less system resource use, cleaner "ps" output, and a better-running system.

Comments (4 posted)

Another old security problem

By Jonathan Corbet
September 8, 2010
In August, a longstanding kernel security hole related to overflowing the stack area was closed. But it turns out there are other problems in this area, at least one of which has been known about since late last year. Fixes are in the works, but it's hard not to wonder if we are not handling security issues as well as we should be.

Once again, the problem was reported by Brad Spengler, who posted a short program demonstrating how easily things can be made to go wrong. The program allocates a single 128KB array, which is filled as a long C string. Then, an array of over 24,000 char * pointers is allocated, with each entry pointing to the large string. The final step is to call execv(), using this array as the arguments to the program to be run. In other words, the exploit is telling the kernel to run a program with as many huge arguments as it can.
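A minimal reconstruction of the program described above might look like this; the specific counts and the target binary are illustrative, not Brad's actual code:

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BIGLEN (128 * 1024)   /* one huge argument string */
    #define NARGS  24500          /* "over 24,000" pointers */

    int main(void)
    {
        char *big = malloc(BIGLEN);
        char **args = malloc(NARGS * sizeof(char *));
        int i;

        if (!big || !args)
            return 1;
        memset(big, 'A', BIGLEN - 1);
        big[BIGLEN - 1] = '\0';

        for (i = 0; i < NARGS - 1; i++)
            args[i] = big;        /* every argument is the same string */
        args[NARGS - 1] = NULL;

        /* Ask the kernel to set up roughly 3GB of argument strings. */
        execv("/bin/true", args);
        return 1;
    }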

Once upon a time, the kernel had a limit on the maximum number of pages which could be used by a new program's arguments. This limit would have prevented any problems resulting from the sort of abuse shown by Brad's program, but it was removed for 2.6.23; it seems that any sort of limit made life difficult for Google. In its place, a new check was put in which looks like this (from fs/exec.c):

	/*
	 * Limit to 1/4-th the stack size for the argv+env strings.
	 * This ensures that:
	 *  - the remaining binfmt code will not run out of stack space,
	 *  - the program will have a reasonable amount of stack left
	 *    to work from.
	 */
	rlim = current->signal->rlim;
	if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur) / 4) {
		put_page(page);
		return NULL;
	}

The reasoning was clear: if the arguments cannot exceed one quarter of the allowed size for the process's stack, they cannot get completely out of control. It turns out that there's a fundamental flaw in that reasoning: the stack size may well not be subject to a limit at all. In that case, the value of the limit is -1 (all ones, in other words), and the size check becomes meaningless. The end result is that, in some situations, there is no real limit on the amount of stack space which can be consumed by arguments to exec(). And, unfortunately, the consequences are not limited to the offending process.
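Working through the arithmetic makes the flaw obvious; a small user-space sketch of the same comparison:

    #include <stdio.h>

    int main(void)
    {
        /* An unlimited stack gives rlim_cur the value -1: all ones. */
        unsigned long rlim_cur = ~0UL;

        /* On a 64-bit system this prints 4611686018427387903 (about
           4.6 exabytes), so "size > rlim_cur / 4" rejects nothing. */
        printf("effective limit: %lu bytes\n", rlim_cur / 4);
        return 0;
    }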

At a minimum, Brad's exploit is able to oops the system once the stack tries to expand too far. He mentioned the possibility of expanding the stack down to address zero - thus reopening the threat of null-pointer exploits - but has not been able to figure out a way to make such exploits work. The copying of all those arguments will, naturally, consume large amounts of system memory; due to another glitch, that memory use is not properly accounted for, so, if the out-of-memory killer is brought in to straighten things out, it will not target the process which is actually causing the problem. And, as if that were not enough, the counting and copying of the argument strings is not preemptible or killable; given that it can run for a very long time, it can be very hard on the performance of the rest of the system.

Brad says that he first reported this problem in December, 2009, but got no response. More recently, he sent a note to Kees Cook, who posted a partial fix in response. That fix had some technical problems and was not applied, but Roland McGrath has posted a new set of fixes which gets closer. Roland has taken a minimal approach, not wanting to limit argument sizes more than absolutely necessary. So his patch just ensures that the stack will not grow below the minimum allowed user-space memory address (mmap_min_addr). That check, combined with the guard page added to the stack region by the August fix, should prevent the stack from growing into harmful areas. Roland has also added a preemption point to the argument-copying code to improve interactivity in the rest of the system, and a signal check allowing the process to be killed if necessary. He has not addressed the OOM killer issue, which will need to be fixed separately.

Roland's patch seems likely to fix the worst problems, though some commenters feel that it does not go far enough. One assumes that fixes will be headed toward distribution kernels in the near future. But there are a couple of discouraging things to note from this episode:

  • It seems that the code which is intended to block runaway resource use in a core Linux system call was never really tested at its extremes. The Linux kernel community does not have a whole lot of people who do this kind of auditing and testing, unfortunately; that leaves the task to the people who have an interest (either benign or malicious) in security issues.

  • It took some nine months after the initial report before anybody tried to fix the problem. That is not the sort of rapid response that this community normally takes pride in.

The problem may indicate a key shortcoming in how Linux kernel development is supported. There are thousands of developers who are funded to spend at least some of their time doing kernel work. Some of those are paid to work in security-related areas like SELinux or AppArmor. But it's not at all clear that anybody is funded simply to make sure that the core kernel is secure. That may make it easier for security problems to slip into the kernel, and it may slow down the response when somebody points out problems in the code. There is a strong (and increasing) economic interest in exploiting security issues in the kernel; perhaps we need to find a way to increase the level of interest in preventing these issues in the first place.

Comments (26 posted)

Patches and updates

Kernel trees

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Looking at Fedora 14 and Ubuntu 10.10

September 7, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

Watching Ubuntu and Fedora development is something like watching episodes of Iron Chef: Given roughly the same ingredients and the same amount of time, the two projects produce vastly different dishes. The contrast is particularly pronounced in the Fedora 14 and Ubuntu 10.10 release cycle, with Ubuntu focusing largely on refining improvements from 10.04 and Fedora introducing major changes to its infrastructure.

The two distributions follow largely the same development cycle: six months between releases, with new major releases each spring and fall. Ubuntu's next release is scheduled for October 10th, while Fedora 14 was scheduled for October 26th, but slipped by a week and is now scheduled for November 2nd. Even though the two distributions ship roughly the same software, the features each has chosen to emphasize for the upcoming releases differ significantly. It almost goes without saying that F14 and Ubuntu 10.10 include the normal array of package updates for the usual suspects like Firefox, the Linux kernel, etc.

Fedora's feature list largely consists of infrastructure improvements and developer-oriented updates. For example, some of the features scoped for F14 include providing a GNUstep development environment, updating to Perl 5.12, updating to Python 2.7, and adding Rakudo Star — the first production release of the Perl 6 implementation for the Parrot virtual machine.

In contrast, Ubuntu 10.10 ("Maverick Meerkat") has more conservative developer tools, with Python 2.6.6 and Perl 5.10 still the defaults in the beta release. Rakudo doesn't seem to be available at all. Fedora has also adopted a new version of the libjpeg library, libjpeg-turbo, which is still awaiting packaging in Ubuntu. In short, Fedora is sticking to its philosophy of shipping free software first.

Fedora is also being adventurous with its init system. The project is in the process of switching to systemd, an alternative to the venerable System V init and (more recently) the Ubuntu-led Upstart, which Fedora adopted with the Fedora 9 release. Even though systemd is shipped with the Fedora 14 alpha, there's no guarantee that it will wind up in Fedora 14 final. The Fedora project scheduled a test day on September 7th to get feedback on systemd and determine whether the new init system will hit the streets with Fedora 14 or be held back for Fedora 15.

Not surprisingly, many of F14's features are likely to be important to Red Hat for future Red Hat Enterprise Linux releases. Assuming all goes according to plan, F14 will also be the first release to include support for Spice. The Spice project is designed to provide high-quality remote access to virtual desktops, allowing users to run several Linux or Windows clients via QEMU on a single server and display the clients on remote machines. It's not something that will appeal to many home users, but the ability to run many client OSes on a single server and display on remote clients via Virtual Device Interfaces (VDIs) will appeal to large organizations.

Fedora 14 is also road-testing support for multipath installation, which is to say installing onto Storage Area Network (SAN) devices. Again, not something that will really appeal to the consumer desktop market, but important to larger organizations.

Compared to the Ubuntu 10.04 release, Maverick seems like a fairly modest update. Many of the new features in 10.10 focus on Ubuntu-specific features like the Ubuntu One services offered by Canonical and supported by software shipped with Ubuntu. In particular, the release includes a number of improvements to the Ubuntu Software Center, with a focus on the "For Purchase" section. Presumably the idea is to offer proprietary packages within Ubuntu following the 10.10 release. So far, the only package to show up is the Fluendo DVD player, which is priced at $24.95.

The Software Center has received a number of usability enhancements and is now quite polished. For instance, it shows where an application has been installed — something that may have confused some users. For programs like Firefox, the Software Center can also show add-ons or extensions, so users will be able to easily find plugins or extensions packaged for the software. The history of packages installed, updated, or removed by the Software Center is also displayed by date and action. Compared side by side, Fedora's PackageKit front-end is much less polished and user-friendly than the Software Center.

Ubuntu has also worked on refining the Ubiquity installer for 10.10. Whereas the Fedora Project is trying to tackle more complex storage and so on, Ubuntu is working on hiding the complexity of partitioning disks and dealing with storage as much as possible. Ubuntu now presents a dialog at the beginning of the install suggesting that "for best results" the machine be plugged in, and that the system should be connected to a network. Ubuntu also offers to install things like Flash and MP3 support, though they're not shipped on the disc. Unfortunately, while it suggests being connected to a network is a Good Thing, it doesn't offer a way to actually configure wireless networking at install time if the system is not connected via Ethernet. The partitioner has also been simplified, and has a positively Mac-like feel.

Fedora and Ubuntu are also diverging significantly in the netbook experience. Fedora is scheduled to include the MeeGo 1.0 user experience (UX) for F14, though it doesn't seem to be packaged for the alpha release. Ubuntu, on the other hand, is pursuing its own netbook experience called Unity (which is somewhat ironic, since Ubuntu is going it alone) that is not based on MeeGo (or Moblin, which was the earlier basis for Ubuntu's netbook distribution).

One thing that won't be appearing in Ubuntu 10.10 is the much-discussed Ubuntu Font Family that's being designed outside the community. The Ubuntu font is currently available to Ubuntu Members through a Personal Package Archive (PPA), but doesn't appear to be ready for release with 10.10. Ubuntu has also further refined its indicator applets in GNOME and improved the sound controls, so that if the user is listening to Rhythmbox, some simple playback controls (play/pause, forward, backward) are included in the drop-down control.

The next release for Ubuntu is the release candidate, scheduled for September 30th. The Fedora 14 beta, taking the slip into account, is now scheduled for September 28th.

Both releases seem to be shaping up well, if very differently — as befitting the focus of the distributions and projects. Ubuntu 10.10 is a polished consumer OS that is well-suited for users who are new to Linux, or just prefer a desktop system that's easy to use. Fedora's developer-centric approach makes for an OS that is easy enough to use, but better suited for developers or experienced users who want to tinker with technologies before they make an official appearance in RHEL and other distributions. Ubuntu, on the other hand, is the end result of development rather than the beginning. Many of the changes in 10.10, e.g. the Ubuntu One improvements and the application indicators, are unlikely to show up in other distributions (excepting, perhaps, Linux Mint).

Comments (46 posted)

New Releases

Debian GNU/Linux 5.0 updated

The Debian project has announced the sixth update of its stable distribution Debian GNU/Linux 5.0 ("lenny"). "This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Please note that this update does not constitute a new version of Debian GNU/Linux 5.0 but only updates some of the packages included. There is no need to throw away 5.0 CDs or DVDs but only to update via an up-to-date Debian mirror after an installation, to cause any out of date packages to be updated."

Comments (1 posted)

Linux Mint Debian (201009) released

Linux Mint has announced the first release of the Linux Mint Debian Edition (LMDE). "Today is very important for Linux Mint. It's one day to remember in the history of our project as we're about to maintain a new distribution, a rolling one, which promises to be faster, more responsive and on which we're less reliant on upstream components. Linux Mint Debian Edition (LMDE) comes with a Debian base, which we transformed into a live media and on top of which we added a new installer. It's rougher and in some aspects not as user-friendly as our other editions, it's very young but it will improve continuously and rapidly, and it brings us one step closer to a situation where we're fully in control of the system without being impacted by upstream decisions."

Comments (none posted)

openSUSE 11.4 Milestone 1 released

The openSUSE project has announced the release of openSUSE 11.4 Milestone 1. "M1 starts off openSUSE 11.4 development at a cracking pace with performance improvements in the package management network layer and version updates to major components."

Comments (none posted)

Ubuntu 10.10 Beta (Maverick Meerkat) Released

Ubuntu 10.10 (Maverick Meerkat) beta is available for testing. The Ubuntu 10.10 family of Kubuntu, Xubuntu, Edubuntu, Ubuntu Studio, and Mythbuntu has also reached beta status. Maverick Meerkat is scheduled for a final release on October 10, 2010.

Full Story (comments: 28)

Distribution News

Distribution quote of the week

I'm kind of sympathetic towards the point of view that that's a perfectly sound goal for an operating system, but it may not necessarily be the best goal for the Fedora operating system to pick. Just because that's the target of Windows and OS X doesn't mean it has to be ours.

For a start, there's already plenty of Linux distributions besides Fedora with those goals. To put it bluntly, there's Ubuntu, and Ubuntu's doing a fairly decent job of being Ubuntu. I've argued before that it's somewhat dangerous for the overall ecosystem for Ubuntu to drive out all competitors, and to an extent I stand by that, but OTOH, I'm not entirely sure that being Ubuntu's token competition is the best thing Fedora can be.

-- Adam Williamson

Comments (none posted)

Debian GNU/Linux

Release Update: freeze guidelines, transitions, BSP, rc bug fixes

The latest Debian release update looks at the freeze status, bug squashing parties and RC bugs. Also, the next development version of Debian will be called "wheezy".

Full Story (comments: none)

Fedora

Fedora Board Meeting Recap 2010-09-03

Click below for a recap of the September 3, 2010 meeting of the Fedora board. Topics include F14 beta, outstanding community domain requests, and more.

Full Story (comments: none)

Other distributions

GNU/Linux powers state-of-the-art hearing aid research

64 Studio Ltd. has created a Linux distribution for HörTech gGmbH to aid in research on hearing impairment and augmentation technology. "64 Studio was commissioned by HörTech to create a GNU/Linux real-time audio distribution, code-named Mahalia, optimized for the Lenovo Thinkpad X200 notebook. Giso Grimm of the Carl von Ossietzky-Universität Oldenburg explained: "We prefer to use ready-to-use Linux audio distributions over patching the kernel ourselves, since our expertise is in signal processing, not kernel development. When we were faced with the fact that our then favourite audio distribution failed to deliver stable real-time kernels for several releases, we asked 64 Studio to tailor us a customized distribution with a working real-time kernel that matched our specific needs and ran stable on the selected hardware.""

Full Story (comments: 6)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Mint 9: Minty fresh Linux (ComputerWorld)

ComputerWorld has a review of Mint 9. "Mint also has a variety of other small default programs of its own to make using it a pleasure. The most important of these is the new Backup Tool. This comes with an easy-to-use interface and features incremental backups, compression, and integrity checks. You can also tag your installed software, save this as a list, and then restore the selection on a different computer or on a new version of Linux Mint. This is a rather handy trick if, like me, you find yourself often moving from one PC to another and don't want to jump through the hoops required by imaging software."

Comments (none posted)

Spotlight on Linux: Zenwalk Linux 6.4 "Live" (Linux Journal)

Linux Journal has a short review of the latest Zenwalk Live release. "It is quite an ambitious project as it offers and maintains five different editions. The Standard Edition is the flagship version for the project. It's a complete system for desktops, laptops, and servers. The Core Edition is a basic version of Linux with no X server, no graphical environment, and no applications. It's for those who like to build their own system their own way. The Live Edition is the installable Live CD that will boot any of 12 common languages. And finally, there are the alternative graphical environment editions: the GNOME Edition and the Openbox Edition."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Matterhorn brings integrated open source video distribution

September 8, 2010

This article was contributed by Nathan Willis

The Opencast project unveiled the 1.0 release of Matterhorn at the end of August. Matterhorn is an integrated video recording, processing, and distribution platform. Its primary goal is to let educational institutions set up a streamlined process for recording video in classrooms and releasing it in web or podcast form, but anyone who captures video on a recurring basis may find it useful.

The 1.0 code is available for download both as source and as pre-packaged "all in one" binaries for Linux servers. Matterhorn is written in Java, implemented on top of the Apache Felix service framework (which is included in the binary distribution). The all-in-one distribution is packaged and configured to be run from /opt/matterhorn on an Apache server. There are pre-install scripts for RPM-based and Debian-based systems that install a handful of third-party utilities, but otherwise the binary distribution is meant to be run out of the box, serving up its administration interface on http://localhost:8080.

Alternatively, Matterhorn can be installed from source, using the Apache Maven build system. In addition, the Matterhorn platform includes several components — from program scheduling and video transcoding to feed publishing — which can be installed and run on separate servers. This design allows institutions to deploy multiple "worker" nodes to scale up a Matterhorn system for increased video processing volume. Separate instructions for multiple-server installation are available on the project's documentation site.

The documentation is quite thorough, on top of which the project has taken pains to make the system friendly for non-IT-experts to manage; in addition to coming "pre-configured" in the binary distribution, almost all user-facing configuration options are exposed in the web interface of the administration server. Lower-level options like database settings and paths to system executables still need to be edited in text configuration files, but an inexperienced developer can unpack the binary distribution and run it relatively painlessly. For those evaluating the system for their own institution, the documentation even provides a detailed list of the recommended hardware for video capture, encoding, and media distribution.

A bird's eye view of Matterhorn

[Recordings list]

Matterhorn 1.0 is built to be an end-to-end video deployment system, with a workflow that consists of four basic components. In the video capture and administration component, users can upload video files that they have produced elsewhere, or schedule automated capture from pre-defined video resources, such as lecture hall cameras. The list of configured video capture resources is maintained inside Matterhorn, so that teachers can schedule a recording by date, time, and room, including recurring classes.

The ingest and processing component takes uploaded video content and processes it in a variety of ways. It can be automatically transcoded for several resolutions and codecs, automatically segmented on scene transitions, watermarked or branded with video overlays, or (in the case of presentation slide videos) scanned with optical character recognition (OCR) to extract text. OCRed slide text can then be used as captioning for screen readers. Caption text can also be added for any video content by human transcribers. The system also allows each video to be marked as "hold for review" so that an administrator can examine it for quality or wait to publish it until it has been submitted to a transcriber and fully captioned.

[Video player]

The distribution management component consists of publishing tools, including local storage and web delivery, uploading to public off-site services (e.g., YouTube), DVD-ready output, and RSS feeds designed to integrate with other content management systems (CMSes). Because of the educational focus, the supported CMSes include Sakai, Blackboard, Moodle, and other academic courseware suites. The engagement tools component includes a web video player, the search interface, and annotation tools for the end user.

The recommended hardware configuration is modest, consisting of an MPEG encoder board, an ITX motherboard-based machine, and SATA storage. A separate VGA capture unit is listed as well, presumably to capture overhead projector content separately from live video. The result is roughly equivalent to a standard desktop-caliber PC in price. Intel's low-power Atom chip was initially the target, but had to be dropped from the recommendation in order to keep up with full-rate video.

Accessibility all around

Perhaps the most striking thing about Matterhorn is its integrated accessibility support. Support for human transcription is built in, OCR (via the open source Ocropus library) is built in, and the development trunk includes work to enable automatic speech-to-text transcription through CMU Sphinx. Seven captioning formats are supported, to enable compatibility with the widest range of video players and hosting services. The embeddable web video player itself is accessible with keyboard and screen reader commands.

Few other open source video delivery systems come close. The hybrid open/closed source Kaltura video system, for example, allows for end-user annotations that could be used to store transcriptions, but it is not part of the automated workflow. A commercial plugin is available for human transcription, but it is tied to a specific, paid transcription service.

Of course, as accessibility experts will always tell you, making video accessible for sight-impaired users actually increases accessibility for sighted users as well. Matterhorn's caption text is fully searchable, via both the site search engine and within the embedded video player. The video player will display updated captions as a user scrolls back and forth in the timeline, making it easier to locate points of interest in a video.

Educational use

To educators, the lecture scheduling and courseware distribution features are important. If the institution has dedicated video recording hardware in each classroom, setting up a recurring recording is as straightforward as selecting its weekly times in a web calendar widget. The administration interface maintains video recordings in "series" that share basic metadata. For a recurring lecture series, this means that each new recording is automatically tagged with the correct metadata when it is captured, before automatic processing and publishing to the eventual feed or output system.

From the student's perspective, one of the more interesting features is the two-pane video player, which allows two time-synchronized videos to play together: a camera on the speaker, and a second track showing presentation slides. Whether a specific recording represents slides or a live speaker is flagged when the video is uploaded; slide tracks are automatically queued for OCR scanning. Matterhorn also attempts to automatically detect abrupt changes in a slide video — which generally signal slide transitions — and creates thumbnails of each slide to serve as video bookmarks.

Integration with Moodle, Sakai, and other courseware projects is almost a given. The Matterhorn video player is embeddable, so a course instructor or administrator can simply drop it into a page, but the RSS output allows a Matterhorn video series to be automatically synchronized with Moodle or Sakai course materials, or made available to other applications. The documentation mentions Apple's iTunes U educational service, but any podcast application will work.

Outlook

The turn-key design of Matterhorn will likely appeal to schools, who may not wish to expend time training dozens or hundreds of teachers to record and encode video on their own. That said, there is nothing in Matterhorn's design that is unique to the education market. Any group, community, or project that produces video content regularly could benefit from the ability to automate the process to some degree, not to mention the niceties of free slide OCR and thumbnailing — dozens of open source conferences and events could benefit from those features alone.

Considering how often web video issues are in the news these days, it is genuinely surprising that there are not more open source video workflow and distribution projects out there. The aforementioned Kaltura is fairly well known, although only a subset of its products are open source (the familiar "Community Edition" play). But Kaltura is actually more focused on entities building stand-alone video delivery sites, such as their own branded "channels." Hence, a great deal of emphasis is placed on search engine optimization and web advertising frameworks in Kaltura's feature set.

The Plumi project is closer in scope to Matterhorn, providing a decent video workflow from upload to distribution, but it works entirely within Plone sites. One of Matterhorn's advantages over Plumi is its CMS-neutral output capabilities.

Now that version 1.0 is out the door, it will be interesting to see how the education and non-education markets respond. The Opencast project Planet feed shows several real-world universities using Matterhorn for video distribution, including many in Europe and North America that are listed as sponsoring partners in the project.

Plans are well underway for the next development cycle, which may integrate the automated speech-to-text transcription feature, and will definitely feature the OpenCaps web-based caption editing tool. From a fully open source video distribution toolchain to a fully accessible web video player, Matterhorn offers much that schools — and open source projects — could find educational.

Comments (6 posted)

Brief items

bzr 2.2.0 released

Version 2.2.0 of the bzr distributed source code management system is available. "This is primarily a bugfix and polish release over the 2.1 series, with a large number of bugs fixed (>120), and some performance improvements."

Full Story (comments: none)

Cairo 1.10.0 available

Version 1.10.0 of the Cairo graphics library has finally been released. "One of the more interesting departures for cairo for this release is the inclusion of a tracing utility, cairo-trace. cairo-trace generates a human-readable, replayable, compact representation of the sequences of drawing commands made by an application. This can be used for inspecting applications to understand issues and as a means for profiling real-world usage of cairo." The profiling feature has evidently been used to improve performance in a number of areas. There is also improved printing support, better 16-bit buffer support, and better use of hardware acceleration.

Full Story (comments: 1)

Firefox and SeaMonkey updates released

The Mozilla project has released Firefox 3.6.9 and 3.5.12 and SeaMonkey 2.0.7. These updates fix a relatively long list of scary security problems; the Firefox 3.6.9 update also adds support for X-Frame-Options, which can be used by web sites to prevent their content from being trapped inside another site's frames.

Comments (none posted)

Thunderbird 3.1.3 and 3.0.7 security updates now available

Mozilla has released Thunderbird 3.1.3 and Thunderbird 3.0.7 with security and stability updates. See the release notes for details (3.1.3 and 3.0.7).

Full Story (comments: none)

GDB 7.2 released

Version 7.2 of the GDB debugger is out. New features include support for the D language, some C++ improvements, better Python support, better tracepoint support, and more; see the announcement for the details.

Full Story (comments: 1)

Mozilla Labs Gaming launches

The Mozilla Labs Gaming project has announced its existence. "Modern Open Web technologies introduced a complete stack of technologies such as Open Video, audio, WebGL, touch events, device orientation, geo location, and fast JavaScript engines which make it possible to build complex (and not so complex) games on the Web. With these technologies being delivered through modern browsers today, the time is ripe for pushing the platform. And what better way than through games?" The project is starting with a competition to see who can build the best web-based game.

Comments (5 posted)

PostgreSQL 9.1alpha1 released

Most PostgreSQL users are waiting for the 9.0 release; meanwhile, the first 9.1 alpha release has just been announced. There are a number of enhancements in performance, statistics gathering, XML support, and more; click below for the details.

Full Story (comments: none)

Vignatti: X Census (for 1.9)

Tiago Vignatti has put together a report on the development of X.org 1.9. In the tradition of the kernel statistics reported on LWN, and the more recent GNOME census, he ranks developers and employers based on the number of changes made to various pieces of the X.org tree during the development of 1.9 (April 2 to August 20). The statistics are broken up along functional lines into several categories: X implementation, X input drivers, user space video drivers, Pixman, X11 conformance testing, and X documentation. "Of course lines of code and changeset are far from being a good metric to see actually how the development happened. But still, it does represents something."

Comments (63 posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Introduction to ClamAV's Low Level Virtual Machine

The ClamAV project has put up an introductory post showing how to make use of an interesting new feature: integration of an LLVM just-in-time compiler for malware-scanning bytecode. "Here's a case study to see how ClamAV bytecode can come in handy (this is an integer overflow vulnerability in an old version of OpenOffice, CVE-2008-2238)."

Comments (none posted)

Graesslin: Driver dilemma in KDE workspaces 4.5

Martin Graesslin looks at problems with the interaction between KWin and some graphics drivers. "Now that I have explained all our checks we did to ensure a smooth user experience, I want to explain how it could happen that there are regressions in 4.5. In 4.5 we introduced two new features which require OpenGL Shaders: the blur effect and the lanczos filter. Both are not hard requirements. Blur effect can easily be turned off by disabling the effect and the lanczos filter is controlled by the general effect level settings which is also used for Plasma and Oxygen animations. Both new features check for the required extensions and get only activated iff the driver claims support for it. So everything should be fine, shouldn't it? Apparently not when it comes to the free graphics drivers (please note and remember: we do not see such problems with the proprietary NVIDIA driver!)." (Thanks to Jos Poortvliet)

Comments (78 posted)

systemd for Administrators, Part II

Lennart Poettering has posted the second installment in his series on system administration with systemd. "In systemd we place every process that is spawned in a control group named after its service. Control groups (or cgroups) at their most basic are simply groups of processes that can be arranged in a hierarchy and labelled individually. When processes spawn other processes these children are automatically made members of the parents cgroup. Leaving a cgroup is not possible for unprivileged processes. Thus, cgroups can be used as an effective way to label processes after the service they belong to and be sure that the service cannot escape from the label, regardless how often it forks or renames itself. Furthermore this can be used to safely kill a service and all processes it created, again with no chance of escaping."

Comments (none posted)

Page editor: Jonathan Corbet

Announcements

Non-Commercial announcements

CodePlex.com donates $25,000 to Mercurial project

Microsoft's CodePlex.com project hosting site has announced the donation of $25,000 to support the development of the Mercurial source code management system. "While Team Foundation Server is still the most used version control system on CodePlex, our users are clearly benefiting from having access to Mercurial for their open source projects. The CodePlex team is happy to be able to offer our community of more than 17,000 projects a choice. With Mercurial as an important feature of CodePlex, we are excited to be making this donation to help support the Mercurial project."

Comments (66 posted)

Legal Announcements

Mozilla Public License Alpha 2

The second alpha version of the revised Mozilla Public License has been posted; the text has been annotated to make it relatively easy to see what has been changed. "The most significant change in this draft is the patent language. We have made it easier to read but also, we hope, better at protecting communities who choose to use the MPL. It should also have the side effect of making the license Apache-compatible, allowing projects licensed under the next MPL release to include Apache-licensed code in their code bases."

Comments (none posted)

Articles of interest

Morgan: Finding more women to speak at Ohio LinuxFest: success!

On her blog, Mackenzie Morgan reports on efforts to increase the number of women speakers at Ohio LinuxFest. Due to the outreach, the number of women speakers went from five of 31 last year to 14 of 38 this year. "Recognising the various concerns women speakers can face, we tried to specifically address potential issues in the email sent to women-focused mailing lists. Some of these known issues include lack of confidence in new speakers, not being clear what the intended audience is, or the "imposter syndrome," where someone doesn't recognize that they are qualified to speak on a topic. The woman to woman dialog made the difference."

Comments (none posted)

Resources

FSFE - Newsletter September 2010

The September edition of the Free Software Foundation Europe (FSFE) Newsletter is out. "In this edition we are covering Free Software in education, distributed Free Software solutions as alternative to centralised services, some ways to celebrate what we -- the Free Software community -- already achieved, and how you can participate in the European football championship even if you are not interested in football."

Full Story (comments: none)

LPI Launches Certified Solution Provider Program

The Linux Professional Institute (LPI) has announced a new partner program. "The Linux Professional Institute Certified Solution Provider program (LPI-CSP) will be offered to prospective partners this year in Latin America, North America and Spain and in 2011 within other regions around the world."

Full Story (comments: none)

LPI Launches "Community Corner" with Jon "maddog" Hall

The Linux Professional Institute (LPI) has announced an upcoming Community Corner blog. "The blog will feature commentary and regular contributions from Jon "maddog" Hall, a widely recognized mentor and leader in the programming community and a longtime and respected champion of Free and Open Source Software. Mr. Hall is also Executive Director of Linux International."

Full Story (comments: none)

Contests and Awards

Open Source and innovation: Demo Cup at Open World Forum

The Demo Cup is an international competition for Open Source projects, organized as part of the Open World Forum in Paris on October 1, 2010. The list of finalists has been announced. "These 13 finalists will compete against each other on 1 October 2010 from 2:00pm to 4:00pm at the Open World Forum. Each will have exactly eight minutes to convince the Jury and the audience, by putting forward a practical demonstration of how their product might become a game-changer in their marketplace."

Full Story (comments: none)

3 finalists "Best Object Databases Lecture Notes"

ODBMS.ORG and a jury of members of the ODBMS.ORG Experts Board have selected three finalists for the "Best Object Databases Lecture Notes" Award 2010. "The Awards recognize the most complete and up to date lecture notes on Object Databases, that have been, or have strong potential to be, instrumental to the teaching of theory and practice in the field of objects and databases."

Full Story (comments: none)

Calls for Presentations

2010 LLVM Developers' Meeting Announcement and Call for Speakers

The fourth annual LLVM Developers' Meeting will be held on November 4, 2010 in San Jose, California, USA. "We are looking for speakers for this year's meeting. Topics can include the LLVM or Clang infrastructure or new uses of LLVM or Clang. If you are interested in presenting at this year's LLVM Developers' Meeting, please submit your talk proposal to us by September 22, 2010."

Full Story (comments: none)

Call for Papers for SCALE 9x opens

The call for papers for the 2011 Southern California Linux Expo is open until December 13, 2010. The conference is in Los Angeles, California, February 25-27, 2011. "With the continued growth of Open Source software in business, SCALE will address the need for system management knowledge by adding a speaker track focused on system administration. Combined with the two specialized tracks -- Beginners and Developers -- and the two general interest tracks, SCALE 9x will have content for every attendee."

Full Story (comments: none)

Upcoming Events

End User Summit Program announced

The Linux Foundation has announced the program for the 2010 End User Summit. "The Summit will take place October 12-13, 2010 at the Hyatt Regency Jersey City in New Jersey and will provide end users and kernel developers a direct connection to one another for advancing the features most critical to using Linux in the enterprise. By bringing together sophisticated end users and senior Linux developers, The Linux Foundation hopes to accelerate innovation and adoption of Linux in the most cutting-edge environments. Companies from financial services, healthcare, energy and government, among other industries, will be attending the invitation-only event."

Full Story (comments: none)

Paris Mini-DebConf 2010

Debian France is organizing the first Paris Mini-DebConf. "This event will welcome everyone with an interest in the Debian project for a series of talks and workshops about Debian. Allowing users, contributors and developers to meet each other is also one of the main goals of this event. We are especially looking forward to meeting Debian developers from around the world :)" The conference takes place in Paris, October 30-31, 2010.

Full Story (comments: none)

Events: September 16, 2010 to November 15, 2010

The following event listing is taken from the LWN.net Calendar.

September 16-18: X Developers' Summit (Toulouse, France)
September 16-17: Magnolia-CMS (Basel, Switzerland)
September 16-17: 3rd International Conference FOSS Sea 2010 (Odessa, Ukraine)
September 17-18: FrOSCamp (Zürich, Switzerland)
September 17-19: Italian Debian/Ubuntu Community Conference 2010 (Perugia, Italy)
September 18-19: WordCamp Portland (Portland, OR, USA)
September 18: Software Freedom Day 2010 (Everywhere, Everywhere)
September 21-24: Linux-Kongress (Nürnberg, Germany)
September 23: Open Hardware Summit (New York, NY, USA)
September 24-25: BruCON Security Conference 2010 (Brussels, Belgium)
September 25-26: PyCon India 2010 (Bangalore, India)
September 27-29: Japan Linux Symposium (Tokyo, Japan)
September 27-28: Workshop on Self-sustaining Systems (Tokyo, Japan)
September 29: 3rd Firebird Conference - Moscow (Moscow, Russia)
September 30-October 1: Open World Forum (Paris, France)
October 1-2: Open Video Conference (New York, NY, USA)
October 1: Firebird Day Paris - La Cinémathèque Française (Paris, France)
October 3-4: Foundations of Open Media Software 2010 (New York, NY, USA)
October 4-5: IRILL days - where FOSS developers, researchers, and communities meet (Paris, France)
October 7-9: Utah Open Source Conference (Salt Lake City, UT, USA)
October 8-9: Free Culture Research Conference (Berlin, Germany)
October 11-15: 17th Annual Tcl/Tk Conference (Chicago/Oakbrook Terrace, IL, USA)
October 12-13: Linux Foundation End User Summit (Jersey City, NJ, USA)
October 12: Eclipse Government Day (Reston, VA, USA)
October 16: FLOSS UK Unconference Autumn 2010 (Birmingham, UK)
October 16: Central PA Open Source Conference (Harrisburg, PA, USA)
October 18-21: 7th Netfilter Workshop (Seville, Spain)
October 18-20: Pacific Northwest Software Quality Conference (Portland, OR, USA)
October 19-20: Open Source in Mobile World (London, United Kingdom)
October 20-23: openSUSE Conference 2010 (Nuremberg, Germany)
October 22-24: OLPC Community Summit (San Francisco, CA, USA)
October 25-27: GitTogether '10 (Mountain View, CA, USA)
October 25-27: Real Time Linux Workshop (Nairobi, Kenya)
October 25-27: GCC & GNU Toolchain Developers' Summit (Ottawa, Ontario, Canada)
October 25-29: Ubuntu Developer Summit (Orlando, Florida, USA)
October 26: GStreamer Conference 2010 (Cambridge, UK)
October 27: Open Source Health Informatics Conference (London, UK)
October 27-29: Hack.lu 2010 (Parc Hotel Alvisse, Luxembourg)
October 27-28: Embedded Linux Conference Europe 2010 (Cambridge, UK)
October 27-28: Government Open Source Conference 2010 (Portland, OR, USA)
October 28-29: European Conference on Computer Network Defense (Berlin, Germany)
October 28-29: Free Software Open Source Symposium (Toronto, Canada)
October 30-31: Debian MiniConf Paris 2010 (Paris, France)
November 1-2: Linux Kernel Summit (Cambridge, MA, USA)
November 1-5: ApacheCon North America 2010 (Atlanta, GA, USA)
November 3-5: Linux Plumbers Conference (Cambridge, MA, USA)
November 4: 2010 LLVM Developers' Meeting (San Jose, CA, USA)
November 5-7: Free Society Conference and Nordic Summit (Gothenburg, Sweden)
November 6-7: Technical Dutch Open Source Event (Eindhoven, Netherlands)
November 6-7: OpenOffice.org HackFest 2010 (Hamburg, Germany)
November 8-10: Free Open Source Academia Conference (Grenoble, France)
November 9-12: OpenStack Design Summit (San Antonio, TX, USA)
November 11: NLUUG Fall conference: Security (Ede, Netherlands)
November 11-13: 8th International Firebird Conference 2010 (Bremen, Germany)
November 12-14: FOSSASIA (Ho Chi Minh City (Saigon), Vietnam)
November 12-13: Japan Linux Conference (Tokyo, Japan)
November 12-13: Mini-DebConf in Vietnam 2010 (Ho Chi Minh City, Vietnam)
November 13-14: OpenRheinRuhr (Oberhausen, Germany)

If your event does not appear here, please tell us about it.

Audio and Video programs

Embedded Linux Conference videos available

Michael Opdenacker has announced the availability of videos from this year's Embedded Linux Conference, which was held in San Francisco in April. The slides and Theora video are available for most, if not all, of the talks. Opdenacker and the Free Electrons team do the community a great service by doing the work to record and transcode the videos. "If you are interested in such talks, what about joining the European edition of the conference? It will take place in Cambridge (UK), on October 27-28, and will be colocated with the GStreamer conference (October 26). See http://www.embeddedlinuxconference.com/elc_europe10/ and http://gstreamer.freedesktop.org/conference/ for details."

Full Story (comments: none)

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds