Weekly Edition for October 21, 2010

How not to recognize free hardware

By Jonathan Corbet
October 20, 2010
Your editor has often written about the value of open, hackable devices. Users of such hardware can customize it to their needs, remove antifeatures, improve security, and make it do things which the manufacturer never contemplated. Open hardware is thus more valuable than the locked-down variety. Unfortunately, open hardware is discouragingly rare; it can also be hard to find even when it exists. Open and closed variants of a specific device are often sold under similar (or identical) names and packaging, making buying the right one a risky affair.

There would be obvious value to a mechanism by which hardware purchasers could know which products are truly open without having to dig around on the net or risk buying the wrong device. There is clearly interest in this information; look at the extensive lists of routers which can run distributions like OpenWRT, for example, or the long list of Android phones offered through online auction sites which are marked as being rooted. But hardware is, for all practical purposes, never labeled as being open in a useful way, even when the hardware is indeed open. There is an information gap here: manufacturers are not informing their customers of an attribute which could make their products more appealing.

With that in mind, your editor looked at the Free Software Foundation's recently-announced endorsement criteria for its new "Respects Your Freedom" mark. Attempts to identify free-software-friendly hardware have come and gone in the past, but the FSF might just have the staying power to make something stick. Unfortunately, in your editor's opinion, this initiative is flawed in a way which has doomed a number of FSF initiatives. Someday we may have a mark which makes open hardware easy to identify, but it won't be this one.

There is much that is good in the FSF's criteria. At the beginning is the obvious requirement that the device should work with 100% free software - though even the FSF makes exceptions for "auxiliary processors." So a cellular handset, with a closed baseband processor, should still qualify. It must also be possible to replace the running software using only free tools; devices with locked-down memory or cryptographic signature verification need not apply. These requirements are an obvious description of what's required for a piece of hardware to be truly open.

Interestingly, the device is allowed to implement DRM mechanisms - but only in free software, so the DRM can be removed by a suitably skilled and motivated user. On the other hand, the device is not allowed to phone home with identity or location information except when the user has asked for that behavior. One could argue that, for the purposes of judging the openness of hardware, phoning home could be seen in the same light as DRM: OK as long as it can be ripped out of the device. If the goal is "respects your freedoms," though, it is not that hard to make a case for mandating more respectful treatment of personal information from the outset. So far so good.

Consider, though, this aspect of the "100% free software" requirement:

This applies to all software that the seller includes in the product, or provides with the product, or recommends for use in conjunction with the product, or steers users towards installation in the product... By way of explanation, a general-purpose facility for installing other programs, with which the choice of programs to install comes directly from the user, is not considered to steer users toward anything in particular. However, if the facility typically suggests installation of particular programs, then it steers users towards those programs.

The FSF has never been content to work toward the creation of free software and advocacy for its use; it has also made an overt effort to ensure that, like an Orwellian "unperson," proprietary software is never even mentioned. So a mobile device running a system like MeeGo might qualify for the FSF's endorsement, assuming it's open, lacking binary drivers, etc. But if the application installer lists a popular proprietary Flash plugin or network telephony application, it may be deemed to be "steering users" toward non-free code. That would cost it the endorsement, despite the fact that it's a fully open and respectful device.

Let it be said: your editor does not believe that "respect your freedoms" includes hiding information about available options. Free software should be able to win on its own merits; it doesn't require attempts to create ignorance about proprietary alternatives. Viewing users as needing to be "steered" in the right direction does not seem respectful.

This requirement, alone, is probably enough to drive otherwise friendly manufacturers away from seeking endorsement for any kind of device which will have an associated application store. But it gets worse:

Any product-related materials that mention the FSF endorsement must not also carry endorsements or badges related to proprietary software, such as "Works with Windows" or "Made for Mac" badges, because these would give an appearance of legitimacy to those products, and may make users think the product requires them.

It's a rare manufacturer indeed who will not mark a Windows-compatible product as being compatible with Windows. A requirement like this would force such a manufacturer to sell the same product in two different packages - an expense that is unlikely to be made up through extra sales of FSF-endorsed devices. Manufacturers cannot usually afford to ignore the existence of large, lucrative markets, so they will inevitably decide to do without the FSF endorsement, even if their product would otherwise qualify.

Finally, the criteria require "cooperation with FSF and GNU public relations," described this way:

The seller must use FSF approved terminology for the FSF's activities and work, in all statements and publications relating to the product. This includes product packaging, and manuals, web pages, marketing materials, and interviews about the product. Specifically, the seller must use the term "GNU/Linux" for any reference to an entire operating system which includes GNU and Linux, and not mislead with "Linux" or "Linux-based system" or "a system with the Linux kernel." And the seller must talk about "free software" more prominently than "open source."

This requirement has little to do with respect for a user's freedoms and everything to do with promoting the FSF's particular agenda and world view. To obtain the FSF's endorsement for a specific device, a company must train all of its representatives in the use of Stallmanesque newspeak. It is a mixing of two entirely different objectives - promoting open hardware and promoting the FSF's world view - that seems likely to be detrimental to both. Companies have learned to be careful in how they use each other's trademarks, but that does not extend to wider restrictions on "approved terminology." One can easily see a corporate lawyer balking at such a requirement; as a result, an endorsement mark which would have carried the FSF's name and URL will not appear.

We owe a lot to the Free Software Foundation for helping to make the free software explosion happen when it did. Without the FSF, things would have happened differently and more slowly, and the world would certainly have been worse. The FSF still serves an important role; we need a no-compromises advocate for free software out there. But, by conflating free software with control over language and options, the FSF often seems to work counter to its stated goals. That is certainly the case here; your editor predicts that the number of products carrying the FSF's endorsement will be easily counted without running out of fingers. An opportunity to recognize and promote freedom-supporting hardware has been lost, and that is a sad thing.

Comments (74 posted)

UTOSC: Applying open source ideals to more than software

By Jake Edge
October 20, 2010

Over the last few years, Red Hat has taken the lead in investigating ways to apply the concepts behind free software to various other fields. That has led to things like the book The Open Source Way and its companion web site. Karsten Wade of Red Hat's community architecture team came to the Utah Open Source Conference to formalize some of the ideas that underpin open source and describe how they can be applied more widely. It is, he said, an effort to "decouple open source from technology", so that other people can "remap" those ideas onto other fields.

The underlying idea behind free and open source software is the four freedoms embodied in GNU's free software definition, and the idea that "none of us are free until all of us are free", Wade said. There is very little difference between the free software and open source camps, but there is an "artificially constructed fight" between the two. This conflict is sustained, largely by the press, to make it look like there is some "deep inherent argument" between the supporters of each.

[Karsten Wade]

But if we remove all of the labels, both sides agree on 85-95% of the issues, he said, and the differences are largely in what the priorities are. The idea of "free software" has worked well for hackers, while "open source" has been popular with businesses, as "it resonates with them". That has led to there being a really strong brand around the open source term, which is why Red Hat (and others) talk about "the open source way". They are "riding on the coattails of a well-known brand", Wade said.

Certain elements must be present in any open source endeavor. There needs to be an infrastructure set up that fosters participation, as well as an infrastructure to share the results of any work. Obviously, there has to be something to share, along with people to share it, i.e. participants. These elements are also present in what is known as a "community of practice"—a sociological concept that shares much with the open source way. Looking at communities of practice can help to understand the communities that already exist for FOSS, as well as to help shape new communities as they arise.

A community of practice is a group of people that come together because they share a concern or passion. There is a specific domain that the community of practice operates in—the concern or passion of its members—along with a "practice", which is how they address problems in that domain. These are much like—perhaps identical to—the requirements for using open source techniques. In addition, communities of practice have a number of principles that help separate them from other kinds of groups.

It is important that a community of practice be designed for evolution—it is not just thinking about what is going on now, but allowing for new ideas that will help define its future. A dialog between those inside and outside of the community is important, so that the group doesn't become insular. There should be different levels of participation available, which will allow anyone who is not harming the group to still do what they want: "if they want to fold napkins, let them". Often these peripheral participants learn how to get things done within the group by finding out who to talk to; it is a stepping stone.

Communities of practice also develop in both public and private spaces. It is essential that they are governed in public, but private talks are important as well. People need to be able to get together privately, over drinks for example, to discuss things outside of the public sphere. These communities also focus on value, because "people want to belong to something that makes a difference". If these things sound familiar, it is because FOSS projects are almost always communities of practice.

Wade used barn-raising as an example of where some of the ideas behind open source come from. If you want to get people together to build a barn, you don't just stack up wood, cement bags, and shovels, then ask everyone to dive in. You first need to survey the land, build the foundation, and get it ready for lots of people to participate. "The infrastructure had to be there so that everyone could do the common work", he said. FOSS projects are much the same way.

Another analogy he used was that of musicians coming together on the village green. Each musician brings their style and tunes to the common space, and each is ready to learn from what the others bring. As the "jam" progresses, there is a friendly competition between the participants to try to outdo each other. That is similar to how FOSS project participants work together.

While many believe that FOSS works on the "Tom Sawyer" model, where one group or organization takes advantage of the work of others (much as Tom took advantage of his friends' fence-painting work), that is not the open source way, Wade stressed. Red Hat and others are often accused of that, but that's "not how it goes"; the community will notice freeloaders. Some may get away with it for a while, but eventually it will be noticed.

Michael Tiemann's experience showing his daughter how to work the resonant pendulum at San Francisco's Exploratorium was Wade's final analogy. In order to move that pendulum, it takes regular, small tugs on the weak magnet, but eventually the 350-pound pendulum will be swinging four feet in either direction. In FOSS, Wade said, "sometimes it seems like we have to wrap a rope around it and give it a huge tug", but we don't need that, and little incremental things (like regular releases) can make all the difference.

As the talk wound down, Wade surveyed the audience for additional examples of communities of practice and open source techniques being used in other fields. He also showed two videos describing how two very different fields (seed banks and film-making) were recognizing that open source ideals and techniques can be - and are being - used in their work. The open source way is a powerful tool that has been in use for a very long time, and in a wide variety of places. Talks like this one can only help to spread that word, so that it can be applied even more widely.

Comments (none posted)

New releases from MySQL descendants Drizzle and MariaDB

October 20, 2010

This article was contributed by Nathan Willis

For years, MySQL has been the highest-profile open source relational database system, but with the Sun (and, later, Oracle) acquisition of MySQL's corporate parent MySQL AB, the development community has split in several directions. Now, a few years later, both of the leading community-driven forks of MySQL, Drizzle and MariaDB, have made important new releases. Drizzle, the light-and-lean database system designed for web and cloud applications, unveiled its first beta release — complete with MySQL migration tools — and MariaDB, the full-featured database system positioned as a direct competitor to MySQL, made a "gamma" release, and picked up an important endorsement.

Drizzle beta

Drizzle build 1802 was released at the end of September, and was dubbed the "Drizzle7 beta release" in the accompanying announcement. In addition to the usual assortment of speed-ups, bug fixes, and new options, three major features grabbed the headlines. One is the introduction of Sphinx-based documentation. Sphinx is a documentation system based around the reStructuredText markup format, which is intended to make it easier to integrate application documentation inline within the source code itself. Indeed, Drizzle is taking advantage of this feature, storing its documentation in its source tree.

Of more importance to database users, however, are two features that simplify the transition from MySQL to Drizzle. First, the drizzledump backup and restore utility now has the ability to detect when it is run against a MySQL database, and export a dump of the database in a Drizzle-compatible format. For the slightly more daring, it can also dump the MySQL database and import the data and structures directly into a Drizzle database in a single command. Either way, it eliminates the need to run a costly conversion between the two applications.

Second, Drizzle can now speak MySQL's native TCP/IP protocol. By default, Drizzle uses the same TCP port reserved for MySQL, 3306. Future plans are to develop a separate Drizzle protocol running on TCP port 4427, but for the time being, the ability to use MySQL's network protocol has the effect of making it much easier to port applications written for MySQL over to Drizzle.

This feature includes the network protocol only; Drizzle does not support Unix sockets as a connection method, which is part of the project's stripped-down philosophy. Drizzle was started by former MySQL architect Brian Aker in 2007 as a response to what he felt was MySQL's increasing focus solely on enterprise applications, abandoning many of the project's original constituents, web application developers. Aker has said on several occasions that he wants to develop Drizzle as a community project, in contrast to the final days of his involvement with MySQL, when virtually all of the MySQL developers were employed working on the project full-time, and patches from outsiders dropped to virtually zero.

Drizzle adopted a smaller, faster "microkernel" architecture, stripping out advanced functionality such as views, triggers, and query caching, while pushing many of the remaining functions (such as logging or authentication) into pluggable modules. In several places, it simplifies MySQL's multi-faceted design, such as offering only one type of binary blob, specifying UTF-8 as the text format, and UTC as the only timestamp "time zone" format. The result is a faster database management system around one-third the size of MySQL, and one that Aker hopes will be easier for new developers to understand and contribute to.

The project allows all contributors to retain their own copyright on their contributions. The project uses Bazaar as its source code management system, making incremental releases every two weeks. Aker stated in 2009 that there were more than 100 contributors to the project, which is roughly in line with the size of the Drizzle-developers team on Launchpad. According to Launchpad, there are 325 active branches under development, owned by 66 developers or teams. Also telling is that the developers hail from a variety of different employers, including Canonical, Google, Oracle/Sun, and Rackspace.

In addition to the community focus, Drizzle is optimized for "cloud" and web application usage in a number of ways. As mentioned earlier, it provides TCP/IP as its only connection method. It is also optimized for 64-bit processors and "massive concurrency" over multi-core and multi-CPU machines, including sharding across multiple nodes. Finally, it is built for Unix-like servers only (offering no Windows version), and supports external stored procedures in scripting languages like Ruby, Perl, and PHP.

MariaDB gamma

While Drizzle is an attempt to hone the MySQL code base into a lean-and-mean database manager, MariaDB takes nearly the opposite approach, building a system with an array of high-end options well suited for enterprise usage. MariaDB was started by MySQL creator Michael "Monty" Widenius in 2009, with the goal of developing a community-driven project that could serve as a drop-in replacement for the official MySQL.

The 5.2.2-gamma release of MariaDB was also announced at the end of September, and is described as a "release candidate" marking the end of the 5.2 development cycle. The list of new features is tellingly longer than Drizzle's, including a reworked version of the default InnoDB storage engine and two entirely new storage engines: OQGRAPH, which is designed for storing tree-like structures and complex graphs, and Sphinx, a text-oriented storage engine (this Sphinx bears no relation to the Sphinx documentation system used in Drizzle; chalk it up to coincidence).

Also new is support for virtual columns (fields containing expressions that are evaluated upon retrieval), segmented key caches for the MyISAM engine (which allow multiple threads to fetch keys simultaneously without locking the entire cache), the ability to CREATE tables with storage-engine-specific attributes, an extended user statistics system, and pluggable authentication.
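To make the virtual-column idea concrete, here is a small sketch using SQLite's generated columns, which behave much like the feature described for MariaDB 5.2: the expression is evaluated when the row is read rather than stored on disk. Note that this is SQLite syntax (available in SQLite 3.31 and later), not MariaDB's; the table and column names are made up for illustration.

```python
import sqlite3

# Concept sketch only: SQLite generated columns stand in for MariaDB's
# virtual columns here; MariaDB's actual syntax differs.
conn = sqlite3.connect(":memory:")
if sqlite3.sqlite_version_info >= (3, 31, 0):
    conn.execute(
        "CREATE TABLE prices ("
        " net REAL,"
        " tax REAL,"
        # "gross" is never stored; it is computed on retrieval.
        " gross REAL GENERATED ALWAYS AS (net + tax) VIRTUAL)")
    conn.execute("INSERT INTO prices (net, tax) VALUES (100.0, 19.0)")
    gross = conn.execute("SELECT gross FROM prices").fetchone()[0]
else:
    gross = None  # generated columns unavailable in this SQLite build
```

On a sufficiently recent SQLite, selecting `gross` yields 119.0 without the value ever having been written to the table.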

MariaDB 5.2.2 is based on the MySQL 5.1.50 source code, but several of the new features mentioned above — such as extended user statistics and segmented key caching — come from other sources. On top of that, some of the new functionality is still in development for Oracle's MySQL, including virtual columns. The official MySQL's pluggable authentication system is available only to Oracle customers with commercial support contracts. MariaDB comes with an authentication module that allows the system to use existing MySQL user accounts, thus easing the transition between the two products.

On the whole, however, MariaDB aims at compatibility with MySQL. Widenius's new venture is partly an attempt to rebuild the community-based development approach that MySQL enjoyed in its early days, and partly an attempt to build a different business model around database development. Unlike MySQL AB, which was sold to Sun in a deal that he engineered, Widenius describes his new business Monty Program AB as a "hacker business model" where revenue from its support contracts goes directly back into maintaining the code. He also founded the non-profit Open Database Alliance with other MySQL service providers to attract various independent support providers and database resellers.

Like Drizzle, the MariaDB source code is hosted at Launchpad. In contrast, however, contributors must sign a contributor agreement that assigns joint ownership of the contribution to Monty Program AB. Monty Program AB employees also review all patches and contributions and approve membership in the Maria-captains team that has commit rights. According to Launchpad, there are 21 Maria-captains members, and 149 in the larger Maria-developers group, all working on 62 active branches.

Despite intentionally following the official MySQL development series, MariaDB has started to attract attention on its own. Last week, a number of former MySQL executives launched SkySQL, a database support company competing head-to-head with Oracle's services - including support for MariaDB alongside support for MySQL. SkySQL executive Kaj Arno said: "If you are a MySQL customer and your bug is fixed in MariaDB, I think it might make sense to move," though he added that encouraging customers to migrate was not the company's goal.

Lessons learned?

To say that Oracle's acquisition of Sun and the open source projects it stewarded has been poorly received by the community would be quite the understatement. This month, the big debate is over the OpenOffice.org (OOo) fork LibreOffice, and Oracle has taken a hard line: renewing its public commitment to OOo and threatening to excommunicate OOo community council members who do not distance themselves from the new project.

Looking at how well the MySQL forks have matured, however, it does not look like LibreOffice supporters have too much to fear. Drizzle and MariaDB are both prepared to help any interested MySQL users migrate away from the platform. Drizzle is taking shape as a fast and light replacement for the large market segment of customers whose MySQL database is primarily designed to serve as the back-end of a web application, while MariaDB is actually ahead of MySQL on supporting high-end features for enterprise customers. In either case, MySQL may not enjoy its current position as the default database of choice for much longer.

Comments (11 posted)

Page editor: Jonathan Corbet


Kernel vulnerabilities: old or new?

By Jonathan Corbet
October 19, 2010
A quick search of the CVE database turns up 80 CVE numbers related to kernel vulnerabilities so far this year. At one recent conference or another, while talking with a prominent kernel developer, your editor confessed that he found that number to be discouragingly high. In an era where there is clearly an increasing level of commercial, criminal, and governmental interest in exploiting security holes, it would be hard to be doing enough to avoid the creation of vulnerabilities. But, your editor wondered, could we be doing more than we are? The response your editor got was, in essence, that the bulk of the holes being disclosed were ancient vulnerabilities which were being discovered by new static analysis tools. In other words, we are fixing security problems faster than we are creating them.

That sort of claim requires verification; it is also amenable to being verified by a researcher with sufficient determination and pain resistance. Your editor decided to give it a try. "All" that would be required, after all, was to look at each vulnerability and figure out when it was introduced. How hard could that be?

So, the basic process followed was this: pick a CVE entry, find the patch which closed the hole, then dig through the repository history and other resources in an attempt to figure out just when the problem was first introduced into the kernel. In some cases, the answer was relatively easy to find; others were sufficiently hard that your editor eventually gave up. One especially valuable resource in the search turned out to be the Red Hat bugzilla; the developers there (and Eugene Teo in particular) go out of their way to document the particulars of vulnerabilities. Sometimes, the commit which introduced the bug was simply listed there. The "git gui blame" utility is also quite useful when doing this kind of research.
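The archaeology described above can be sketched in miniature. The snippet below builds a throwaway two-commit repository and then uses git's "pickaxe" search (`git log -S`), which lists the commits that added or removed a given string - the same job `git gui blame` does interactively. The repository, file name, and commit messages are fabricated for illustration; this is not the kernel history itself.

```python
import os
import subprocess
import tempfile

def run(repo, *args):
    """Run a git command in `repo` and return its stdout as text."""
    return subprocess.run(["git", "-C", repo, *args],
                          check=True, capture_output=True, text=True).stdout

# Build a throwaway repository: the second commit introduces the
# "vulnerable" line we want to date.
repo = tempfile.mkdtemp()
run(repo, "init", "-q")
run(repo, "config", "user.email", "editor@example.com")
run(repo, "config", "user.name", "Editor")

path = os.path.join(repo, "driver.c")
with open(path, "w") as f:
    f.write("int init(void) { return 0; }\n")
run(repo, "add", "driver.c")
run(repo, "commit", "-q", "-m", "initial import")

with open(path, "a") as f:
    f.write("int ioctl_handler(long arg) { return arg; }\n")
run(repo, "commit", "-q", "-am", "add ioctl handler")

# The pickaxe: list commits where the occurrence count of the string
# changed; the oldest hit is the commit that introduced the line.
culprit = run(repo, "log", "--format=%s", "-S", "ioctl_handler").strip()
print(culprit)
```

Against the real kernel tree, of course, the search is complicated by code movement and refactoring, which is exactly why some of the results below had to be marked "unknown".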

About 60 of the 80 vulnerabilities listed above were dealt with in this way before your editor's eyes crossed permanently. The results can be seen in the following table. Let it be said from the outset that there will inevitably be some errors in the data below; the most likely mistake will be assigning blame to a commit which actually just moved the vulnerability from somewhere else. That may lead to a bias that makes vulnerabilities look more recent than they really are. That said, a best effort has been made, and things should not be too far off.

CVE #          Introduced (commit, release)   Fixed (commit, release)
CVE-2010-3477 -- <2.6.13 0f04cfd0 2.6.36
CVE-2010-3442 -- <2.6.13 5591bf07 2.6.36
CVE-2010-3437 -- <2.6.13 252a52aa 2.6.36
CVE-2010-3310 -- <2.6.13 9828e6e6 2.6.36
CVE-2010-3301 d4d67150 2.6.27 36d001c7 2.6.36
CVE-2010-3298 542f5482 2.6.29 7011e660 2.6.36
CVE-2010-3297 -- <2.6.13 44467187 2.6.36
CVE-2010-3296 4d22de3e 2.6.21 49c37c03 2.6.36
CVE-2010-3084 2d96cf8c 2.6.30 ee9c5cfa 2.6.36
CVE-2010-3081 42908c69 2.6.26 c41d68a5 2.6.36
CVE-2010-3080 7034632d 2.6.24 27f7ad53 2.6.36
CVE-2010-3079 5072c59f 2.6.27 9c55cb12 2.6.36
CVE-2010-3078 -- <2.6.13 a122eb2f 2.6.36
CVE-2010-3067 -- <2.6.13 75e1c70f 2.6.36
CVE-2010-3015 unknown 731eb1a0 2.6.34
CVE-2010-2960 ee18d64c 2.6.32 3d96406c 2.6.36
CVE-2010-2959 ffd980f9 2.6.25 5b75c497 2.6.36
CVE-2010-2955 3d23e349 2.6.33 42da2f94 2.6.36
CVE-2010-2946 -- <2.6.13 aca0fa34 2.6.36
CVE-2010-2943 -- <2.6.13 7124fe0a 2.6.35
CVE-2010-2942 -- 2.6.9 1c40be12 2.6.36
CVE-2010-2803 unknown b9f0aee8 2.6.36
CVE-2010-2798 71b86f56 2.6.19 728a756b 2.6.35
CVE-2010-2653 -- <2.6.13 e74d098c 2.6.34
CVE-2010-2538 e441d54d 2.6.29 2ebc3464 2.6.35
CVE-2010-2537 c5c9cd4d 2.6.29 2ebc3464 2.6.35
CVE-2010-2524 6103335d 2.6.25 4c0c03ca 2.6.35
CVE-2010-2521 -- <2.6.13 2bc3c117 2.6.34
CVE-2010-2492 dd2a3b7a 2.6.21 a6f80fb7 2.6.35
CVE-2010-2478 0853ad66 2.6.27 db048b69 2.6.35
CVE-2010-2248 -- <2.6.13 6513a81e 2.6.34
CVE-2010-2240 -- <2.6.13 320b2b8d 2.6.35
CVE-2010-2226 f6aa7f21 2.6.25 1817176a 2.6.35
CVE-2010-2071 744f52f9 2.6.29 2f26afba 2.6.35
CVE-2010-2066 748de673 2.6.31 1f5a81e4 2.6.35
CVE-2010-1643 -- <2.6.13 731572d3 2.6.28
CVE-2010-1641 71b86f56 2.6.19 7df0e039 2.6.35
CVE-2010-1636 f2eb0a24 2.6.29 5dc64164 2.6.34
CVE-2010-1488 28b83c51 2.6.32 b95c35e7 2.6.34
CVE-2010-1437 -- <2.6.13 cea7daa3 2.6.34
CVE-2010-1436 18ec7d5c 2.6.19 7e619bc3 2.6.35
CVE-2010-1188 -- <2.6.13 fb7e2399 2.6.20
CVE-2010-1173 -- <2.6.13 5fa782c2 2.6.34
CVE-2010-1162 -- <2.6.13 6da8d866 2.6.34
CVE-2010-1148 c3b2a0c6 2.6.29 fa588e0c 2.6.35
CVE-2010-1146 73422811 2.6.31 cac36f70 2.6.34
CVE-2010-1087 -- <2.6.13 9f557cd8 2.6.33
CVE-2010-1086 -- <2.6.13 29e1fa35 2.6.34
CVE-2010-1085 9ad593f6 2.6.27 fed08d03 2.6.33
CVE-2010-1084 be9d1227 2.6.15 101545f6 2.6.34
CVE-2010-1083 -- <2.6.13 d4a4683c 2.6.33
CVE-2010-0622 c87e2837 2.6.18 51246bfd 2.6.33
CVE-2010-0415 742755a1 2.6.18 6f5a55f1 2.6.33
CVE-2010-0410 7672d0b5 2.6.14 f98bfbd7 2.6.33
CVE-2010-0307 unknown 221af7f8 2.6.33

Some other notes relevant to the table:

  • No attempt was made to find the origin of vulnerabilities which were present in the initial commit which began the git era during the 2.6.12 development cycle. Anything which was already present then can certainly be said to be an old bug.

  • Some parts of the code have been changed so many times that it can be truly hard to determine when a vulnerability was introduced; places where your editor gave up are marked as "unknown" above. One could maybe come up with a real answer by bisecting and trying exploits, but your editor's dedication to the task was not quite that strong.

  • A couple of these bugs are old in a different way - CVE-2010-1188 was fixed in 2008, but was only understood to be a security issue in 2010. Anybody running a current kernel would not be vulnerable, but bugs like this can be nicely preserved in enterprise kernels for many years.

Looking at when the vulnerabilities were introduced yields a chart like this:

[Kernel vulnerabilities]

So, in a sense, the above-mentioned kernel hacker was correct - an awful lot of the vulnerabilities fixed over the last year predate the git era, and are thus over five years old. It seems that security bugs can lurk in the kernel for a very long time before somebody stumbles across them - or, at least, before somebody reports them.
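For the curious, the tally behind such a chart can be rebuilt mechanically from the table; the version strings below are transcribed by hand from the "Introduced" column above, in row order.

```python
from collections import Counter

# "Introduced" column from the table, one entry per CVE, in row order;
# "<2.6.13" marks bugs already present when the git era began.
introduced = [
    "<2.6.13", "<2.6.13", "<2.6.13", "<2.6.13", "2.6.27", "2.6.29",
    "<2.6.13", "2.6.21", "2.6.30", "2.6.26", "2.6.24", "2.6.27",
    "<2.6.13", "<2.6.13", "unknown", "2.6.32", "2.6.25", "2.6.33",
    "<2.6.13", "<2.6.13", "2.6.9", "unknown", "2.6.19", "<2.6.13",
    "2.6.29", "2.6.29", "2.6.25", "<2.6.13", "2.6.21", "2.6.27",
    "<2.6.13", "<2.6.13", "2.6.25", "2.6.29", "2.6.31", "<2.6.13",
    "2.6.19", "2.6.29", "2.6.32", "<2.6.13", "2.6.19", "<2.6.13",
    "<2.6.13", "<2.6.13", "2.6.29", "2.6.31", "<2.6.13", "<2.6.13",
    "2.6.27", "2.6.15", "<2.6.13", "2.6.18", "2.6.18", "2.6.14",
    "unknown",
]

counts = Counter(introduced)
for version, n in sorted(counts.items()):
    print(f"{version:>8}: {n}")
```

The pre-git bucket dominates: 21 of the 55 dated vulnerabilities predate 2.6.13, which is the point the paragraph below makes.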

According to the information above, we have fixed dozens of vulnerabilities since 2.6.33 without introducing any. The latter part of that claim might be charitably described as being unlikely to stand the test of time. There were (at least) 13 vulnerabilities fixed in the 2.6.35 cycle, 21 in the 2.6.36 cycle. We can hope that fewer vulnerabilities were added in that time; it seems certain, though, that (1) the number of vulnerabilities added will not be zero, and (2) it will probably take us five years or more to find many of them.

There may be some comfort in knowing that a large proportion of 2010's known security vulnerabilities are not a product of 2010's development. Indeed, assuming that a fair number of the old vulnerabilities are a bit older yet, one can also claim that they are not a product of the "new" kernel development model adopted in the early 2.6 days. That claim could be tested by extending this research back into the BitKeeper era; that is a task for a future project.

Your editor remains concerned, though, that it is too easy to put insecure code into the kernel and too hard to discover the vulnerabilities that are created. Analysis tools can help, but there really is no substitute for painstaking and meticulous code review when it comes to keeping vulnerabilities out of the kernel. At times, it is clear that the amount of review being done is not what it should be. There may well come a day when we'll wish we had found a way to be a bit more careful.

Comments (36 posted)

Brief items

Security quotes of the week

PinDr0p exploits artifacts left on call audio by the voice networks themselves. For example, VoIP calls tend to experience packet loss - split-second interruptions in audio that are too small for the human ear to detect. Likewise, cellular and public switched telephone networks (PSTNs) leave a distinctive type of noise on calls that pass through them. Phone calls today often pass through multiple VoIP, cellular and PSTN networks, and call data is either not transferred or transferred without verification across the networks. Using the call audio, PinDr0p employs a series of algorithms to detect and analyze call artifacts, then determines a call's provenance (the path it takes to get to a recipient's phone) with at least 90 percent accuracy and, given enough comparative information, even 100 percent accuracy.
-- Georgia Tech reports on recent research

The recent CVE-2010-2961 mountall vulnerability got a nice write-up by xorl today. I've seen a few public exploits for it, but those that I've seen, including the one in xorl's post, miss a rather important point: udev events can be triggered by regular users without any hardware fiddling. While the bug that kept udev from running inotify correctly on the /dev/.udev/rules.d directory during initial boot kept this vulnerability exposure pretty well minimized, the fact that udev events can be triggered at will made it pretty bad too. If udev had already been restarted, an attacker didn't have to wait at all, nor have physical access to the system.

While it is generally understood that udev events are related to hardware, it's important to keep in mind that it also sends events on module loads, and module loads can happen on demand from unprivileged users. For example, say you want to send an X.25 packet, when you call socket(AF_X25, SOCK_STREAM), the kernel will go load net-pf-9, which modules.alias lists as the x25 module. And once loaded, udev sends a "module" event.

-- Kees Cook with a useful reminder

Comments (none posted)

TaintDroid code released

TaintDroid is an Android firmware modification which can track and report on application activity; needless to say, the results with some applications can be surprising. The code is now available for anybody wanting to build their own TaintDroid system. For the time being, though, installing it does not appear to be a simple or straightforward task.

Comments (3 posted)

Two local privilege escalations

There is a local-root kernel vulnerability in the RDS protocol implementation. See this VSR advisory for more information. So far, only Ubuntu has issued an update for this problem.

Tavis Ormandy has reported a flaw in GNU libc that can be exploited by local users to gain root privileges. No distributions (other than the soon-to-be-released Fedora 14) have put out an update as yet.

Comments (11 posted)

New vulnerabilities

ardour: insecure library loading

Package(s):ardour CVE #(s):CVE-2010-3349
Created:October 15, 2010 Updated:October 20, 2010
Description: From the Red Hat bugzilla:

The vulnerability is due to an insecure change to LD_LIBRARY_PATH, an environment variable used by the dynamic linker (ld.so) to look for libraries in directories other than the standard paths. When there is an empty item in the colon-separated list of directories in LD_LIBRARY_PATH, ld.so treats it as '.' (the current working directory). If the given script is executed from a directory where a local attacker could write files, there is a chance for exploitation.

Fedora FEDORA-2010-15499 ardour 2010-09-30
Fedora FEDORA-2010-15510 ardour 2010-09-30

Comments (none posted)

gnome-subtitles: code execution

Package(s):gnome-subtitles CVE #(s):CVE-2010-3357
Created:October 14, 2010 Updated:October 20, 2010

From the Red Hat bugzilla entry:

The vulnerability is due to an insecure change to LD_LIBRARY_PATH, an environment variable used by the dynamic linker (ld.so) to look for libraries in directories other than the standard paths. When there is an empty item in the colon-separated list of directories in LD_LIBRARY_PATH, ld.so treats it as '.' (the current working directory). If the given script is executed from a directory where a local attacker could write files, there is a chance for exploitation.

Fedora FEDORA-2010-15711 gnome-subtitles 2010-10-05
Fedora FEDORA-2010-15717 gnome-subtitles 2010-10-05

Comments (none posted)

java-1.6.0-openjdk: multiple vulnerabilities

Package(s):java-1.6.0-openjdk CVE #(s):CVE-2010-3541 CVE-2010-3548 CVE-2010-3549 CVE-2010-3551 CVE-2010-3553 CVE-2010-3554 CVE-2010-3557 CVE-2010-3561 CVE-2010-3562 CVE-2010-3564 CVE-2010-3565 CVE-2010-3567 CVE-2010-3568 CVE-2010-3569 CVE-2010-3573 CVE-2010-3574 CVE-2010-3566
Created:October 14, 2010 Updated:May 3, 2011

From the Red Hat advisory:

defaultReadObject of the Serialization API could be tricked into setting a volatile field multiple times, which could allow a remote attacker to execute arbitrary code with the privileges of the user running the applet or application. (CVE-2010-3569)

Race condition in the way objects were deserialized could allow an untrusted applet or application to misuse the privileges of the user running the applet or application. (CVE-2010-3568)

Miscalculation in the OpenType font rendering implementation caused out-of-bounds memory access, which could allow remote attackers to execute code with the privileges of the user running the java process. (CVE-2010-3567)

JPEGImageWriter.writeImage in the imageio API improperly checked certain image metadata, which could allow a remote attacker to execute arbitrary code in the context of the user running the applet or application. (CVE-2010-3565)

Double free in IndexColorModel could cause an untrusted applet or application to crash or, possibly, execute arbitrary code with the privileges of the user running the applet or application. (CVE-2010-3562)

The privileged accept method of the ServerSocket class in the Common Object Request Broker Architecture (CORBA) implementation in OpenJDK allowed it to receive connections from any host, instead of just the host of the current connection. An attacker could use this flaw to bypass restrictions defined by network permissions. (CVE-2010-3561)

Flaws in the Swing library could allow an untrusted application to modify the behavior and state of certain JDK classes. (CVE-2010-3557)

Flaws in the CORBA implementation could allow an attacker to execute arbitrary code by misusing permissions granted to certain system objects. (CVE-2010-3554)

UIDefault.ProxyLazyValue had unsafe reflection usage, allowing untrusted callers to create objects via ProxyLazyValue values. (CVE-2010-3553)

HttpURLConnection improperly handled the "chunked" transfer encoding method, which could allow remote attackers to conduct HTTP response splitting attacks. (CVE-2010-3549)

HttpURLConnection improperly checked whether the calling code was granted the "allowHttpTrace" permission, allowing untrusted code to create HTTP TRACE requests. (CVE-2010-3574)

HttpURLConnection did not validate request headers set by applets, which could allow remote attackers to trigger actions otherwise restricted to HTTP clients. (CVE-2010-3541, CVE-2010-3573)

The Kerberos implementation improperly checked the sanity of AP-REQ requests, which could cause a denial of service condition in the receiving Java Virtual Machine. (CVE-2010-3564)

The NetworkInterface class improperly checked the network "connect" permissions for local network addresses, which could allow remote attackers to read local network addresses. (CVE-2010-3551)

Information leak flaw in the Java Naming and Directory Interface (JNDI) could allow a remote attacker to access information about otherwise-protected internal network names. (CVE-2010-3548)

Gentoo 201406-32 icedtea-bin 2014-06-29
Gentoo 201111-02 sun-jdk 2011-11-05
SUSE SUSE-SR:2011:008 java-1_6_0-ibm, java-1_5_0-ibm, java-1_4_2-ibm, postfix, dhcp6, dhcpcd, mono-addon-bytefx-data-mysql/bytefx-data-mysql, dbus-1, libtiff/libtiff-devel, cifs-mount/libnetapi-devel, rubygem-sqlite3, gnutls, libpolkit0, udisks 2011-05-03
SUSE SUSE-SA:2011:014 java-1_6_0-ibm,java-1_5_0-ibm,java-1_4_2-ibm 2011-03-22
SUSE SUSE-SA:2011:006 java-1_6_0-ibm 2011-01-25
Red Hat RHSA-2011:0169-01 java-1.5.0-ibm 2011-01-20
Red Hat RHSA-2011:0152-01 java-1.4.2-ibm 2011-01-17
SUSE SUSE-SA:2010:061 java-1_4_2-ibm,IBMJava2-JRE 2010-12-17
Red Hat RHSA-2010:0987-01 java-1.6.0-ibm 2010-12-15
Red Hat RHSA-2010:0935-01 java-1.4.2-ibm 2010-12-01
openSUSE openSUSE-SU-2010:0957-1 java-1_6_0-openjdk 2010-11-17
Red Hat RHSA-2010:0873-02 java-1.5.0-ibm 2010-11-10
Red Hat RHSA-2010:0865-02 java-1.6.0-openjdk 2010-11-10
Ubuntu USN-1010-1 openjdk-6, openjdk-6b18 2010-10-28
Red Hat RHSA-2010:0807-01 java-1.5.0-ibm 2010-10-27
openSUSE openSUSE-SU-2010:0754-1 java-1_6_0-sun 2010-10-22
Fedora FEDORA-2010-16294 java-1.6.0-openjdk 2010-10-14
Red Hat RHSA-2010:0770-01 java-1.6.0-sun 2010-10-14
Red Hat RHSA-2010:0768-01 java-1.6.0-openjdk 2010-10-13
Fedora FEDORA-2010-16240 java-1.6.0-openjdk 2010-10-14
Red Hat RHSA-2010:0786-01 java-1.4.2-ibm 2010-10-20
CentOS CESA-2010:0768 java-1.6.0-openjdk 2010-10-14
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25

Comments (none posted)

java-1.6.0-sun: multiple unspecified vulnerabilities

Package(s):java-1.6.0-sun CVE #(s):CVE-2010-3550 CVE-2010-3552 CVE-2010-3555 CVE-2010-3556 CVE-2010-3558 CVE-2010-3559 CVE-2010-3560 CVE-2010-3563 CVE-2010-3570 CVE-2010-3571 CVE-2010-3572
Created:October 14, 2010 Updated:March 22, 2011

From the Red Hat advisory:

CVE-2010-3550 JDK unspecified vulnerability in Java Web Start component

CVE-2010-3552 JDK unspecified vulnerability in New Java Plugin component

CVE-2010-3555 JDK unspecified vulnerability in Deployment component

CVE-2010-3556 JDK unspecified vulnerability in 2D component

CVE-2010-3558 JDK unspecified vulnerability in Java Web Start component

CVE-2010-3559 JDK unspecified vulnerability in Sound component

CVE-2010-3560 JDK unspecified vulnerability in Networking component

CVE-2010-3563 JDK unspecified vulnerability in Deployment component

CVE-2010-3570 JDK unspecified vulnerability in Deployment Toolkit

CVE-2010-3571 JDK unspecified vulnerability in 2D component

CVE-2010-3572 JDK unspecified vulnerability in Sound component

Gentoo 201111-02 sun-jdk 2011-11-05
SUSE SUSE-SA:2011:014 java-1_6_0-ibm,java-1_5_0-ibm,java-1_4_2-ibm 2011-03-22
SUSE SUSE-SA:2011:006 java-1_6_0-ibm 2011-01-25
Red Hat RHSA-2011:0169-01 java-1.5.0-ibm 2011-01-20
SUSE SUSE-SA:2010:061 java-1_4_2-ibm,IBMJava2-JRE 2010-12-17
Red Hat RHSA-2010:0987-01 java-1.6.0-ibm 2010-12-15
Red Hat RHSA-2010:0873-02 java-1.5.0-ibm 2010-11-10
Red Hat RHSA-2010:0807-01 java-1.5.0-ibm 2010-10-27
openSUSE openSUSE-SU-2010:0754-1 java-1_6_0-sun 2010-10-22
Red Hat RHSA-2010:0786-01 java-1.4.2-ibm 2010-10-20
Red Hat RHSA-2010:0770-01 java-1.6.0-sun 2010-10-14
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25

Comments (none posted)

kernel: information leak

Package(s):kernel CVE #(s):CVE-2010-3477
Created:October 20, 2010 Updated:March 28, 2011
Description: The kernel's networking code fails to fully initialize a structure which is then passed back to user space, thus leaking a few bytes of data.
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Red Hat RHSA-2011:0007-01 kernel 2011-01-11
MeeGo MeeGo-SA-10:38 kernel 2010-10-09
Debian DSA-2126-1 linux-2.6 2010-11-26
CentOS CESA-2010:0839 kernel 2010-11-09
Red Hat RHSA-2010:0839-01 kernel 2010-11-09
Red Hat RHSA-2010:0779-01 kernel 2010-10-19
CentOS CESA-2010:0779 kernel 2010-10-25
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2010-2963
Created:October 20, 2010 Updated:May 10, 2011
Description: A failure to properly validate parameters in the Video4Linux1 compatibility interface can enable a local user to obtain root privileges. This vulnerability apparently only affects 64-bit systems.
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Fedora FEDORA-2010-18983 kernel 2010-12-17
Mandriva MDVSA-2010:257 kernel 2010-10-29
openSUSE openSUSE-SU-2010:1047-1 kernel 2010-12-10
Red Hat RHSA-2010:0958-01 kernel-rt 2010-12-08
Debian DSA-2126-1 linux-2.6 2010-11-26
SUSE SUSE-SA:2010:057 kernel 2010-11-11
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
openSUSE openSUSE-SU-2010:0933-1 kernel 2010-11-11
CentOS CESA-2010:0839 kernel 2010-11-09
Red Hat RHSA-2010:0839-01 kernel 2010-11-09
openSUSE SUSE-SA:2010:053 kernel 2010-10-28
openSUSE openSUSE-SU-2010:0902-1 kernel 2010-10-27
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2010-3432
Created:October 20, 2010 Updated:March 28, 2011
Description: The SCTP networking code fails to properly handle the appending of packet chunks, leading to a remotely-triggerable system crash (at least).
Oracle ELSA-2013-1645 kernel 2013-11-26
Red Hat RHSA-2011:1321-01 kernel 2011-09-20
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
CentOS CESA-2010:0936 kernel 2011-01-27
CentOS CESA-2011:0004 kernel 2011-01-06
Red Hat RHSA-2011:0004-01 kernel 2011-01-04
openSUSE openSUSE-SU-2011:0004-1 kernel 2011-01-03
Fedora FEDORA-2010-18983 kernel 2010-12-17
Red Hat RHSA-2010:0958-01 kernel-rt 2010-12-08
Red Hat RHSA-2010:0936-01 kernel 2010-12-01
Debian DSA-2126-1 linux-2.6 2010-11-26
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: information leak

Package(s):kernel CVE #(s):CVE-2010-3437
Created:October 20, 2010 Updated:April 21, 2011
Description: The CD driver fails to check parameters properly, allowing a local attacker to read arbitrary kernel memory.
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Mandriva MDVSA-2011:051 kernel 2011-03-18
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Mandriva MDVSA-2011:029 kernel 2011-02-17
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
SUSE SUSE-SA:2011:004 kernel 2011-01-14
openSUSE openSUSE-SU-2011:0048-1 SLE11 2011-01-19
openSUSE openSUSE-SU-2011:0003-1 kernel 2011-01-03
openSUSE openSUSE-SU-2011:0004-1 kernel 2011-01-03
SUSE SUSE-SA:2010:060 kernel 2010-12-14
openSUSE openSUSE-SU-2010:1047-1 kernel 2010-12-10
Debian DSA-2126-1 linux-2.6 2010-11-26
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2010-3442
Created:October 20, 2010 Updated:March 28, 2011
Description: The sound subsystem fails to properly validate system call parameters, enabling local attackers to crash the system (at least). Only 32-bit systems are affected by this bug.
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
SUSE SUSE-SA:2011:008 kernel 2011-02-11
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
CentOS CESA-2010:0936 kernel 2011-01-27
CentOS CESA-2011:0004 kernel 2011-01-06
Red Hat RHSA-2011:0004-01 kernel 2011-01-04
openSUSE openSUSE-SU-2011:0003-1 kernel 2011-01-03
openSUSE openSUSE-SU-2011:0004-1 kernel 2011-01-03
Fedora FEDORA-2010-18983 kernel 2010-12-17
Mandriva MDVSA-2010:257 kernel 2010-10-29
SUSE SUSE-SA:2010:060 kernel 2010-12-14
openSUSE openSUSE-SU-2010:1047-1 kernel 2010-12-10
Red Hat RHSA-2010:0958-01 kernel-rt 2010-12-08
Red Hat RHSA-2010:0936-01 kernel 2010-12-01
Debian DSA-2126-1 linux-2.6 2010-11-26
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: remote denial of service

Package(s):kernel CVE #(s):CVE-2010-3705
Created:October 20, 2010 Updated:April 28, 2011
Description: The SCTP networking code does not properly handle HMAC calculations, enabling a remote attacker to crash the system (or worse) through specially-crafted traffic.
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
SUSE SUSE-SA:2011:017 kernel 2011-04-18
openSUSE openSUSE-SU-2011:0346-1 kernel 2011-04-18
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
SUSE SUSE-SA:2011:012 kernel 2011-03-08
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Mandriva MDVSA-2011:029 kernel 2011-02-17
openSUSE openSUSE-SU-2011:0399-1 kernel 2011-04-28
Fedora FEDORA-2010-18983 kernel 2010-12-17
Red Hat RHSA-2010:0958-01 kernel-rt 2010-12-08
Debian DSA-2126-1 linux-2.6 2010-11-26
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

kernel: local privilege escalation

Package(s):kernel CVE #(s):CVE-2010-3904
Created:October 20, 2010 Updated:May 10, 2011
Description: The RDS network protocol fails to validate user-space addresses, allowing a local attacker to write arbitrary values into kernel memory. See this advisory for more information.
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
Fedora FEDORA-2010-18983 kernel 2010-12-17
SUSE SUSE-SA:2010:057 kernel 2010-11-11
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
openSUSE openSUSE-SU-2010:0933-1 kernel 2010-11-11
openSUSE SUSE-SA:2010:053 kernel 2010-10-28
openSUSE openSUSE-SU-2010:0902-1 kernel 2010-10-27
CentOS CESA-2010:0792 kernel 2010-10-26
Red Hat RHSA-2010:0792-01 kernel 2010-10-25
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

Mozilla products: multiple vulnerabilities

Package(s):firefox seamonkey thunderbird xulrunner CVE #(s):CVE-2010-3170 CVE-2010-3173 CVE-2010-3175 CVE-2010-3176 CVE-2010-3177 CVE-2010-3178 CVE-2010-3179 CVE-2010-3180 CVE-2010-3182 CVE-2010-3183
Created:October 20, 2010 Updated:December 24, 2010
Description: The firefox 3.6.11/3.5.14 and thunderbird 3.1.5/3.0.9 releases fix the usual set of security vulnerabilities.
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Fedora FEDORA-2010-18920 seamonkey 2010-12-15
Fedora FEDORA-2010-18890 seamonkey 2010-12-15
Slackware SSA:2010-344-01 seamonkey 2010-12-13
Red Hat RHSA-2010:0896-01 thunderbird 2010-11-17
Slackware SSA:2010-317-01 thunderbird 2010-11-15
Red Hat RHSA-2010:0862-02 nss 2010-11-10
Red Hat RHSA-2010:0861-02 firefox 2010-11-10
Fedora FEDORA-2010-17084 seamonkey 2010-11-02
Fedora FEDORA-2010-17145 seamonkey 2010-11-02
SUSE SUSE-SA:2010:056 MozillaFirefox,seamonkey,MozillaThunderbird 2010-11-08
Fedora FEDORA-2010-15989 nss-softokn 2010-10-08
Fedora FEDORA-2010-15989 nss-util 2010-10-08
Fedora FEDORA-2010-15989 nss 2010-10-08
SUSE SUSE-SR:2010:020 NetworkManager, bind, clamav, dovecot12, festival, gpg2, libfreebl3, php5-pear-mail, postgresql 2010-11-03
Fedora FEDORA-2010-17105 seamonkey 2010-11-02
openSUSE openSUSE-SU-2010:0925-1 seamonkey 2010-11-02
openSUSE openSUSE-SU-2010:0924-1 mozilla-xulrunner191 2010-11-02
Fedora FEDORA-2010-16941 thunderbird 2010-10-29
Fedora FEDORA-2010-16939 thunderbird 2010-10-29
Fedora FEDORA-2010-16926 thunderbird 2010-10-29
Fedora FEDORA-2010-16941 sunbird 2010-10-29
Fedora FEDORA-2010-16939 sunbird 2010-10-29
Fedora FEDORA-2010-16926 sunbird 2010-10-29
Debian DSA-2124-1 xulrunner 2010-11-01
Debian DSA-2123-1 nss 2010-11-01
Slackware SSA:2010-305-01 seamonkey 2010-11-01
Fedora FEDORA-2010-16885 mozvoikko 2010-10-28
Fedora FEDORA-2010-16885 gnome-web-photo 2010-10-28
Fedora FEDORA-2010-16885 perl-Gtk2-MozEmbed 2010-10-28
Fedora FEDORA-2010-16885 xulrunner 2010-10-28
Fedora FEDORA-2010-16885 gnome-python2-extras 2010-10-28
Fedora FEDORA-2010-16885 galeon 2010-10-28
Fedora FEDORA-2010-16885 firefox 2010-10-28
Fedora FEDORA-2010-16593 mozvoikko 2010-10-21
Fedora FEDORA-2010-16593 gnome-python2-extras 2010-10-21
Fedora FEDORA-2010-16593 galeon 2010-10-21
Fedora FEDORA-2010-16593 gnome-web-photo 2010-10-21
Fedora FEDORA-2010-16593 firefox 2010-10-21
Fedora FEDORA-2010-16593 perl-Gtk2-MozEmbed 2010-10-21
Fedora FEDORA-2010-16593 xulrunner 2010-10-21
Fedora FEDORA-2010-15520 nss 2010-09-30
Fedora FEDORA-2010-15520 nss-softokn 2010-09-30
Fedora FEDORA-2010-15520 nss-util 2010-09-30
Slackware SSA:2010-300-01 seamonkey 2010-10-28
openSUSE openSUSE-SU-2010:0906-1 seamonkey thunderbird 2010-10-28
openSUSE openSUSE-SU-2010:0904-1 mozilla-nss 2010-10-27
Mandriva MDVSA-2010:210 firefox 2010-10-22
Ubuntu USN-998-1 thunderbird 2010-10-20
Ubuntu USN-997-1 firefox, firefox-3.0, firefox-3.5, xulrunner-1.9.1, xulrunner-1.9.2 2010-10-20
Ubuntu USN-1007-1 nss 2010-10-20
CentOS CESA-2010:0782 firefox 2010-10-20
Red Hat RHSA-2010:0780-01 thunderbird 2010-10-19
Red Hat RHSA-2010:0782-01 firefox 2010-10-19
CentOS CESA-2010:0782 firefox 2010-10-25
CentOS CESA-2010:0781 seamonkey 2010-10-25
CentOS CESA-2010:0780 thunderbird 2010-10-25
CentOS CESA-2010:0780 thunderbird 2010-10-20
Red Hat RHSA-2010:0781-01 seamonkey 2010-10-19
CentOS CESA-2010:0781 seamonkey 2010-10-25
Mandriva MDVSA-2010:211 mozilla-thunderbird 2010-10-22

Comments (none posted)

MRG Messaging: multiple vulnerabilities

Package(s):MRG Messaging CVE #(s):CVE-2009-5005 CVE-2009-5006
Created:October 14, 2010 Updated:October 20, 2010

From the Red Hat advisory:

A flaw was found in the way Apache Qpid handled the receipt of invalid AMQP data. A remote user could send invalid AMQP data to the server, causing it to crash, resulting in the cluster shutting down. (CVE-2009-5005)

A flaw was found in the way Apache Qpid handled a request to redeclare an existing exchange while adding a new alternate exchange. If a remote, authenticated user issued such a request, the server would crash, resulting in the cluster shutting down. (CVE-2009-5006)

Red Hat RHSA-2010:0774-01 MRG messaging 2010-10-14
Red Hat RHSA-2010:0773-01 MRG Messaging 2010-10-14

Comments (none posted)

opera: multiple vulnerabilities

Package(s):opera CVE #(s):
Created:October 15, 2010 Updated:October 20, 2010
Description: Opera 10.63 is a recommended upgrade offering security and stability enhancements. See the Opera release notes for details.
openSUSE openSUSE-SU-2010:0728-1 opera 2010-10-15

Comments (none posted)

php-pear-CAS: multiple vulnerabilities

Package(s):php-pear-CAS CVE #(s):CVE-2010-3690 CVE-2010-3691 CVE-2010-3692
Created:October 19, 2010 Updated:February 23, 2011
Description: From the CVE entries:

Multiple cross-site scripting (XSS) vulnerabilities in phpCAS before 1.1.3, when proxy mode is enabled, allow remote attackers to inject arbitrary web script or HTML via (1) a crafted Proxy Granting Ticket IOU (PGTiou) parameter to the callback function in client.php, (2) vectors involving functions that make getCallbackURL calls, or (3) vectors involving functions that make getURL calls. (CVE-2010-3690)

PGTStorage/pgt-file.php in phpCAS before 1.1.3, when proxy mode is enabled, allows local users to overwrite arbitrary files via a symlink attack on an unspecified file. (CVE-2010-3691)

Directory traversal vulnerability in the callback function in client.php in phpCAS before 1.1.3, when proxy mode is enabled, allows remote attackers to create or overwrite arbitrary files via directory traversal sequences in a Proxy Granting Ticket IOU (PGTiou) parameter. (CVE-2010-3692)

Debian DSA-2172-1 moodle 2011-02-22
Fedora FEDORA-2010-16905 glpi 2010-10-28
Fedora FEDORA-2010-16912 glpi 2010-10-28
Fedora FEDORA-2010-15943 php-pear-CAS 2010-10-08
Fedora FEDORA-2010-15970 php-pear-CAS 2010-10-08

Comments (none posted)

poppler: memory corruption

Package(s):poppler CVE #(s):CVE-2010-3703
Created:October 19, 2010 Updated:December 24, 2010
Description: From the Red Hat bugzilla:

poppler git commit bf2055088a corrects a possible use of an uninitialized pointer in PostScriptFunction, which can cause crash or memory corruption.

Gentoo 201310-03 poppler 2013-10-06
SUSE SUSE-SR:2010:024 clamav, subversion, python, krb5, otrs, moonlight, OpenOffice_org, kdenetwork4, zope, xpdf, gnutls, and opera 2010-12-23
openSUSE openSUSE-SU-2010:1091-1 xpdf 2010-12-23
openSUSE openSUSE-SU-2010:0976-1 poppler 2010-11-25
Slackware SSA:2010-324-02 poppler 2010-11-22
Slackware SSA:2010-324-01 xpdf 2010-11-22
Mandriva MDVSA-2010:231 poppler 2010-11-12
Red Hat RHSA-2010:0859-03 poppler 2010-11-10
Fedora FEDORA-2010-15911 poppler 2010-10-08
Ubuntu USN-1005-1 poppler 2010-10-19
Fedora FEDORA-2010-15981 poppler 2010-10-08

Comments (none posted)

typo3: multiple vulnerabilities

Package(s):typo3 CVE #(s):CVE-2010-3714 CVE-2010-3715 CVE-2010-3716 CVE-2010-3717
Created:October 20, 2010 Updated:October 20, 2010
Description: The typo3 content management system suffers from multiple vulnerabilities, including remote file disclosure (CVE-2010-3714), cross-site scripting (CVE-2010-3715), privilege escalation (CVE-2010-3716), and denial of service (CVE-2010-3717).
Debian DSA-2121-1 typo3-src 2010-10-19

Comments (none posted)

webkitgtk: multiple vulnerabilities

Package(s):webkitgtk CVE #(s):CVE-2010-3113 CVE-2010-1814 CVE-2010-1812 CVE-2010-1815 CVE-2010-3115 CVE-2010-1807 CVE-2010-3114 CVE-2010-3116 CVE-2010-3257 CVE-2010-3259
Created:October 19, 2010 Updated:March 2, 2011
Description: From the Fedora advisory:

Bug #628032 - CVE-2010-3113 webkit: memory corruption when handling SVG documents

Bug #631946 - CVE-2010-1814 webkit: memory corruption flaw when handling form menus

Bug #631939 - CVE-2010-1812 webkit: use-after-free flaw in handling of selections

Bug #631948 - CVE-2010-1815 webkit: use-after-free flaw when handling scrollbars

Bug #628071 - CVE-2010-3115 webkit: address bar spoofing with history bug

Bug #627703 - CVE-2010-1807 webkit: input validation error when parsing certain NaN values

Bug #628035 - CVE-2010-3114 webkit: bad cast with text editing

Bug #640353 - CVE-2010-3116 webkit: memory corruption with MIME types

Bug #640357 - CVE-2010-3257 webkit: stale pointer issue with focusing

Bug #640360 - CVE-2010-3259 webkit: cross-origin image theft

Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Mandriva MDVSA-2011:039 webkit 2011-03-02
Red Hat RHSA-2011:0177-01 webkitgtk 2011-01-25
MeeGo MeeGo-SA-10:37 webkit 2010-10-09
openSUSE openSUSE-SU-2011:0024-1 webkit 2011-01-12
SUSE SUSE-SR:2011:002 ed, evince, hplip, libopensc2/opensc, libsmi, libwebkit, perl, python, sssd, sudo, wireshark 2011-01-25
Ubuntu USN-1006-1 webkit 2010-10-19
Fedora FEDORA-2010-15957 webkitgtk 2010-10-08
Fedora FEDORA-2010-15982 webkitgtk 2010-10-08

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 2.6.36 kernel is out, released on October 20 with only a small number of changes since the 2.6.36-rc8 prepatch. Headline features in this release include the AppArmor security module, the LIRC infrared controller driver subsystem, and the new Tile architecture. The fanotify notification mechanism for anti-malware applications was disabled at the last minute due to ABI concerns. See the KernelNewbies 2.6.36 page for lots of information.

Linus also noted that a two-week merge window, starting on October 21, would run into the Kernel Summit.

So I'm going to hope that we could perhaps even do the 2.6.37 -rc1 release and close the merge window the Sunday before KS opens. Since 2.6.36 was longer than usual (at least it felt that way), I wouldn't mind having a 2.6.37 that is shorter than usual. But holler if this really screws up any plans. Ten days instead of two weeks? Let's see if it's even reasonably realistic.

(The 2.6.36 development cycle was 80 days, incidentally; almost exactly average for the last few years.)

Previously, 2.6.36-rc8 was released on October 15. "I really hate doing this, and I'd much rather just release 2.6.36, but I'm doing one last final -rc instead. There's too much noise here (and pending in emails) for me to be happy, and while I first thought to just delay a day or two, I'm now looking at next week instead, and thus the additional -rc."

Stable updates: there have been no stable updates released in the last week.

Comments (none posted)

Quotes of the week

I suspect there are a few places in MM/VFS/writeback which could/should be using something like this. Of course, if we do this then your nice little function will end up 250 lines long, utterly incomprehensible and full of subtle bugs. We like things to be that way.
-- Andrew Morton

In contrast, the "v2.6.25-rc1~1089^2~98" expression is actually well-defined. There is no ambiguity there, but it's also obviously not really all that human-readable.
-- Linus Torvalds

Frankly, I'm a lot more concerned about the locking being natural for the data structures we have and easily understood. We *are* fscking close to the complexity cliff, hopefully still on the right side of it.
-- Al Viro

Comments (none posted)

Dueling inode scalability patches

By Jonathan Corbet
October 20, 2010
Nick Piggin's VFS scalability patch set has been under development for well over a year. Some pieces were merged for 2.6.36, but the more complicated parts were deferred because Nick thought they needed more work and more testing. Then things went quiet; Nick changed jobs and went on vacation, so little work was done for some time. Eventually it became clear that Nick was unlikely to get the scalability work into shape for a 2.6.37 merge.

So Dave Chinner decided to jump in and work on these patches, focusing in particular on the code breaking up the inode lock. His first patch set was posted in late September, with a number of revisions happening since. Dave worked on splitting the patch series into smaller, more reviewable chunks. He also took out some of the (to him) scarier changes. Subsequent revisions brought larger changes, to the point that version 5 reads:

None of the patches are unchanged, and several of them are new or completely rewritten, so any previous testing is completely invalidated. I have not tried to optimise locking by using trylock loops - anywhere that requires out-of-order locking drops locks and regains the locks needed for the next operation. This approach simplified the code and lead to several improvements in the patch series (e.g. moving inode->i_lock inside writeback_single_inode(), and the dispose_one_inode factoring) that would have gone unnoticed if I'd gone down the same trylock loop path that Nick used.

According to Dave, this patch set helps with the scalability problems he has been seeing, and other reviewers seem to think that the patch set is starting to look fairly good.

But then Nick returned. While he welcomed the new interest in scalability work, he did not take long to indicate that he was not pleased with the direction in which Dave had taken his patches. He has posted a 35-part patch series which he hopes to merge; the patch posting also details why he doesn't like Dave's alternative approach. The ensuing discussion has been a bit rough in spots, though it has remained mostly focused on the technical issues.

What it has not done, though, is to come up with any sort of conclusion. There are two patch sets out there; both deal with the intersection of the virtual filesystem layer and the memory management code. Much of the contention seems to be over whether "VFS people" or "memory management people" should have the ultimate say in how things are done. Given the difficult nature of both patch sets and the imminent opening of the 2.6.37 merge window, it seems fairly safe to say that neither will be merged unless Linus makes an executive decision. Pushing back this code to 2.6.38 will provide an opportunity for the patches to be discussed at length, and, possibly, for the upcoming Kernel Summit to consider them as well.

Comments (4 posted)

IMA memory hog

By Jonathan Corbet
October 20, 2010
Dave Chinner recently noticed a problem on one of his systems: the slab cache was using well over 2GB of memory, mainly on radix tree nodes. Intrigued, he looked further into the problem. It came down to the integrity measurement architecture (IMA) security code, which uses the hardware TPM to help ensure that files on the system have not been tampered with. IMA, it seems, was using a radix tree to store integrity information, indexed by inode address. Radix trees perform poorly with sparse, unclustered keys, so IMA's usage was causing the creation of a separate node for each inode in the system. That added up to a lot of memory.
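Why sparse keys are so pathological can be seen with a toy model of a radix tree (a hypothetical Python sketch, not the kernel's lib/radix-tree.c): densely clustered keys share almost all of their interior nodes, while address-like keys spaced a page apart each need a nearly private path of nodes down the tree.

```python
def radix_nodes(keys, levels=8, shift=6):
    """Count the nodes needed to index `keys` in a toy radix tree.
    Each node has 2**shift slots; a node exists for every distinct
    prefix of key chunks, and leaves live as slots in the deepest nodes."""
    mask = (1 << shift) - 1
    nodes = set()
    for k in keys:
        chunks = [(k >> (shift * i)) & mask for i in reversed(range(levels))]
        for d in range(levels):          # one node per distinct chunk prefix
            nodes.add(tuple(chunks[:d]))
    return len(nodes)

dense  = radix_nodes(range(1024))                  # small, clustered keys
sparse = radix_nodes(range(0, 1024 * 4096, 4096))  # address-like keys
print(dense, sparse)                               # prints: 23 2069
```

For the same 1024 keys, the sparse, address-indexed case needs roughly ninety times as many nodes in this model, which is the shape of the problem Dave found.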

A number of questions came after this revelation, including:

  1. Why is IMA using such an inappropriate data structure?
  2. Why is it keeping all this information around even though it was disabled on the system in question?
  3. Why was IMA configured into the kernel in the first place?

The answer to the first question seems to be that the IMA developers, as part of the process of getting the code into the mainline, were not allowed to expand the inode structure at all. So they created a separate tree for per-inode information; it just happens that they chose the wrong type of tree and never noticed how poorly it performed.

Question #2 is answered like this: the IMA code needs to keep track of which files are opened for write access at any time. There is no point in measuring the integrity of files (checksumming them, essentially) when they can be changed at any time. Without tracking the state of all files all the time, IMA can never know which files are opened for write access when it first starts up. The only way to be sure, it seems, is to track all files starting at boot time just in case somebody tries to turn IMA on at some point.

As for #3: the system in question was running a Fedora kernel, and the Fedora folks turned on the feature because it looked like it might be useful to some people. Nobody expected that it would have such an impact on systems where it was not turned on. Some participants in the discussion have given the Fedora kernel maintainers some grief for not having audited the code before enabling it, but auditing everything in the kernel to that level is a rather larger task than Fedora can really be expected to take on.

Eric Paris has started work on slimming IMA down; his patches work by moving the "open for write" counts into the inode structure itself, eliminating the need to allocate the separate IMA structures most of the time. IMA is also shifted over to a red-black tree when it does need to track those structures. This work eliminates almost all of the memory waste, but at the cost of growing the inode structure slightly. That does not sit well with everybody, especially, it seems, those developers who feel that IMA should not exist in the first place. But it's a clear step in the right direction, so one should expect something along these lines to be merged for 2.6.37.

Comments (13 posted)

Kernel development news

Shielding driver authors from locking

By Jonathan Corbet
October 20, 2010
Much of the time, patches can be developed against the mainline kernel and submitted for the next merge window without trouble. At other times, though, the mainline is far removed from the environment a patch will have to fit into at merge time. Your editor, who has been trying the (considerable) patience of the Video4Linux maintainer by trying to get a driver merged for 2.6.37 at the last minute, has encountered this fact of life the hard way: he submitted a driver which did not even come close to compiling inside the 2.6.37 V4L2 tree. Things have changed considerably there. This article will look at one of those changes with an eye toward the kind of design decisions that are being made in that part of the kernel.

The removal of the big kernel lock (BKL) has been documented here over the years. One of the biggest holdouts at this point is the V4L2 subsystem; almost everything that happens in a V4L2 driver is the result of an ioctl() call, and those calls have always been protected by the BKL. Removing BKL protection means auditing the drivers - and there are a lot of them - and, in many cases, providing a replacement locking scheme. It seems that a lot of V4L2 drivers - especially the older ones - do not exhibit the sort of attention to locking that one would expect from code submitted today.

The approach to this problem chosen by the V4L2 developers has proved to be mildly controversial within the group: they have tried to make it possible for driver authors to continue to avoid paying attention to locking. To that end, the video_device structure has gained a new lock field; it is a pointer to a mutex. If that field is non-null, the V4L2 core will acquire the mutex before calling any of the vast number of driver callbacks. So all driver operations are inherently serialized and driver authors need not worry about things. At least, they need not worry in the absence of other types of concurrency - like interrupts.
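The mechanism can be sketched with a toy model (hypothetical Python names; the real code is C and the lock pointer lives in struct video_device): if the driver supplies a mutex, the core acquires it around every callback; if not, the callbacks run with whatever locking the driver provides itself.

```python
import threading

class VideoDevice:
    """Toy model of the V4L2 core's optional serialization."""
    def __init__(self, ops, lock=None):
        self.ops = ops      # driver callbacks, keyed by ioctl name
        self.lock = lock    # None: the driver does its own locking

    def ioctl(self, name, *args):
        if self.lock is None:
            return self.ops[name](*args)
        with self.lock:     # core holds the mutex across the callback
            return self.ops[name](*args)

# A driver that opts in to core locking by supplying a mutex:
dev = VideoDevice({'querycap': lambda: 'my-driver'}, lock=threading.Lock())
print(dev.ioctl('querycap'))   # prints: my-driver
```

The driver author's choice is thus reduced to whether or not to hand the core a lock; everything else is unchanged.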

Hans Verkuil, the developer behind many recent V4L2 improvements, clearly feels that it's better to handle the locking centrally:

If he wants to punish himself and do all the locking manually (and prove that it is correct), then by all means, do so. If you want to use the core locking support and so simplify your driver and allow your brain to concentrate on getting the hardware to work, rather than trying to get the locking right, then that's fine as well. As a code reviewer I'd definitely prefer the latter approach as it makes my life much easier.

On the other side, developers like Laurent Pinchart argue that trying to insulate developers from locking is not the right approach:

Developers must not get told to be stupid and don't care about locks just because other developers got it wrong in the past. If people don't get locking right we need to educate them, not encourage them to understand even less of it.

Central locking at the V4L2 level leads to some interesting problems as well. The V4L2 user-space streaming API offers a pair of ioctl() calls for the management of frame buffers: VIDIOC_DQBUF to obtain a buffer from the driver, and VIDIOC_QBUF to give a buffer back. If there are no buffers available at the time of the call, VIDIOC_DQBUF will normally block until a buffer becomes available. When this call is protected by the BKL, blocking will automatically release the lock and enable other V4L2 operations to continue. That behavior is important: one of those other operations might be a VIDIOC_QBUF call providing the buffer needed to allow the VIDIOC_DQBUF call to proceed; if VIDIOC_DQBUF fails to release the lock, things could deadlock.

Drivers which handle their own locking will naturally release locks before blocking in a situation like this. Either the driver author thinks of it at the outset, or the need is made clear by disgruntled users later on. If the driver author is not even aware that the lock exists, though, it's less likely that the lock will be released at a time like this. That could lead to surprises in drivers which do their own I/O buffer management. If, however, the driver uses videobuf, this problem will be handled transparently with some scary-looking code in videobuf_waiton():

    is_ext_locked = q->ext_lock && mutex_is_locked(q->ext_lock);

    /* Release vdev lock to prevent this wait from blocking outside access to
       the device. */
    if (is_ext_locked)
	mutex_unlock(q->ext_lock);
With enough due care, one assumes that it's possible to be sure that unlocking a mutex acquired elsewhere is a reasonable thing to do here. But one must hope that the driver author - who is not concerned with locking, after all - has left things in a consistent state before calling videobuf_waiton(). Otherwise those disgruntled users will eventually make a return.
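The lock-dropping dance that videobuf performs on the driver's behalf is, at heart, what a condition variable does: the wait atomically releases the lock so that the operation which will wake the waiter can get in. A toy Python model of the QBUF/DQBUF pair (hypothetical names, not the V4L2 API) makes the pattern concrete:

```python
import threading, collections

lock = threading.Lock()                 # stands in for the per-device mutex
buf_ready = threading.Condition(lock)
buffers = collections.deque()

def dqbuf(timeout=5):
    """Like VIDIOC_DQBUF: block until a buffer is queued.
    Condition.wait() drops the lock while sleeping, so qbuf() can run."""
    with buf_ready:
        while not buffers:
            if not buf_ready.wait(timeout):
                raise TimeoutError("no buffer queued")
        return buffers.popleft()

def qbuf(b):
    """Like VIDIOC_QBUF: give a buffer back and wake any waiter."""
    with buf_ready:
        buffers.append(b)
        buf_ready.notify()

t = threading.Thread(target=lambda: qbuf("frame0"))
t.start()
print(dqbuf())   # prints: frame0 - only because wait() released the lock
t.join()
```

If dqbuf() held the lock across its sleep, qbuf() could never run and the pair would deadlock, which is exactly the trap the BKL's sleep-time release used to paper over.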

Locking complexity in the kernel is growing, and that makes it harder for developers to come up to speed. Complex locking can be an especially difficult challenge for somebody writing this type of driver; these authors tend not to be full-time kernel developers. So the appeal of taking care of locking for them and letting them concentrate on getting their hardware to do reasonable things is clear, especially if it makes the code review process easier as well. Such efforts may ultimately be successful, but there can be no doubt that they will run into disagreement from those who feel that kernel developers should either understand what is going on or switch to Java development. Expect this sort of discussion to pop up in a number of contexts as core developers try to make it easier for others to write correct code.

Comments (2 posted)

A netlink-based user-space crypto API

By Jake Edge
October 20, 2010

User-space access to the kernel cryptography subsystem has reared its head several times of late. We looked at one proposal back in August that had a /dev/crypto interface patterned after similar functionality in OpenBSD. There is another related effort, known as the NCR API, and crypto API maintainer Herbert Xu has recently posted an RFC for yet another. But giving user space the ability to request that the kernel perform its computation-intensive crypto operations is not uncontroversial.

As noted back in August, some kernel hackers are skeptical that there would be any performance gains from moving user-space crypto into the kernel. But there are a number of systems, especially embedded systems, with dedicated cryptographic hardware. Allowing user space to access that hardware will likely result in performance gains; indeed, 50-100x improvements have been reported.

Another problem with both the /dev/crypto and NCR APIs (collectively known as the cryptodev-linux modules) is the addition of an enormous amount of code to the kernel to support crypto algorithms beyond those that are already available. Those two modules have adapted user-space libraries for crypto and multi-precision integers and incorporated them into the kernel. That code is needed to support some government crypto standards and certifications which require a separation between user space and crypto processing. So the cryptodev-linux modules are trying to solve two separate (or potentially separate) problems: user-space access to crypto hardware acceleration and security standards compliance.

When Xu first put out an RFC on his idea for the API (without any accompanying code) back in September, Christoph Hellwig had a rather strongly worded reaction:

doing crypto in kernel for userspace consumers [is] simply insane. It's computational intensive code which has no business in kernel space unless absolutely required (e.g. for kernel consumers). In addition to that adding the context switch overhead and address space transitions is god [awful] too.

Xu more or less agrees with Hellwig, but sees his API as a way to provide access to hardware crypto devices. Because Xu's API is based on netlink sockets (as opposed to the ioctl()-based and brand-new APIs that the cryptodev-linux modules introduce), he is clearly hoping that it will provide a way forward without requiring such large changes to the kernel:

FWIW I don't care about user-space using kernel software crypto at all. It's the security people that do.

The purpose of the user-space API is to export the hardware crypto devices to user-space. This means PCI devices mostly, as things like aesni-intel [Intel AES instructions] can already be used without kernel help.

Now as a side-effect if this means that we can shut the security people up about adding another interface then all the better. But I will certainly not go out of the way to add more crap to the kernel for that purpose.

The netlink-based interface uses a new AF_ALG address family that gets passed to the initial socket() call. There is also a new struct sockaddr_alg that contains information about what type of algorithm (e.g. "hash" or "skcipher") is to be used as well as the specific algorithm name (e.g. "sha1" or "cbc(aes)") that is being requested. That structure is then passed in the bind() call on the socket.

For things like hashing, where there is little or no additional information needed, an accept() is done on the socket, which yields an operation file descriptor. The data to be hashed is written to that descriptor and, when there is no more data to be hashed, the appropriate number of bytes (20 for sha1) are then read from the descriptor.

It is a bit more complicated for ciphers. Before accepting the connection on the socket, a key needs to be established for a symmetric key cipher. That is done with a setsockopt() call using the new SOL_ALG level and ALG_SET_KEY option name and passing the key data and its length. But there are additional parameters that need to be set up for ciphers, and those are done using sendmsg().

A cipher will need to know which direction it is operating in (i.e. encrypting or decrypting) and may need an initialization vector. Those are specified with the ALG_SET_OP and ALG_SET_IV messages. Once the accept() has been done, those messages are sent to the operational descriptor and the cipher is ready for use. Data can be sent as messages or written to the operational descriptor, and the resulting data can then be read from that descriptor.

There is an additional wrinkle for the "authenticated encryption with associated data" (AEAD) block cipher mode, which can embed authentication information (i.e. a message authentication code, or MAC) in the ciphertext stream. Because of that, AEAD requires two data streams, one containing the data itself and another with the associated authentication data (the MAC). This is handled in Xu's API by doing two accept() calls, the first for the operational descriptor, and the second for the associated data. If the cipher is operating in encryption mode, both descriptors will be written to, while the encrypted data is read from the operational descriptor. For decryption, the ciphertext is written to the operational descriptor, while the plaintext and authentication data are read from the two descriptors.

There hasn't been much discussion, yet, of the actual code posting, but Xu's September posting elicited a number of complaints about performance, most from proponents of the cryptodev-linux modules. But it would seem that there is some real resistance to adding completely new APIs (as NCR does) or to adding a complicated ioctl()-based API (as /dev/crypto does). Now there are three competing solutions available, but it isn't at all clear that any interface to the kernel crypto subsystem will be acceptable to the kernel community at large. We will have to wait to see how it all plays out.

Comments (21 posted)

trace-cmd: A front-end for Ftrace

October 20, 2010

This article was contributed by Steven Rostedt

Previous LWN articles have explained the basic way to use Ftrace directly through the debugfs filesystem (part 1 and part 2). While the debugfs interface is rather simple, it can also be awkward to work with. It is especially convenient, though, for embedded platforms where it may be difficult to build and install special user tools on the device. On the desktop, it may be more convenient to have a command-line tool that works with Ftrace instead of echoing various commands into strange files and reading the result from another file. This tool does exist, and it is called trace-cmd.

trace-cmd is a user-space front-end command-line tool for Ftrace. You can download it from its git repository, hosted on kernel.org. Some distributions ship it as a package, and some that currently do not will soon. Full man pages are included; they are installed with make install_doc. This article will not go over the information that is already in the man pages, but instead will explain a little about how trace-cmd works and how to use it.

How it works

A simple use case of trace-cmd is to record a trace and then report it.

    # trace-cmd record -e ext4 ls
    # trace-cmd report
    version = 6
    CPU 1 is empty
           trace-cmd-7374  [000]  1062.484227: ext4_request_inode:   \
	   		   	  dev 253:2 dir 40801 mode 33188
           trace-cmd-7374  [000]  1062.484309: ext4_allocate_inode:  \
	   		   	  dev 253:2 ino 10454 dir 40801 mode 33188

The above example enables the ext4 tracepoints for Ftrace, runs the ls command and records the Ftrace data into a file named trace.dat. The report command reads the trace.dat file and outputs the tracing data to standard output. Some metadata is also shown before the trace output is displayed: the version of the file, any empty CPU buffers, and the number of CPUs that were recorded.

By default, the record and report commands write to and read from the trace.dat file. You can use the -o or -i options to pick a different file to write to or read from, respectively, but this article will use the default name when referencing the data file created by trace-cmd.

When recording a trace, trace-cmd will fork off a process for each CPU on the system. Each of these processes will open the file in debugfs that represents the CPU the process is dedicated to record from. The process recording CPU0 will open /sys/kernel/debug/tracing/per_cpu/cpu0/trace_pipe_raw, the process recording CPU1 will open a similar file in the cpu1 directory, and so on. The trace_pipe_raw file is a mapping directly to the Ftrace internal buffer for each CPU. Each of these CPU processes will read these files using splice to record into a temporary file during the trace. At the end of the record, the main process will concatenate the temporary files into a single trace.dat file.

There's no need to manually mount the debugfs filesystem before using the tool as trace-cmd will look to see if and where it is mounted. If debugfs is not mounted, it will automatically mount it at /sys/kernel/debug.

Recording a trace

As noted above, trace-cmd forks off a process for each CPU, dedicated to recording from that CPU; in order to prevent scheduling interference, these processes are not pinned to the CPUs they record. Pinning a recording process to its CPU may result in better cache usage, so a future version of trace-cmd may add an option to do that. The Ftrace ring buffers are allocated one per CPU, and each recording process reads from a particular CPU's ring buffer. It is important to mention this because these processes can show up in the trace.

A common request is to have trace-cmd ignore events that are caused by trace-cmd itself. But it is not wise to ignore these events, because they show where the tracer may have had an impact on what it is tracing. These events can be filtered out after the trace, but they are good to keep around in the trace.dat file: if some delay was caused by the tracing itself, these events may show it.

As trace-cmd is a front end to Ftrace, the arguments of record reflect some of the features of Ftrace. The -e option enables an event. The argument following the -e can be an event name, event subsystem name, or the special name all. The all name will make trace-cmd enable all events that the system supports. If a subsystem name is specified, then all events under that subsystem will be enabled during the trace. For example, specifying sched will enable all the events within the sched subsystem. To enable a single event, the event name can be used by itself, or the subsystem:event format can be used. If the subsystem name is left off, then all events with the given name will be enabled. Currently this would not be an issue because, as of this writing, all events have unique names. If more than one event or subsystem is to be traced, then multiple -e options may be specified.

Ftrace also has special plugin tracers that do not simply trace specific events. These tracers include the function, function graph, and latency tracers. Through the debugfs tracing directory, these plugins are enabled by echoing the type of tracer into the current_tracer file. With trace-cmd record, they are enabled with the -p option. Using the tracer plugin name as the argument for -p enables that plugin. You can still specify one or more events with a plugin, but you may only specify a single plugin, or no plugin at all.

When the record is finished, trace-cmd examines the kernel buffers and outputs some statistics, which may be a little confusing. Here's an example:

    Kernel buffer statistics:
      Note: "entries" are the entries left in the kernel ring buffer and are not
            recorded in the trace data. They should all be zero.

    CPU: 0
    entries: 0
    overrun: 0
    commit overrun: 0

    CPU: 1

As the output explains, the entries field is not the number of entries that were traced, but the number of entries left in the kernel buffer. If entries were dropped because trace-cmd could not read the buffer faster than it was being written to, and the writer overflowed the buffer, then either the overrun or commit overrun values would be something other than zero. The overrun value is the number of entries that were dropped due to the buffer filling up, and the writer deleting the older entries.
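The overrun accounting can be modeled in a few lines (a sketch only; Ftrace's real ring buffer is lockless, per-CPU, and far more subtle): when the buffer is full, the writer discards the oldest entry and bumps an overrun counter, so the reader can tell how much was lost.

```python
from collections import deque

class RingBuffer:
    """Toy per-CPU ring buffer: when full, the writer discards the
    oldest entry and increments `overrun`, like Ftrace's default mode."""
    def __init__(self, size):
        self.size = size
        self.entries = deque()
        self.overrun = 0

    def write(self, event):
        if len(self.entries) == self.size:
            self.entries.popleft()   # writer deletes the oldest entry
            self.overrun += 1
        self.entries.append(event)

    def read_all(self):              # what trace-cmd splices out
        out = list(self.entries)
        self.entries.clear()
        return out

rb = RingBuffer(4)
for i in range(10):                  # write 10 events into 4 slots
    rb.write(i)
print(rb.overrun, rb.read_all())     # prints: 6 [6, 7, 8, 9]
```

A reader that keeps up (as trace-cmd tries to) drains the buffer before it wraps, keeping the overrun count at zero.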

The commit overrun is much less likely to occur. Writing to the buffer is a three-step process: the writer first reserves space in the ring buffer, then writes to it, then commits the change. Writing to the buffer does not disable interrupts. If a write is preempted by an interrupt which itself does so much tracing that it fills the buffer back up to the space reserved by the preempted write, then events must be dropped: the interrupt cannot touch the reserved space until it is committed. These dropped events are the commit overrun. This is highly unlikely to happen unless the buffer is quite small.

Filtering while recording

As explained in "Secrets of the Ftrace function tracer", Ftrace allows you to filter what functions will be traced by the function tracer. Also, you can graph a single function, or a select set of functions, with the function graph tracer. These features are not lost when using trace-cmd.

    # trace-cmd record -p function -l 'sched_*' -n 'sched_slice'

(Note that the above does not specify a command to execute, so trace-cmd will record until Ctrl-C is hit.)

The -l option is the same as echoing its argument into set_ftrace_filter, and the -n option is the same as echoing its argument into set_ftrace_notrace. You can have more than one -l or -n option on the command line. trace-cmd will simply write all the arguments into the appropriate file. Note, those options are only useful with the function and function_graph plugins. The -g option (not shown) will pass its argument into the set_graph_function file.

Here is a nice trick to see how long interrupts take in the kernel:

    # trace-cmd record -p function_graph -l do_IRQ -e irq_handler_entry sleep 10
    # trace-cmd report
    version = 6
            Xorg-4262  [001] 212767.154882: funcgraph_entry:                   |  do_IRQ() {
            Xorg-4262  [001] 212767.154887: irq_handler_entry:    irq=21 name=sata_nv
            Xorg-4262  [001] 212767.154952: funcgraph_exit:       + 71.706 us  |  }
            Xorg-4262  [001] 212767.156948: funcgraph_entry:                   |  do_IRQ() {
            Xorg-4262  [001] 212767.156952: irq_handler_entry:    irq=22 name=ehci_hcd:usb1
            Xorg-4262  [001] 212767.156955: irq_handler_entry:    irq=22 name=NVidia CK804
            Xorg-4262  [001] 212767.156985: funcgraph_exit:       + 37.795 us  |  }

The events can also be filtered. To know what fields can be used for filtering a specific event, look in the format file from /sys/kernel/debug/tracing/events/<subsystem>/<event>/format, or run

    # trace-cmd report --events | less
on a trace.dat file that was created by the local system. The --events argument will list the event formats of all events that were available in the system that created the tracing file:

    $ trace-cmd report --events
    name: kmalloc_node
    ID: 338
        field:unsigned short common_type;       offset:0;       size:2; signed:0;
        field:unsigned char common_flags;       offset:2;       size:1; signed:0;
        field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
        field:int common_pid;   offset:4;       size:4; signed:1;
        field:int common_lock_depth;    offset:8;       size:4; signed:1;

        field:unsigned long call_site;  offset:16;      size:8; signed:0;
        field:const void * ptr; offset:24;      size:8; signed:0;
        field:size_t bytes_req; offset:32;      size:8; signed:0;

Using the kmalloc_node event, we can filter on all requests that were greater than 1000 bytes:

    # trace-cmd record -e kmalloc_node -f 'bytes_req > 1000'

The -f option specifies a filter for the event (specified in a -e option) preceding it.

Reading the trace

As the initial example showed, to read the trace simply run the report command. By default, it will read the trace.dat file; the -i option specifies a different file to read, or the input file may simply be given as the last argument.

    $ trace-cmd report -i mytrace.dat
    $ trace-cmd report mytrace.dat

The above two examples give the same result. The report command is not a privileged operation and only requires read permission on the data file it is reading.

    $ trace-cmd report
    version = 6
      trace-cmd-8412  [000] 13140.422056: sched_switch:         8412:120:S ==> 0:120: swapper
        <idle>-0     [000] 13140.422068: power_start:          type=1 state=2
        <idle>-0     [000] 13140.422174: irq_handler_entry:    irq=0 handler=timer
        <idle>-0     [000] 13140.422180: irq_handler_exit:     irq=0 return=handled

The output is similar to what you would see in /sys/kernel/debug/tracing/trace.

Having the trace data in a file gives some advantages over reading from a debugfs file. We can now easily filter what events we want to see, or pick a specific CPU to output.

You can do extensive filtering on events and what CPUs you want to focus on:

    $ trace-cmd report --cpu 0 -F 'sched_wakeup: success == 1'
    version = 6
          ls-8414  [000] 13140.423106: sched_wakeup: 8414:?:? + 8412:120:? trace-cmd Success
          ls-8414  [000] 13140.424179: sched_wakeup: 8414:?:? + 1155:120:? kondemand/0 Success
          ls-8414  [000] 13140.426925: sched_wakeup: 8414:?:? + 704:120:? phy0 Success
          ls-8414  [000] 13140.431172: sched_wakeup: 8414:?:? + 9:120:? events/0 Success
      events/0-9   [000] 13140.431182: sched_wakeup: 9:?:? + 11734:120:? sshd Success
          ls-8414  [000] 13140.434173: sched_wakeup: 8414:?:? + 1155:120:? kondemand/0 Success

The --cpu 0 option limits the output to events that occurred on CPU 0. The -F option limits the output further, to sched_wakeup events whose success field equals 1. For more information about filtering, consult the trace-cmd-report(1) man page.

Tracing over the network

There may be situations where you want to trace an embedded device or some machine with very little disk space. Perhaps another machine has lots of disk space to hold the trace, or maybe you are tracing the filesystem itself and want minimal interference with that code. This is where tracing over the network comes in handy.

To set up a trace server, simply run something like the following command:

    $ trace-cmd listen -p 12345 -D -d /images/tracing/ -l /images/tracing/logfile

The only required option in the above is the -p option, which tells trace-cmd what port to listen on. The -D puts trace-cmd into daemon mode, while the -d /images/tracing/ tells trace-cmd to output the trace files from the connections it receives into the /images/tracing/ directory. Obviously, any directory you have write permission for can be used here. The -l /images/tracing/logfile tells trace-cmd to not write messages to standard output, but to the /images/tracing/logfile file instead. The listen command is not privileged, and can be run by any user.

On the embedded device (or whatever client is used), instead of specifying an output file to trace-cmd record, use the -N option followed by a host:port pair.

    # trace-cmd record -N gandalf:12345 -e sched_switch -e sched_wakeup -e irq hackbench 50

Back on the host gandalf, a file named "trace.<client-host>:<client-port>.dat" is created in the /images/tracing/ directory.

    $ ls /images/tracing/
    logfile  trace.frodo:35287.dat

    $ cat /images/tracing/logfile
    [29078]Connected with frodo:35287

    $ trace-cmd report /images/tracing/trace.frodo\:35287.dat
    version = 6
        <...>-17215 [000] 19858.840695: sched_switch:      17215:120:S ==> 0:120: swapper
       <idle>-0     [000] 19858.840934: irq_handler_entry: irq=30 handler=iwl3945
       <idle>-0     [000] 19858.840959: irq_handler_exit:  irq=30 return=handled
       <idle>-0     [000] 19858.840960: softirq_entry:     softirq=6 action=TASKLET
       <idle>-0     [000] 19858.841005: softirq_exit:      softirq=6 action=TASKLET

trace-cmd is versatile enough to handle heterogeneous systems. All the information needed to create and read the trace.dat file is passed from the client to the host; the host could be a 64-bit x86 machine and the client a 32-bit PowerPC, and the above would not change. A big-endian machine can read a little-endian file and vice versa. This cross-system compatibility is not limited to network tracing: if a trace is performed on a big-endian 32-bit system, the resulting file can still be read on a little-endian 64-bit system.

For the lazy Ftrace user

If using the internal kernel Ftrace buffer is sufficient and there is no need to record the trace, trace-cmd can still be useful. Pretty much all of the record options can be used with the trace-cmd start command. start does not create a trace.dat file, but simply starts Ftrace. Similarly, the stop command is just a convenient way to do:

    $ echo 0 > /sys/kernel/debug/tracing/tracing_on
For example:

    # trace-cmd start -p function_graph -g ip_rcv

    # sleep 10

    # trace-cmd stop

    # cat /sys/kernel/debug/tracing/trace
    # tracer: function_graph
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     1)               |  ip_rcv() {
     1)               |    T.769() {
     1)               |      nf_hook_slow() {
     1)   0.497 us    |        add_preempt_count();
     1)               |        nf_iterate() {
     1)   0.458 us    |          ip_sabotage_in();

If there is a case where the trace needs to be converted into a trace.dat file, the extract command can be used. After the above trace was done:

    # trace-cmd extract -o kernel-buf.dat

    # trace-cmd report kernel-buf.dat
    version = 6
       <idle>-0  [001] 214146.661193: funcgraph_entry:          |  ip_rcv() {
       <idle>-0  [001] 214146.661196: funcgraph_entry:          |    T.769() {
       <idle>-0  [001] 214146.661197: funcgraph_entry:          |      nf_hook_slow() {
       <idle>-0  [001] 214146.661197: funcgraph_entry: 0.497 us |        add_preempt_count();
       <idle>-0  [001] 214146.661198: funcgraph_entry:          |        nf_iterate() {
       <idle>-0  [001] 214146.661199: funcgraph_entry: 0.458 us |          ip_sabotage_in();

To disable all tracing, which will ensure that no overhead is left from using the function tracers or events, the reset command can be used. It will disable all of Ftrace and bring the system back to full performance.

    # trace-cmd reset

What's next?

This article explains some of the use cases for trace-cmd. There is still more that it can do, but there was not space here to cover it all. This article and the trace-cmd man pages should be enough to get you on your way to using trace-cmd in a productive manner.

So what's next? This article has shown how trace-cmd serves as a front-end tool for Ftrace; the next article will present kernelshark, a graphical front end to trace-cmd. Stay tuned.

Comments (3 posted)


Page editor: Jonathan Corbet


Not quite precious: openSUSE releases Smeegol

October 18, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

It's been a rocky road for the openSUSE Goblin team. The project began as an attempt, led by Andrew Wafaa, to create an openSUSE netbook release based on openSUSE and Moblin (thus "Goblin"), but the shift from Moblin to MeeGo took the project back a few steps. Now under the name Smeegol, the openSUSE netbook release is finally at 1.0 — but how does it fare? Smeegol 1.0 is an interesting release, but it's still rough around the edges.

It is not surprising that Smeegol has some rough edges. MeeGo itself — despite the 1.0 label — is still a work in progress, as LWN's resident Grumpy Editor discovered back in June following the 1.0 Core release.

There's also the fact that the MeeGo project doesn't seem to be going out of its way to work with distributions to produce derivatives. According to a recent post from Fedora Project Leader Jared Smith, the project hasn't released its compliance specifications to allow projects to use the MeeGo trademark. In an email exchange, Wafaa said that "it appears that things aren't as open now that it's MeeGo... there are certainly more hurdles in the way - the rigid trademark requirements are one example."

It does appear that MeeGo has at least a draft compliance document for trademark usage. It's rather detailed, and forbids the MeeGo Core packages from being repackaged in the creation of a compliant implementation. At this point, it may be moot anyway — at least for the Fedora 14 release. Peter Robinson threw in the towel on the "mobility spin" based on MeeGo, citing "contention upstream" as well as a lack of time to finish it for the Fedora 14 release.

Wafaa also says that MeeGo "doesn't appear to be too happy in a helping way with other distros re-spinning." When working with Moblin for the Goblin netbook project, Wafaa said that Intel seemed happy with projects using the user experience (UX) on top of another distro. Indeed, Intel took pains to assure the community that Moblin was not meant to be a distribution, but a reference platform. In one of its FAQs for Moblin developers, Intel touts the Moblin derivatives and says "Moblin is an open source project, not a product."

MeeGo, on the other hand, seems a bit less eager to encourage offshoots and less responsive to requests. It wasn't until Smeegol was released that Ibrahim Haddad, director of technical alliances for the Linux Foundation, responded to the numerous requests about trademark usage. More discouragingly, Haddad says that the name Smeegol "is not in the benefit of MeeGo project" and requests that the project choose a new name.

In a reaction to the trademark rejection, Wafaa posted about the difficulties of working with MeeGo. According to Wafaa, MeeGo has "closed the door" on contributions and appears to want "to build a community from OEMs and Partners" to the exclusion of the wider community.

The switch from Moblin to MeeGo also impacted Goblin/Smeegol development. Novell's engineers moved from a SUSE Linux Enterprise Desktop (SLED) base to working with the MeeGo upstream. When Novell was working on Moblin, Wafaa says, he could reuse much of that team's work in packaging Goblin, but the new work doesn't carry over to openSUSE/Goblin as easily, since it takes place in the MeeGo build system and is not SLED-based. MeeGo uses RPM for its package system, but the packages are built exclusively for the MeeGo distribution and aren't compatible with Fedora or openSUSE.

The Smeegol Experience

So how does Smeegol stack up? Smeegol is somewhat discriminating in what hardware it will run on. The post announcing Smeegol noted that it does not yet play well in virtualized environments, and attempting to install the Smeegol 1.0 release in VMware Workstation predictably failed. The installer image also choked when trying to boot Smeegol on an Acer netbook with the dreaded Intel Poulsbo chipset.

Via the "one-click" installer for openSUSE 11.3, Smeegol installed almost without a hitch on two systems already loaded with openSUSE 11.3: a regular laptop with an Nvidia card, and an Atom-based Asus 1000HA Eee PC. Almost, in that the ksmolt package had to be removed from both systems to log into the MeeGo UX successfully. The Nvidia laptop also required installing the Nvidia driver, as Smeegol doesn't seem to like the Nouveau driver installed by default.


Smeegol inherits the MeeGo interface, with all that entails. That is to say, it's an interesting attempt to make the most of the netbook screen size; it works well in some ways, but falls down in others. The interface is particularly frustrating when running several distinct applications and/or applications that are not customized for the MeeGo interface. Rather than desktops, applications run in "Zones", one for each window — which means a lot of switching back and forth: you see one application at a time, and in some cases only one window of an application at a time.

A primary example would be GIMP, which opens three windows by default — the primary window for the image being edited, and two toolbars. Of course, it's unfair to lay all the blame on MeeGo or Smeegol here — GIMP is not an application that shines in a 1024x600 interface no matter what desktop environment it's running in. At least GIMP opens all of its windows in a single Zone, so it doesn't require switching back and forth. Empathy, the default IM client, opens the contact list in one Zone and chats in another, which is very confusing. In short, many applications are slightly out of place when run in the MeeGo UX.


For users who are sticking to "netbook" applications, this isn't much of a problem. Smeegol's default browser (Chromium) blends into the environment, and there's a customized Banshee interface for MeeGo/Smeegol, as well as a custom Evolution interface, and so on. If you're using Smeegol on a netbook that will be doing netbook-type tasks (light web browsing, mail, social media, etc.) then it's very pleasant to use. This is not a power-user's interface, though.

Since Smeegol inherits so much from MeeGo, is there an advantage in running Smeegol over MeeGo? If users care about a wider range of packages than MeeGo offers, or about running on a wider range of hardware, the answer is yes. MeeGo doesn't offer many applications that users might want, like GIMP, AbiWord, etc. Smeegol may also be preferable for some users because of what it doesn't offer: it uses the more mature NetworkManager instead of ConnMan, MeeGo's default connection manager.

Smeegol is offered as a 32-bit or 64-bit release, and users can install any packages from the openSUSE repositories. Wafaa also says that Smeegol should have a wider range of hardware support, and does not require CPUs with SSE3 support — so older Eee PCs (for example) should work with Smeegol whereas MeeGo does not.

Smeegol is also, unlike MeeGo, a multiuser system. So if you share a netbook with a friend or family member, you can each have separate accounts. This is a good thing, since much of the system is meant to support very personal services like Facebook, Twitter, email, and so on.

Unfortunately, Smeegol inherits one of MeeGo's less charming traits: the complete and befuddling lack of a shutdown or logout button. There seems to be a debate within MeeGo circles over whether a netbook needs one, since the hardware will (in theory) be fully supported and users will simply suspend the system or power off using the hardware power button. Right now this is a bit glitchy on Smeegol, which powers off almost instantly if you press the power button. And if you wish to let someone else log in, you must kill X with the Ctrl-Alt-Backspace combo.

The Smeegol Future

Now that 1.0 has been released, what's next? Wafaa says that work has already begun on Smeegol 1.1, and he'd like to develop a tablet version based on MeeGo 1.1 as well. openSUSE's Andreas Jaeger says that it's not yet certain whether Smeegol will be an official part of the 11.4 release, but he would like to see that happen.

Whether it makes it in probably depends on the success of Smeegol 1.0, and whether that success serves as a catalyst to recruit more contributors. Wafaa noted that Smeegol is not a Novell-initiated project, though some of the Novell team working with MeeGo have contributed as individuals. Much, though not all, of the work is being done by Wafaa in his spare time. More contributors would no doubt be very helpful in ensuring that Smeegol becomes a long-term offering and an official part of openSUSE.

Smeegol is worth a look for openSUSE users who would like a customized netbook release, and for users who want the MeeGo UX without being limited to the MeeGo ecosystem of software. Smeegol has some warts, but most of those are related to the MeeGo heritage and are bound to improve over time as MeeGo matures. One hopes that the naming situation can be resolved amicably between openSUSE and the Linux Foundation, and that MeeGo will improve its coordination with downstream projects.

Comments (3 posted)

Brief items

Distribution quotes of the week

> Why are we convinced throwing away bugs is a good idea?

Thank you for helping to make Ubuntu better!

Unfortunately, you've not provided enough information for us to respond to the issue you've raised. We are marking your email Incomplete for now; it will expire in 30 days if we do not hear from you by then.

-- Bryce Harrington responds to Scott Kitterman

* if we are not granted the "right" (whatever that might mean) to use the "MeeGo" trademark to refer to the "software that comes out of the MeeGo project", what are we supposed to use instead? (Please don't make us use things like "the name everyone knows but that I can't write").
-- Didier Raboud

Comments (none posted)

Smeegol encounters a ring of power

Smeegol is an implementation of the MeeGo user experience on top of an openSUSE base; the 1.0 release was announced on October 6. On the 14th, the project was told that it cannot use the "Smeegol" name: "It is not in the benefit of MeeGo project to use 'Smeegol'. We therefore can not approve such usage of MeeGo mark in 'Smeegol'. We understand that you've already announced it and we will be happy to work with you to come up with a different name (for the good of the MeeGo project)." Some Smeegol developers are not pleased by this development and have announced their intent to "push back"; meanwhile, openSUSE community manager Jos Poortvliet has proposed renaming the distribution to "DarkRider". What the final resolution will be is unclear.

Comments (21 posted)

MeeGo 1.0 Update for Netbooks

The fourth update for the MeeGo v1.0 Core Software Platform & Netbook User Experience project release is available. This update fixes "many important security issues" and other bugs.

Full Story (comments: none)

Version 2.1 of the openSUSE build service

The openSUSE build service 2.1 release is out, with a number of new features. "Users of the Build Service may now access a new concept called 'source services'. Source services allow, for example, the automatic checkout of source code for a package from a remote server via Git or Subversion, building a tar ball from these checked out sources, and using them for building a package. It also enables direct download of tar balls from remote sites. This allows packagers to work with external sources without downloading them to their own workstations, and makes it easy to rebuild packages after upstream changes with a single click."

Comments (3 posted)

Red Hat Enterprise Linux 6 Release Candidate Available to Partners

Red Hat has announced the limited availability of Red Hat Enterprise Linux 6 Release Candidate. "The Release Candidate is available to a small set of strategic testing partners, including our OEM partners, and Red Hat's independent software vendor (ISV) partners. We encourage all of our ISV partners to enable our joint customers to experience the significant enhancements in performance, reliability and security offered in this version of what is intended to become our new flagship platform by accelerating testing and final certification of ISV offerings on the Release Candidate. We expect no further changes to the ABI or API that might otherwise affect application compatibility as we finalize Red Hat Enterprise Linux 6 and make it generally available later this year."

Comments (12 posted)

Distribution News

Debian GNU/Linux

Debian welcomes non-packaging contributors

It's official: the Debian project's general resolution, which states that all contributors - not just those making packages - are welcome as project members, has passed. There were 285 votes in favor and 14 against, so this was not an especially controversial decision.

Full Story (comments: 8)


Fedora Board meeting recap, 2010-10-18

Click below for a recap of the October 18 meeting of the Fedora Board. Topics include a review of blocker bugs and F15 naming.

Full Story (comments: none)

SUSE Linux and openSUSE

Advance discontinuation notice for openSUSE 11.1

SUSE Security has announced that the SUSE Security Team will stop releasing updates for openSUSE 11.1 after December 31, 2010. "As a consequence, the openSUSE 11.1 distribution directory on our server will be removed from /distribution/11.1/ to free space on our mirror sites. The 11.1 directory in the update tree /update/11.1 will follow, as soon as all updates have been published. Also the openSUSE buildservice repositories building openSUSE 11.1 will be removed."

Full Story (comments: none)

How to contribute: openSUSE Documentation

Juergen Weigert has some pointers to finding and contributing to openSUSE documentation. "Currently the openSUSE documentation (listed below under 3.) is maintained mostly within Novell by the SUSE Documentation Team. The discussion on opensuse-marketing has shown that people are interested in contributing to this documentation--a fact that we really welcome!"

Full Story (comments: none)

Ubuntu family

Natty Narwhal open for development

Now that Ubuntu 10.10 has been released, it's time for the cycle to begin again, this time with Natty Narwhal (11.04).

Full Story (comments: none)

Other distributions

OpenBricks Embedded Linux Framework - Day 1

The GeeXboX team has announced the "Day 1" of the OpenBricks Embedded Linux Framework project. "OpenBricks is a complete OpenSource and non-profit project which aims at bringing a coherent Linux distribution to run on as many embedded devices and architectures as possible. As much as possible, it tries to rely on standardized technologies, protocols and FOSS as to provide the most code re-usability. It can be used as a framework basis to build your very specific Linux distributions, corresponding to your exact and specific needs, whichever you're trying to build a Set-Top-Box, a touchscreen based multimedia tablet, a NAS, a router or whatsoever. Porting your board to Linux and adding your specific programs never has been so easy and one can easily create its own distribution flavour."

Comments (none posted)

Newsletters and articles of interest


Fedora 14 Spotlight Feature: Keeping Secure with OpenSCAP

Red Hat News has started a series of blog posts leading up to the Fedora 14 "Laughlin" release highlighting some of the new features. The first post looks at the inclusion of OpenSCAP. "SCAP is a line of standards managed by the National Institute of Standards and Technology (NIST). It provides a standardized approach to maintaining the security of systems, such as automatically verifying the presence of patches, checking system security configuration settings, and examining systems for signs of compromise. With OpenSCAP, the open source community is leveraging many different components from the security standards ecosystem."

Comments (none posted)

Spotlight Feature: Get Mobile with Fedora 14

Red Hat News continues its series with a look at Fedora for netbooks and other mobile devices. "The release of Fedora 14 is just around the corner, and one of the areas of active development for this release is mobile devices, such as netbooks. Fedora community members have integrated several different mobile development platforms for use with Fedora, including Sugar on a Stick and software from the MeeGo project. Fedora members also work to bring Fedora to new hardware platforms."

Comments (none posted)

Page editor: Rebecca Sobol


ODF Plugfest: Making office tools interoperable

October 20, 2010

This article was contributed by Koen Vervloesem

On the 14th and 15th of October, the federal, regional, and community governments of Belgium organized an ODF Plugfest in Brussels. After plugfests in the Netherlands, Italy, and Spain, this was the fourth vendor-neutral event where all the lead developers and architects of Open Document Format (ODF) implementations — open source and proprietary — can meet to discuss interoperability issues and present what's available on the market.

The first day of the event was reserved for the ODF implementers: Abisource, DiaLOGIKa, Google, IBM, Itaapy, KO GmbH, Microsoft, Novell, and Sun/Oracle. LibreOffice developers weren't able to make it for the first day, but for now the new fork doesn't differ very much from its upstream with respect to ODF support. The various vendors ran through interoperability testing scenarios focused on specific parts of the ODF standard. The scenarios can be consulted on the wiki of the OpenDoc Society (click on "Scenarios" and then on "20100415") and in the program. For example, the ODF implementers tested interoperability of the YEARFRAC spreadsheet function — an important function that is involved in many financial calculations.
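To see why YEARFRAC is a natural interoperability test: its result depends on a day-count basis argument, and office suites have historically disagreed on the fine points of those conventions. As a rough illustration in Python of just one convention, the 30/360 US basis (the real function handles several bases and more corner cases than shown here):

```python
from datetime import date

def yearfrac_30_360_us(start: date, end: date) -> float:
    """Fraction of a year between two dates under the 30/360 US day-count
    convention, where every month is treated as having 30 days.
    Illustration only: the full convention has additional end-of-February
    corner cases that are omitted here."""
    d1, d2 = start.day, end.day
    # Standard 30/360 day-of-month adjustments
    if d1 == 31:
        d1 = 30
    if d2 == 31 and d1 == 30:
        d2 = 30
    days = ((end.year - start.year) * 360
            + (end.month - start.month) * 30
            + (d2 - d1))
    return days / 360.0

# January 1 to July 1 is exactly half a 360-day year
print(yearfrac_30_360_us(date(2010, 1, 1), date(2010, 7, 1)))  # 0.5
```

Two implementations that disagree on, say, the day-31 adjustment will silently produce different interest calculations from the same spreadsheet, which is exactly the kind of divergence a plugfest scenario is designed to surface.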

Another testing scenario concerned change tracking with tables. This is a notoriously difficult topic, and the different vendors had a lengthy discussion about a proposal from DeltaXML to improve this in ODF. There were also some scenarios involving digital signatures in ODF 1.2 as well as some smaller tests. Based on the outcome of the tests, the developers of the various office suites surely have some homework to do.

The evolution of a document format

The second day of the event had a more classical, conference-like approach, with presentations by different ODF stakeholders. After the welcome speech by organizer Bart Hanssens, IBM's Rob Weir gave an update about the ODF specification as the chair of the OASIS OpenDocument technical committee. He emphasized that ODF 1.0 is still maintained (errata are published), even if ODF 1.1 is the current version and ODF 1.2 is already in the last phase of development. It's clear that ODF 1.2 is a much bigger development than the two previous versions: while the ODF 1.0 specification has 700 pages and the ODF 1.1 specification has just 30 more, the current Committee Draft 05 of the ODF 1.2 specification has more than 1200 pages.

The biggest differences between ODF 1.0/1.1 and ODF 1.2 lie in the addition of a spreadsheet formula language, OpenFormula (which accounts for 240 pages of the specification), an explicit scheme for digital signatures, and RDF/XML and RDFa capabilities. Moreover, the conformance language has been reworded for easier conversion to an ISO standard, and more than 2000 reported issues have been resolved. Rob expects that ODF 1.2, which is "almost done", will be approved by OASIS at the end of January 2011. At the end of his talk, Rob speculated about what could go into "ODF Next" (version 1.3?): modularization (inspired by the W3C's modularization effort for HTML), which would make possible something like a "web profile" for ODF, enhanced SVG and XForms integration, enhanced signing of documents, and better change tracking. An interesting side note from Rob: OASIS can only approve a specification as a standard if at least three implementations exist.

Bart Severi of the Flemish government talked about his government's policy concerning the archiving of digital documents. Office documents that have to be preserved for a long time will in practice be converted to PDF/A (if the behavior is not important) or ODF (if the scripts are needed). His colleague Bart Hanssens from the Belgian government department Fedict (and also chair of the ODF Interoperability and Conformance technical committee of OASIS) talked about the possibility of digitally signing ODF documents with the eID, an electronic ID card issued to all Belgians. ODF 1.0/1.1 allow signing a document, but do not specify how this should happen. In contrast, ODF 1.2 specifies the reuse of the W3C recommendation XML-DSIG and hints that XAdES can be used. XAdES (XML Advanced Electronic Signatures) builds upon XML-DSIG and is compliant with the European Directive 1999/93/EC. Fedict has an eID proof-of-concept applet for web browsers, which is available under the LGPL.

Alex Brown presented his online document validator Office-o-tron, which understands ODF (1.0, 1.1, and draft 1.2) and OOXML. Office-o-tron is open source software licensed under the Mozilla Public License v1.1. Sander Marechal gave an update about the Officeshots web application, which allows users to upload ODF documents and will generate output from various office applications. Officeshots makes it possible to automate the process of investigating ODF interoperability, as we described in November 2009. Since the last plugfest in Spain, development has focused more on the back-end, so there are not many new features, but Sander explained that support for more office applications is on the roadmap, as well as an easier installation routine for volunteers that want to host a rendering server.

Updates from office products

Then there were some short presentations with updates about office products supporting ODF. Casper Boemann from the German company KO GmbH talked about the latest features in KOffice, such as animation support, text line breaking that is compatible with (KOffice replicated's hyphenation and justification functionality to get line breaks that behave identically), text wrapping around both sides of multiple shapes, and drop caps, which can now be shown in different layouts. He also pointed out that a limited version of KOffice has been ported to Maemo on the N900. Marc Maurer from AbiSource noted that AbiWord is the default word processor on 1.5 million OLPC XO-1 laptops, and said that the word processor almost fully supports the current draft of ODF 1.2. Google's Nathan Hurst explained that Google Docs supports ODF 1.0 at the moment, but ODF 1.2 support is in the works. Moreover, the Quick View functionality for PDF files in the search engine will also be implemented for the ODF file format.

Microsoft's UK National Standards Officer John Phillips announced that they are researching the differences between ODF 1.1 and ODF 1.2 (ODF 1.1 is already supported in Microsoft Office 2007 by a service pack, and in Microsoft Office 2010 natively), and he re-affirmed his company's commitment to support ODF 1.2 within 9 months of ISO publication of the standard. However, after a question from the public about when the Mac version of Office will support ODF (currently it doesn't), John answered that there is not enough customer demand to implement it.

Oracle was represented by Oliver-Rainer Wittmann, one of the developers of He talked about the work done since the last plugfest and about Oracle Cloud Office, a web and mobile office suite integrated with Oracle Open Office. Rob Weir spoke about IBM Lotus Symphony and said that the 3.0 release should be available at the end of 2010. At the last minute the organizers added a presentation by Cor Nouws about LibreOffice. Cor, a member of the Document Foundation and owner of a Dutch consulting company, explained that LibreOffice plans to be a good member of the ODF community and will share new ODF features as soon as possible by communicating in the ODF technical committee.

A healthy ODF ecosystem

It's easy to forget that ODF documents are not only handled by word processors or office suites in general. There's a whole ecosystem of tools that can convert, enrich, or manipulate ODF files, and a couple of them were presented at the ODF plugfest. Karl Morten Ramberg presented the OFS Collaboration Suite — a real-time secure collaboration suite which can be used from a web editor or from within or Microsoft Office. The client-server architecture has some interesting functionality. When a user opens a document, the server checks which sections the user has read and/or write access to. The server then removes any edit and copy/paste functionality from the read-only sections. The sections the user has no read access to are removed completely from the document that the user receives from the server. Currently the collaboration suite has plug-ins for texts and spreadsheets, and support for KOffice and IBM Lotus Symphony is planned for a later release.

DIaLOGIKa's Wolfgang Keber (one of the developers of the Microsoft-funded ODF Converter project) made a general remark based on his experience consulting for the European Commission: ODF implementations should not only think about backward compatibility (to be able to read old archived documents), but also about forward compatibility. Opening a new ODF 1.2 document in an older 2.x version of should go as smoothly as possible, he said. This is particularly an issue for large institutions, where different application and document versions co-exist during a long migration phase. Giorgio Migliaccio from the Belgian company LetterGen presented its business-focused tools, such as its flagship product LetterGen, which generates ODF documents based on an incoming XML message and the definition of templates and business rules, e.g. for legal contracts, manuals, insurance documents, and so on. This can be done interactively or in batch mode. Right now LetterGen only runs on Windows, but the next release, 3.0 (expected in mid-2011), will also target other platforms.

The developers of two interesting ODF converters were also present at the ODF plugfest. Werner Donné talked about his proprietary project ODFToEPub, which allows anyone with an ordinary word processor to produce an e-book in the EPub format from a document in the ODT file format. There's a plug-in for — running on Linux, Windows, and Mac OS X — that adds an export function to convert an ODT file to EPub, but there's also a standalone interactive Java program, and Werner raised the possibility that a batch version may be coming.

Another special ODF converter is ODT2Braille, an LGPL 3+-licensed Braille extension to Writer, enabling authors to export documents as Braille files and even print them directly to a Braille embosser. Christophe Strobbe from the Katholieke Universiteit Leuven has been a researcher in web accessibility for people with disabilities since 2001; he is the developer of the ODT2Braille project, which is part of the European project Aegis. The latest release is alpha 0.02 from 30 August 2010, and it reuses existing open source tools like liblouisxml, liblouis, pef2text, and odt2daisy. It's currently a Windows-only extension because of some minor incompatibility issues, but Christophe said that versions for Linux and Mac OS X will follow. For future versions, he also wants to support a larger set of embossers and probably also support Braille in the Calc and Impress applications.

There were also some talks about ODF libraries. Luis Belmar-Letelier from Itaapy talked about the lpOD project, an open source library with a set of high-level APIs for the Python, Perl, and Ruby languages. According to Luis, developers using lpOD don't have to know the details of the internal XML representation of the documents they manipulate, so they can focus on the high-level structure of the documents. Oracle's Oliver-Rainer Wittmann talked about ODFDOM, an open source Java-based ODF API that is part of the ODF Toolkit; he said that the conversion from ODF 1.0/1.1 to ODF 1.2 is on the project's agenda. Finally, KO GmbH's Jos van den Oever presented ODFKit, a C++ library for handling documents in ODF, which reuses WebKit functionality such as framework abstractions, code generation, JavaScript bindings, XML parsing, and XSLT processing.

ODF on the web

An especially interesting project that was presented is WebODF, which wants to bring ODF to the web. Jos van den Oever started from the observation that a lot of office suites are moving into the "cloud". Examples are Microsoft Live Office, Google Docs, and Zoho. But where are the free software alternatives for the cloud? None of, KOffice, AbiWord, or Gnumeric has a cloud version with ODF support. That was the motivation for Jos to start a project to fill this gap and let users view and edit ODF documents on the web without losing control of the document to some company's servers.

The strategy Jos followed was to use just HTML and JavaScript for the web application. The application then loads the XML stream of the ODF document as is into the HTML document and puts it into the DOM tree. Styling is done by applying CSS rules that are directly derived from the <office:styles> and <office:automatic-styles> elements in the ODF document. That is how WebODF was born; it is a project with the initial goal of creating a simple ODF viewer and editor for offline and online use, implemented in HTML5.

The small code base consists of one HTML5 file and eight JavaScript files, each of which is a few hundred lines of code. The most interesting part is that it doesn't need server-side code execution: the JavaScript code is executed in the user's browser, and saving the document to the web server is done using WebDAV. It supports both the Gecko and WebKit HTML engines. There is also an implementation on top of QtWebKit for better desktop integration, as well as an ODFKit implementation. This means that WebODF is an easy way to add ODF support to almost any application, be it in HTML, Gtk, or QML. KO GmbH has received funding from NLnet to improve the current WebODF prototype and see how far the idea goes. Interested readers can try the online demo.
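The style-derivation trick described above can be sketched in a few lines. Here is a toy version in Python rather than JavaScript; the helper name is hypothetical, and only the direct fo:* attribute-to-CSS mapping is shown, while WebODF itself does considerably more:

```python
import xml.etree.ElementTree as ET

# ODF namespaces involved in styling; the fo:* attributes (font-weight,
# color, text-align, ...) map almost one-to-one onto CSS properties.
STYLE = "urn:oasis:names:tc:opendocument:xmlns:style:1.0"
FO = "urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0"

def odf_styles_to_css(content_xml: str) -> str:
    """Derive CSS rules from the <style:style> elements of an ODF
    document, in the spirit of the WebODF approach (illustration only)."""
    root = ET.fromstring(content_xml)
    rules = []
    for style in root.iter(f"{{{STYLE}}}style"):
        name = style.get(f"{{{STYLE}}}name")
        props = []
        for child in style:  # e.g. <style:text-properties>
            for attr, value in child.attrib.items():
                if attr.startswith(f"{{{FO}}}"):
                    css_prop = attr.split("}", 1)[1]
                    props.append(f"{css_prop}: {value};")
        rules.append(f'[style-name="{name}"] {{ {" ".join(props)} }}')
    return "\n".join(rules)

sample = f"""
<office:automatic-styles
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:style="{STYLE}" xmlns:fo="{FO}">
  <style:style style:name="T1">
    <style:text-properties fo:font-weight="bold" fo:color="#aa0000"/>
  </style:style>
</office:automatic-styles>
"""
print(odf_styles_to_css(sample))
```

Because the ODF markup is simply placed in the DOM, the browser's own layout engine then does the rendering work once such rules are attached.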

Not only for big companies

The fourth ODF plugfest managed to attract some 50 attendees: apart from the developers of ODF office suites, there were a few small companies who deliver services, IT people from the Belgian federal and regional governments, and various interested ODF users. The momentum behind ODF 1.2 is one of the things that struck your author during the plugfest: it's a huge change from ODF 1.0/1.1, and many parts of the draft are already supported by a lot of tools.

When talking about a big standard such as ODF, people generally think that it's mostly used by big companies like IBM and Oracle, and huge projects such as However, Michiel Leenaars, who presented about the OpenDoc Society, made the striking observation that small companies and projects can have a lot of impact in the ODF ecosystem. His case in point: KO GmbH, a small six-person support company for KOffice, gave three talks at the ODF plugfest. This is promising for small, innovative developer teams that want to participate fully in the ODF ecosystem.

The value of the ongoing series of ODF plugfests lies not only in the talks to update attendees about the latest work, but even more in the test scenarios where developers of competing products collaborate to attain the common goal of better interoperability. The presentation of different ODF tools was also inspiring: it shows that there's a healthy ecosystem that is forming around the document format standard.

Comments (5 posted)

Brief items

Quotes of the week

In a previous post, I mentioned that PyPy was the fastest Python implementation for most of my Project Euler programs, but that it was very slow for a few of them. This is no longer the case. The jit-generator branch was merged a few days ago, fixing a weakness with code that uses generators. And now PyPy is clearly the fastest Python implementation for this code, with both the most wins and the lowest overall time...

Unladen Swallow hasn't had a commit since August. I suspect it's just resting, not dead, but it's falling behind PyPy in performance. Version 2.8 of LLVM has been released, but Unladen still requires version 2.7.

-- David Ripton

This means that if you write a JavaScript implementation that does not faithfully reproduce the bug that arithmetic on integers greater than 2^53 silently does something stupid, then your implementation of the language is non-conforming.

Oh, also bitwise logical operations only have defined results (per the spec) up to 32 bits. And the result for out-of-range inputs is not an error condition, but silently mod 2^32.

I swear, it's a wonder anything works at all.

-- Jamie Zawinski
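The behavior Zawinski describes is easy to reproduce. This short, runnable JavaScript sketch (an editor's illustration, not from the quoted post) shows both the 2^53 integer ceiling and the silent mod-2^32 truncation of bitwise operators:

```javascript
// JavaScript numbers are IEEE 754 doubles, so integers are only
// exact up to 2^53; beyond that, arithmetic silently loses precision.
var big = Math.pow(2, 53);
console.log(big + 1 === big);   // true: 2^53 + 1 is indistinguishable from 2^53

// Bitwise operators first truncate their operands mod 2^32
// (as 32-bit values), with no error for out-of-range input.
var n = Math.pow(2, 32) + 5;    // 4294967301
console.log(n | 0);             // 5: the high bits are silently dropped
```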

And that's where the second realization hits: you know how foo is just a string right? Then wouldn't foo() be "a string with parens"?

Well it happens that no:

function foo() { return "foo"; }

echo "foo"();

$ php test.php
Parse error: syntax error, unexpected '(', expecting ',' or ';' in test.php on line 4

Unless you put the string in a variable itself:

function foo() { return "foo"; }
$bar = "foo";
echo $bar();

this will print foo. That's actually what PHP's own create_function does. And yes, I can see the dread in your eyes already. Your fears are real.

-- "masklinn"

You can't win, you can only pick how you lose
-- Adam Jackson (thanks to Peter Robinson)

Comments (none posted)

AsbestOS PS3 bootloader released

For those of you wanting to regain the ability to run Linux on PS3 systems: the AsbestOS bootloader has been released. "Currently, it only supports netbooting a kernel and no initrd (mostly due to bootmem limitations). This is enough to run a Linux system booting from an NFS share or from USB storage media. Almost everything that works under OtherOS is working. As additional perks of running as GameOS, you also get access to a seventh SPE (needs a kernel patch to enable) and there is clearly full access to the RSX including 3D support, although we still need to learn a few details about how that works to be able to use it." (Thanks to Martin Jeppesen).

Comments (19 posted)

New releases of Firefox and Thunderbird

Mozilla has released Firefox 3.6.11, 3.5.14 and Thunderbird 3.1.5, 3.0.9. These releases fix several security issues and other bugs. It is recommended that Firefox and Thunderbird users upgrade to these latest versions.

Comments (none posted)

Mixxx 1.8.0 Released

The Mixxx DJ software project has released version 1.8.0. "After more than a year's worth of work by over 30 developers and artists, Mixxx 1.8.0 has finally been released. This follows the longest beta cycle we have ever had, in which over 100 bugs were fixed. Mixxx has never been more stable and ready for use by both new and experienced DJs!" New features include looping improvements, a new database-powered library with iTunes and Rhythmbox playlist imports, MIDI improvements, a massive rewrite of the mixing engine, and more.

Full Story (comments: none)

Parrot 2.9.0 ("Red-masked") released

Parrot, the virtual machine that is aimed at running all dynamic languages, has released version 2.9.0. Various improvements were made including simplifying the string processing code, speeding up various string operations, detecting IPv6, testing improvements, and more. Parrot has also ported most of its developer tools and docs to Git, and Minix users will be happy to know that Parrot can now be built on those systems.

Full Story (comments: none)

Python-on-a-Chip release 09

Python-on-a-Chip has released version 09. "Python-on-a-Chip (p14p) is a project to develop a reduced Python virtual machine (codenamed PyMite) that runs a significant subset of the Python language on microcontrollers without an OS. The other parts of p14p are the device drivers, high-level libraries and other tools." Many Python language features have been added to those supported, including multiple inheritance, generators with iterators, string formatting, decorators with arguments, and more.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (2 posted)

Russell: On C Library Implementation

Rusty Russell has some suggestions for C library implementers on his blog. Among various other hacking efforts, Russell is behind the Comprehensive C Archive Network (CCAN). "3. Context creation and destruction are common patterns, so stick with "mylib_new()" and "mylib_free()" and everyone should understand what to do. There's plenty of bikeshedding over the names, but these are the shortest ones with clear overtones to the user. [...] 14. There's a standard naming scheme for C, and it's all lower case with underscores as separators. Don't BumpyCaps."

Comments (113 posted)

The Qt Future - Mobile on Nokia (The H)

The H reports from Qt Dev Days. "The company is also still developing its 'open governance' model; so far it has opened up its repositories to allow non-Nokia developers to collaborate on code development, but it is still the dominant force in developing the platform. Nokia is also ahead of the curve in some elements of the way Qt development is run, having abandoned copyright-assignment and moved to just requiring a non-exclusive licence to use contributed code."

Comments (none posted)

Linux Audio Update: The Fall Fashions (Linux Journal)

Dave Phillips surveys recent developments in Linux audio in the Linux Journal. "I've been looking into Processing, a marvelous computer programming language 'for people who want to create images, animations, and interactions', as its Web page describes it. Processing has some unique characteristics that make it a must-see for anyone interested in computer graphics and the new multimedia arts. The language basics are easy to learn, yet it is capable of very sophisticated designs and displays. Its documentation includes one of the best-written tutorials I've read for just about anything, it enjoys the attentions of a large community of users and developers, and its extensibility includes capabilities of interest to musicians and other sound-artists."

Comments (none posted)

Page editor: Jonathan Corbet


Non-Commercial announcements

The FSF's hardware endorsement program

The Free Software Foundation has announced an initial set of criteria under which it would endorse hardware as "respecting freedom." "The FSF's criteria seek to cover all aspects of user interaction with and control of a device: they say the hardware must run free software on every layer that is user upgradeable, allow the user to modify that software, support free data formats, be fully usable with free tools, and more."

Comments (18 posted)

Open Standards in Europe: FSFE responds to BSA letter

The Free Software Foundation Europe supports open standards and interoperability. Now it seems that the Business Software Alliance is trying to get the European Commission to remove support for open standards. "On Friday FSFE sent a letter to the European Commission to support Open Standards and interoperability. In the drawn-out battle to retain at least a weak recommendation for Open Standards in the revised European Interoperability Framework, FSFE has countered a leaked letter by the lobby group Business Software Alliance with its own thorough analysis of the relation between standards and patents."

Full Story (comments: 101)

FSFE goes after websites that advertise non-free software

The Free Software Foundation Europe notes that "Free Software activists from 41 countries have reported 2286 public sector institutions which advertise non-free PDF readers on their websites. FSFE will now contact these institutions, trying to get as many advertisements for non-free PDF readers as possible removed before the end of the year."

Full Story (comments: none)

Articles of interest

Kuhn: Canonical, Ltd. Finally On Record: Seeking Open Core

Bradley Kuhn has read this IRC conversation with Mark Shuttleworth and drawn some fairly strong conclusions from it. "Nevertheless, it seems Canonical, Ltd. now believes that they've succeeded in their sales job, because they've now confessed their true motive. In an IRC Q&A session last Thursday, Shuttleworth finally admits that his goal is to increase the amount of 'Open Core' activity."

Comments (29 posted)

Microsoft Gives its Blessing to OpenOffice.org (Computerworld UK)

Over at Computerworld UK, Glyn Moody compares the (in)famous 1999 Mindcraft study comparing Linux and Windows NT with a recent video that Microsoft has produced. That video shows various folks complaining about OpenOffice.org (and undoubtedly extolling Microsoft Office). "The criticisms made in the video are not really the point - they are mostly about OpenOffice.org not being a 100% clone of Microsoft Office, and compatibility problems with Microsoft's proprietary formats. The key issue is exactly the same as it was for the Mindcraft benchmarks. You don't compare a rival's product with your own if it is not comparable. And you don't make this kind of attack video unless you are really, really worried about the growing success of a competitor."

Comments (46 posted)

Oracle Confirms Commitment to OpenOffice.org (Linux Journal)

As OpenOffice.org celebrates its ten-year anniversary, Linux Journal reports that Oracle has renewed its commitment to the project. "As ODF celebrates its fifth anniversary, Oracle said they applaud its efforts and renewed their commitment to OpenOffice.org. "Oracle's growing team of developers, QA engineers, and user experience personnel will continue developing, improving, and supporting OpenOffice.org as open source, building on the 7.5 million lines of code already contributed to the community." This might be seen in the continuing efforts of developers to release 3.3.x snapshots as well as previews of some of the new features and tools."

Comments (12 posted)

Oracle wants LibreOffice members to leave OOo council (ars technica)

Ars technica reports that Oracle has asked some TDF (The Document Foundation) founders to resign from the community council. "During an OOo community council meeting last week, council chair Louis Suarez-Potts told the TDF members who also sit on the OOo community council that their participation in both organizations constituted a conflict of interest and that their involvement in the new LibreOffice fork should preclude them from holding leadership roles in the OOo community. Suarez-Potts is Oracle's community manager, a role that he also held at Sun prior to the acquisition. His position suggests that Oracle views LibreOffice as a hostile fork and will not join TDF as some had hoped."

Comments (16 posted)

Gould: Oracle to Red Hat: It's Not Your Father's Linux Market Anymore

The Gerson Lehrman Group's site is carrying this missive from Jeff Gould giving a rather wild-eyed analyst view of Oracle's enterprise kernel update. "Linux has been propagandized for years as the next Unix, the 'good' Unix that would leverage the noble democratic principles of open source to avoid the proprietary pitfalls of AIX vs. Solaris vs. HP-UX and all the other now forgotten exemplars of 'bad' vendor-exclusive Unix. But Larry Ellison and the Google kids have served notice that things aren't going to work out that way. Having understood that the GPL open source license that governs Linux is only a fig leaf, they've discovered that there is nothing to prevent them from rolling their own de facto private versions of Linux while still respecting the letter of the open source law."

Comments (88 posted)

Level Up to IPv6 with Ubuntu 10.10 on Comcast

This tutorial describes how to participate in the IPv6 trial being run by Comcast, a major US ISP. "In phase one of their trials they are relying on the tunneling mechanisms 6to4 and more recently 6RD (Rapid Deployment). Comcast has 'open sourced' its solution based on OpenWRT if you happen to have a router supported by OpenWRT. I do not, so like any self-respecting Linux geek, I set out to do it with a Linux box. I found the documentation for doing so difficult to find."

Comments (52 posted)

New Books

Hadoop: The Definitive Guide--New from O'Reilly Media

O'Reilly Media has released "Hadoop: The Definitive Guide, Second Edition" by Tom White.

Full Story (comments: none)

Making Software--New from O'Reilly Media

O'Reilly Media has released "Making Software: What Really Works, and Why We Believe It" edited by Andy Oram and Greg Wilson.

Full Story (comments: none)

"PostgreSQL 9.0 High Performance" book now available

"PostgreSQL 9.0 High Performance" by Greg Smith is available from Packt Publishing.

Full Story (comments: none)

Calls for Presentations

Announcing EuroBSDCon 2011

The call for proposals is open for EuroBSDCon 2011. "The EuroBSDCon 2011 conference will be held in the Netherlands from Thursday 6 October 2011 to Sunday 9 October 2011, with tutorials on Thursday and Friday and talks on Saturday and Sunday."

Full Story (comments: none)

Upcoming Events

Linux Foundation Technical Advisory Board election

Elections for the Linux Foundation's Technical Advisory Board will be held at the joint Kernel Summit/Linux Plumbers Conference reception on November 2. There are five seats to be filled, and, as of this writing, only two candidates. Interested candidates need not be at the election to run, but they do need to put their nomination in; see the announcement for details.

Full Story (comments: 4)

lca2011 registration now open, one week left for Miniconf CFPs

Two pieces of news from 2011, which will be held January 24-29 in Brisbane, Australia. Registration is now open for the conference, with early-bird rates lasting until November 8 or until they are all sold out. Also, there is only one week left to submit papers for the Miniconfs. "lca2011 is really shaping up to be another to remember. From the new innovative Rocketry miniconf to our impressive Keynotes and the strong but broad range of relevant presentations, there is something for everyone. There are technical sessions for those who really want to delve into the deeper areas of linux and open source software, and there is also the less technical presentations for the end users."

Comments (none posted)

MAGNet Conference

The Mid-America GNU/Linux Networkers (MAGNet) Conference will be held on May 6-7, 2011 in St. Louis, Missouri. "The main focus of the event will be education on everything from Linux on the desktop to Open Source philosophy to business applications. Linux for the artistic, software and hardware, security and networking will also be covered; plus a bit of everything in between. Everyone from the simply curious to the seasoned technology professional will find something of interest at MAGNet Con."

Full Story (comments: none)

Events: October 28, 2010 to December 27, 2010

The following event listing is taken from the Events Calendar.

October 25-29: Ubuntu Developer Summit (Orlando, Florida, USA)
October 27-29: 2010 (Parc Hotel Alvisse, Luxembourg)
October 27-28: Embedded Linux Conference Europe 2010 (Cambridge, UK)
October 27-28: Government Open Source Conference 2010 (Portland, OR, USA)
October 28-29: European Conference on Computer Network Defense (Berlin, Germany)
October 28-29: Free Software Open Source Symposium (Toronto, Canada)
October 30-31: Debian MiniConf Paris 2010 (Paris, France)
November 1-2: Linux Kernel Summit (Cambridge, MA, USA)
November 1-5: ApacheCon North America 2010 (Atlanta, GA, USA)
November 3-5: Linux Plumbers Conference (Cambridge, MA, USA)
November 4: 2010 LLVM Developers' Meeting (San Jose, CA, USA)
November 5-7: Free Society Conference and Nordic Summit (Gothenburg, Sweden)
November 6-7: Technical Dutch Open Source Event (Eindhoven, Netherlands)
November 6-7: HackFest 2010 (Hamburg, Germany)
November 8-10: Free Open Source Academia Conference (Grenoble, France)
November 9-12: OpenStack Design Summit (San Antonio, TX, USA)
November 11: NLUUG Fall conference: Security (Ede, Netherlands)
November 11-13: 8th International Firebird Conference 2010 (Bremen, Germany)
November 12-14: FOSSASIA (Ho Chi Minh City (Saigon), Vietnam)
November 12-13: Japan Linux Conference (Tokyo, Japan)
November 12-13: Mini-DebConf in Vietnam 2010 (Ho Chi Minh City, Vietnam)
November 13-14: OpenRheinRuhr (Oberhausen, Germany)
November 15-17: MeeGo Conference 2010 (Dublin, Ireland)
November 18-21: Piksel10 (Bergen, Norway)
November 20-21: OpenFest - Bulgaria's biggest Free and Open Source conference (Sofia, Bulgaria)
November 20-21: Kiwi PyCon 2010 (Waitangi, New Zealand)
November 20-21: WineConf 2010 (Paris, France)
November 23-26: DeepSec (Vienna, Austria)
November 24-26: Open Source Developers' Conference (Melbourne, Australia)
November 27: Open Source Conference Shimane 2010 (Shimane, Japan)
November 27: 12. LinuxDay 2010 (Dornbirn, Austria)
November 29-30: European OpenSource & Free Software Law Event (Torino, Italy)
December 4: London Perl Workshop 2010 (London, United Kingdom)
December 6-8: PGDay Europe 2010 (Stuttgart, Germany)
December 11: Open Source Conference Fukuoka 2010 (Fukuoka, Japan)
December 13-18: 2010 (Hyderabad, India)
December 15-17: FOSS.IN/2010 (Bangalore, India)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds