LWN.net Weekly Edition for September 23, 2010
Distribution security response times
Two high-profile kernel bugs—with publicly released exploits—have recently been making news. Both can be used by local attackers to gain root privileges, which makes them quite dangerous for systems that allow untrusted users to log in, but they might also be used in conjunction with other flaws to produce a remote root exploit. They are, in short, just the kinds of vulnerabilities that most system administrators would want to patch quickly, so a look at how distributions have responded seems warranted.
The vulnerabilities lie in the x86_64 compatibility layer that allows 32-bit binaries to be run on 64-bit systems (see the Kernel page article for more details). In particular, that code allows 32-bit programs to make system calls on a 64-bit kernel. One of the bugs, CVE-2010-3301, was reintroduced into the kernel in April 2008, seven months after being fixed as CVE-2007-4573. The second, CVE-2010-3081, was discovered in the process of finding the first, and had been in the kernel since 2.6.26, which was released in July 2008.
Obviously, these are long-standing kernel holes that may have been available to attackers for as long as two-and-a-half years. In fact, a posting on the full-disclosure mailing list claims that CVE-2010-3081 had been known to a shadowy group called "Ac1db1tch3z" since the code causing the bug was committed in April 2008. Included in the post was a working exploit.
Ben Hawkes found and reported both of the vulnerabilities, and fixes for both were quickly committed to the mainline kernel. Both were committed on September 14, but it is clear that at least CVE-2010-3081 was known a week earlier. Stable kernels were released on September 20, and some distributions had fixes out on September 17.
While enterprise kernels (e.g. RHEL, SLES) tend to be based on fairly old kernels (RHEL 5 is 2.6.18-based, while SLES 11 is 2.6.27-based), the distributors often backport features from newer kernels. Unsurprisingly, that can sometimes lead to backporting bugs along with the features. For RHEL, that meant that it was vulnerable to CVE-2010-3081 even though that code came into the kernel long after 2.6.18. On September 21, Red Hat issued updates for RHEL 5 and 5.4 to fix that problem. CentOS, which necessarily lags RHEL by at least a few hours, has a fix for CVE-2010-3081 available for CentOS 5 now as well.
For SLES, the situation is a little less clear. Based on its kernel version, it should be vulnerable to both flaws, but no updates have been issued as of this writing. In a separate advisory on September 21, SUSE noted (in the "Pending Vulnerabilities ..." section) that it was working on fixes for both.
For the more community-oriented distributions (Debian, Fedora, openSUSE, Ubuntu, and others), the response has been somewhat mixed. Ubuntu, Debian, and Fedora had fixes out on September 17 for both bugs (or, in the case of Debian, just one, as its stable distribution ("Lenny") is based on 2.6.26 and thus not vulnerable to CVE-2010-3301). openSUSE has yet to release a fix and none of the secondary distributions that we track (Gentoo, Mandriva, Slackware, etc.) has put out a fix either.
How quickly can users and administrators expect security fixes? The enterprise vendors, who are typically more cautious before issuing an update, took a week or more to get fixes into the hands of their users. Meanwhile, exploits have been published and have been used in the wild. That has to make a lot of system administrators very nervous. Those running SUSE-based systems must be even more worried. A one-week delay (or more, depending on where you start counting) may not seem like a lot of time, but for critical systems with lots of sensitive data, it probably seems pretty long.
Local privilege escalation flaws are often downplayed because they require a local user on the system to be useful. But that thinking has some flaws of its own. On even a locked-down system, with only trusted users being able to log in, there may be ways for local exploits to be turned into remote exploits. Compromised user accounts might be one way for an attacker to access the system, but there is a far more common route: network services.
One badly written, or misconfigured, web application, for example, might provide just the hole that an attacker needs to get their code running on the system. Once that happens, they can use a local privilege escalation to compromise the entire system—and all the data it holds. Since many servers sit on the internet and handle lots of web and other network traffic, compromising a particular, targeted system may not be all that challenging to a dedicated attacker. Using a "zero day" vulnerability in a widely deployed web application might make a less-targeted attack (e.g. by script kiddies) possible as well.
While most of the "big four" community distributions were quick to get out updates for these problems, they still left a window that attackers could have exploited. That is largely unavoidable unless there were embargoes enforced on sensitive patches flowing into the mainline kernel—something that Linus Torvalds has always been opposed to. Then, of course, there is the (much larger) window available to those who closely track kernel development and notice these bugs as they get introduced.
That is one of the unfortunate side-effects of doing development in the open. While it allows anyone interested to look at the code, and find various bugs—security or otherwise—it does not, cannot, require that those bugs get reported. We can only hope that enough "white hat" eyes are focused on the code to help outweigh the "black hat" eyes that are clearly scrutinizing it.
For distributors, particularly Red Hat and Novell, it may seem like these flaws are not so critical that the fixes needed to be fast-tracked. Since there are, presumably, no known, unpatched network service flaws in the packages they ship, a local privilege escalation can be thwarted by a more secure configuration (e.g. no untrusted users on critical systems). While true in some sense, it may not make those customers very happy.
There are also plenty of systems out there with some possibly untrusted users. Perhaps they aren't the most critical systems, but that doesn't mean their administrators want to turn them over to attackers. It really seems like updates should have come more quickly in this case, at least for the enterprise distributions. As we have seen, a reputation for being slow to fix security problems is a hard one to erase; hopefully it's not a reputation that Linux is getting.
Fusion Garage speaks, but stalls on code release
Fusion Garage, makers of the Linux-based
joojoo
tablet, are still out of compliance with
the GPL, but have responded to Linux kernel
developer Matthew Garrett regarding his complaint this
month to US Customs and Border Protection.
Until Garrett filed a so-called "e-allegation"
form with CBP, the company had refused his requests
for corresponding source for the GPL-covered software
on the joojoo. After the filing, company spokesperson
Megan Alpers said that "Fusion Garage is discussing
the issue internally.
"
The joojoo concept
began as the "CrunchPad," in an essay
by TechCrunch founder Michael Arrington.
He wanted a "dead simple and dirt
cheap touch screen web tablet to surf the
web". After a joint development project between Fusion Garage and
Arrington collapsed
last year, Fusion Garage went on
to introduce the tablet on its own.
Garrett said he is not currently planning on taking further action. "The company seems to be at least considering the issue at the moment, so I wasn't planning on doing anything further just yet", he said.

In June, Garrett checked out a joojoo tablet and mailed the company's support address for the source to the modified kernel. He posted the company's reply to his blog: "We will make the source release available once we feel we are ready to do so and also having the resources to get this sorted out and organized for publication."

Although the device is fairly simple, Garrett said he would like to see some of the kernel changes. "They seem to be exposing ACPI events directly through the embedded controller. I'm interested to see how they're doing this and how drivers bind to it", he said.

Bradley Kuhn, president of the Software Freedom Conservancy, which carries out GPL enforcement, praised Garrett's effort on his blog.

The US government offers several tools for
enforcing copyright at the border, which are
cheaper and simpler for copyright holders than an
infringement case in a Federal court. Beyond the
"e-allegation" form that Garrett filed, other
options include "e-recordation," which allows
the holder of a US trademark or copyright to
request that US Customs and Border Protection
stop infringing goods at the border, and a Section
337 complaint, which can result in the US
International Trade Commission assigning government
attorneys to work on the US rights-holder's
behalf. " While most of the ITC's copyright cases involve
product packaging or manuals, Norman said,
the ITC has taken action to exclude infringing
copies of arcade game software. In a 1981 case, "Coin-Operated Audio Visual Games and Components
Thereof" the ITC unanimously voted to
exclude video games whose code infringed both
copyright and trademarks of US-based Midway. In a 1982
case, also brought by Midway, the ITC not only
voted to exclude infringing Pac-Man games, but issued
a cease and desist order against the importers while
the case was in progress. In an LWN comment, Norman added, "Customs will only enforce either an ITC order or 'Piratical' [obviously infringing] copies." He advocated the ITC approach.

Open Source consultant Bruce Perens said that
retailers in the USA could
be a reason for manufacturers in other countries
to check license compliance for their embedded
software. Although the prospect of Customs enforcement
because of code they've never heard of might scare
some vendors away from open-source-based products,
Perens said, " No GPL licensor has yet filed a complaint with
the ITC, and even if a complaint is filed, the ITC
can decide whether or not to act on it. However,
import-based enforcement, including temporary
orders enforced at the Customs level, can move
much faster than ordinary infringement lawsuits.
Whether the resulting uncertainty is enough to make
device vendors double-check their license compliance
remains to be seen.

A constantly usable testing distribution for Debian

Debian's "testing" distribution is where Debian developers prepare the next
stable distribution. While this is still its main purpose, many users have
adopted this version of Debian because it offers them a good trade-off between
stability and freshness. But there are downsides to using the testing
distribution, so the "Constantly Usable Testing" (CUT) project aims to
reduce or eliminate those downsides.
Debian "unstable" is the distribution where developers upload new versions of
their packages. But some packages are frequently not installable from unstable due to changes in other packages or transitions in libraries that have not yet been completed. Debian testing, by contrast, is managed by a tool that ensures the
consistency of the whole distribution: it picks updates from unstable only if
the package has been tested enough (10 days usually), is free of new
release-critical bugs, is available on all supported architectures, and
doesn't break any other package already present in testing. The release
team controls this tool and provides "hints" to help it find a set
of packages that can flow from unstable to testing. Those rules also ensure that the packages that flow into testing are
reasonably free of show-stopper bugs (like a system that doesn't boot, or
X that doesn't work at all). This makes it very attractive to users
who like to regularly get new upstream versions of their software without
dealing with the biggest problems associated with them. That is very
attractive to users, yet several Debian developers advise people to not use
testing. Why is that?
Known problems with testing

Disappearing software:
The release team uses the distribution to prepare the next
stable release and from time to time they remove packages from it. That is
done either
to ensure that other packages can migrate from
unstable to testing, or because a package has long-standing release-critical
bugs without progress towards a resolution. The team will also
remove packages on request if the maintainers believe
that the current version of the software cannot be supported
(security-wise) for 2 years or more. The security team also regularly
issues such requests.
Long delays for security and important fixes:
Despite the 10-day delay in unstable, there are always some annoying bugs (and security bugs are no exception) that are only discovered when
the package has already migrated to testing. The maintainer might be quick to
upload a fixed package in unstable, and might even raise the urgency to
allow the package to migrate sooner, but if the package gets entangled in a
large ongoing transition, it will not migrate before the transition is
completed. Sometimes it can take weeks for that to happen. The delay can be avoided by doing direct uploads to testing (through
testing-proposed-updates) but that mechanism is almost never used except
during a freeze, where targeted bug fixes are the norm.
Not always installable:
With testing evolving daily, updates sometimes break
the last installation images available (in particular netboot images
that get everything from the network). The debian-installer (d-i) packages
are usually quickly fixed but they don't move to testing automatically
because the new combination of d-i packages has not necessarily been
validated yet. Colin Watson has summed up the problem.
CUT's history

CUT has its roots in an old proposal by Joey Hess, which introduced the idea that the stable release is not Debian's
sole product and that testing could become—with some work—a suitable
choice for end-users. Nobody took on that work and there has been no visible
progress in the last 3 years. But recently Joey brought
up
CUT again on the debian-devel mailing list and Stefano
Zacchiroli (the Debian project leader) challenged him to set up a BoF on
CUT for Debconf10. It turned out to be one of the most heavily
attended BoFs (video recording is here), so
there is clearly a lot of interest in the topic. There's now a dedicated wiki and an
Alioth project with a
mailing
list.
The ideas behind CUT

Among all the ideas, there are two main approaches that have been discussed. The first is to regularly snapshot testing at points where it is known to work reasonably well (those snapshots would be named "cut"). The second is to build an improved testing distribution tailored to the needs of users who want a working distribution with daily updates; its name would be "rolling".

Regular snapshots of testing

There's general agreement that regular snapshots of testing are required:
it's the only way to ensure that the generated installation media will
continue to work until the next snapshot. If tests of the snapshot do not
reveal any major problems, then it becomes the latest "cut". For clarity,
the official codename would be date based: e.g. "cut-2010-09" would be the
cut taken during September 2010. While the frequency has not been fixed yet, the goal is clearly to be
on the aggressive side: at the very least every 6 months, but every month
has been suggested as well. In order to reach a decision, many aspects
have to be balanced. One of them (and possibly the most important) is security support.
Given that the security team is already overworked, it's difficult to
put more work on their shoulders by declaring that cuts will be supported
like any stable release. No official security support sounds bad, but it's not necessarily as problematic as one might imagine.
Testing's security record is generally better than stable's (see the security tracker) because fixes flow in naturally with new upstream versions. Stable still gets fixes for very important security issues earlier than testing, but on the whole there are fewer known security-related problems in testing than in stable.
Since it's only a question of time until the fixed version comes naturally from upstream, more frequent cut releases mean that users get security fixes sooner. But Stefan Fritsch, who used to be involved in the Debian testing security team, has also experienced the downside for anyone who tries to contribute security updates.

So if it's difficult to form a dedicated security team, the work of
providing security updates must be done by the package maintainer. They are
usually quite quick to upload fixed packages in unstable but tend not to monitor
whether the packages migrate to testing. They can't be blamed for that because
testing was created to prepare the next stable release and there is thus no
urgency to get the fix in as long as it makes it before the release. CUT can help in this regard precisely because it changes this assumption:
there will be users of the testing packages and they deserve to get security
fixes much like the stable users. Another aspect to consider when picking a release frequency is the amount of
associated work that comes with any official release: testing upgrades from the
previous version, writing release notes, and preparing installation images. It
seems difficult to do this every month. At that frequency it would also be impossible to have a new major kernel release in each cut (since kernels tend to come out only every 2 to 3 months), and the new hardware support they bring is worthwhile to many users.

In summary, regular snapshots address the "not always installable" problem
and may change the perception of maintainers toward testing so that hopefully
they care more about security updates in that distribution (and in the cuts).
But it does not solve the problem of disappearing packages. Something else
is needed to fix that problem. Lucas Nussbaum pointed out that regular snapshots of Debian are not really a new concept. In Lucas's eyes, CUT becomes interesting if it can provide a rolling distribution (like testing) with a "constant flux of new upstream releases". For him, that would be "something quite unique in the Free Software world". The snapshots would be used as a starting point for the initial installation, but the installed system would point to the rolling distribution and users would then upgrade as often as they want. In this scenario, security support for the snapshots is not so important; what matters is the state of the rolling distribution.

A new "rolling" distribution?

If testing were used as the rolling distribution, the problem of disappearing packages would not be fixed. But that could be solved with a new rolling distribution that would work like testing but with adapted rules, and the cuts would then be snapshots of rolling instead of testing.
The basic proposal is to make a copy of testing and to re-add the packages
which have been removed because they are not suited for a long-term release but are perfectly acceptable for a constantly updated release (the most
recent example being Chromium). Then it's possible to go one step further: during a freeze, testing is no
longer automatically updated, which makes it unsuitable as a source for the rolling distribution. That's why rolling would be reconfigured to grab
updates from unstable (but using the same rules as testing). Given the frequent releases, it's likely that only a subset of architectures
would be officially supported. This is not a real problem because the
users who want bleeding-edge software tend to be desktop users, mainly on i386/amd64 (and maybe armel for tablets and similar mobile products).
This choice—if made—opens up the door to even more possibilities:
if rolling is configured exactly like testing but with only a subset of the
architectures, it's likely that some packages would migrate to rolling before testing, since non-mainstream architectures can lag in auto-building (or have toolchain problems). While being ahead of testing can be positive for the users, it's also
problematic on several levels. First, managing rolling becomes much more
complicated because the transition management work done by the release
team can't be reused as-is. Then it introduces competition between both
distributions which can make it more difficult to get a stable release
out, for example if maintainers stop caring about the migration to testing
because the migration to rolling has been completed. The rolling distribution is certainly a good idea but the rules governing it
must be designed to avoid any conflict with the process of releasing a stable
distribution. Lastly, the mere existence of rolling would finally fix the marketing
problem plaguing testing: the name "rolling" does not suggest that the
software is not yet
ready for prime time.

Conclusion

Whether CUT will be implemented remains to be seen, but it's off to a good start: ftpmaster Joerg Jaspert said
that the new archive server can cope with a new distribution, and there's
now a proposal shaping up. It may get going quickly as there is already an implementation
plan for the snapshot side of the project. The rolling
distribution can always be introduced later, once it is ready. Both approaches
can complement each other and provide something useful to different kinds of users. The overall proposal is certainly appealing: it would address
concerns about the obsolescence of Debian's stable release by providing intermediary releases. Anyone needing something more recent for hardware
support can start by installing a cut and follow the subsequent releases
until the next stable version. And users who always want the latest
version of all software could use rolling after having installed a
cut. From a user point of view, there are similarities with the mix of normal
and long-term releases of Ubuntu. But from the development side, the process
followed would be quite different, and the constraints imposed by having a
constantly usable distribution are stronger. With CUT, any wide-scale change must be designed so that it can happen progressively and transparently for users.
Security
Private browsing: not so private?
A team of researchers led by Dan Boneh at Stanford University undertook a study of the "private browsing" feature offered by most web browsers, and found numerous exploitable holes in the wall of protection the features are supposed to maintain. The results, presented at the USENIX Security conference in August, spawned a variety of reactions from browser makers, from defensive posturing to bug reports. In the meantime, the paper provides a practical method to increase private browsing's privacy through a new Firefox extension.
The study examined the private browsing features offered by the four most popular browsers (Internet Explorer, Firefox, Chrome, and Safari) with regard to two distinct privacy models. The first is termed the "local attacker," although it need not be a hostile party; browser makers advertise private browsing features as a way to keep family members from learning about birthday present shopping and secret vacation plans. A private browsing failure with regard to the local attacker would allow the attacker to learn what happened during a private session after the session had been terminated — specifically by finding information written to disk somewhere by the browser, as opposed to retrieving proxy caches or mounting other network attacks.
The second model is the "web attacker," namely a hostile site. A web attack flaw would allow the attacker to either discover that a visitor in a private browsing session was the same as a visitor in a previous non-private session, or vice versa. In either case, the attack would be limited to the browser exposing information during the browsing session, not simply by recording the IP address of the visitor or by having the user choose to do something that identifies himself or herself (e.g. log in).
In looking at the ways that a browser can "leak" information during a private browsing session, the team classified all of the possible state changes into four categories. The first category, those changes initiated by the site without user intervention (such as saving cookies, browsing history, and caching data) were actively guarded against by all four browsers. It was in the other three categories that the browsers disagreed, and were internally inconsistent. Those include changes initiated by the site but requiring user action (such as saving a password or generating an SSL client certificate), changes initiated by the user (such as adding a bookmark or downloading a file), and changes that are not user-specific (such as installing an update, or refreshing a block-list).
The leaks
The team performed tests tracking disk writes during private browsing sessions. It also performed an audit of the Firefox source code, and found several leaks, where data generated during a private browsing session was later accessible from a public browsing session. Some problems were the same across all browsers, such as the ability to add bookmarks during a private browsing session, and automatic recording of file downloads (which persist even if the files themselves are removed). None have a separate "private bookmarks" feature; all allow you to add global bookmarks during a private session. Some browsers allow you to manually delete individual entries from the Downloads window's log — if you remember to do so.
Others were limited to some browsers and not others. Internet Explorer, for example, can be tricked into using SMB queries to request page content simply by using the SMB \\servername\resource.ext naming convention, which bypasses IE's private browsing altogether and sends Windows hostname and username information to the server via SMB protocol messages. IE, Safari, and Chrome all permanently save user-approved self-signed SSL certificates encountered during private browsing as well.
Firefox was not immune to private browsing leaks. The team singled out Firefox's "custom protocol handler" function, which a site can use to register a custom protocol with the browser and trigger a desired action — think git:// and torrent:// links, for example. Those protocol handlers live in the document object model (DOM) of the browser and could be detected by a remote site, which could leak information between public and private sessions.
During the in-depth look at Firefox, the team found five different files in a user's profile that were written to during private browsing, including security certificate settings (the cert8.db file), site-specific permissions such as cookie- and pop-up-blocking (in the permissions.sqlite file), download action preferences (in the mimeTypes.rdf file), automatically-discovered search engine add-ons (in the search.sqlite and search.json files), and plugin registrations (in the pluginreg.dat file).
The add-ons
Beyond the individual problems with each browser, however, the team found add-ons to be a far more serious source of private browsing data leaks. Plugins can use their own cookie setting and reading framework unimpeded by the browser's. Extensions are worse, in most cases outright ignoring whether or not the browser is in private browsing mode.
Not all browsers support extensions, of course, but all had problems. Chrome disables all extensions by default while in its private browsing mode, though this setting can be changed on a per-extension basis. IE also defaults to disabling extensions while browsing privately, although this is a preference setting that can only be turned on or turned off across all extensions at once. Safari does not have a public, supported extension API at all, but unsupported extensions continue to run unaltered while private browsing is enabled.
Firefox's extension behavior is the most problematic, starting with the fact that extensions remain enabled when private browsing is turned on. The paper examined the top 40 most popular Firefox extensions in depth. Eight were binary extensions, which constitute a serious security threat in their own right and run with the same read/write privileges as the current user. Of the remaining 32, 16 wrote no data to disk at all when browsing, but only one — Tab Mix Plus — actually checked the privacy mode of the current session through Firefox's nsIPrivateBrowsingService API.
The study's authors indicate that they have begun a similar in-depth examination of the most popular Chrome extensions. Thus far, they have encountered several that can execute arbitrary binary code, and several more that report user information to remote Google Analytics servers, leading them to expect to uncover more privacy violations.
Recommendations
The paper concludes with a discussion of various approaches that the browsers could take to improve the privacy guarantees of private browsing sessions, with particular emphasis on extensions. The ideas include having each extension voluntarily suspend writing information during a private session, having the browser block all extensions from writing data during a private session, and having the browser revert changes made by extensions.
The authors seem to regard the first option as the easiest to implement, and built their own extension named ExtensionBlocker that implements it. ExtensionBlocker works by querying the manifest file of each active Firefox extension, and, during a private session, disabling those that do not include the suggested <privateModeCompatible/> XML tag. Thus far, ExtensionBlocker does not seem to have been released to the public.
Naturally, for ExtensionBlocker to be useful, other extension authors would have to start including the <privateModeCompatible/> tag. So far, no other extensions have adopted it. But the authors do recommend several other privacy-protecting extensions, such as Torbutton, Doppelganger, and Bugnosis. In addition, they suggest that a system similar to the W3C's platform for privacy preferences (P3P) could be used to classify sites as safe for private-mode browsing.
Reaction
Shortly after the paper was published online, ZDNet Asia solicited feedback from various browser vendors. Opera, surprisingly, was the harshest in its criticism, reportedly accusing the researchers of "simply incorrect" assumptions about the security goals of private browsing. Opera was not included in the tests, but it did recently introduce its own private browsing feature, which it (like the others) advertises as offering protection for gift-shoppers, shared-computer users, and "testing websites" for "cookie and session-related aspects" of browsing.

Google responded by saying that Chrome's private browsing mode helps you "limit" the information saved on disk but that it makes clear that the mode "does not remove all records".
For its part, Mozilla responded both by saying that some of the issues addressed in the paper have already been fixed in the Firefox 4 series, and that others had been filed as new bugs on which there will be new work.
All details aside, perhaps the biggest take-aways for users are that the browser makers do not agree on what "private browsing" mode actually entails, and that none of them make strong guarantees. As always, existing tools like Tor, Privoxy, and NoScript offer the dedicated user a way to significantly improve the anonymity and security of a browsing session — albeit at the cost of reduced functionality on certain sites.
Finally, everyone concerned about his or her browsing privacy would do well to remember that private browsing modes offer no protection against certain other types of detection. The Electronic Frontier Foundation's Panopticlick tool, for example, uses combinations of remotely-accessible, non-private data (including the list of installed plugins, fonts, OS version information, and more) to assemble what could be a unique fingerprint for each browser — regardless of what nsIPrivateBrowsingService might report.
Brief items
Security quote of the week
- Users want to click on things.
- Code wants to be wrong.
- Services want to be on.
- Security features can be used to harm.
Die-hard bug bytes Linux kernel for second time (Register)
The Register reports on CVE-2010-3301, a local root vulnerability which has now been fixed - for the second time - in the mainline kernel. "The oversight means that untrusted users with, say, limited SSH access have a trivial means to gain unfettered access to pretty much any 64-bit installation. Consider, too, that the bug has been allowed to fester in the kernel for years and was already fixed once before and we think a measured WTF is in order." It's worth noting that exploits for this vulnerability have been posted.
Felten: Understanding the HDCP Master Key Leak
Ed Felten comments on the apparent release of the HDCP master key. "Now we can understand the implications of the master key leaking. Anyone who knows the master key can do keygen, so the leak allows everyone to do keygen. And this destroys both of the security properties that HDCP is supposed to provide. HDCP encryption is no longer effective because an eavesdropper who sees the initial handshake can use keygen to determine the parties' private keys, thereby allowing the eavesdropper to determine the encryption key that protects the communication. HDCP no longer guarantees that participating devices are licensed, because a maker of unlicensed devices can use keygen to create mathematically correct public/private key pairs. In short, HDCP is now a dead letter, as far as security is concerned." One thing he doesn't mention is that this key might make it possible to create open video components based on free software.
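To make the keygen point concrete, here is a rough, self-contained sketch of the arithmetic as publicly described in analyses of HDCP; the function name and layout are invented for illustration. Each device holds a public 40-bit KSV and 40 private 56-bit keys; two devices derive the same shared secret by summing their own private keys selected by the set bits of the peer's KSV, and anyone holding the master matrix can mint valid private keys for any KSV:

```c
/* Hedged sketch of HDCP's shared-key derivation (names invented).
 * Both sides compute the same 56-bit value because the per-device
 * private keys are all derived from one master matrix -- which is
 * exactly why a leaked master key allows unlimited keygen. */
#include <stdint.h>

#define HDCP_KEYS 40
#define MASK56    ((1ULL << 56) - 1)

uint64_t hdcp_shared_key(const uint64_t priv[HDCP_KEYS], uint64_t peer_ksv)
{
        uint64_t km = 0;

        for (int i = 0; i < HDCP_KEYS; i++)
                if (peer_ksv & (1ULL << i))           /* peer's KSV selects... */
                        km = (km + priv[i]) & MASK56; /* ...our keys, summed mod 2^56 */
        return km;
}
```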
New vulnerabilities
bzip2: code execution
Package(s): bzip2
CVE #(s): CVE-2010-0405
Created: September 20, 2010
Updated: January 9, 2013
Description: From the Debian advisory: Mikolaj Izdebski has discovered an integer overflow flaw in the BZ2_decompress function in bzip2/libbz2. An attacker could use a crafted bz2 file to cause a denial of service (application crash) or potentially to execute arbitrary code.
couchdb: cross-site request forgery
Package(s): couchdb
CVE #(s): CVE-2010-2234
Created: September 21, 2010
Updated: September 22, 2010
Description: From the Red Hat bugzilla: Apache CouchDB prior to 0.11.2 and 1.0.1 are vulnerable to cross-site request forgery (CSRF) attacks. A malicious web site can POST arbitrary JavaScript code to well-known CouchDB installation URLs and make the browser execute the injected JavaScript in the security context of CouchDB's admin interface Futon.
dovecot: Maildir ACL issue
Package(s): dovecot
CVE #(s): CVE-2010-3304
Created: September 20, 2010
Updated: February 8, 2011
Description: From the openSUSE advisory: When using Maildir all ACLs on INBOX were copied to newly created mailboxes although only default ACLs should have been copied.
drupal: multiple vulnerabilities
Package(s): drupal6
CVE #(s): CVE-2010-3091 CVE-2010-3092 CVE-2010-3093 CVE-2010-3094
Created: September 20, 2010
Updated: September 22, 2010
Description: From the Debian advisory: Several issues have been discovered in the OpenID module that allows malicious access to user accounts. (CVE-2010-3091) The upload module includes a potential bypass of access restrictions due to not checking letter case-sensitivity. (CVE-2010-3092) The comment module has a privilege escalation issue that allows certain users to bypass limitations. (CVE-2010-3093) Several cross-site scripting (XSS) issues have been discovered in the Action feature. (CVE-2010-3094)
flash-plugin: code execution
Package(s): flash-plugin
CVE #(s): CVE-2010-2884
Created: September 21, 2010
Updated: January 21, 2011
Description: From the Red Hat advisory: This vulnerability is detailed on the Adobe security page APSB10-22, listed in the References section. If a victim loaded a page containing specially-crafted SWF content, it could cause flash-plugin to crash or, potentially, execute arbitrary code.
fuse-encfs: multiple vulnerabilities
Package(s): fuse-encfs
CVE #(s): CVE-2010-3073 CVE-2010-3074 CVE-2010-3075
Created: September 22, 2010
Updated: December 8, 2010
Description: From the Red Hat bugzilla entry: Micha Riser reported three security flaws in the EncFS encrypted filesystem; a security analysis of EncFS has revealed multiple vulnerabilities.
kernel: memory leaks
Package(s): kernel
CVE #(s): CVE-2010-2942
Created: September 20, 2010
Updated: March 28, 2011
Description: From the openSUSE advisory: Several memory leaks in the net scheduling code were fixed.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2010-2954 CVE-2010-3078 CVE-2010-3080 CVE-2010-3081
Created: September 17, 2010
Updated: April 21, 2011
Description: From the Debian advisory: Tavis Ormandy reported an issue in the irda subsystem which may allow local users to cause a denial of service via a NULL pointer dereference. (CVE-2010-2954) Dan Rosenberg discovered an issue in the XFS file system that allows local users to read potentially sensitive kernel memory. (CVE-2010-3078) Tavis Ormandy reported an issue in the ALSA sequencer OSS emulation layer. Local users with sufficient privileges to open /dev/sequencer (by default on Debian, this is members of the 'audio' group) can cause a denial of service via a NULL pointer dereference. (CVE-2010-3080) Ben Hawkes discovered an issue in the 32-bit compatibility code for 64-bit systems. Local users can gain elevated privileges due to insufficient checks in compat_alloc_user_space allocations. (CVE-2010-3081)
kernel: privilege escalation
Package(s): linux, linux-source-2.6.15
CVE #(s): CVE-2010-3301
Created: September 20, 2010
Updated: March 3, 2011
Description: From the Ubuntu advisory: Ben Hawkes discovered that the Linux kernel did not correctly filter registers on 64bit kernels when performing 32bit system calls. On a 64bit system, a local attacker could manipulate 32bit system calls to gain root privileges.
libtiff: code execution
Package(s): libtiff
CVE #(s): CVE-2010-3087
Created: September 22, 2010
Updated: March 15, 2011
Description: From the openSUSE advisory: Specially crafted tiff files could cause a memory corruption in libtiff. Attackers could potentially exploit that to execute arbitrary code in applications that use libtiff for processing tiff files (CVE-2010-3087).
python-updater: code execution
Package(s): python-updater
CVE #(s): none
Created: September 22, 2010
Updated: September 22, 2010
Description: From the Gentoo advisory: Robert Buchholz of the Gentoo Security Team reported that python-updater includes the current working directory and subdirectories in the Python module search path (sys.path) before calling "import". A local attacker could entice the root user to run "python-updater" from a directory containing a specially crafted Python module, resulting in the execution of arbitrary code with root privileges.
squid: denial of service
Package(s): squid
CVE #(s): CVE-2010-3072
Created: September 22, 2010
Updated: May 19, 2011
Description: From the Red Hat bugzilla entry: A denial of service flaw was found in the way the Squid proxy caching server internally processed NULL buffers. A remote, trusted client could use this flaw to crash the squid daemon (via a NULL pointer dereference) when processing a specially-crafted request.
tomcat: information disclosure
Package(s): tomcat
CVE #(s): CVE-2010-1157
Created: September 22, 2010
Updated: February 14, 2011
Description: From the Novell report: Apache Tomcat 5.5.0 through 5.5.29 and 6.0.0 through 6.0.26 might allow remote attackers to discover the server's hostname or IP address by sending a request for a resource that requires (1) BASIC or (2) DIGEST authentication, and then reading the realm field in the WWW-Authenticate header in the reply.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.36-rc5, released on September 20. "Nothing really stands out. Except perhaps that Al has obviously been looking at architecture sigreturn paths, and been finding problems, and Dan Rosenberg has been finding places where we copy structures to user space without having fully initialized them yet, leaking kernel stack contents. And we had that annoying x86-64 32-bit compat syscall bug that needed fixing for the _second_ time." See the announcement for the short changelog, or the full changelog for all the details.
Stable updates: the 2.6.27.54, 2.6.32.22 and 2.6.35.5 updates were released on September 20; they each contain a long list of important fixes.
Quotes of the week
I researched the fragmented history of the stone ages as well, i checked out numerous cave paintings, and while much was lost, i was able to recover this old fragment of a clue in the cave called 'patch-2.3.27', carbon-dated back as far as the previous millenium (!).
VFS developers should be able to review each of these patches in 3 minutes or less. If it takes you longer, email me and I'll post a video on YouTube making fun of you.
After that was done, we forward ported devfs. Proper disc management code in Linux was long overdue.
The return of the Kernel Podcast
Jon Masters has announced that he is once again producing kernel podcasts. "My intention is to get back into doing this regularly again as it helps me keep up with LKML - recently I've been having enough time to do most of the prep but not enough time to write the show and record it! I don't do this for any reason other than to force myself to keep up with LKML."
Oracle's "Unbreakable Enterprise Kernel"
Oracle has sent out a press release proclaiming the virtues of its new kernel. "Based on the combined efforts of Oracle's Linux, Database, Middleware, and Hardware engineering teams, the Unbreakable Enterprise Kernel is: Fast: More than 75 percent performance gain demonstrated in OLTP performance tests over a Red Hat Compatible Kernel; 200 percent speedup of Infiniband messaging; 137 percent faster solid state disk access." Much of the gain may come from using 2.6.32 as the base instead of 2.6.18, but it's hard to tell for sure; source for this kernel does not seem to be available for download as of this writing.
This is an interesting move in that it reintroduces competition at the kernel level - something distributors have not emphasized in recent years - and it demonstrates an intent to move a bit further away from the RHEL base that Oracle uses to build its offering.
Updating the kernel's web links
Justin P. Mattock has set out a rather large clean-up task for himself: updating all of the web links in the kernel comments. As might be guessed, many of those links have link-rotted over the last ten or more years, so Mattock is trying to update the kernel to point to the proper place—if it can be found. That effort resulted in a monster patch that covered all of the references to "http" that he could find.
Many of the new links pointed off to archive.org as the only location that Mattock was able to find, but that caused a formatting problem. When adding those, he used links like:
http://web.archive.org/web/*/http://oldsite/oldlink
which shows all of the different versions of pages that the Wayback Machine
has stored. Putting "*/" into a C-language comment is not a good plan,
however,
as Matt Turner pointed out. The proper
solution is to use "%2A" as that is the HTML entity for "*". But there is a
bigger issue with those archive links.
Finn Thain suggested that any of those
links could just be left alone and that people should already know about
archive.org, so adding it to the old links is just "bloat". Furthermore, there is a question of which
version of the stored page is the one that the original comment referred
to. Basically, Thain's point was that web pages which are maintained and
updated are likely to be more useful, and that those who want to refer to
pages that have dropped off the net should know (or learn) how to go about it.
Eventually Mattock split the patch into two parts, one that updated links to newer locations and the other which added the archive.org links for lost sites. He is soliciting more feedback on whether to include the archive links or not.
It is not clear, so far at least, whether these changes will be accepted. It is, in a sense, churn, and likely to lead to more churn down the road as link-rot is an endemic web problem. It is probably frustrating for developers and others to come across broken links in the kernel code, but is it worth the never-ending—hopefully fairly infrequent—stream of update patches? There are undoubtedly copyright, logistical, and other issues, but it would certainly be a lot nicer if these documents could be permanently stored in some location at kernel.org.
Kernel development news
BKL-free in 2.6.37 (maybe)
The removal of the big kernel lock has been an ongoing, multi-year effort which has been reported on here a few times. The BKL has some strange and unique properties which make its removal from various kernel subsystems trickier than one might think it should be. But, thanks to a great deal of work by Arnd Bergmann, we might just be approaching a point where the 2.6.37 kernel can be built BKL-free for many or most users. There is, however, one significant obstacle which still must be overcome.

Arnd currently has a vast array of patches in the linux-next tree. Many of them are the result of the tedious (but tricky) work of looking at specific subsystems, determining what kind of locking they really need to have, then substituting lock_kernel() calls with something more local. In many cases, the BKL locking can simply be removed, as the code turns out not to need it. A big focus for 2.6.37 has been the removal of the BKL from a number of filesystems - a task which has required digging into some fairly old code. The Amiga FFS, for example, cannot have received much maintenance in recent times, and seems unlikely to have a lot of users.
The most wide-ranging patch for 2.6.37 has to do with the llseek() function, found in struct file_operations. This function allows a filesystem or driver to implement the lseek() system call, changing a file descriptor's position within the file. Unlike most file_operations functions, there is a default implementation for llseek() which simply changes the kernel's idea of the descriptor's position without notifying the underlying code at all. That change, naturally, was done with the BKL held. This implicit default llseek() implementation will have made life easier for a handful of developers, but it makes BKL removal hard: an implementation change could affect any code with a file_operations structure, not just modules which actually implement the llseek() operation.
To make things harder, a great many of these implicit llseek() implementations are not really needed or useful - most device drivers do not implement any concept of a "file position" and pay no attention to whatever the kernel thinks the position might be. In such situations, it is tempting to change the code to an explicit "no seeking allowed" implementation which reflects what is really going on. The problem here is that some user-space application somewhere might be calling lseek() on the device, and they might get upset if those calls started failing with ESPIPE errors. In other words, a successful-but-ignored lseek() call might just be part of the user-space ABI for a specific device. So something more careful has to be done.
The first step was to go through the kernel and add an explicit llseek() operation to every file_operations structure which did not already have one - a patch affecting 343 files. This work was done primarily with a frightening Coccinelle semantic patch (it was included in the patch changelog) which attempts to determine whether the code in question actually uses the file position or not. If the file position is used, default_llseek(), which implements the old default behavior, becomes the explicit default; otherwise noop_llseek(), which succeeds but does nothing, is used. After that work was done, Arnd was able to verify that none of the users of default_llseek() (there are 191 of them) needs the BKL. So the removal of the BKL from llseek() can be made complete.
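As a concrete illustration, a converted driver might end up looking like the following sketch; the mydev_* names are hypothetical, and the real patch was generated mechanically rather than written by hand like this:

```c
/* Sketch of the new, explicit style: a driver that ignores the file
 * position declares noop_llseek() instead of silently inheriting the
 * old BKL-guarded default_llseek(). */
#include <linux/fs.h>
#include <linux/module.h>

static ssize_t mydev_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
        return 0;       /* this device pays no attention to *ppos */
}

static const struct file_operations mydev_fops = {
        .owner  = THIS_MODULE,
        .read   = mydev_read,
        /* Previously this slot could simply be left empty; now the
         * choice must be spelled out explicitly: */
        .llseek = noop_llseek,  /* lseek() succeeds but changes nothing */
};
```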
The patch also changes how llseek() is handled in the core kernel. Starting with 2.6.37, assuming this work is merged (a good bet), any code which fails to provide an llseek() operation will default to no_llseek(), which returns ESPIPE. Any out-of-tree code which depends on the old default will thus not work properly with 2.6.37 until it is updated.
Even after all of this work, there are still a lot of lock_kernel() calls in the mainline. Almost all of them, though, are in old, obscure code which is not relevant to a lot of users. In some cases, the remaining BKL-using code might be shifted over to the staging tree and eventually removed entirely if it is not fixed up. In other cases, an effort will be made to eradicate the BKL; it can still be found in occasionally-useful code like the Appletalk and ncpfs implementations. There are also a lot of Video4Linux2 drivers which still use the BKL; how those drivers will be fixed is the subject of an ongoing discussion in the V4L2 community.
The biggest impediment to a BKL-free 2.6.37, though, may well be the POSIX locking code. File locks are represented internally with a file_lock structure; those structures are passed around to a few places and, of course, protected with the BKL. Patches exist to protect those structures with a spinlock within the core kernel. The main sticking point appears to be the NFS lockd daemon, which uses file_lock structures and which, thus, requires the BKL; somebody is said to be working on fixing this code, but no patches have been posted yet. Until lockd has been converted, file locking as a whole requires the BKL. And, since it's a rare kernel that does not have file locking enabled, that will drag the BKL into almost all real-world kernel builds.
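The conversion being worked on follows the usual pattern for BKL removal: hide the lock behind a pair of helpers so that the BKL can be swapped for a private spinlock in one place once lockd is fixed. A minimal sketch of that pattern (the helper and lock names are assumptions for illustration, not necessarily the actual patches):

```c
/* Sketch of the wrapper pattern used for BKL removal: code that
 * manipulates file_lock structures calls these helpers, so the
 * underlying lock can change from the BKL to a private spinlock
 * without touching every call site. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(file_lock_lock);

static inline void lock_flocks(void)
{
        spin_lock(&file_lock_lock);     /* was: lock_kernel() */
}

static inline void unlock_flocks(void)
{
        spin_unlock(&file_lock_lock);   /* was: unlock_kernel() */
}
```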
Even after that fix is in place, distributor kernels are likely to need the BKL for a bit longer. As long as there is even one module they ship which requires the BKL, the support for it needs to be there, even if most users will not have that module loaded. People who build their own kernels, though, should often be able to put together a configuration which does not need the BKL. If all goes well, 2.6.37 will have a configuration option which makes BKL-free builds possible. That's a huge step forward, even if the BKL still exists in most stock kernels.
The hazards of 32/64-bit compatibility
A kernel bug that was found—and fixed—in 2007 has recently reared its head again. Unfortunately, the bug was reintroduced in 2008, leaving a rather large pile of kernel versions that are vulnerable to a local privilege escalation on x86_64 systems. Though perhaps difficult to do, it would seem that some kind of regression testing suite for the kernel might be able to detect these kinds of problems before they get released to the world.
There are two semi-related bugs that are both currently floating around, which is causing a bit of confusion. One was originally CVE-2007-4573, and was reintroduced in a cleanup patch in June 2008. The reintroduced vulnerability has been tagged as CVE-2010-3301 (though the CVE entry is simply reserved at the time of this writing). Ben Hawkes found a somewhat similar vulnerability—also exploiting system calls from 32-bit binaries on 64-bit x86 systems—which led him to the discovery of the reintroduction of CVE-2007-4573.
There are numerous pitfalls when trying to handle 32-bit binaries making system calls on 64-bit systems. Linux has a set of functions to handle the differences in arguments and calling conventions between 32 and 64-bit system calls, but it has always been tricky to get right. What we are seeing today are two instances where it wasn't done correctly, and the consequences of that can be dire.
The 2007 problem stemmed from a mismatch between the use of the %eax 32-bit register to store the system call number (which is used as an index into the syscall table) and the use of the %rax 64-bit register (which contains %eax as its low 32 bits) to do the indexing. In the "normal" system call path, %eax was zero-extended before the 32-bit system call number from user space was stored, but there was a second path into that code where the upper 32 bits in %rax were not cleared.
The ptrace() system call has the facility to make other system calls (using the PTRACE_SYSCALL request type) and also gives a user the ability to set register values. An attacker could set the upper 32 bits of %rax to a value of their choosing, make a system call with a seemingly valid index (in %eax) and end up indexing somewhere outside of the syscall table. By arranging to have exploit code at the designated location, the attacker can get the kernel to run his code.
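The mismatch is easiest to see in a C rendering of the dispatch logic; the real code is x86-64 assembly, and the names below are illustrative only:

```c
/* Illustrative, self-contained C rendering of the vulnerable pattern.
 * The bounds check sees only the low 32 bits (%eax), but the table
 * index uses the full 64-bit register (%rax). */
#include <stdio.h>

#define IA32_NR_SYSCALLS 4
typedef long (*syscall_fn)(void);

static long sys_stub(void) { return 0; }
static syscall_fn sys_call_table[IA32_NR_SYSCALLS] = {
        sys_stub, sys_stub, sys_stub, sys_stub,
};

static long dispatch(unsigned long rax)
{
        unsigned int eax = (unsigned int)rax;   /* low 32 bits: nominal syscall number */

        if (eax >= IA32_NR_SYSCALLS)            /* check passes for a "valid" %eax... */
                return -1;                      /* (-ENOSYS in the kernel) */

        /* ...but the index uses all of %rax: with attacker-set upper
         * bits, this would jump through a pointer far outside the table. */
        return sys_call_table[rax]();
}

int main(void)
{
        printf("%ld\n", dispatch(1));   /* normal call: fine */
        /* dispatch((1UL << 32) | 1) would pass the eax check yet index
         * out of bounds -- the essence of CVE-2007-4573/CVE-2010-3301. */
        return 0;
}
```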
The ptrace() path was fixed by Andi Kleen in September 2007 by ensuring that %eax (and other registers) were zero-extended. But zero-extending %eax was removed in Roland McGrath's clean up patch in June 2008. When Hawkes and Robert Swiecki recently noticed the problem, they had little difficulty in modifying an exploit from 2007 to get a root shell on recent kernels.
CVE-2010-3301 was resolved by a pair of patches. McGrath put the zero-extension of the %eax register back into the ptrace path, while H. Peter Anvin made the validity test of the system call number look at the entire %rax register. Either would be sufficient to close the current hole, but Anvin's patch will prevent any new paths into the system call entry code from running afoul of this problem in the future.
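In terms of the sketch above, the two fixes look something like the following. Note that on x86-64, writing to a 32-bit register clears the upper 32 bits of the full register, so the ptrace-path fix amounts to a single instruction along the lines of movl %eax,%eax:

    /* Continuing the sketch above.  McGrath's patch restores the
     * zero-extension invariant on the ptrace path; Anvin's patch,
     * modeled here, makes the bounds check cover the whole register. */
    static long dispatch_fixed(uint64_t rax)
    {
        if (rax >= NR_SYSCALLS)            /* compare all 64 bits */
            return -1;                     /* -ENOSYS */

        return ia32_sys_call_table[rax](); /* index is provably in range */
    }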
The fact that the old exploit still worked implies that a test case written in 2007 could have detected the reintroduction of the problem. A suite of such regression tests, run regularly against the mainline, would be a useful way to reduce regressions, for normal bugs as well as for security holes. Not all kernel bugs will be amenable to that kind of testing, but, for those that are, it seems like an idea worth pursuing.
The other problem that Hawkes found (CVE-2010-3081, also just reserved) is that the compat_alloc_user_space() function did not check to see that the pointer which is being returned is actually a valid user-space pointer. That routine is used to allocate some stack space for massaging 32-bit data into its 64-bit equivalent before making a system call. Hawkes found two places (and believes there are others) where the lack of an access_ok() call in that path could be exploited to allow attackers to write to kernel memory.
One of those was in a video4linux ioctl(), but the more easily exploited spot was in the IP multicast getsockopt() call. It uses a 32-bit unsigned length parameter provided by user space that can be used to trick compat_alloc_user_space() into returning a pointer into kernel memory. The compat_mc_getsockopt() call then writes user-supplied values through that pointer, which, as Hawkes noted, can fairly easily be turned into an exploit.
Anvin patched compat_alloc_user_space() so that it always performs the access_ok() check. That should take care of the two problem spots that Hawkes found, as well as any others that are lurking out there. But a whole lot of kernels have been released with one or both of these bugs, and there have been other bugs associated with 64-bit/32-bit compatibility; it is a part of the kernel that Hawkes calls "a little bit scary".
Perhaps 32-bit compatibility for x86_64 kernels would be a good starting point for regression testing. Some enterprise distributions were not affected by CVE-2010-3301 because of the ancient kernels (like RHEL's 2.6.18) they are based on, but CVE-2010-3081 was backported into RHEL 5, which required that kernel to be updated. The interests of distribution vendors would be well-served by better—any—regression testing, so a project of that sort would be quite welcome. The vendors may already be running some tests internally, but regression testing is just the kind of project that would benefit from some cross-distribution collaboration.
It should also be noted that a posting to the full-disclosure mailing list claims that the vulnerability in compat_mc_getsockopt() has been known to black (or at least gray) hats for nearly two-and-a-half years. According to the post, it was noticed when the vulnerability was introduced in April 2008. There are certainly people following the commit stream to try to find these kinds of vulnerabilities; it would be good if the kernel had a team of white hats doing the same.
Broadcom firmware and regulatory compliance
Broadcom's recently-announced open source wireless networking driver was a major step forward for a company which has, until now, not been forthcoming when it comes to free support for its wireless products. That driver includes the obligatory firmware blob, which has been licensed for free distribution by the company; it is now found in the kernel firmware repository. Broadcom has not freed the firmware for its older drivers, though, leading to discussions on the intersection between kernel development and regulatory compliance.
The lack of freely-distributable firmware for older Broadcom products makes life a bit more difficult for users, who must obtain the firmware separately. When the new firmware was made distributable, David Woodhouse asked the company about the older firmware as well, only to be told that it would not be made distributable. As he explained it, Broadcom is afraid that allowing the distribution of that firmware could lead to trouble:
The reason why the old firmware is different is simple: the newer firmware, which can only run on newer hardware, has regulatory compliance built into it. The older firmware, instead, depends on the driver in the kernel to ensure that it is not configured to operate in a non-compliant manner.
David is not known for graceful suffering in the presence of (people he sees as) fools. His response was a patch which "credits" Broadcom for enabling the development of the reverse-engineered b43 driver; this "enablement" is said to have come through the provision of binary-only drivers which could be reverse engineered. His goal in writing this patch was described as:
Or failing that, in the hope that it'll give their crack-addled lawyers aneurysms, and they'll hire some saner ones to replace them.
He also expressed a wish that the b43 developers would release more information - obtained from the binary-only drivers - on how to patch those binary drivers to get around various regulatory restrictions. Once again, he feels that this kind of information would help to make it clear that free drivers do not make it any easier to operate the hardware in an illegal manner.
David's position plays well with developers who have no patience for obstacles created by lawyers. There is also a vocal contingent out there which says that Linux has no business telling users how they should use their hardware in any case; if the user wants to configure the hardware in a non-compliant manner, that's the user's problem. In some cases, that user may well have a license which makes it entirely legal to run the hardware outside of the parameters which normally apply to off-the-shelf wireless networking equipment. So regulatory compliance naturally irritates developers who think that the kernel has no business getting in anybody's way in this regard.
Luis Rodriguez, on the other hand, is a strong supporter of regulatory compliance in the Linux kernel; he stepped into the discussion to remind people of the kernel's regulatory statement and to say that there was no real interest in encouraging the violation of spectrum-use regulations with any driver. He added:
We are not dealing with legal issues on Linux, we are dealing with engineering solutions, and trust me, we're light years ahead of other OSes because of this now.
His point is that the kernel's "engineering solution" to the regulatory problem has made it possible for wireless vendors to dip their toes into the open-source water. That, in turn, has helped to move Linux from having poor wireless support to, arguably, having the best support over the course of a few years. It is hard to argue with the success which the wireless developers have had recently; any moves which might endanger that success should be considered carefully, to say the least.
Of course, it would be nicer to do without the proprietary firmware blob altogether. In early 2009, the openfwwf project announced the availability of an open source firmware implementation for Broadcom adapters. Since then, news from that project has been relatively scarce. On September 21, though, Michael Büsch announced the availability of a toolchain for working with the b43 firmware. Using the disassembler and assembler, it is possible to decode the device firmware, make changes, then build a new firmware load. Naturally, one can also build a new firmware implementation from the beginning. With these tools available, we might just get to a point where we can have device firmware without distribution restrictions, and which adds features and flexibility to the device as well.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Mandriva Linux forked into Mageia
After years of financial problems and layoffs, the Mandriva community has finally taken matters into its own hands and forked the distribution. More accurately, some ex-Mandriva employees and interested parties from the community have announced an intention to fork and are working to gather interest and set up shop.
Mandriva, formerly MandrakeSoft, has a long history of financial troubles. The most recent release was delayed by a few months due to those troubles, and the company was in discussions to be sold earlier this year. As it turns out, Mandriva eventually sold a controlling stake to a company called NGI, based in Russia, for a €2 million investment.
Following the investment, Mandriva conducted another large layoff and has started, yet again, to restructure. This time the company has been pared down to 27 developers and four subcontractors. According to the post on the company blog laying out the reasons for restructuring, the company plans to make a number of new hires to compensate for the layoffs — but not the original employees. The company also plans to hire a community manager, though it's not clear that a community manager would have much of a remaining community to interact with.
For the former employees and other concerned community members, the latest round of layoffs was enough. On September 18th, the Mageia project was launched with a lengthy list of original employees from Mandriva and Mandriva contributors. Why? According to the announcement:
Most employees working on the distribution were laid off when Edge-IT was liquidated. We do not trust the plans of Mandriva SA anymore and we don't think the company (or any company) is a safe host for such a project.
Many things have happened in the past 12 years. Some were very nice: the Mandriva Linux community is quite large, motivated and experienced, the distribution remains one of the most popular and an award-winning product, easy to use and innovative. Some other events did have some really bad consequences that made people not so confident in the viability of their favourite distribution.
People working on it just do not want to be dependent on the economic fluctuations and erratic, unexplained strategic moves of the company.
The mission statement so far is a bit vague. The project lists "ideas and plans" for the new distribution, including making Linux and free software "straightforward" to use for everyone; providing integrated system configuration tools; providing a high level of integration for the distribution; and targeting new form-factors and architectures. In short, it sounds mostly like the project wishes to continue Mandriva under a new banner, as a community project rather than a commercial one.
At the moment, there's not much to see at the new Mageia site. It has links to mailing lists, which are largely clogged with statements of support and offers of contributions and a couple of posts asking "where are the jobs?" There's also a new blog, with postings thanking the community for its enthusiasm and outlining the next steps.
Romain d'Alverny, former Web department manager at Mandriva, noted that it will be a while before the project reaches the point of offering employment. In fact, the project is still getting to the point of offering a wiki, blog, and basic services aside from the mailing lists. The initial project announcement says that the project is seeking code hosting, build servers, contributors, and counsel on building the organization.
openSUSE community manager Jos Poortvliet has extended an offer of assistance to the nascent Mageia community, and suggests that some Mageia contributors might want to attend the upcoming openSUSE Conference to collaborate. Poortvliet has also recommended that the Mageia team consider making use of the openSUSE Build Service to start:
Moreover, openSUSE has the Build Service which we use to build our distribution. But the build service is also used by many other communities including the Linux Foundation's Meego project. It is fully Free Software so anyone can set up their own Build Service if they want. Currently the Build Service already allows building Mandriva packages and we could support Mageia too. As a stop-gap measure, the Mageia community could build (part of) their distro on our infrastructure - and in time set up their own Build Service instance.
Maybe there is more room for collaboration. Building a distribution is NOT easy and a lot of work - it'd be cool if openSUSE and Mageia could share some resources on some of the infrastructural bits like the kernel, Xorg and other base libraries. But even without that - the Build Service is there for you.
The offer of community assistance and tools might be well-received by Mageia's growing community, or they might see it as an attempt to capitalize on an unfortunate situation. There's also the small matter of openSUSE's sponsor Novell possibly selling off SUSE Linux (perhaps to VMware) in the near term.
If Mageia is to make a go of it, the community would do well to avoid the approach often taken by Mandriva itself — attempting to do many things, rather than focusing on specific user profiles. The latest post on the Mandriva blog is a good example. The company is struggling to keep its head above water, yet it promises to deliver desktop, server, and education-focused distributions. It also plans to have a "cloud strategy" and online services integrated into the desktop versions, as well as tablet releases for ARM and Intel-based platforms. Given the company's resources, it might be better to plan to focus on one or two areas of development and see if it can become revenue-positive before developing cloud strategies or attempting to launch a tablet-based distribution.
Can Mageia be successful? The users and contributors who have stayed with Mandriva over the years have shown remarkable loyalty to the distribution. If that translates into contributions and the community is focused, it's possible that Mageia could be a successful venture. It seems all but certain at this point that Mandriva as a company and a distribution are on borrowed time. With any luck, the community die-hards that have stuck around this long will be able to thrive as a community project where commercial ventures have failed.
Distribution News
Debian GNU/Linux
Google Summer of Code 2010 Debian Report
Obey Arthur Liu has a report about Debian's Google Summer of Code, including a brief report from each of the students. "This year, 8 of our 10 students succeeded in our (very strict!) final evaluations, but we have reasons to believe that they will translate into more long-term developers than ever, all [thanks] to you."
Bit from the Release Team: Status of hppa
The hppa architecture will not be supported in the Debian Squeeze release. The status of hppa in unstable is unchanged.
Fedora
Duffy: Fedora Board Meetings
Máirín Duffy provides a summary of the Fedora Board meetings held September 10 and September 13. The meeting on the 10th was a public IRC meeting. The meeting on the 13th covered some general updates, a ticket review, and other board business.
Mandriva Linux
Mandriva news by the board
The Official Mandriva Blog has some news from the board. "The Mandriva Community will be autonomous and governance structures will be created to ensure freedom. The Mandriva enterprise is just an element of this independent community. A community manager will be hired by Mandriva to help the community to implement these plans. The next version of the Mandriva community distribution will be available in spring 2011. The community version of the Mandriva distribution is the one on which the Powerpack distribution, the Corporate Desktop distribution and the Mandriva Enterprise Server distribution are based upon."
Ubuntu family
Minutes of the Ubuntu Technical Board meeting
Click below for the minutes from the September 21 meeting of the Ubuntu Technical Board. Topics include SRU microrelease exception for bzr and Chromium security updates.
New Distributions
Mageia - a Mandriva fork
A large group of former Mandriva employees has come together to announce Mageia, a fork of the Mandriva distribution. "Mageia is a community project: it will not depend on the fate of a particular company. A not-for-profit organization will be set up in the coming days and it will be managed by a board of community members. After the first year this board will be regularly elected by committed community members."
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (September 21)
- DistroWatch Weekly, Issue 372 (September 20)
- Fedora Weekly News Issue 243 (September 15)
- openSUSE Weekly News, Issue 141 (September 18)
- Ubuntu Weekly Newsletter, Issue 211 (September 18)
Page editor: Rebecca Sobol
Development
PostgreSQL 9.0 arrives with many new features
Version 9.0 of the PostgreSQL database management system was released on September 20, with considerably more "buzz" than a PostgreSQL release has had in a while. The PostgreSQL Project does a major release every year, but this one is special. It has more than a dozen major features and nearly 200 minor improvements, so the release has more new goodies in it for database geeks than any release before it, hence the "9.0" version number.
It includes some things which users have been requesting for years, such as built-in replication. Given this large number of new features, I'm not even going to try to cover them all; instead, I'll pick five very different features as examples and you can read about the rest on PostgreSQL.org.
Binary replication
PostgreSQL 9.0 includes a new mechanism for replicating entire databases which uses binary logs to replicate between servers. Since this is very similar to Oracle's Hot Standby Databases, it is often also called "hot standby". While PostgreSQL already has several other replication tools, including Slony-I, pgPool2, and Bucardo, binary replication supports greater scalability and availability for the majority of PostgreSQL users. It is also simpler to configure and initialize for the most basic configuration: two identical databases with failover.
This feature is also expected to help push PostgreSQL in cloud hosting and software as a service (SaaS) environments. Overhead for each standby database is very low, allowing a single master to support tens or even hundreds of standby databases. Binary replication also works quite well together with virtual machine and filesystem cloning tools.
Binary replication works by copying the binary change logs, called "transaction logs", from one server to another. This builds on the mechanism used for "warm standby" in PostgreSQL for several years, with two important differences: you can now run queries on the standby server(s), and logs can now be shipped continuously over a database port connection. The latter also decreases latency between servers, allowing the standby server to be only a fraction of a second behind.
The new replication was a large, community-wide effort which took two years. Most of the work was done by developers working for 2nd Quadrant and NTT Open Source, with contributions from developers at EnterpriseDB and Red Hat as well. Plans for 9.1 include administration tools, standby promotion, and synchronous replication.
New triggers
PostgreSQL applications frequently put a lot of programming into the database itself. This includes triggers, which are scripts that execute whenever data changes. PostgreSQL 9.0 expands this capability with two new kinds of triggers: column triggers and conditional triggers.
Column triggers are defined in the ANSI SQL Standard. These triggers execute only when data in that specific column changes. Imagine a database where you want to log a message every time a user changes their password, but not for other kinds of changes. You could define a trigger like so:
CREATE TRIGGER password_changed
AFTER UPDATE OF password ON users
FOR EACH ROW
EXECUTE PROCEDURE password_changed_log();
Conditional triggers are a PostgreSQL-only feature. A conditional trigger executes whenever a specific condition is met, but not otherwise. For example, imagine that you have a database of e-mail users and you want to freeze accounts whenever total storage goes over 5MB:
CREATE TRIGGER freeze_account
AFTER UPDATE ON mail_accounts
FOR EACH ROW
WHEN ( NEW.total_storage > 5000000 )
EXECUTE PROCEDURE freeze_account_full();
New security features
Given the widespread concerns about database security in the industry, it's no surprise that PostgreSQL is adding new features to control access to data. First, PostgreSQL now supports RADIUS authentication, in addition to password, SSL, LDAP, PAM, and GSSAPI authentication. Second, large objects in PostgreSQL now have settable access permissions.
But the biggest new security feature is a set of commands to set blanket access permissions across an entire schema. This will make it much easier for web developers to make effective use of database permissions and to limit the damage from SQL injection attacks and other exploits. The first of these commands is GRANT ... ON ALL, which allows you to set permissions for a specific type of object across a whole schema:
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA cmsdata TO webcms;
The second command is ALTER DEFAULT PRIVILEGES, which allows you to define the permissions that will be applied automatically to database objects created in the future:
ALTER DEFAULT PRIVILEGES FOR ROLE webcms IN SCHEMA cmsdata
    GRANT SELECT, INSERT, UPDATE ON TABLES TO webcms;
Previously, either of these tasks would have required writing a script to iterate over each object in the database.
HStore: key-value PostgreSQL
The renaissance of non-relational databases, or "NoSQL movement", has had an influence on PostgreSQL as it has on everyone else. One such influence is the HStore optional module included in version 9.0, which implements a key-value store inside PostgreSQL.
HStore has been available in previous versions of PostgreSQL, but built-in limitations made it only useful for a trivial set of tasks. Those limitations are now gone and there are new features of HStore, like key manipulation functions, as well. Application developers can now use HStore for most key-value tasks instead of adding an additional key-value database to their application infrastructure.
For example, say you wanted to store user profile data as a set of key-value data. First, you'd load HStore, since it's an optional module. Next, you'd create a table:
CREATE TABLE user_profile (
user_id INT NOT NULL PRIMARY KEY REFERENCES users(user_id),
profile HSTORE
);
Then you can store key-value data in that table:
INSERT INTO user_profile
VALUES ( 5, '"Home City"=>"San Francisco", "Occupation"=>"Sculptor"' );
Notice that the format for the HStore strings is a lot like hashes in Perl, but you can also use an array format, and simple JSON objects will probably be supported in 9.1. You probably want to index the keys in the HStore for fast lookup:
CREATE INDEX user_profile_hstore ON user_profile USING GIN (profile);
Now you can see what keys you have:
SELECT akeys(profile) FROM user_profile WHERE user_id = 5;
Look up individual keys:
SELECT profile -> 'Occupation' FROM user_profile;
Or even delete specific keys; the SELECT form merely computes the value with the key removed, while the UPDATE actually stores the result:
SELECT profile - 'Occupation' FROM user_profile WHERE user_id = 5;
UPDATE user_profile SET profile = profile - 'Occupation'
WHERE user_id = 5;
JSON & XML explain plans
Like other enterprise SQL databases, PostgreSQL allows users to examine how each query is to be executed in order to optimize database access. The query plan is called an "EXPLAIN plan", and previously was available only as an idiosyncratic text format. For example, a simple single-table sort query might be displayed like so:
Sort (cost=1.05..1.06 rows=305 width=11)
Sort Key: forum_name
-> Seq Scan on forums (cost=0.00..1.03 rows=305 width=11)
While the above text is fine for a database administrator who is troubleshooting specific queries, it is terrible for any form of automated processing, especially since the text changes slightly with each version of PostgreSQL. So developers Robert Haas and Greg Sabino Mullane added some new formats for EXPLAIN plans, including XML:
<explain xmlns="http://www.postgresql.org/2009/explain">
<Query>
<Plan>
<Node-Type>Sort</Node-Type>
<Startup-Cost>1.05</Startup-Cost>
<Total-Cost>1.06</Total-Cost>
<Plan-Rows>3</Plan-Rows>
<Plan-Width>11</Plan-Width>
<Sort-Key>
<Item>forum_name</Item>
</Sort-Key>
...
and JSON:
QUERY PLAN
[
{
"Plan": {
"Node Type": "Sort",
"Startup Cost": 1.05,
"Total Cost": 1.06,
"Plan Rows": 3,
"Plan Width": 11,
"Sort Key": ["forum_name"],
"Plans": [
...
This feature is designed to encourage the development of new tools to analyze explain plans and suggest database improvements.
Changes in the PostgreSQL project
It's not just the PostgreSQL database which is changing rapidly; there have been changes in the project and community as well.
During September, the PostgreSQL project is finally moving to the git version control system from CVS. This move was planned before the 9.0 release, but technical difficulties have prevented it from being completed yet; this may be the topic of another article. The PostgreSQL home page is also migrating from an ad-hoc web framework written in PHP to the popular Django web framework.
Most of all, though, what has changed is the project's processes and attitudes, which have allowed the PostgreSQL project to add dozens of new contributors over the last two years. Selena Deckelmann, a major PostgreSQL contributor, described the changes this way:
As a group, we work really hard to recruit and maintain long-term relationships with developers, and that investment in people has paid off really well in 9.0. We have long term commitments from volunteers and independent businesses to implement features that take multiple years to see through to completion. The binary replication is a clear example of that, and we have many other projects underway that are only possible because developers trust our core development team to see them through.
The future of PostgreSQL
The PostgreSQL project's developers are already on to version 9.1; in fact, the second CommitFest for 9.1 has already started. Features currently under development include synchronous replication, SQL/MED federated tables, security label support, per-column collations, predicate locking, benchmarking tools, and new XML functions. Additionally, Google Summer of Code students created draft patches for JSON support and the MERGE operation.
"We have seen incredible increase in the volume of activity within the PostgreSQL community, in user base, development, and infrastructure.
" commented core team member Bruce Momjian.
The future certainly seems bright for what was once the "other open source database".
Brief items
Quote of the week
Codec2 low-bandwidth voice codec released
Codec2 is a codec intended for use with voice data where bandwidth is at a premium. "Currently it can encode 3.75 seconds of clear speech in 1050 bytes, and there are opportunities to code in additional compression that will further reduce its bandwidth." The 1.0 release is currently available.
conf2py released
conf2py (seemingly with no version number) has been released. It is an application for conference management; its features include single sign-on support, configurable billing policies, paper submission and review, and more.
Diaspora source released
The Diaspora project, working on privacy-aware social networking, has made its first source release. "Much of our focus this summer was centered around publishing content to groups of your friends, wherever their seed may live. It is by no means bug free or feature complete, but it [is] an important step for putting us, the users, in control. Developers, our code is on github, our tracker is public, we have a developer mailing list, and we are happily accepting patches." (Thanks to Tom Arnold).
Firefox, Thunderbird and SeaMonkey updates released
Qt 4.7.0 released
The Qt 4.7.0 release is out. "Although it's a little more than nine months since Qt's last feature release (4.6.0 on December 1st, 2009), the seeds of some of the new stuff in 4.7 were sown much earlier. Indeed, many of the ideas behind the biggest new feature in Qt 4.7.0, QtQuick, were born more than two years ago, not long after Qt 4.4 was released. We hope you'll benefit from the effort and care that went into bringing the implementation of those ideas to maturity." See the "What's new in Qt 4.7" page for more information on this release.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (September 21)
- PostgreSQL Weekly News (September 19)
An Ecology Of Ardour (Linux Journal)
Dave Phillips has some news about Ardour, an open-source digital audio workstation. "Ardour3 is a significant evolution from previous versions of the program. Perhaps the most anticipated enhancement is the addition of MIDI track support. Standard MIDI files can be loaded, edited, and saved as integral parts of your projects, or you can record MIDI data directly. If you're the meticulous type you can choose to enter MIDI events by hand, one by one. Another welcome feature is the expanded MIDI synchronization support. Ardour3 now supports MIDI Clock, a method that is especially useful for synchronizing devices such as external arpeggiators and synthesizers that use its timing information to control certain aspects of their sounds."
Page editor: Jonathan Corbet
Announcements
Non-Commercial announcements
Mozilla Joins Open Invention Network (The Mozilla Blog)
On its blog, Mozilla has announced that it joined the Open Invention Network (OIN). "Patents owned by Open Invention Network are available royalty-free to any company, institution or individual that agrees not to assert its patents against the Linux System. By joining, Mozilla receives cross-licenses from other OIN licensees, but more importantly, for the long term, it affords us a chance to work with OIN in reducing IP threats to open source development and innovation. This may include a defensive publications program that would make it harder for others to patent work created by Mozilla contributors, sharing defensive tactics, and cooperation to minimize patent threats."
The Ubuntu application review process
Canonical has announced a mechanism by which applications will be reviewed for possible acceptance into the Ubuntu Software Centre. "Recently we formed a community-driven Application Review Board that is committed to providing high quality reviews of applications submitted by application authors to ensure they are safe and work well. Importantly, only new applications that are not present in an existing official Ubuntu repository (such as main/universe) are eligible in this process (e.g a new version of an application in an existing official repository is not eligible). Also no other software can depend on the application being submitted (e.g. development libraries are not eligible), only executable applications (and content that is part of them) are eligible, and not stand-alone content, documentation or media, and applications must be Open Source and available under an OSI approved license."
Articles of interest
Novell's Patents Are Complicating Its Sale (GigaOm)
Matt Asay ponders the role of Novell's patents in the sale of the company. "This is why Red Hat has remained interested in Novell, despite a lack of competitive threat from SUSE. At all. In a world of rough-and-tumble patent litigation, Red Hat can't continue to bring vague rally-the-troops, anti-patent jingoism to a knife fight. Yes, it has done well to support the Open Invention Network and other patent collectives, but having its own defensive patent war chest would go a long way toward securing its thriving Linux, virtualization, and middleware businesses from latent patent suits."
Adobe Flash Player Square adds native 64-bit support (NetworkWorld)
NetworkWorld reports that Adobe has made available a preview of Flash Player with native 64-bit support, and there is a Linux version. "The preview, available from Adobe Labs, is offered so that users can test existing content and new platforms for compatibility and stability. Adobe warns, the preview isn't yet stable, and advises caution when installing Flash Player "Square" on production machines."
Resources
Linux Foundation Monthly Newsletter: September 2010
The Linux Foundation newsletter for September 2010 covers: Qualcomm Innovation Center Joins LF, LF Industry-Wide Open Compliance Program, Program for LF End User Summit, and several other topics.
Education and Certification
LPI Announces New Affiliate for Asia Pacific
The Linux Professional Institute (LPI) has announced a new affiliate partner for the Asia Pacific region--LPI-APAC. "LPI-APAC will include the following countries: Australia, China, Hong Kong, India, Malaysia, New Zealand, Philippines, Singapore, Taiwan, and Vietnam."
Calls for Presentations
FUDCon LATAM 2011 bids
The bid period for the next Fedora Users and Developers Conference (FUDCon) in the LATAM region is open until September 30. "Decisions for a FUDCon location are made on the basis of many factors, including costs, quality and quantity of bid information provided, facilities available, bid team commitment, and the desire to move FUDCon around a given region where practical."
O'Reilly MySQL Conference and Expo 2011 CfP
The O'Reilly MySQL Conference & Expo will take place April 11-14, 2011 in Santa Clara, California. The call for proposals is open until October 25, 2010.
Upcoming Events
linux.conf.au 2011 news
The first confirmed keynote speaker for linux.conf.au 2011 will be Mark Pesce. "Mark Pesce is an inventor, writer, educator and broadcaster. He is the co-developer of the Virtual Reality Markup Language (VRML) which forms a basis for 3D rendering used in millions of computers worldwide. His work at US universities included initiating 3D and interactive arts programs."
A draft schedule of presentations is available for lca2011. "From around the world, and among the best and brightest in Australia & New Zealand, the schedule for lca2011 includes topics from cloud computing and HTML5 to Kernel innovations and Databases as well as a FOSS powered Telco in Dili and Linux powered coffee roaster. linux.conf.au 2011 runs for a full week starting Monday 24th January, with presentations, tutorials, some significant Keynote presentations, 15 Miniconfs and an Open Day."
If you are interested in speaking at a Miniconf see this page and click on the Miniconf of choice to see the requirements for that particular session.
Canonical Announces Provisional Ubuntu Developer Summit Tracks
Canonical has announced a list of provisional tracks for the Ubuntu Developer Summit (UDS-N) scheduled for October 25-29, 2010 in Orlando, Florida.
Events: September 30, 2010 to November 29, 2010
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 30-October 1 | Open World Forum | Paris, France |
| October 1-2 | Open Video Conference | New York, NY, USA |
| October 1 | Firebird Day Paris - La Cinémathèque Française | Paris, France |
| October 3-4 | Foundations of Open Media Software 2010 | New York, NY, USA |
| October 4-5 | IRILL days - where FOSS developers, researchers, and communities meet | Paris, France |
| October 7-9 | Utah Open Source Conference | Salt Lake City, UT, USA |
| October 8-9 | Free Culture Research Conference | Berlin, Germany |
| October 11-15 | 17th Annual Tcl/Tk Conference | Chicago/Oakbrook Terrace, IL, USA |
| October 12-13 | Linux Foundation End User Summit | Jersey City, NJ, USA |
| October 12 | Eclipse Government Day | Reston, VA, USA |
| October 16 | FLOSS UK Unconference Autumn 2010 | Birmingham, UK |
| October 16 | Central PA Open Source Conference | Harrisburg, PA, USA |
| October 18-21 | 7th Netfilter Workshop | Seville, Spain |
| October 18-20 | Pacific Northwest Software Quality Conference | Portland, OR, USA |
| October 19-20 | Open Source in Mobile World | London, United Kingdom |
| October 20-23 | openSUSE Conference 2010 | Nuremberg, Germany |
| October 22-24 | OLPC Community Summit | San Francisco, CA, USA |
| October 25-27 | GitTogether '10 | Mountain View, CA, USA |
| October 25-27 | Real Time Linux Workshop | Nairobi, Kenya |
| October 25-27 | GCC & GNU Toolchain Developers Summit | Ottawa, Ontario, Canada |
| October 25-29 | Ubuntu Developer Summit | Orlando, Florida, USA |
| October 26 | GStreamer Conference 2010 | Cambridge, UK |
| October 27 | Open Source Health Informatics Conference | London, UK |
| October 27-29 | Hack.lu 2010 | Parc Hotel Alvisse, Luxembourg |
| October 27-28 | Embedded Linux Conference Europe 2010 | Cambridge, UK |
| October 27-28 | Government Open Source Conference 2010 | Portland, OR, USA |
| October 28-29 | European Conference on Computer Network Defense | Berlin, Germany |
| October 28-29 | Free Software Open Source Symposium | Toronto, Canada |
| October 30-31 | Debian MiniConf Paris 2010 | Paris, France |
| November 1-2 | Linux Kernel Summit | Cambridge, MA, USA |
| November 1-5 | ApacheCon North America 2010 | Atlanta, GA, USA |
| November 3-5 | Linux Plumbers Conference | Cambridge, MA, USA |
| November 4 | 2010 LLVM Developers' Meeting | San Jose, CA, USA |
| November 5-7 | Free Society Conference and Nordic Summit | Gothenburg, Sweden |
| November 6-7 | Technical Dutch Open Source Event | Eindhoven, Netherlands |
| November 6-7 | OpenOffice.org HackFest 2010 | Hamburg, Germany |
| November 8-10 | Free Open Source Academia Conference | Grenoble, France |
| November 9-12 | OpenStack Design Summit | San Antonio, TX, USA |
| November 11 | NLUUG Fall conference: Security | Ede, Netherlands |
| November 11-13 | 8th International Firebird Conference 2010 | Bremen, Germany |
| November 12-14 | FOSSASIA | Ho Chi Minh City (Saigon), Vietnam |
| November 12-13 | Japan Linux Conference | Tokyo, Japan |
| November 12-13 | Mini-DebConf in Vietnam 2010 | Ho Chi Minh City, Vietnam |
| November 13-14 | OpenRheinRuhr | Oberhausen, Germany |
| November 15-17 | MeeGo Conference 2010 | Dublin, Ireland |
| November 18-21 | Piksel10 | Bergen, Norway |
| November 20-21 | OpenFest - Bulgaria's biggest Free and Open Source conference | Sofia, Bulgaria |
| November 20-21 | Kiwi PyCon 2010 | Waitangi, New Zealand |
| November 20-21 | WineConf 2010 | Paris, France |
| November 23-26 | DeepSec | Vienna, Austria |
| November 24-26 | Open Source Developers' Conference | Melbourne, Australia |
| November 27 | Open Source Conference Shimane 2010 | Shimane, Japan |
| November 27 | 12. LinuxDay 2010 | Dornbirn, Austria |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
