
LWN.net Weekly Edition for March 24, 2011

The case of the fraudulent SSL certificates

By Jake Edge
March 23, 2011

Certificate authorities (CAs) exist to issue certificates that protect users' encrypted traffic when they use SSL/TLS (i.e. HTTPS) for web browsing—at least ostensibly—but the CA system has been a source of general unhappiness for quite some time. The discovery of fraudulent certificates in the wild can only lead to more calls for changes to the CA model, and for some, that model is irretrievably broken.

A Tor project blog posting by Jacob Appelbaum (aka ioerror) has the most detailed look at this particular incident. Basically, a CA (evidently UserTrust, which is part of Comodo) noticed that certificates signed by the CA, but not properly issued by it, were floating around the internet; on March 15, it issued revocations for nine different certificates.

Man in the middle

Part of the problem, though, is that certificate revocation doesn't really work, as browsers are generally not checking the revocation status of certificates. [Update: As pointed out in the comments, browsers do check the revocation status, but, if that check fails, most do not give the user any indication of it.] In addition, many browsers do not keep track of the certificates they have received, so they cannot alert users when a certificate changes. As a result, a fraudulent, but correctly signed, certificate offered by a man-in-the-middle (MITM) attacker will be accepted by most browsers with no indication to the user of any problem.

MITM attacks have traditionally been considered difficult to pull off because they require control of some intermediate node in the path between the user and the web site. The pervasiveness of wireless networking has reduced that barrier considerably. Any access point, even one using encrypted communications (e.g. WPA), could be subverted—or intentionally configured—to perform MITM attacks. Public WiFi hotspots would be a perfect location for an attacker to set up a fraudulent certificate for, say, PayPal, and sniff the credentials of any user that connected to the service. Offering "Free Public WiFi" in crowded places like airports might be another route to setting up an MITM attack.

So far, only addons.mozilla.org (the site for Firefox extensions) has been identified as a victim site for which a fraudulent certificate was issued. According to Appelbaum, there are seven uniquely named certificates floating around (one of which is issued to an invalid hostname: "global trustee"). He speculates that "Facebook, Skype, Google, Microsoft, Mozilla, and others are worthy of targeting". But Comodo has not, at least yet, released any information about which hostnames were targeted; it was Mozilla that disclosed that addons.mozilla.org was a target.

As pointed out by LWN reader sumC, Microsoft has put out an advisory that lists the affected domains: login.live.com, mail.google.com, www.google.com, login.yahoo.com (3 certificates), login.skype.com, and addons.mozilla.org.

Alerted by browser code changes

Since browsers do not generally check the revocation status of certificates, some other mechanism must be used to reject these bad certificates. A change to the Chromium browser code on March 16 is how Appelbaum was first alerted to the problem. That change added a new X509Certificate::IsBlacklisted() function to Chromium, listing the serial numbers of multiple certificates, any of which would cause the function to return "true". Some twelve hours later, Google issued a Chrome update that "blacklists a small number of HTTPS certificates".
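
The check itself is as simple as it sounds. Here is a minimal sketch in C of such a serial-number blacklist (the Chromium original is C++, and the serial numbers below are placeholders, not the actual blacklisted values):

    #include <stdbool.h>
    #include <string.h>

    /* Placeholder serials; the real list is in the Chromium and
       Mozilla patches. */
    static const char *const blacklist[] = {
        "00deadbeef00000000000000000001",
        "00deadbeef00000000000000000002",
    };

    static bool is_blacklisted(const char *serial_hex)
    {
        size_t i;

        for (i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); i++)
            if (strcmp(serial_hex, blacklist[i]) == 0)
                return true;    /* hard-fail the TLS handshake */
        return false;
    }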

At around the same time, Mozilla also patched Firefox with a similar, though not identical, list of serial numbers. This was a clear indication that major browser vendors had been alerted to a problem. Appelbaum started digging further, looking at the published certificate revocation lists (CRLs) using his crlwatch program. Crlwatch used the EFF's SSL Observatory to find a canonical list of CRLs, then fetched them to look for matches to any of the serial numbers blacklisted by Chromium or Mozilla.

He found matches for all but the two test certificates that were listed, all pointing back to UserTrust. The Mozilla patch also points to UserTrust as the compromised CA. That means an attacker was somehow able to get UserTrust to issue certificates for domain names that it should not have; that could happen if the signing key were compromised, or if UserTrust were somehow tricked into issuing those certificates. So far, there are no details on how the fraudulent certificates were created.

Disclosure issues

As Appelbaum points out, it is clear that at least some browser makers were alerted to the problem, but certainly not all. Tor releases a "Tor Browser Bundle" but was not advised of the problem, so the project is left scrambling to update its browsers (though one would guess that Appelbaum's alertness in spotting the issue gave the project a head start). Other, smaller browsers are likely affected by the problem as well, but are just now hearing about it.

Appelbaum agreed to an embargo on releasing information about the problem until the Firefox 4 release on March 22. That embargo was extended to March 23 to ensure that Microsoft could release an IE update, but Mozilla put out its posting on the issue on March 22, at which point Appelbaum considered the embargo lifted and posted his own information.

The embargo is very troubling, since these certificates were evidently already in the wild and potentially causing harm; hiding their existence doesn't help users at all. Also worrisome is that both Google and Mozilla told Appelbaum that the CA "had done something quite remarkable by disclosing this compromise". It seems that other CAs may have fallen prey to the same kinds of attacks and not disclosed that fact. So it is possible that there are other known fraudulent certificates in the wild, presumably just listed on CRLs without any special browser blacklisting. An attacker holding one of those certificates must be cackling with glee.

Even for the certificates that UserTrust/Comodo alerted about, it took some time for browsers to be updated, and it will take even more time for users to get and install those updates. Since we don't know how the certificate-issuing process was subverted, we have to hope that the CA is taking proper steps: either invalidating the signing key if it was compromised, or changing its process so that things like this can't happen again. It would also be good if UserTrust itself disclosed exactly which domain names were affected, though we may already have that information via Microsoft's advisory.

Browser certificate handling

It's easy to see that there is a problem with certificate handling in browsers, but it is less clear what the proper solution is. Keeping track of the certificate that is sent when a domain is first encountered, and comparing it on subsequent visits, would be useful, but the browser makers are likely to be uncomfortable with how to present any changes to users. Certificates expire and are changed for other legitimate reasons, so it may be difficult for users to distinguish a legitimate certificate change from one caused by an MITM attack.
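
The trade-off is easy to see in a minimal sketch of that "remember and compare" approach (C code for illustration only; a fixed-size in-memory table stands in for the persistent store a real browser would need):

    #include <stdio.h>
    #include <string.h>

    #define FP_LEN   20               /* SHA-1 fingerprint length */
    #define MAX_PINS 64

    struct pin { char host[256]; unsigned char fp[FP_LEN]; };
    static struct pin pins[MAX_PINS];
    static int npins;

    /* Returns 0 if the fingerprint matches what was seen before (or the
       host is new), -1 if it changed. Note that the code cannot tell a
       legitimate reissue from an MITM attack; only the user can judge. */
    int check_fingerprint(const char *host, const unsigned char fp[FP_LEN])
    {
        int i;

        for (i = 0; i < npins; i++)
            if (strcmp(pins[i].host, host) == 0)
                return memcmp(pins[i].fp, fp, FP_LEN) ? -1 : 0;
        if (npins < MAX_PINS) {       /* first visit: trust and remember */
            snprintf(pins[npins].host, sizeof(pins[npins].host), "%s", host);
            memcpy(pins[npins].fp, fp, FP_LEN);
            npins++;
        }
        return 0;
    }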

The browsers could also default to using the CRL or online certificate status protocol (OCSP) queries to ensure that the certificates are still valid. That requires that the CA be available to answer those queries every time a certificate is sent by a site, as any downtime will result in web sites becoming unavailable. Adam Langley offers the idea of short-lived certificates in his "Revocation doesn't work" blog posting that was linked above. There are other possible solutions as well, some of which Appelbaum mentions toward the end of his post.

The real problem, though, may be that the CA model doesn't work well on a (largely) decentralized network like the internet. The problems range from incidents like this one to worries about possibly rogue CAs. There are other ideas for handling certificates in a non-CA world (or one where CAs have much reduced authority), but there is a rather large stumbling block to changing the current system: the enormous economic interest that the CAs have in keeping things more or less as they are. CAs derive a huge amount of money from their semi-monopoly on the issuing of certificates, and one could expect that any threat to that income stream would be met with strong resistance. But cases like this one may start to make it clear that changes are required.

Comments (20 posted)

Has Bionic stepped over the GPL line?

By Jake Edge
March 20, 2011

Way back in the early days of Linux, shortly after Linus Torvalds switched the kernel from his own "non-commercial" license to the GPL, he also added an important clarification to the kernel's license. In the COPYING file at the top of the kernel tree since mid-1993, there has been a clear statement that Torvalds, at least, does not consider user-space programs to be derived from the kernel, so they are not subject to the kernel's license:

This copyright does *not* cover user programs that use kernel services by normal system calls - this is merely considered normal use of the kernel, and does *not* fall under the heading of "derived work".

One could easily argue that this distinction is one of the reasons that Linux is so popular today: programs written to run on Linux can be under whatever license the developer chooses. Some recent analyses of Google's Bionic libc implementation, which claim that Google may be violating the kernel's license, seem to be missing—or misunderstanding—that clarification.

A blog posting from Raymond T. Nimmer, who is a professor specializing in intellectual property (IP) law, was the starting point. That posting looks at the boundaries between copyleft and non-copyleft code. Nimmer specifically analyzes the question of whether header files that specify an API to a GPL-covered work can be incorporated into a program that is not released under the GPL. He points to Google's use of the kernel header files in the Bionic library as an example and concludes:

For entities that do not desire to disclose code or force their customers to do so, or otherwise conform to copyleft obligations, working with copyleft platforms and programs presents a very significant and uncertain, risk-reward equation.

Nimmer's post was noticed by Edward J. Naughton, a practicing IP attorney, who then wrote briefly about it at the Huffington Post. Naughton also did a much longer analysis [PDF] as an advisory for his law firm, Brown Rudnick. That advisory concludes with a fairly ominous warning:

But if Google is right, if it has succeeded in removing all copyrightable material from the Linux kernel headers, then it has unlocked the Linux kernel from the restrictions of GPLv2. Google can now use the "clean" Bionic headers to create a non-GPL'd fork of the Linux kernel, one that can be extended under proprietary license terms. Even if Google does not do this itself, it has enabled others to do so. It also has provided a useful roadmap for those who might want to do the same thing with other GPLv2-licensed programs, such as databases.

In turn, Naughton's and Nimmer's analyses were picked up by Florian Mueller, who wrote a blog post about the serious threat that Google and Android face because of this supposed GPL violation. So, is Google really circumventing the GPL in a way that could threaten Linux? To answer that, we'll have to dig into what Bionic is, how it is built, and whether it violates the letter or spirit of the Linux COPYING file.

An interface for user space

The kernel exists to provide services to user space, and one can do nothing useful from user space on a Linux system without invoking the kernel via a system call. The system call boundary is quite clear: crossing it requires a special instruction that puts the CPU into kernel mode. While programmers may see system calls as simple library calls, that's not what's happening under the covers.

In order to use Linux system calls, though, it is necessary to get information from the kernel header files. Several pieces of information are needed, including system call numbers (which is how the calls are invoked), type information for system call arguments, and constants that are required to properly invoke those calls. That information lives in the kernel headers, and any program that wants to run on Linux needs to get it somehow.
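
As an illustration of how thin the wrapper can be, the following user-space sketch (not how any particular libc is implemented) invokes the kernel through the syscall(2) helper, using nothing but a call number that ultimately comes from the kernel headers:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>    /* SYS_getpid, derived from kernel headers */

    int main(void)
    {
        /* same effect as calling the getpid() library wrapper */
        long pid = syscall(SYS_getpid);

        printf("pid: %ld\n", pid);
        return 0;
    }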

The most common way to invoke the kernel is by using GNU libc (glibc). Glibc has a set of "sanitized" kernel header files that are used to build the library, and distributions typically provide packages with those header files to be installed into /usr/include. Programs can then be built by using those header files and linking to glibc. While "sanitized" may sound like it refers to the removal of GPL-covered elements, the main reason for the process is to remove kernel-specific elements: the kernel headers contain lots of kernel-internal types, constants, and functions that are not part of the kernel interface.

It isn't really correct to call the interface that the kernel provides to user space an API (i.e. application programming interface), as it is really an application binary interface (ABI), and one that the kernel hackers strive to maintain for each new kernel release. Removing something from the kernel ABI almost never happens, though new features expand that ABI frequently. The ABI is what allows binaries that were built on an earlier kernel to run on newer kernels. The API, on the other hand, is provided by glibc or some other library.

Using glibc is just one way for a program to be built to run on Linux. There are other libc implementations, including uClibc and dietlibc, which are targeted at embedded devices, as well as the embedded fork of glibc, EGLIBC. A program could also use assembly language instructions to make system calls more directly. Using any of those methods to get at the system call interface is perfectly reasonable, and will require information from the kernel headers. Glibc may be the most popular, but it certainly isn't the only way.

Android's Bionic libc is, at some level, just another alternative C library implementation. It is based on libc from the BSDs, with some Google additions like a simple pthread implementation, and carries a BSD license. It's also a lot smaller than glibc—roughly half the size. The license satisfies one of the goals for Android: keeping the GPL out of user space. Glibc is not under the GPL; it is licensed under the LGPL (v2 currently, with a plan to move it to v3). But even the LGPL may concern Google (and its partners), because LGPLv3 requires that users be able to replace the library—something that doesn't mesh well with locking down phones and other Android devices. In the end, it doesn't matter: Google, like any other kernel user, can make Linux system calls any way it chooses.

Bionic's use of kernel headers

So what does Google do that causes Nimmer, Naughton, and Mueller to claim that it is circumventing the GPL to the detriment of the community? To create the header files used by Bionic and by applications, Google processes the kernel header files to remove all of the extra material that is either only there for the kernel or doesn't make sense in the Bionic environment. In short, with minor exceptions, Bionic does exactly what glibc does: it takes the kernel header files and massages them into a form that defines the interface, so that they can be used by the library itself and by any applications built on it. Nor has Google hidden what it's done; there is a README.TXT file that is quite clear about what it is doing and why.

Glibc and others may use the kernel headers that can be generated from a kernel source tree by doing a "make headers_install". That Makefile target was added to help library developers and distributions create the header files that are required to use the kernel ABI. It is not a requirement, as there are other ways to generate (or create) the required headers, and various libraries have done it differently along the way. The Android developers intend to eventually use the headers that can be created from the kernel tree, but there are currently some technical barriers to doing so. The key piece to understand is that the information required to use the kernel ABI is contained in one and only one place: the kernel header files.
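
For a sense of what that information amounts to, here is an illustrative fragment in the style of a cleaned header. The names are real kernel interface elements, but the values shown are those of 32-bit x86 and differ on other architectures:

    /* Pure ABI facts: a system call number and an open(2) flag. */
    #define __NR_gettimeofday 78
    #define O_NONBLOCK        04000

    /* The argument layout the kernel expects: */
    struct timeval {
        long tv_sec;     /* seconds */
        long tv_usec;    /* microseconds */
    };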

There are two things that Bionic does that are perhaps a bit questionable. The first is that, as part of munging the header files, it removes the comments from them, including the copyright notice at the top of each file. It replaces the copyright information with a generic "This header was automatically generated ..." message, which concludes with: "It contains only constants, structures, and macros generated from the original header, and thus, contains no copyrightable information." That latter claim is likely what has the IP experts up in arms; much of what Naughton and Nimmer posted indicates that they believe Google overreached, in copyright-law terms, by stating that the files do not contain elements eligible for copyright protection.

They may be right in a technical sense, but it still may not make any difference at all. Calling into the kernel requires constants and types (structures mostly) that can only come from the kernel headers. Those make up the functional definition of the ABI, and that ABI has been explicitly cleared for use by non-GPL code. One could argue that Google should keep the copyright information intact—one would guess lawyers were involved in the decision not to, and in the wording of that statement—but that is most likely only a nicety, not a requirement, once one understands that those files just contain the ABI information, nothing more.

Well, perhaps there is a bit more; that is the second questionable practice. The Bionic README notes that "the 'clean headers' only contain type and macro definitions, with the exception of a couple static inline functions used for performance reason (e.g. optimized CPU-specific byte-swapping routines)". Those functions might be considered elements worthy of copyright protection—and not part of the kernel ABI—but they might not be. They are written in assembly code, so they may well represent the only way to efficiently implement byte-swapping for each architecture, making them purely functional elements.
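
For reference, such a routine has this general shape (a generic C rendering similar to the kernel's ___constant_swab32(); the in-tree versions use architecture-specific assembly):

    /* Reverse the byte order of a 32-bit value - about as purely
       functional as code gets. */
    static inline unsigned int swab32(unsigned int x)
    {
        return ((x & 0x000000ffU) << 24) |
               ((x & 0x0000ff00U) <<  8) |
               ((x & 0x00ff0000U) >>  8) |
               ((x & 0xff000000U) >> 24);
    }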

Misunderstanding Torvalds

Both Naughton and Mueller make a big deal about a posting from Torvalds in 2003 that ends with the shout: "BUT YOU CAN NOT USE THE KERNEL HEADER FILES TO CREATE NON-GPL'D BINARIES." While it would seem to be a statement from Torvalds damning exactly what Google is doing, that would be a misreading of what he is saying. One need look no further than the subject of the thread ("Linux GPL and binary module exception clause?") to see that the context is not about user-space binaries, but instead about binary kernel modules. Torvalds may have been a little loose with his terminology in that post, but stepping back through the thread makes it clear he is talking about kernel modules. Furthermore, in another post in that same thread, he reiterates his stance on user-space programs:

This was indeed one of the worries that people had a long time ago, and is one (but only one) of the reasons for the addition of the clarification to the COPYING file for the kernel.

So I agree with you from a technical standpoint, and I claim that the clarification in COPYING about user space usage through normal system calls covers that special case.

But at the same time I do want to say that I discourage use of the kernel header files for user programs for _other_ reasons (ie for the last 8 years or so, the suggestion has been to have a separate copy of the header files for the user space library). But that's due to technical issues (since I think the language of the COPYING file takes care of all copyright issues): trying to avoid version dependencies.

That's a pretty unambiguous statement about using the kernel headers for user-space programs. In fact, in the early days, the accepted practice was to symbolically link the kernel headers into /usr/include, and one might guess that any number of proprietary (and other non-GPL) programs were built that way. Torvalds is no lawyer (nor am I), but his (and the other kernel hackers') intent is likely to be very important in the extremely unlikely case this ever gets litigated.

It is almost amusing that Mueller argues that Google should switch to using glibc rather than Bionic; the suggestion reflects a grave misunderstanding of the relationship between the two libraries. If the Nimmer/Naughton arguments are right, it's hard to see how glibc is any different: their argument essentially boils down to there being no way to use the kernel headers without applying the GPL to the resulting code.

It's certainly not impossible to imagine someone filing a lawsuit over Android's use of the kernel headers. It's also possible that a judge might rule against Android. But given that kernel hackers want user-space programs to use the ABI and that the COPYING file explicitly excludes those programs from being considered derived works of the kernel, one would guess that some kind of workaround would be found rather quickly. Other than the fear, uncertainty, and doubt that these arguments might engender, one would guess that Google isn't really losing much sleep over them.

Comments (199 posted)

Target: Android

By Jonathan Corbet
March 23, 2011
One way to know that a product has become successful is to see others lining up to attack it in the courts. Imitation may be the sincerest form of flattery, but FUD and lawsuits also show, in a rather less sincere way, a certain form of respect. By this measure, the Android system, by virtue of being the target of both, must be riding high indeed.

At the top of the news is Microsoft's just-announced lawsuit against Barnes & Noble, an American bookstore chain. According to Microsoft, the Nook ebook reader, which is based on Android, violates five of Microsoft's software patents. By now, the fact that these patents cover trivial "innovations" should not come as a surprise. Here is a quick overview of what Microsoft claims to own:

  • #5,778,372: "Remote retrieval and display management of electronic document with incorporated images." What's described here is displaying a document on top of a background image with the brilliant feature that the document is displayed while the image is still transferring and the whole page is redrawn once the image shows up.

  • #5,889,522: "System provided child window controls." This patent covers an application settings dialog with tabs.

  • #6,339,780: "Loading status in a hypermedia browser having a limited available display area." This patent covers the idea of putting up a "loading" image over a page while the page is loading.

  • #6,891,551: "Selection handles in editing electronic documents." Covered here is the technique of putting little images around a selection area so that the user can, by dragging those handles, resize the area.

  • #6,957,233: "Method and apparatus for capturing and rendering annotations for non-modifiable electronic content" - a method for storing "annotations" outside of the text of a read-only document. Broadly read, this patent could be said to cover browser bookmarks, an idea that somebody might just have thought of before this patent's 1999 filing date.

It seems unlikely that any of these patents would stand up against a suitably determined challenge in court - though such an opinion does certainly rely on a possibly naïve view of the rationality of American patent courts. In the real world, challenging patents is a time-consuming and expensive affair. Barnes & Noble, which may well have been chosen because it looks like a weak target, may not have an appetite for that kind of fight. That said, the company has, evidently, refused offers to license these patents from Microsoft; its management had to know that a lawsuit would be the next step.

Perhaps we'll see a counterattack from the rest of the industry; the idea of paying rent to Microsoft for the privilege of using bookmarks is unlikely to have broad appeal. Or, perhaps, the entire mobile marketplace in the US will simply collapse under the weight of the steadily increasing pile of patent-related lawsuits.

The patent suit is not the only attack ongoing against Android currently; there is also, seemingly, a determined FUD campaign underway. Most recently, this campaign has taken the form of assertions that Android is violating the Linux kernel's license; LWN examined those claims in a separate article. It's worth noting that one of the proponents of these claims has represented Microsoft in the past - a fact which he chose to remove from his online biography shortly before beginning the attack.

The goal of these attacks seems clear - to inject fear, uncertainty, and doubt into the growing Android ecosystem. Hardware vendors have been served notice that they may well be sued for using Android. Software vendors are being told that writing for Android could expose them to copyright infringement claims. It's all aimed at getting these companies to reconsider investing in Android in favor of the "safer" alternatives.

The thing is, of course, we have seen this kind of FUD campaign before. As early as the 1990s, people were warning that use of Linux could cause companies to "lose their intellectual property." Fortunately, most companies have figured out that they face no such threat. So, when we see something like this ludicrous claim:

If Google is proven wrong, pretty much that entire software stack -- and also many popular third-party closed-source components such as the Angry Birds game and the Adobe Flash Player -- would actually have to be published under the GPL.

we can relax in the knowledge that we have seen such things before. Linux turns out to be surprisingly resilient to FUD; this campaign seems unlikely to even slow things down.

It is worth noting that, despite the recent claims that companies working with Android might be sued by free software developers, the actual lawsuit was filed by a company which is generally hostile to free software. There are indeed threats out there, but they do not come from our community; they come from companies which feel they are losing in the market and which, as such companies so often do, are turning to the courts instead. It is one of the less pleasant signs of success; our community should feel flattered indeed.

Comments (3 posted)

Page editor: Jonathan Corbet

Security

Arch Linux and (the lack of) package signing

March 23, 2011

This article was contributed by Nathan Willis

The Arch Linux user and developer community has been engulfed in sharply divisive debate recently over the issue of package signing. It started when an Arch user blogged about the distribution's lack of package signatures, the security risk it created, and his own frustrations with the core developers' response to the issue. The ensuing argument has since spread to include Arch's development model and a variety of leadership questions, but the root problem remains unsolved: there is still no mechanism to verify the authenticity of "official" Arch Linux packages.

Arch Linux is a non-commercial distribution that de-emphasizes ease of use in favor of the user's "responsibility" to maintain the system properly, providing "simple" and "code-correct" tools rather than striving for convenience. That philosophy evidently attracts a healthy number of users and contributors, but, to some users, the project in practice misses the mark in the important area of security. Arch Linux uses a rolling-release model, with new packages pushed out as tarballs on the primary project server and mirrored on close to 100 sites around the globe. Although the official installation guide and download page both emphasize the importance of examining the checksums of downloads, no packages are signed by individual developers or by the main project server.

To blogger IgnorantGuru, this constitutes an unacceptable security risk. In February, he blogged about his concerns, noting that without a method for Arch users to verify that a package is unaltered, packages can be replaced with Trojan-horse-laden code. This can happen if the primary server is compromised, but also at any of the mirrors, at network proxies or caches, on local storage, or even through man-in-the-middle hijacking of the download request. While package authentication might not have been important in Arch's early days, he said, the number of contributors and mirrors has grown so much that without signatures, the distribution and its users are sitting ducks.

Package signing protects against an attacker's ability to tamper with or replace an uploaded file by providing an independent way to verify that the binary has not been altered since the developer signed it. If a file server's security is compromised, a simple checksum (such as an MD5 or SHA1 hash) can be replaced with a new hash matching the tampered-with binary at virtually no additional cost to the attacker, because a hash is a function of the file contents alone. A digital signature, on the other hand, is a function of both the file and the signer's private key; its authenticity can be verified using the file and a copy of the corresponding public key from a public key server.
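
The asymmetry is easy to demonstrate with textbook RSA. This toy sketch is for illustration only; the numbers are absurdly small, and real package signing uses GPG rather than anything hand-rolled:

    #include <stdio.h>

    /* modular exponentiation: b^e mod m */
    static unsigned long powmod(unsigned long b, unsigned long e,
                                unsigned long m)
    {
        unsigned long r = 1;

        b %= m;
        while (e) {
            if (e & 1)
                r = r * b % m;
            b = b * b % m;
            e >>= 1;
        }
        return r;
    }

    int main(void)
    {
        /* toy key: n = 61*53, public exponent e, private exponent d */
        unsigned long n = 3233, e = 17, d = 2753;
        unsigned long digest = 1234;   /* stand-in for a package's hash */

        /* Signing needs the private key, which an attacker who merely
           compromised a mirror does not hold... */
        unsigned long sig = powmod(digest, d, n);

        /* ...but anyone can verify with only the public key. */
        printf("signature valid: %s\n",
               powmod(sig, e, n) == digest ? "yes" : "no");
        return 0;
    }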

Almost all prominent Linux distributions use package signing to allow users and system administrators to verify the integrity of packages. Typically, package uploaders use a private GPG key to sign the package before it is pushed to the release server, and the user's package installer includes support for automatically retrieving the corresponding public key and checking the signature. Support in the package manager itself is not mandatory, however; simply including a cryptographic signature as part of each package's metadata would permit security-conscious users to manually verify that packages have not been tampered with.

IgnorantGuru made that point in a message to the mailing list for developers of Arch's package manager Pacman. There he also lamented what he said was the low priority given to package security by the developers. The topic had come up before, but no one acted on it, and several of the core Arch developers dismissed the subject as an unimportant one that they were not willing to work on personally. IgnorantGuru subsequently asked for the addition of signatures to package metadata (without adding simultaneous support for checking them), and the core developers replied that he should take the issue up with the maintainers of the Arch mirrors he used.

Process, and the lack thereof

IgnorantGuru also lambasted the dismissive attitude of the core developers toward the security issue in general. He quoted several of them directly, including comments such as "For me it makes not a big difference if a package is signed or not. It's a nice to have feature and I would be glad if someone would implement it. But for me it has a very low priority," and "Minor performance issues interest me a hell of a lot more than package signing."

A few, he said, did take the issue seriously and had submitted patches to Pacman, but core developers refused to act on them. In the comment thread that followed, supporters came out on both sides, and IgnorantGuru expounded on his claims of key developers resisting the idea, if not openly trying to derail it. He linked to two bug reports he had filed (the second in response to a comment on the first), noting that Allan McRae seemed to be actively trying to keep package signing improvements out of Arch. McRae, he said, repeatedly claimed that he had no interest in package signatures (and frequently stated that anyone who cared to could submit their code), but sought out every discussion of the topic and tried to quash others' efforts to work on a solution.

In the second bug report, IgnorantGuru even suggested a lightweight solution that involved only signing the main server's package database. That would not protect users in the event that the main server itself was compromised, but it would at least allow them to verify that a given mirror had not altered the packages it hosted. McRae countered that this approach still left other attack vectors open (such as the master server being compromised and the signatures and database forged), thus was not worth implementing, and that waiting for full support to come to Pacman was preferable. IgnorantGuru responded, in effect, that this line of thinking makes the perfect the enemy of the good, and that lightweight signature support would at least be a substantial improvement over "no signatures whatsoever." McRae also argued that the package signing plan in the bug reports was "sub optimal" and proposed a different approach instead, concluding with a list of unanswered questions about it. IgnorantGuru called this an attempt to derail real work by starting a bike-shedding argument.

In addition to laying out the technical arguments, IgnorantGuru described how his attempts to discuss package signing on the official Arch discussion forum had been shut down by forum moderators. The threads were closed, and then moved to a "dustbin" forum section that is not visible to unregistered users. Subsequently, IgnorantGuru's account was suspended for eight weeks by the moderators, and his IP address blocked.

IgnorantGuru posted, on his personal blog, a copy of the message that prompted his forum ban. In it, he quotes McRae's assertion that he would accept Pacman patches from other developers, along with his claim that "as usual" no one submitted any. He then describes patches sent by himself and other Arch contributors, and what McRae and other core developers did to prevent merging them. This second post covers much the same ground as its predecessor, but the comment thread provides even more detail, as McRae eventually joins in.

As you would expect, there is a lot of heated language between the two, which can make it difficult to separate out the nucleus of each side's complaints. It is clear, however, that other Arch contributors have proposed virtually the same package signing plan as IgnorantGuru in the past, at least as far back as 2008. The core developers have never acted to add signature-checking to Pacman, even though patches have been submitted, yet at the same time have resisted all "interim" solutions as a waste of effort, on the grounds that Pacman is the proper place for package signature work.

For his part, IgnorantGuru posted a Bash script on his own site that allows a user a small amount of integrity checking by comparing the checksums of a package from multiple Arch mirrors. In the weeks since the original post, IgnorantGuru announced that he had given up on the hope that the core developers would ever adopt a package signing solution, and would be moving to a new distribution with a better security model. Most recently he posted about Gnuffy, a fork of Arch Linux that incorporates package signatures and other ideas shunned by the Arch leadership.

Threats

In the final analysis, Arch users are exposed to a security threat both by the distribution's lack of package signing and by the core developers' resistance to adopting it. However much the Arch "philosophy" says each user is responsible for his or her own system, IgnorantGuru is correct when he observes, in his first blog post, that without signatures the distribution's infrastructure is vulnerable to every exploit found in every other system on the path between the main project server and the user's PC. Any exploit in the OS used by an HTTP or FTP mirror server can allow an attacker to replace an Arch package, and the user has no way to even detect that the change has been made. A network-based man-in-the-middle attack does not even require finding a root exploit on one of the servers; it could be performed at a router or network access point.

As for McRae and the other core developers resisting outside work to implement package signatures, there are at least two levels at which their recent statements are troubling. First, they (and commenters taking their side) invariably bring up the "stop whining or start coding" retort, or its kinder cousin "patches welcome."

In theory that reply encapsulates the open source approach — anyone with an itch can scratch it — which is true on the surface. But it overlooks the reality of distribution maintainership: some tasks, such as security, are vital to the health of the project, even though they might not be anyone's idea of fun. In addition, as the Arch developers' responses have shown, the gatekeepers to a project can still prevent new code and patches from getting accepted if they so choose.

That, of course, is the second level at which the core developers' resistance is troubling: the fact that they would prevent security patches from going into the project. Some in the anti-signature camp argue (essentially) that if you don't like any decision made by the core developers, you should leave, and cite Arch's "not for the faint-of-heart" philosophy as justification. But that is in direct opposition to the "patches welcome" response. A distribution cannot invite contributions but then treat contributors with hostility when they arrive.

Moreover, in Arch's case it hardens the anti-signing camp into a position of denying the reality of the package-tampering threat. You can already see this in some of the responses to IgnorantGuru and the bug reports he filed: commenters citing the low statistical likelihood of an attack as grounds for not adopting any package signing plan at all.

Perhaps there is no reason to read ill will into any of McRae or the other Arch developers' resistance to package signing; perhaps they only seem hostile in context. But even if that is the case, by preventing others' code from making it into Pacman and the package database, they are letting Arch Linux users remain sitting ducks, as well as driving them away. It's hard to see how either result improves the project.

[Update: Arch Linux developer (and Pacman lead) Dan McGee strongly disagrees with this article, and has written about it on his blog. Additional comments can be found on our note about his posting as well.]

(Thanks to Roger Leigh for the heads-up on this topic).

Comments (42 posted)

Brief items

Security quotes of the week

So, here's the question: have I broken the law by using NoScript? I've used it for years, and it seems pretty ridiculous to claim that I now need to specifically go and whitelist the NYTimes just because it wants to hit me with an incredibly porous paywall. But, technically, I could see how an argument could be made that merely using NoScript makes me a DMCA violator by "circumventing" technical protection measures. Does this also mean that NoScript -- an incredibly useful tool -- has suddenly become a "circumvention device" overnight, because the NYTimes programmed an incredibly stupid paywall in javascript?
-- Mike Masnick

The best way for online social networking to become safer, more flexible, and more innovative is to distribute the ability and authority to the world's users and developers, whose various needs and imaginations can do far more than what any single company could achieve.
-- Richard Esguerra

Comments (4 posted)

Using SELinux and iptables Together (Linux.com)

Over at Linux.com, Red Hat's Daniel J. Walsh digs into making SELinux and iptables play nicely together. It's a rather technical look at generating rules for iptables and writing SELinux policies to support the following use case: "I finally came upon a couple of use cases where I could write some simple rules and policy to further secure my laptop. I wanted to write policy to prevent all confined domains that are started at boot (system domains) from talking to the external network, and allow all domains started by my login process (user domains) to talk to both the internal and external networks. The idea here is I do not want processes like avahi, or sssd, or sshd or any other process that gets started at boot to be listening or affected by packets from an untrusted network. I want processes started by my login, like Firefox or my VPN to be able to talk to the network. If my vpn is shut down the system domains are off the network, while I can still use the Internet for browsing and email."

Comments (1 posted)

Fraudulent SSL certificates in the wild

The just-released Firefox update announcement includes the note that "Firefox 3.6.16 and Firefox 3.5.18 blacklist a few invalid HTTPS certificates". Mozilla's separate release on the subject is rather terse, but it does at least use the word "fraudulent" instead of "invalid." Much more information can be found in the Tor blog. "Last week, a smoking gun came into sight: A Certification Authority appeared to be compromised in some capacity, and the attacker issued themselves valid HTTPS certificates for high-value web sites. With these certificates, the attacker could impersonate the identities of the victim web sites or other related systems, probably undetectably for the majority of users on the internet." There is still quite a bit of uncertainty about what happened, but updating seems like a good thing to do regardless.

Comments (17 posted)

New vulnerabilities

aaa_base: arbitrary command execution

Package(s): aaa_base  CVE #(s): CVE-2011-0468
Created: March 22, 2011  Updated: April 1, 2011
Description: From the openSUSE advisory:

shell meta characters in file names could cause interactive shells to execute arbitrary commands when performing tab expansion.

Alerts:
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
openSUSE openSUSE-SU-2011:0207-1 aaa_base 2011-03-22

Comments (none posted)

libvirt: arbitrary code execution

Package(s): libvirt  CVE #(s): CVE-2011-1146
Created: March 18, 2011  Updated: August 4, 2011
Description: From the CVE entry:

libvirt.c in the API in Red Hat libvirt 0.8.8 does not properly restrict operations in a read-only connection, which allows remote attackers to cause a denial of service (host OS crash) or possibly execute arbitrary code via a (1) virNodeDeviceDettach, (2) virNodeDeviceReset, (3) virDomainRevertToSnapshot, (4) virDomainSnapshotDelete, (5) virNodeDeviceReAttach, or (6) virConnectDomainXMLToNative call, a different vulnerability than CVE-2008-5086.

Alerts:
Gentoo 201202-07 libvirt 2012-02-27
Pardus 2011-102 libvirt 2011-08-04
openSUSE openSUSE-SU-2011:0578-1 xen 2011-06-01
openSUSE openSUSE-SU-2011:0580-1 xen 2011-06-01
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
openSUSE openSUSE-SU-2011:0311-1 libvirt 2011-04-07
Ubuntu USN-1094-1 libvirt 2011-03-29
Red Hat RHSA-2011:0391-01 libvirt 2011-03-28
Debian DSA-2194-1 libvirt 2011-03-18
CentOS CESA-2011:0391 libvirt 2011-04-28

Comments (none posted)

maradns: denial of service

Package(s): maradns  CVE #(s): CVE-2011-0520
Created: March 21, 2011  Updated: November 21, 2011
Description: From the Debian advisory:

Witold Baryluk discovered that MaraDNS, a simple security-focused Domain Name Service server, may overflow an internal buffer when handling requests with a large number of labels, causing a server crash and the consequent denial of service.

Alerts:
Gentoo 201111-06 maradns 2011-11-20
Debian DSA-2196-1 maradns 2011-03-19

Comments (none posted)

Mozilla products and Chromium: fraudulent SSL certificates

Package(s): firefox thunderbird seamonkey chromium  CVE #(s):
Created: March 23, 2011  Updated: May 2, 2011
Description: Mozilla and others have released new versions of their web browser-related packages in response to fraudulent SSL certificates found in the wild.
Alerts:
Fedora FEDORA-2011-5161 seamonkey 2011-04-10
Fedora FEDORA-2011-5152 seamonkey 2011-04-10
Fedora FEDORA-2011-4250 nss 2011-03-28
CentOS CESA-2011:0373 firefox 2011-04-14
Mandriva MDVSA-2011:074 qt4 2011-04-12
Fedora FEDORA-2011-4244 nss 2011-03-28
Mandriva MDVSA-2011:072 gwenhywfar 2011-04-08
Ubuntu USN-1106-1 nss 2011-04-06
Mandriva MDVSA-2011:068 firefox 2011-04-07
Pardus 2011-61 firefox xulrunner 2011-03-30
Slackware SSA:2011-086-02 firefox 2011-03-28
Slackware SSA:2011-086-01 seamonkey 2011-03-28
Debian DSA-2203-1 nss 2011-03-26
Ubuntu USN-1091-1 firefox, firefox-{3.0,3.5}, xulrunner-1.9.2 2011-03-25
openSUSE openSUSE-SU-403 mozilla-xulrunner191 2011-03-25
Fedora FEDORA-2011-3946 galeon 2011-03-23
Fedora FEDORA-2011-3917 galeon 2011-03-23
Fedora FEDORA-2011-3946 gnome-python2-extras 2011-03-23
Fedora FEDORA-2011-3917 gnome-python2-extras 2011-03-23
Fedora FEDORA-2011-3946 perl-Gtk2-MozEmbed 2011-03-23
Fedora FEDORA-2011-3917 perl-Gtk2-MozEmbed 2011-03-23
Fedora FEDORA-2011-3946 mozvoikko 2011-03-23
Fedora FEDORA-2011-3917 mozvoikko 2011-03-23
Fedora FEDORA-2011-3946 gnome-web-photo 2011-03-23
Fedora FEDORA-2011-3917 gnome-web-photo 2011-03-23
Fedora FEDORA-2011-3946 xulrunner 2011-03-23
Fedora FEDORA-2011-3917 xulrunner 2011-03-23
Fedora FEDORA-2011-3946 firefox 2011-03-23
Fedora FEDORA-2011-3917 firefox 2011-03-23
Debian DSA-2200-1 iceweasel 2011-03-23
Debian DSA-2199-1 iceape 2011-03-23
CentOS CESA-2011:0375 seamonkey 2011-03-23
CentOS CESA-2011:0374 thunderbird 2011-03-23
CentOS CESA-2011:0373 firefox 2011-03-23
Red Hat RHSA-2011:0374-01 thunderbird 2011-03-22
Red Hat RHSA-2011:0375-01 seamonkey 2011-03-22
Red Hat RHSA-2011:0373-01 firefox 2011-03-22
CentOS CESA-2011:0472 nss 2011-04-29
CentOS CESA-2011:0472 nss 2011-04-29
Red Hat RHSA-2011:0472-01 nss 2011-04-28

Comments (none posted)

php: multiple vulnerabilities

Package(s): php  CVE #(s): CVE-2011-0421 CVE-2011-1092 CVE-2011-1153 CVE-2011-1464 CVE-2011-1466 CVE-2011-1467 CVE-2011-1468 CVE-2011-1469 CVE-2011-1470 CVE-2011-1471
Created: March 23, 2011  Updated: April 13, 2012
Description: PHP contains a number of vulnerabilities, including denial of service via an empty ZIP archive (CVE-2011-0421), denial of service and information disclosure (CVE-2011-1092), code execution via multiple format string vulnerabilities in the phar extension (CVE-2011-1153), denial of service in strval() (CVE-2011-1464), denial of service via an "unspecified vulnerability" (CVE-2011-1467), denial of service via memory leaks in the openssl extension (CVE-2011-1468), and two other ZIP-related denial of service issues (CVE-2011-1470, CVE-2011-1471).
Alerts:
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
Oracle ELSA-2012-1046 php 2012-06-30
SUSE SUSE-SU-2012:0496-1 PHP5 2012-04-12
openSUSE openSUSE-SU-2012:0426-1 php5 2012-03-29
Debian DSA-2408-1 php5 2012-02-13
Scientific Linux SL-php-20120130 php 2012-01-30
Oracle ELSA-2012-0071 php 2012-01-31
CentOS CESA-2012:0071 php 2012-01-30
Red Hat RHSA-2012:0071-01 php 2012-01-30
Scientific Linux SL-php-20120119 php 2012-01-19
Oracle ELSA-2012-0033 php 2012-01-18
CentOS CESA-2012:0033 php 2012-01-18
Red Hat RHSA-2012:0033-01 php 2012-01-18
Oracle ELSA-2011-1423 php53/php 2011-11-03
Oracle ELSA-2011-1423 php53/php 2011-11-03
Scientific Linux SL-NotF-20111102 php53/php 2011-11-02
CentOS CESA-2011:1423 php53 2011-11-03
Red Hat RHSA-2011:1423-01 php53/php 2011-11-02
Gentoo 201110-06 php 2011-10-10
Slackware SSA:2011-210-01 libpng 2011-08-01
Debian DSA-2266-1 php5 2011-06-29
openSUSE openSUSE-SU-2011:0645-1 php5 2011-06-16
Ubuntu USN-1126-1 php5 2011-04-29
Pardus 2011-63 mod_php php-cli php-common 2011-04-07
Fedora FEDORA-2011-3666 maniadrive 2011-03-19
Fedora FEDORA-2011-3636 maniadrive 2011-03-19
Fedora FEDORA-2011-3666 php-eaccelerator 2011-03-19
Fedora FEDORA-2011-3636 php-eaccelerator 2011-03-19
Fedora FEDORA-2011-3666 php 2011-03-19
Fedora FEDORA-2011-3636 php 2011-03-19
SUSE SUSE-SR:2011:009 mailman, openssl, tgt, rsync, vsftpd, libzip1/libzip-devel, otrs, libtiff, kdelibs4, libwebkit, libpython2_6-1_0, perl, pure-ftpd, collectd, vino, aaa_base, exim 2011-05-17
Mandriva MDVSA-2011:052 php 2011-03-23
Mandriva MDVSA-2011:053 php 2011-03-23
Mandriva MDVSA-2011:099 libzip 2011-05-24
openSUSE openSUSE-SU-2011:0449-1 libzip 2011-05-06
Ubuntu USN-1126-2 php5 2011-05-05

Comments (none posted)

policycoreutils: privilege escalation

Package(s): policycoreutils  CVE #(s): CVE-2011-1011
Created: March 21, 2011  Updated: April 5, 2011
Description: From the CVE entry:

The seunshare_mount function in sandbox/seunshare.c in seunshare in certain Red Hat packages of policycoreutils 2.0.83 and earlier in Red Hat Enterprise Linux (RHEL) 6 and earlier, and Fedora 14 and earlier, mounts a new directory on top of /tmp without assigning root ownership and the sticky bit to this new directory, which allows local users to replace or delete arbitrary /tmp files, and consequently cause a denial of service or possibly gain privileges, by running a setuid application that relies on /tmp, as demonstrated by the ksu application.

Alerts:
Red Hat RHSA-2011:0414-01 policycoreutils 2011-04-04
Fedora FEDORA-2011-3043 policycoreutils 2011-03-10

Comments (none posted)

quagga: denial of service

Package(s): quagga  CVE #(s): CVE-2010-1674 CVE-2010-1675
Created: March 22, 2011  Updated: September 14, 2012
Description: From the Debian advisory:

It has been discovered that the Quagga routing daemon contains two denial-of-service vulnerabilities in its BGP implementation:

CVE-2010-1674: A crafted Extended Communities attribute triggers a null pointer dereference which causes the BGP daemon to crash. The crafted attributes are not propagated by the Internet core, so only explicitly configured direct peers are able to exploit this vulnerability in typical configurations.

CVE-2010-1675: The BGP daemon resets BGP sessions when it encounters malformed AS_PATHLIMIT attributes, introducing a distributed BGP session reset vulnerability which disrupts packet forwarding. Such malformed attributes are propagated by the Internet core, and exploitation of this vulnerability is not restricted to directly configured BGP peers.

Alerts:
Scientific Linux SL-quag-20120913 quagga 2012-09-13
Oracle ELSA-2012-1259 quagga 2012-09-13
Oracle ELSA-2012-1258 quagga 2012-09-13
CentOS CESA-2012:1258 quagga 2012-09-12
Red Hat RHSA-2012:1258-01 quagga 2012-09-12
Gentoo 201202-02 quagga 2012-02-21
SUSE SUSE-SU-2011:1316-1 quagga 2011-12-12
Fedora FEDORA-2011-3916 quagga 2011-03-23
Fedora FEDORA-2011-3922 quagga 2011-03-23
SUSE SUSE-SR:2011:006 apache2-mod_php5/php5, cobbler, evince, gdm, kdelibs4, otrs, quagga 2011-04-05
openSUSE openSUSE-SU-2011:0274-2 quagga 2011-04-05
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
openSUSE openSUSE-SU-2011:0274-1 quagga 2011-04-01
Mandriva MDVSA-2011:058 quagga 2011-04-01
Red Hat RHSA-2011:0406-01 quagga 2011-03-31
Ubuntu USN-1095-1 quagga 2011-03-29
Debian DSA-2197-1 quagga 2011-03-21

Comments (none posted)

tex-common: shell code execution

Package(s): tex-common  CVE #(s): CVE-2011-1400
Created: March 23, 2011  Updated: April 5, 2011
Description: The tex-common script collection contains an insecure setting for the shell_escape_commands directive. As a result, it may be possible to execute arbitrary commands via a malicious .tex file.
Alerts:
Ubuntu USN-1103-1 tex-common 2011-04-04
Debian DSA-2198-1 tex-common 2011-03-22

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 2.6.39 merge window remains open as of this writing; see the separate summary below for details on what has been merged over the last week.

Stable updates: The 2.6.32.34, 2.6.37.5, and 2.6.38.1 stable kernel updates were released on March 23; each contains a number of important fixes.

2.6.33.8 was released on March 21. 2.6.33 had gone out of maintenance, but Greg Kroah-Hartman has resumed creating updates because the realtime preemption patch set is stuck at this release.

Comments (1 posted)

Quotes of the week

I tend to view arch specific embedded code as rather like very dubious parties. What goes on in other peoples' house out of sight is none of my business.

The 8250 however is core code so it should keep its clothes on and behave in a manner befitting its status.

-- Alan Cox

I also believe that Greg spends lots of lonely nights looking into git commits, drinking his favorite Latte and cursing at all the git patches that fix a bug that didn't have a Cc: stable tag attached. He use to have a huge mane of hair on his head before taking over as stable maintainer.
-- Steven Rostedt explains the stable series

If it's some desperate cry for attention by somebody, I just wish those people would release their own sex tapes or something, rather than drag the Linux kernel into their sordid world.
-- Linus Torvalds

I do get the impression that you're extremely unhappy with the way ARM stuff works, and I've no real idea how to solve that. I think much of it is down to perception rather than anything tangible.

Maybe the only solution is for ARM to fork the kernel, which is something I really don't want to do - but from what I'm seeing its the only solution which could come close to making you happy.

-- Russell King

This was discussed before, and it was felt that perhaps 75000 lines of ocaml code was not really appropriate for the Linux source tree.
-- Julia Lawall

Comments (2 posted)

Report from the V4L2 Warsaw Brainstorming Meeting

A group of Video4Linux2 developers recently gathered in Warsaw to discuss a number of topics of interest to that subsystem; Hans Verkuil has posted a report from that gathering. Issues discussed include an API for compressed video formats, subdev hierarchies, cropping and composing, pipeline configuration, HDMI support, and more.

Full Story (comments: none)

Converting strings to integers

By Jonathan Corbet
March 23, 2011
Kernel developers might rightly complain about being confused over which functions should be used to convert strings to integer types. Old functions like simple_strtoul() will silently ignore junk at the end of an integer value, so "100xx" successfully converts to an unsigned integer type. Alternatives like strict_strtoul() have been encouraged instead, but they have problems too, including the lack of overflow checks. So what's a kernel hacker to do?

As of 2.6.39, there is a new set of string-to-integer converters which is expected to be used in preference to all others.

  • Unsigned conversions can be done with any of kstrtoull(), kstrtoul(), kstrtouint(), kstrtou64(), kstrtou32(), kstrtou16(), or kstrtou8().

  • Conversions to signed integers can be done with kstrtoll(), kstrtol(), kstrtoint(), kstrtos64(), kstrtos32(), kstrtos16(), or kstrtos8().

All of these functions are marked __must_check, so callers are expected to verify that the conversion happened successfully. The older functions are marked deprecated and will eventually be removed. These new kstrto*() functions are now the Official Best Way To Convert Strings, so developers need wonder no longer.
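
For illustration, here is a sketch of how a sysfs store method might use one of the new helpers; the attribute and the threshold variable are hypothetical:

    #include <linux/device.h>
    #include <linux/kernel.h>    /* kstrtoul() */

    static unsigned long threshold;    /* hypothetical tunable */

    static ssize_t threshold_store(struct device *dev,
                                   struct device_attribute *attr,
                                   const char *buf, size_t count)
    {
        unsigned long val;
        int ret;

        /* Unlike simple_strtoul(), this fails on trailing junk
           ("100xx" yields -EINVAL) and on overflow (-ERANGE). */
        ret = kstrtoul(buf, 10, &val);
        if (ret)
            return ret;
        threshold = val;
        return count;
    }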

Comments (none posted)

Kernel development news

2.6.39 merge window, part 2

By Jonathan Corbet
March 23, 2011
As of this writing, some 5,500 non-merge changesets have been merged into the mainline since last week's 2.6.39 merge window summary. A wide-ranging set of new features, cleanups, and performance improvements has been added to the kernel. Some of the more significant user-visible changes include:

  • The ipset mechanism has been merged. Ipset allows the creation of groups of IP addresses, port numbers, and MAC addresses in a way which can be quickly matched in iptables rules.

  • The size of the initial congestion window in the TCP stack has been increased, a change which should lead to shorter latencies for the loading of web pages and other server-oriented tasks. See this article for details.

  • There is a new system call:

        int syncfs(int fd);
    

    It behaves like sync() with the exception that only the filesystem containing fd will be flushed to persistent storage; a brief user-space sketch appears after this list.

  • The USB core has gained support for USB 3.0 hubs.

  • The transcendent memory core has been added to the staging tree. Along with it came "zcache," a compressed in-memory caching mechanism.

  • There is a new "multi-queue priority scheduler" queueing discipline in the networking layer which enables the offloading of quality-of-service processing work to suitably capable hardware.

  • The CHOKe flow scheduler and the Stochastic Fair Blue scheduler have been added to the networking code.

  • RFC 4303 extended IPSEC sequence numbers are now supported.

  • Support for the UniCore 32-bit RISC architecture has been merged.

  • New drivers include:

    • Processors and systems: VIA/WonderMedia VT8500/WM85xx System-on-Chips, IMX27 IPCAM boards, and MX51 Genesi Efika Smartbook systems.

    • Block: Broadcom NetXtreme II FCoE controllers and Freescale MXS Multimedia Card interfaces.

    • Graphics: Intel GMA500 controllers (2D acceleration only), USB-connected graphics devices, MXS LCD framebuffer devices, and LD9040 AMOLED panels.

    • Input: Hyper-V virtualized mice, Roccat Kova[+] mouse devices, Roccat Arvo keyboards, Wolfson WM831x PMIC touchscreen controllers, Atmel AT42QT1070 touch sensor chips, and Texas Instruments TSC2005 touchscreen controllers.

    • Networking: Texas Instruments WiLink7 bluetooth controllers (graduated from staging), Bosch C_CAN controllers, Faraday FTMAC100 10/100 Ethernet controllers, and the Xen "netback" back-end driver.

    • Miscellaneous: Faraday FUSB300 USB peripheral controllers, OMAP USBHS host controllers, NVIDIA Tegra USB host controllers, Texas Instruments PRUSS-connected devices, MSM UARTs, Maxim MAX517/518/519 DACs, RealTek PCI-E card readers, Analog Devices ad7606, ad7606-6, and ad7606-4 analog to digital converters, Maxim MAX6639 temperature monitors, Maxim MAX8688, MAX16064, MAX34440 and MAX34441 hardware monitoring chips, Lineage compact power line power entry modules, PMBus-compliant hardware monitoring devices, Linear Technology LTC4151 high voltage I2C current and voltage monitors, Intel SCU watchdog devices, Ingenic jz4740 SoC hardware watchdogs, Xen watchdog devices, NVIDIA Tegra internal I2C controllers, Freescale i.MX28 I2C interfaces, MXS Application UART (AUART) ports, SuperH SPI controllers, Altera SPI controllers, OpenCores tiny SPI controllers, SMSC SCH5627 Super-I/O hardware monitoring chips, Texas Instruments ADS1015 12-bit 4-input ADC devices, Diolan U2C-12 USB adapters, SPEAr13XX PCIe controllers (in "gadget" mode), and Freescale MXS-based SoC i.MX23/28 DMA engines.

    • Sound: Firewire-connected sound devices, Wolfson Micro WM8991 codecs, Cirrus CS4271 codecs, Freescale SGTL5000 codecs, TI tlv320aic32x4 codecs, Maxim MAX9850 codecs, and TerraTec 6fire DMX USB interfaces.

    • Outgoing: A number of TTY drivers (epca, ip2, istallion, riscom8, serial167, specialix, stallion, generic_serial, rio, ser_a2232, sx, and vme_scc) have been moved to the staging tree in anticipation of removal in 2.6.41. The smbfs and autofs3 filesystems, which were moved to staging in 2.6.37, have now been moved out of the kernel entirely.
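
As promised above, here is a quick user-space sketch of the new syncfs() system call. The file path is invented, and a C library of this vintage may not yet provide a wrapper, in which case syscall(2) with the architecture's __NR_syncfs would be needed instead:

    #define _GNU_SOURCE     /* for the glibc syncfs() wrapper, where present */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Any descriptor on the target filesystem will do; path invented. */
            int fd = open("/mnt/data/app.log", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Flush only the filesystem containing fd, not every mount. */
            if (syncfs(fd) < 0)
                    perror("syncfs");
            close(fd);
            return 0;
    }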

Changes visible to kernel developers include:

  • After many years of work by a large number of developers, the big kernel lock has been removed from the kernel.

  • The dynamic debug mechanism has some new control flags allowing for control over whether the function name, line number, module name, and current thread ID are printed.

  • The kernel can export raw DMI table data via sysfs, making it available in user space without needing to go through /dev/mem.

  • Network drivers can now enable hardware support for receive flow steering via the new ndo_rx_flow_steer() method.

  • The "pstore" filesystem provides access to platform-specific persistent storage which can be used to carry information across reboots.

  • The EXTRA_CFLAGS and EXTRA_AFLAGS makefile variables have been replaced with ccflags-y, ccflags-m, asflags-y, and asflags-m.

  • kmem_cache_name(), which returned the name of a slab cache, has been removed from the kernel.

  • The SLUB memory allocator now has a lockless fast path for allocations, speeding performance considerably. "Sadly this does nothing for the slowpath which is where the main issues with performance in slub are but the best case performance rises significantly."

  • Kernel threads can be created on a specific NUMA node with the new kthread_create_on_node() function; see the sketch after this list.

  • The new function delete_from_page_cache() does what its name implies; unlike remove_from_page_cache() (which has now been deleted), it also decrements the page's reference count. It thus more closely mirrors add_to_page_cache().

  • There is a whole new set of functions which are the preferred way to convert strings to integer values; see this article for details.

  • The new "hwspinlock" framework allows the implementation of synchronization primitives on systems where different cores are running different operating systems. See Documentation/hwspinlock.txt for more information.

If the usual two-week rule holds, the 2.6.39 merge window can be expected to close on March 28. Watch this space next week for a summary of the final changes merged for this development cycle.

Comments (11 posted)

The dynamic debugging interface

By Jonathan Corbet
March 22, 2011
The kernel's "dynamic debugging" interface saw some minor changes for 2.6.39. As it happens, LWN has never written about how dynamic debug works, so this seems like an opportune time to fill in the gap.

It can be nice to instrument kernel code with abundant print statements that illustrate what is going on inside. The problem, of course, is that those statements can generate vast amounts of output which is usually not of interest. These statements can be left commented out most of the time, but that leads to situations where an edit/rebuild/reboot cycle is needed to get the output. In response, many developers have created mechanisms which enable or disable specific print statements at run time. The dynamic debugging interface was added as a way of providing a uniform control interface for debugging output while avoiding cluttering the kernel with various hand-rolled alternatives.

Dynamic debug operates on print statements written with either of:

    pr_debug(char *format, ...);
    dev_dbg(struct device *dev, char *format, ...);

If the CONFIG_DYNAMIC_DEBUG option is not set, the above functions will be turned into normal printk() statements at the KERN_DEBUG level. If the option is enabled, though, the code sets aside a special descriptor for every call site, noting the module, function, and file names, along with the line number and format string. At system boot, all of these debug statements are turned off, so their output will not appear even if debug-level kernel messages are routed somewhere useful by the syslog daemon.
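
As a hypothetical example of what such call sites look like, each statement in this invented function gets its own descriptor when CONFIG_DYNAMIC_DEBUG is enabled:

    #include <linux/kernel.h>
    #include <linux/device.h>

    /* Invented driver code: under CONFIG_DYNAMIC_DEBUG, each statement below
       becomes an individually switchable call site with its own descriptor. */
    static int demo_probe(struct device *dev)
    {
            pr_debug("demo: probe started\n");
            dev_dbg(dev, "resources mapped\n");  /* also prints device info */
            return 0;
    }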

Turning on dynamic debug causes a new virtual file to appear at /sys/kernel/debug/dynamic_debug/control (modulo any individual preferences for the location of debugfs, naturally). Writing to that file will enable or disable specific debugging statements, as specified by a simple but flexible language.

As an example, drivers/char/tpm/tpm_nsc.c contains the following code at line 346:

    dev_dbg(&pdev->dev, "NSC TPM detected\n");

That specific statement can be enabled with a command like:

    echo file tpm_nsc.c line 346 +p > .../dynamic_debug/control

(Where the full path to debugfs has been replaced with "..."). As it happens, that dev_dbg() line does not stand alone - there is a long series of them providing information on the newly-detected device. One could enter a series of lines like the above to enable them all individually, but either of the following would also work:

    echo file tpm_nsc.c line 346-373 +p > .../dynamic_debug/control
    echo file tpm_nsc.c function init_nsc +p > .../dynamic_debug/control

Along with selection by file name, line number, and function name, the interface also allows "module name" to select a specific module, and "format fmt" to select any line whose format string contains "fmt". If more than one selector is given, all must match for a given statement to be enabled.

Commands to the control file must end with a "flags" operation telling the system what to do; "+p" turns on printk() output, while "-p" turns it off. There is also a set of flags (new for 2.6.39) controlling information added to each output line: "f" adds the function name, "l" adds the line number, "m" adds the module name, and "t" adds the thread ID. One can use "=" to set the full mask of flags to a specific value - "=plm" will enable printing with line numbers and module names while disabling thread ID and function output regardless of their prior setting. The only way to clear all of the flags is with "-pflmt".

Reading the control file will produce a list of all currently-enabled call sites.

Sometimes the interesting action happens before the system reaches a point where the control file can be accessed. Dynamic debug output can be turned on early in the boot process with the ddebug_query boot parameter.

More information on how to use this facility can be found in Documentation/dynamic-debug-howto.txt. Dynamic debug has been in the kernel since 2.6.30, but it is still common to see code submitted which contains its own, home-brewed mechanism for controlling debug output. Chances are that reviewers will ask for such mechanisms to be taken out before the code is merged. Given the flexibility and ease of use of the in-kernel implementation, it makes sense to use it from the beginning.

Comments (15 posted)

Persistent storage for a kernel's "dying breath"

By Jake Edge
March 23, 2011

When Linux systems crash, there are various ways to find out what went wrong, but generally those rely on writing to log files on disk. For some systems a disk may not be available, or may not be trusted after a crash, so a way to poke some data into a platform-specific place for use by a subsequent kernel boot would be useful. That's exactly what the pstore filesystem, which was just added during the current 2.6.39 merge window, will provide.

The idea for pstore came out of a conversation between Tony Luck and Thomas Gleixner at last year's Linux Plumbers Conference. Luck wanted to use the ACPI error record serialization table (ERST) to store crash information across a reboot. The ERST is a mechanism specified by the ACPI specification [PDF] (in section 17.4, page 519) that allows saving and retrieving hardware error information to and from a non-volatile location (like flash).

Rather than just doing something specific to the x86 architecture, he decided to create a more general framework so that other platforms could use whatever persistent storage they had available. It would be, as Luck put it, "a generic layer for persistent storage usable to pass tens or hundreds of kilobytes of data from the dying breath of a crashing kernel to its successor".

There have been a number of iterations of the code since Luck first posted it for comments back in November. At Alan Cox's suggestion, pstore moved from its original firmware driver with a sysfs interface to a more straightforward filesystem-based implementation.

The basic idea is that a platform can register the availability of a persistent storage location with a call to pstore_register() and pass a pointer to a struct pstore_info, which looks like:

    struct pstore_info {
	    struct module   *owner;
	    char            *name;
	    struct mutex    buf_mutex;      /* serialize access to 'buf' */
	    char            *buf;
	    size_t          bufsize;
	    size_t          (*read)(u64 *id, enum pstore_type_id *type,
			    struct timespec *time);
	    u64             (*write)(enum pstore_type_id type, size_t size);
	    int             (*erase)(u64 id);
    };

The platform driver needs to provide three I/O routines and a buffer. There is also a mutex present to protect against simultaneous access to the buffer. With that, pstore will implement a filesystem that can be accessed from the kernel—or from user space once it has been mounted. The underlying ERST storage is record oriented, and Luck posits that other platform storage areas will be also, so the I/O interface is record oriented as well.
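
To make the shape of that interface concrete, here is a minimal sketch of what a back-end's registration might look like. The "demo" names, the buffer size, and the stub bodies are invented, and a real driver (like the ERST one described below) does considerably more:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/mutex.h>
    #include <linux/pstore.h>

    static char demo_buf[1024];     /* records are staged through this buffer */

    /* Copy the next record into the buffer; return its size, or 0 when done. */
    static size_t demo_read(u64 *id, enum pstore_type_id *type,
                            struct timespec *time)
    {
            return 0;       /* stub: no records available */
    }

    /* Persist 'size' bytes from the buffer; return an ID for the new record. */
    static u64 demo_write(enum pstore_type_id type, size_t size)
    {
            return 1;       /* stub: pretend record 1 was written */
    }

    /* Discard a record so its space can be reused. */
    static int demo_erase(u64 id)
    {
            return 0;
    }

    static struct pstore_info demo_pstore = {
            .owner   = THIS_MODULE,
            .name    = "demo",
            .buf     = demo_buf,
            .bufsize = sizeof(demo_buf),
            .read    = demo_read,
            .write   = demo_write,
            .erase   = demo_erase,
    };

    static int __init demo_pstore_init(void)
    {
            mutex_init(&demo_pstore.buf_mutex);
            return pstore_register(&demo_pstore);
    }
    module_init(demo_pstore_init);

If registration succeeds, records written through this back-end would later show up in the mounted filesystem with names like dmesg-demo-1, following the type-name-ID pattern described below.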

In addition to the pstore framework, the ERST driver was modified to take advantage of pstore; that change was also merged, so there is an in-kernel user of pstore. The pstore_info buffer is allocated and managed by drivers/acpi/apei/erst.c, and is larger than the advertised bufsize to account for the record and section headers required by ERST. Users of the I/O interface either fill the buffer before calling pstore_info.write() or read the data from the buffer after a call to pstore_info.read().

Each item is stored with a type, either PSTORE_TYPE_DMESG for log messages (likely oops output), PSTORE_TYPE_MCE for hardware errors, or PSTORE_TYPE_UNKNOWN for other undefined types. When stored, each item gets a record ID associated with it, which gets returned from the pstore_info.write() call. That ID can then be used in read() and erase() operations, but it also appears in the filenames in the pstore filesystem.

The filesystem can be mounted using:

    # mount -t pstore - /dev/pstore

Files will appear there with names based on the type, the name of the storage driver, and the record ID, so the first dmesg record from ERST would be /dev/pstore/dmesg-erst-1. The typical scenario would be for the filesystem to be mounted at boot time; some user-space process would then check for any files there, copy them to a more permanent place, and delete the files with rm. That allows the storage facility driver to reclaim the space in order to be ready for other crashes or hardware errors.

By default, pstore will register a dump handler with kmsg_dump to write the last 10K bytes of data from the kernel log to the pstore device when there is a kernel oops or panic. The amount of data to store can be configured at mount time using the kmsg_bytes parameter.

Luck has also put out an RFC patch to disable dumping information into pstore for some kinds of kmsg_dump reasons (e.g. KMSG_DUMP_HALT or KMSG_DUMP_RESTART), but various other developers weren't so sure. Seiji Aguchi pointed to two use cases (1, 2) he has encountered that require storing the tail of the kernel log in most of those cases. In addition, Artem Bityutskiy pointed out that having pstore decide which kmsg_dump reasons to handle "smells like policy in the kernel". Adding more options to control that behavior is certainly possible, but Luck seems to be of a mind to wait a bit before making any changes.

There are other persistent storage methods for kernel log messages, notably drivers/mtd/mtdoops.c and drivers/char/ramoops.c. But those are targeted at the embedded space where NVRAM devices are prevalent or at platforms where RAM can be reserved that will not be cleared on a restart. Pstore is more flexible, as it can store more than just kernel logs, while the two *oops devices are wired into storing the output of kmsg_dump.

Now that pstore has been merged, others will likely start using it. David Miller has already indicated that he will use it for sparc64, where a region of memory can be set aside to persist across reboots. One would guess that other architectures that have hardware support for similar mechanisms will as well.

Comments (9 posted)

Patches and updates

Kernel trees

Greg KH Linux 2.6.38.1
Greg KH Linux 2.6.37.5
Greg KH Linux 2.6.33.8
Greg KH Linux 2.6.32.34

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Security-related

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Slackware 13.37: Linux for the fun of it

March 22, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

The SUSE family of distributions has the motto "have a lot of fun," but it's Slackware that really pushes that philosophy to its limit. While most of the major Linux distributions are shaped by corporate influence, community politics, and pursuit of mainstream success, Patrick Volkerding has taken a much different path with Slackware, which is readily apparent in the release candidate of Slackware 13.37.

[Slackware boot menu]

The "13.37" version numbering, of course, is a nod to the Slackware userbase (the leet or elite, as it were) and Slackware's history of not taking version numbers too seriously. For a while, when Linux was still elbowing its way into the mainstream, some distributions were accused of "inflating" version numbers as a marketing tactic. And speaking as someone who attended one too many trade shows selling Linux CDs (with the now-defunct Linux-Mall.com) and explaining the differences to new users, this tactic actually worked to some extent. Thus Volkerding decided to one-up the competition by jumping from Slackware 4 to 7 to tweak other vendors and be competitive in version numbering.

Slackware Linux is the oldest surviving Linux distribution. It was not the first; that honor goes to MCC Interim Linux, which was followed by Yggdrasil and Soft Landing Systems (SLS) Linux. Volkerding took SLS, cleaned it up, and the result became Slackware. It may be hard for newcomers to imagine, but Slackware was once the dominant distribution and was considered very easy to use. It has weathered the loss of its distributor when Wind River bought Walnut Creek in 2001, as well as Volkerding's leave of absence while dealing with health issues.

Nearly 18 years have passed, and the Linux landscape looks radically different. But Slackware does not. Slackware 13.37 retains the text-based installer that has been part of Slackware seemingly forever — at least since this user first picked it up in 1996. When GUI installers were all the rage at the turn of the century and vendors were starting to think seriously about the "Year of the Linux Desktop," Slashdot asked Volkerding if he'd make Slackware "pretty" and easy to use like other distributions. Volkerding responded that Slackware is easy:

And I think we set up the desktop (at least on KDE) with the nicest set of defaults. But I understand what you're saying -- many of the other distributions now provide a fully graphical installation. But, then you end up raising the hardware requirements, so it's a trade off. I do think keeping things primarily text based is the most flexible, then you can do things like installing with a serial console, and maintaining the machine remotely is more straightforward. Of course, I've been accused of being one of those "command line" kind of people...

Judging by the looks of Slackware 13.37, nothing has changed on that front. In fact, not a lot has changed in Slackware since the 13.1 release last May.

[Slackware Xfce]

Naturally, the release contains updates for all of the major packages. Firefox has been updated to the 4.0 release candidate, and Slackware now features KDE 4.5.5 (though KDE SC 4.6.1 has been out for a few weeks and 4.6.0 was released in January), Xfce 4.6.2, Linux kernel 2.6.37, and a slew of other updates that can be found in the changelog.

For those who haven't run Slackware previously, it's something of a minimalist affair. You begin by booting from the CD or DVD, and are dropped into a root prompt. From there, if you need to format your disk, you can choose between fdisk and the slightly more user-friendly cfdisk to partition it. Then you can run the Slackware setup utility, which walks through the installation of package sets. At that point you can pick and choose software from package sets that range from "a" for base packages (such as the kernel, glibc, and so on) to "xap" for X applications like Firefox. The suggested method is to simply proceed with a full install, which should be around 5.5GB of software when all is said and done.

At the end of the install, you're prompted to set up networking — which is only really useful if you have a wired connection — the bootloader (Slackware still opts for LILO), and a root password. Then you may reboot and begin using your freshly installed Slackware system. If you want to run KDE, Xfce, or another desktop, you need to either edit the inittab to boot to an X login (Volkerding has not embraced Upstart as a replacement for init, nor is it likely that Slackware will take a chance on systemd just yet), or run startx. KDE, Xfce, and other desktops are presented with very minimal changes from upstream — another difference between Slackware and most modern Linux distributions. GNOME is not presented at all, having been dropped from Slackware years ago. GNOME enthusiasts who want the Slackware experience should look to GNOME SlackBuild.

[Slackware FVWM]

Slackware leaves it to users to create non-root users and manage packages. Slackware has advanced in this regard over the years, and includes slackpkg, a utility that lets users keep a system updated, though updating remains a manual process. On first run, users will have to edit the slackpkg configuration and select a mirror from which to get updates. Fedora, Ubuntu, Linux Mint, and openSUSE users who are accustomed to seeing a notification for system updates may find this a little antiquated. Nor do Slackware's native package tools perform dependency resolution, which is a given with APT, Yum, and Zypper but is left to the user here.

Indeed, Slackware on the surface seems to be a bit behind the times. Since Slackware has no official user mailing lists, I turned to the LinuxQuestions.org Slackware forum to ask Slackers why they choose Slackware over Linux distributions with more advanced package management and system management tools.

The query brought quite a few responses, ranging from users who adopted Slackware as a replacement for SLS to users who started with Slackware (and Linux) around Slackware 12. The overwhelming response was "simplicity", along with the fact that software is "proven" and "stable" before being put into Slackware. One example is the 2.6 kernel series: Slackware did not default to a 2.6 kernel, which was released in 2003, until Slackware 12 in 2007. Thomas Ronayne sums it up well:

It does matter to me that Slackware is conservative; too many times I've been burned by cutting-edge technology crashing down around my ears and, frankly, I don't want to deal with that nonsense. I value stability far more than I do bells and whistles. The first editor I learned was ed (came with GECOS). I use vi simply because it is rock-solid, dependable and I never have to deal with "improvements" that get in my way. I learned AT&T's Documenter's Workbench, still have it, still use it -- I like the macros and troff!

[...] A full installation of Slackware results in a full-boat system -- the compilers are there, MySQL is there, Apache is there, all the tools are there, you get on with what you want to do (I do a lot of software development). When you install, say, Ubuntu you have to start adding (and adding and adding) stuff just to get to the point where you can compile and execute hello, world: nuts to that. Plus there's a constant barrage of updates. Plus, because the distribution has been "tuned," you really can't tell what the heck was done to which when you try to change something. Yeah, with Slackware you have to add OpenOffice.org if you want a word processor (KWord just doesn't cut the mustard) but that doesn't bother me anywhere near as much as not having a software development environment at first boot and having to figure out why I can't get Apache configured or where the heck do I get Korn Shell.

Slackware is the most un-fooled-around-with distribution and that makes it my baby.

JK Wood responded that he prefers Slackware because "Pat keeps things as close to upstream as possible. He also publishes the scripts and patches he uses in the source tree, so if I have a question how something was compiled, I simply have to go looking." Wood says that it has "just enough" pre-configuration, with "defaults that make sense in Slackware, and the rest is left to the end-user admin."

Though it's not advertised heavily, Wood points out another of Slackware's advantages — long-term support. Wood says that Slackware 8.1, released in 2002, is still receiving security updates. Few distributions can match that kind of longevity. As for Vim users, Wood adds: "Emacs is in its own diskset. I can skip it completely in the installer, no questions asked."

Other users were lured in from Slackware derivatives like Wolvix or Slax before settling on the original.

Not everyone stays with Slackware, though. One respondent says that they tried Slackware "for all of a week and then went back to Debian" though they said they like Slackware:

So if I like Slackware why did I go back to Debian? Homesickness for one, I started with Ubuntu and after a couple of years moved to Debian for about another year. Another reason is because I needed to have an operating system that I could set up in a day and forget about. I will give Slackware another shot though, probably within the next few months at most.

In short, Slackware users are those who want to tinker with their system and don't find it intimidating — or are willing to face intimidation to learn more about their systems. The users range from hobbyists to one who claims to manage more than 150 Slackware servers across the state. That isn't to suggest that Slackware is likely to be a big choice for businesses. The Slackware site lists a few companies that offer Slackware support, but it doesn't seem that many organizations are clamoring for it. I contacted Steuben Technologies and Adjuvo Consulting. William Schaub of Steuben Technologies says he's received only one serious call for Slackware support: "My guess is either there isn't a lot of people running Slackware in production or (and this is more likely) that most people running Slackware on their servers have all the help in house that they need."

The Slackware community may be smaller than those of major Linux distributions, but it's also largely free of politics and drama (Volkerding's health scare excepted). The distribution is driven by Volkerding, but it's not a one-man show. The changelog is full of acknowledgments from Volkerding to Robby Workman, Eric Hameleers, and many others.

You could look at Slackware and say that it's out of date, a throwback to the days when Linux was the domain of the "l33t" and little more than a hobbyist OS. Another way to look at it is that Slackware is for users who miss the simpler days of Linux and still want to tinker with their systems.

Comments (101 posted)

Brief items

Distribution quote of the week

I wonder why people rant here about bugs, instead of filing them in the appropriate places.

Ranting on -devel doesn't change things. (Sometimes it's useful to make people aware, yes. But then, the next step is something else than ranting again.)

-- Holger Levsen

Comments (none posted)

Debian 6.0.1 released

The Debian project has announced the first update of Debian 6.0 "squeeze". "This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems."

Comments (none posted)

Slackware 13.37 release candidate 3

The third release candidate of Slackware 13.37 has been announced in the March 22 changelog entry. "We'll call this Slackware 13.37 release candidate 3. It seems like most of the important pieces are in place now. :-)" x86 and x86_64.

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Zimmerman: DEX: Debian and its derivatives, getting things done together

On his blog, Ubuntu CTO Matt Zimmerman introduces DEX, which is an effort to get Debian and its derivative distributions working more closely together. "DEX is all about action: merging patches, fixing bugs, crunching data, whatever is necessary to get changes from derivatives into Debian proper. DEX doesn't try to change the way any existing project works, but adds a "fast path" for getting code from one place to another."

Comments (2 posted)

Page editor: Rebecca Sobol

Development

The SD distributed bug tracker

By Jake Edge
March 23, 2011

On last week's Development page we looked at the Fossil version control system, with one of its distinguishing features being the inclusion of distributed bug tracking as part of the tool. Another distributed bug tracker has recently come to our attention, and that is Simple Defects, or SD for short. There are a number of different distributed bug trackers out there, with Bugs Everywhere seeming to hold the role of "leader of the pack", but a brief introduction to SD seems in order.

So far, distributed bug tracking (DBT) hasn't taken off quite like distributed version control systems (DVCS) did, but there is still a fair amount of interest in the idea. In some ways, the DBT communities seem to still be feeling their way, much like the pre-2005 DVCS world, and at some point the right combination of features will presumably come about in one or more projects (much as the broadly similar feature sets of Git, Mercurial, and Bazaar eventually did). Once that happens, DBT may find its way into more and more projects.

The basic idea of DBT is that centralizing bug tracking on one system makes it difficult for disconnected developers to edit bugs while they are fixing the code. There are (at least) two ways to approach the problem: either keep the bug reports and the code in the same repository (Fossil's approach), or keep a separate repository for bugs that can be synchronized in ways similar to a DVCS (the approach of SD and Bugs Everywhere). The latter approach has the advantage of being able to work with multiple version control systems, but the disadvantage of requiring developers to manage their code and bug repositories separately.

SD is written in Perl to sit atop the Prophet distributed database. Both Prophet and SD are available on the Comprehensive Perl Archive Network (CPAN), but the developers recommend getting them from the Git repositories if possible due to the rapid development pace. The development logs seem to bear that out as there hasn't been a release since Prophet and SD 0.7 in August 2009—though there seem to be more recent CPAN updates—while the Git logs show progress as recently as early March.

One thing that can't be said for SD (or Prophet) is that they are overly burdened with documentation. That may tend to chase away potential users, but the terse Using SD page does cover many of the topics that users are likely to be interested in. The basics should be familiar to anyone who has used a DVCS, as you clone a ticket repository with:

    $ sd clone --from http://example.com/path/to/sd

or initialize a new repository with:

    $ sd init

As with Git et al., you synchronize from a remote repository with a pull sub-command, and update remotes using publish (instead of Git's push). Bug tickets are created with the unsurprisingly named:

    $ sd ticket create

command, which opens your text editor with a template to create the bug. There are, of course, various commands to search and update bugs as well. The browser sub-command will put SD into server mode and start up a web browser to search and edit bugs. SD also provides ways to synchronize its bugs with a number of other systems, including Trac, GitHub, Google Code, Hiveminder, and RT.

Overall, it seems like an interesting system for projects that are looking for a simple DBT that doesn't combine code and bugs into a single repository. That is one of the advantages of SD, at least for Lars Wirzenius, who wrote the blog post that introduced us to SD. He also notes that it seemed to him to be the most mature of the DBT systems currently available, which is a little surprising given the status of Bugs Everywhere. The ability to handle bugs from the command line is another advantage that he cites.

The lack of documentation is somewhat intimidating, and overall the projects could use a bit of work on their web sites. The wiki is largely a placeholder, and the project pages have just the bare essentials. In fact, for Prophet, even the bare essentials appear to be lacking, which is unfortunate as the description sounds intriguing: "Prophet is a new kind of database designed for the post Web-2.0 world. It's made to let you collaborate with your friends and coworkers without needing any kind of special server or internet provider." It sounds like something that might prove useful to projects like the Freedom Box.

While SD and Prophet may still have quite a ways to go before being ready to handle larger projects or ones with different needs, they do currently work for some projects. For Perl programmers with an interest in DBT (or a distributed database), they might provide a nice opportunity as a project to contribute to. For others who just want a simple DBT to use, it's probably worth taking a peek at SD.

Comments (3 posted)

Brief items

Quote of the week

Object-oriented programming is eliminated entirely from the introductory curriculum, because it is both anti-modular and anti-parallel by its very nature, and hence unsuitable for a modern CS curriculum. A proposed new course on object-oriented design methodology will be offered at the sophomore level for those students who wish to study this topic.
-- Carnegie Mellon University professor Robert Harper

Comments (6 posted)

Django 1.3 released

Version 1.3 of the Django web framework has been released. New features include class-based views ("This means you can compose a view out of a collection of methods that can be subclassed and overridden to provide common views of data without having to write too much code"), better logging, better unit testing support, a number of template improvements, and more; see the release notes for details.

Comments (1 posted)

GNOME 3.0 Release Candidate

The first release candidate for the GNOME 3.0 release is now available. A lot of changes have been funneled into this release, including a pass at proper multiple monitor support, but that's apparently at an end. "Hard code freeze is also in place, no source code changes can be made without approval from the release-team. Translation and documentation can continue."

Full Story (comments: none)

Firefox 4 released

Version 4 of the Firefox browser has been announced. "The latest version of Firefox introduces a sleek new look that lets Web content take center stage. With features like App Tabs and Panorama, Firefox makes it easier and more efficient to navigate the Web. Firefox delivers industry-leading privacy and security features like Do Not Track and Content Security Policy to give users control over their personal data and protect them online."

Comments (32 posted)

LibreOffice 3.3.2 is now available

The LibreOffice 3.3.2 release is out. "The community of developers has been able to maintain the tight schedule thanks to the increase in the number of contributors, and to the fact that those that have started with easy hacks in September 2010 are now working at substantial features. In addition, they have almost completed the code cleaning process, getting rid of German comments and obsolete functionalities."

Full Story (comments: 23)

Newscoop 3.5.2 released

Newscoop is a journalism-oriented content management system; LWN reviewed it in 2009 when it used its former name "Campsite." The 3.5.2 release is available with a number of enhancements and some security fixes.

Comments (none posted)

PHP 5.3.6 released

PHP 5.3.6 is available. It's a stability release, but it includes a number of fixes with CVE numbers attached, so it would appear to be a worthwhile upgrade. Note that PHP 5.2 is no longer supported, so there will be no upgrade for that branch.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

The Linux graphics stack from X to Wayland (ars technica)

Ars technica looks at the history of X and the birth of Wayland. "A key feature of Wayland is the use of a rendering API that does not have any dependencies on X. To maintain compatibility, the X Server itself is made into a Wayland client, and all of the X rendering is fed directly into Wayland. The Wayland package, like X before it, defines a protocol only. The architecture of Wayland, with its ability to function alongside X, provides an easy migration path for existing and even future planned X clients. The X Server can run as before, servicing all of the legacy clients."

Comments (47 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

2010 Free Software Awards announced

The FSF has announced the winners of its 2010 awards. The award for the advancement of free software went to Rob Savoye, currently the maintainer of the Gnash project. The award for "projects of social benefit" was given to the Tor project.

Full Story (comments: none)

Google Summer of Code 2011 - mentoring organizations announced

Google has finalized the list of organizations participating in the 2011 Google Summer of Code. The list includes GNOME, KDE, Debian, Fedora, Gentoo, Blender Foundation, Inkscape, and very many more. The student application period opens on March 28. See the timeline for other dates of interest.

Comments (7 posted)

The Linux Foundation Announces MeeGo TV Working Group

The Linux Foundation has announced the formation of the MeeGo Smart TV Working Group. "Early Working Group participants include some of the most innovative thinkers in the Smart TV industry and represent leading service providers, system integrators, hardware platform manufacturers and solution providers that include: Amino Communications, Intel Corporation, JetHead Development, Locatel, MIPS Technologies, Nokia, Nokia Siemens Networks, Sigma Designs, Telecom Italia, Videon Central, and Ysten, among others. Working Group members will define software components and a compliance program as well as focus on building an innovative and thriving ecosystem of developers and content providers with leading capabilities and tools."

Comments (none posted)

Microsoft sues Barnes and Noble over Android

Microsoft has announced the filing of a lawsuit against Barnes & Noble, Foxconn, and Inventec, alleging patent infringement in the Android-based Nook ebook reader. "The patents at issue cover a range of functionality embodied in Android devices that are essential to the user experience, including: natural ways of interacting with devices by tabbing through various screens to find the information they need; surfing the Web more quickly, and interacting with documents and e-books." Information on the specific patents involved is not yet available.

Update: ZDNet has a list of the patents at issue.

Comments (44 posted)

Phipps: Board Meeting Report

On the Open Source Initiative (OSI) blog, Simon Phipps reports on an OSI board meeting that appointed Karl Fogel, Jim Jagielski, and Mike Godwin to the board (replacing Russ Nelson and Danese Cooper who are leaving due to term limits, and adding an additional board seat). At the meeting, there was also movement toward more open governance for OSI. "This time the Board was able to reach agreement. OSI intends to move to a representative model, with open source communities of all kinds able to become members and participate in OSI's mission and governance directly. We decided the best approach was to offer a way for those communities to affiliate with OSI. The first step will be to change OSI's bylaws to permit entities other than the Board to participate in governance."

Comments (none posted)

New Books

Learning Android--New from O'Reilly Media

O'Reilly Media has released "Learning Android", by Marko Gargenta.

Full Story (comments: none)

R Cookbook--New from O'Reilly Media

O'Reilly Media has released "R Cookbook", by Paul Teetor.

Full Story (comments: none)

Contests and Awards

O'Reilly Media and Fluidinfo Celebrate New Writable API with a Contest

Fluidinfo created a writable API for O'Reilly Media and, to celebrate, they are holding a contest "for developers to find creative new ways to use O'Reilly's metadata." The contest is only open to US residents. The grand prize is a full pass to the Open Source Conference (OSCON) in Portland, Oregon, including hotel and airfare.

Full Story (comments: none)

Calls for Presentations

Linux Plumbers Conference CFP open until April 30

The Linux Plumbers Conference call for presentations is now open. Proposals need to be in by April 30 and the conference will be held September 7-9 in Santa Rosa, California. "A good topic will cut across community boundaries, and should generate vigorous discussion leading to beneficial change. One excellent example from a past LPC was "From Naught to Sixty in 5 Seconds" back in 2008. This topic involved a sizable fraction of the Linux-related software stack, and required coordinated changes to many components. It set the goal of booting a netbook in five seconds, and within a few months actually achieved a three-second boot. That said, talks describing lessons learned during an already-completed implementation effort are also welcome, as long as they are likely to generate good discussion and to help others avoid similar pitfalls in future implementation efforts."

Comments (none posted)

SciPy 2011 Call for Papers

SciPy 2011 is the 10th Python in Science conference, to be held July 11-16, 2011, in Austin, TX. The call for proposals is open, with tutorial proposals due April 15 and paper abstracts due by April 24.

Full Story (comments: none)

Upcoming Events

ELC 2011 program announced

The Embedded Linux Conference (ELC) will be held April 11-13, 2011 in San Francisco, California. The schedule is available online.

Full Story (comments: none)

OpenCms Days 2011 - Complete Conference Program Available

OpenCms Days 2011 will take place May 9-10, 2011 in Cologne, Germany. The full conference program is available at their website.

Full Story (comments: none)

O'Reilly Strata Conference Launches Online Component April 6

O'Reilly Strata Online Conference is a web-based, three-part series that combines tutorials, panel discussions and case studies. The series starts April 6, 2011, with a session titled "Toward a Data-Driven Enterprise".

Full Story (comments: none)

Events: March 31, 2011 to May 30, 2011

The following event listing is taken from the LWN.net Calendar.

March 28-April 1: GNOME 3.0 Bangalore Hackfest | GNOME.ASIA SUMMIT 2011 (Bangalore, India)
April 1-April 3: Flourish Conference 2011! (Chicago, IL, USA)
April 2-April 3: Workshop on GCC Research Opportunities (Chamonix, France)
April 2: Texas Linux Fest 2011 (Austin, Texas, USA)
April 4-April 5: Camp KDE 2011 (San Francisco, CA, USA)
April 4-April 6: SugarCon ’11 (San Francisco, CA, USA)
April 4-April 6: Selenium Conference (San Francisco, CA, USA)
April 6-April 8: 5th Annual Linux Foundation Collaboration Summit (San Francisco, CA, USA)
April 8-April 9: Hack'n Rio (Rio de Janeiro, Brazil)
April 9: Linuxwochen Österreich - Graz (Graz, Austria)
April 9: Festival Latinoamericano de Instalación de Software Libre
April 11-April 14: O'Reilly MySQL Conference & Expo (Santa Clara, CA, USA)
April 11-April 13: 2011 Embedded Linux Conference (San Francisco, CA, USA)
April 13-April 14: 2011 Android Builders Summit (San Francisco, CA, USA)
April 16: Open Source Conference Kansai/Kobe 2011 (Kobe, Japan)
April 25-April 26: WebKit Contributors Meeting (Cupertino, USA)
April 26-April 29: OpenStack Conference and Design Summit (Santa Clara, CA, USA)
April 28-April 29: Puppet Camp EU 2011: Amsterdam (Amsterdam, Netherlands)
April 29: Ottawa IPv6 Summit 2011 (Ottawa, Canada)
April 29-April 30: Professional IT Community Conference 2011 (New Brunswick, NJ, USA)
April 30-May 1: LinuxFest Northwest (Bellingham, Washington, USA)
May 3-May 6: Red Hat Summit and JBoss World 2011 (Boston, MA, USA)
May 4-May 5: ASoC and Embedded ALSA Conference (Edinburgh, United Kingdom)
May 5-May 7: Linuxwochen Österreich - Wien (Wien, Austria)
May 6-May 8: Linux Audio Conference 2011 (Maynooth, Ireland)
May 9-May 11: SambaXP (Göttingen, Germany)
May 9-May 10: OpenCms Days 2011 Conference and Expo (Cologne, Germany)
May 9-May 13: Linaro Development Summit (Budapest, Hungary)
May 9-May 13: Ubuntu Developer Summit (Budapest, Hungary)
May 10-May 13: Libre Graphics Meeting (Montreal, Canada)
May 10-May 12: Solutions Linux Open Source 2011 (Paris, France)
May 11-May 14: LinuxTag - International conference on Free Software and Open Source (Berlin, Germany)
May 12: NLUUG Spring Conference 2011 (ReeHorst, Ede, Netherlands)
May 12-May 15: Pingwinaria 2011 - Polish Linux User Group Conference (Spala, Poland)
May 12-May 14: Linuxwochen Österreich - Linz (Linz, Austria)
May 16-May 19: PGCon - PostgreSQL Conference for Users and Developers (Ottawa, Canada)
May 16-May 19: RailsConf 2011 (Baltimore, MD, USA)
May 20-May 21: Linuxwochen Österreich - Eisenstadt (Eisenstadt, Austria)
May 21: UKUUG OpenTech 2011 (London, United Kingdom)
May 23-May 25: MeeGo Conference San Francisco 2011 (San Francisco, USA)

If your event does not appear here, please tell us about it.

Audio and Video programs

Videos from ELC Europe 2010

Video recordings of the talks given at the 2010 Embedded Linux Conference Europe are now available. "For the first time, they are all released in full HD resolution. We also switched to the VP8 video codec, which is the latest greatest Open Source and royalty free video codec."

Comments (4 posted)

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds