Weekly Edition for September 30, 2010

Michael Meeks talks about LibreOffice and the Document Foundation

By Jake Edge
September 28, 2010

A group of developers has announced the creation of an independent foundation - called the Document Foundation - to guide the further development of the OpenOffice.org office suite, which is provisionally named LibreOffice. At the heart of this effort is longtime developer Michael Meeks. We had the good fortune to discuss the LibreOffice effort with Michael; read on for his comments on this new initiative.

LWN: Probably the first question that will come to mind for most of our readers is "Why?" — why fork? And why now?

Well, it has been ten years since a foundation was promised as part of the original announcement, and there is now a confluence of circumstances to realise that goal. We want a vendor neutral body that lots of companies and non-profits can contribute to as peers. That foundation is called the Document Foundation, and for trademark reasons our product will be called LibreOffice.

LWN: What do you see as the advantages of LibreOffice for users? developers? distributions?

For developers, we are open for business - we have a realistic view of the code-base and as such we are interested in including people's fixes and improvements quickly. When we can get people working to clean up the code, translate German comments, remove dead code, fix ergonomic nits, write unit tests and so on - we are optimistic that we can produce a far better product, and one that (as developers) we can be proud of.

Linux distributions should find LibreOffice easier to package, as the development team has a vast amount of Linux distribution experience.

All of that of course leads to getting a better, more stable, and featureful office suite into users' hands.

LWN: Do you plan to require copyright assignment or contributor agreements? If so, what would those entail? And if not, why not?

There are no plans to require copyright assignment. Clearly it is important to determine the origin of all code, so we will use a clear signing-off / attribution trail, and familiar git tooling to make that easy.

Having to sign formal paperwork before contributing code is clearly a formidable barrier to entry, even if the rights end up with a well-governed non-profit. In contrast I believe LibreOffice needs an "All Contributions Welcome and Valued" sign outside, that says come in and help, there is a place for you here.

LWN: What are the near-term technical and community goals for the project? What about the longer-term?

In the near term, we expect to clean up the code; we have a set of janitorial tasks that require (in some cases) no previous programming experience whatsoever, e.g. removing commented-out code that was just left lying around (presumably due to a lack of faith in revision control). If you want the eternal glory of having your name in the LibreOffice code-base, now is a great time to get involved.

We also want to tackle many of the problems that have traditionally made the code hard to develop with, such as the arcane and monolithic build system.

Finally - there are a lot of ergonomic nits in OpenOffice that individually are easy to fix but collectively add up to a big problem. We want to start tackling these in the short term.

Longer term - we are developing a plan, but somehow our press experts persuaded us to delay announcing it; expect to hear more around the Linux Plumbers Conference.

LWN: When might we expect the first LibreOffice release? Presumably it will incorporate the patches that go-oo has been maintaining, but are there patches from elsewhere that might make their way into the first release or two? Any exciting features on the horizon that we haven't seen in go-oo yet?

We have already released a beta. It is a distinct piece of code from go-oo for several reasons, the most important being that we don't want to maintain patches anymore. Go-oo was maintained as a set of patches, such that features could be enabled per-platform or per-distribution simply by applying or not applying them, but this brings maintenance and development problems of its own.

Instead with LibreOffice we will have several flat git repositories, such that the git diff output will be your patch, and committing is as simple as a git push. Of course many of the go-oo features have been merged, some are still pending review, and going forward go-oo will be obsoleted by LibreOffice.

LWN: Does LibreOffice plan to track OpenOffice development and incorporate changes from that code base or does it plan to go completely in its own direction? Or will there be a gradual shift from one to the other?

Clearly we are going to merge all (suitably licensed) code into the project from anywhere we can get it. Previously we would work from whatever Oracle released, but in future we will pick and choose the best changes and features from wherever they come.

LWN: Are you at all concerned about maintaining such a large body of code without the resources of a large company like Sun or Oracle behind the effort?

Clearly Oracle's contribution is real and substantial, and we would dearly like them to participate in the Document Foundation; a warm welcome is extended to them. Nevertheless - both Novell and Red Hat have support capabilities around the code, and are confident that we can fix and improve it. Clearly, depending on any single company to support or drive the project is a huge risk factor. There is a perception out there that the code is terribly tangled and impossible to develop with, but the reality is that it is just code. Sure, you have to read some parts quite carefully, and empathise deeply with the authors before altering them, but this is true of all large pieces of code.

LWN: There have been occasional hints that Sun had patents on some StarOffice/OpenOffice components and we have seen that Oracle is not terribly shy about patent litigation; does the project have any concerns about patents or patented technology in the codebase?

The code-base that LibreOffice is derived from is licensed under the LGPLv3 - which gives us all a strong explicit patent license, and a good copyright license, so no. Clearly for new code we would want a plus ["or any later version"] license, so we are considering recommending an LGPLv3+ / MPL combination for entirely new code.

LWN: Who is involved with this new LibreOffice project? Undoubtedly there were individuals besides yourself, along with companies, and perhaps other groups; what can you tell us about who they are and what their roles will be?

Oh certainly - I, and Novell, are only a small part of this effort; a large proportion of the non-Oracle community is of like mind, and is instrumental in helping to create LibreOffice. I anticipate that the Foundation we create will ultimately look more like the GNOME Foundation than the Mozilla Foundation, i.e. with only a small staff for co-ordination, rather than for central development. I hope we will have similar elections of contributor representatives and so on.

There is a list of people behind the foundation on the LibreOffice web-site; if I start naming them all we will run out of space pretty quickly. Of course, there are also a good number of heroes who managed somehow to get their code and fixes into an OpenOffice product in the past, who should find it a pleasure to contribute in future.

LWN: Have you had any discussions with Oracle about any of this? You are inviting them to join forces with the new project, have they expressed any interest, either formally or informally?

Clearly we have informed Oracle's StarDivision management ahead of time, as is only polite. As to their reaction - I have many developer friends in StarDivision whom I respect and have loved collaborating with in the past. My hope is that we will work together again.

[ We would like to thank Michael for taking the time to answer our questions. ]

Comments (59 posted)

The impact of the HDCP master key release

September 29, 2010

This article was contributed by Nathan Willis

On September 13, a file appeared on the Pastebin clipboard-sharing site claiming to contain the "master key" for the High-bandwidth Digital Content Protection (HDCP) encryption system used to restrict digital audio and video content over participating HDMI (High-Definition Multimedia Interface), DisplayPort, and other connections. Intel, which developed the HDCP system internally and now sells licenses to it through its subsidiary Digital Content Protection (DCP), confirmed to the press on the 17th that the key is legitimate. What the development means for open source software is not clear, however. It stands as yet another example of how digital content-restriction schemes consistently promise "protection" that they cannot deliver, but it is not an open door for free access to media that comes in encrypted formats, such as Blu-ray discs.

Primarily this is because HDCP is not the encryption scheme used to scramble content delivery — either on optical disc or delivered to the home via satellite or cable. Rather, HDCP is used exclusively to encrypt the video output signal from the playback source (such as an optical disc player or a cable converter box) to the display. HDCP "protects" the signal both by encrypting it during transmission, and by allowing each device to perform an authentication check against the device on the other end of the connection. A side effect of the scheme is that home theater enthusiasts complain of sometimes lengthy delays when switching from one HDCP-compliant video source to another while the devices step through the HDCP handshake process.

HDCP under the hood

Computer scientist Edward Felten posted an explanation of the HDCP security model on Princeton's Freedom to Tinker blog shortly before Intel verified that the key was indeed genuine. In a nutshell, the HDCP handshake process begins with a key exchange protocol using Blom's scheme. Each licensed HDCP device has a public key and a private key; all of the private keys are generated (in advance) from the public key combined with a secret master key kept by DCP.

That key was the array posted to Pastebin on September 13. It allows anyone to generate a perfectly valid private key at their leisure. Therefore, anyone can correctly perform the handshake, exchange keys with a licensed HDCP device, and decrypt the video signal sent over the cable. No "key revocation" or blacklisting scheme can prevent such an attack, as all would-be attackers can now generate every possible key.
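The structure that makes the leak so devastating can be seen in a toy Blom-style construction (illustrative only: the field, dimensions, and arithmetic below are simplified stand-ins, not HDCP's actual 40-element, 56-bit parameters). The licensing authority holds a secret symmetric matrix; each device's private key is that matrix applied to its public vector, so any two devices derive the same shared key, and anyone holding the matrix can mint a valid private key for any public vector:

```python
import random

P = 2**31 - 1   # small prime field for the demo
N = 4           # dimension of the toy scheme

def make_master(n, rng):
    """Secret symmetric n x n matrix held by the licensing authority."""
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            m[i][j] = m[j][i] = rng.randrange(P)
    return m

def private_key(master, pub):
    """A device's private key: the master matrix times its public vector."""
    return [sum(row[k] * pub[k] for k in range(N)) % P for row in master]

def shared_key(priv, other_pub):
    """Dot product of one's private key with the peer's public vector."""
    return sum(priv[k] * other_pub[k] for k in range(N)) % P

rng = random.Random(1)
master = make_master(N, rng)
pub_a = [rng.randrange(P) for _ in range(N)]
pub_b = [rng.randrange(P) for _ in range(N)]
priv_a = private_key(master, pub_a)
priv_b = private_key(master, pub_b)

# Because the master matrix is symmetric, both sides derive the same key.
assert shared_key(priv_a, pub_b) == shared_key(priv_b, pub_a)

# With the leaked master matrix, an attacker mints a key for any public
# vector -- and no blacklist of known keys can help, since every possible
# key can be generated this way.
forged_pub = [7, 11, 13, 17]
forged_priv = private_key(master, forged_pub)
assert shared_key(forged_priv, pub_a) == shared_key(priv_a, forged_pub)
```

The symmetry of the master matrix is exactly what makes both the legitimate handshake and the forgery work, which is why revocation is useless once the matrix is public.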

The fact that the secret master key was exposed does not necessarily mean that some ne'er-do-well stole it, however. As far back as 2001, three years before HDCP received regulatory approval by the FCC, two teams of cryptographers announced that the system was fatally flawed, and that an attacker could discover the master key simply by capturing the public keys — something that all HDCP-compliant devices freely report — from as few as 40 legitimate devices.

One researcher, Niels Ferguson, declined to publish his findings, citing the threat of prosecution under the US Digital Millennium Copyright Act (DMCA). The other group, Scott Crosby et al., did publish their paper [PDF], which also notes the amusing property that the secret master key can be reverse-engineered with no prior knowledge of the algorithm used to generate keys.

Ferguson noted on his site in 2001, however, that "someday, someone, somewhere will duplicate my results. This person might decide to just publish the HDCP master key on the Internet. Instead of fixing HDCP now before it is deployed on a large scale, the industry will be confronted with all the expense of building HDCP into every device, only to have it rendered useless." On September 14, he updated his HDCP page, saying: "My only question is: what took them so long?"


Now that HDCP's authentication requirements and content encryption are irrevocably broken, the question many in the open source software community are asking is whether free software media projects will now have an easier time working around HDCP's restrictions. The short answer is that there is little to no practical advantage gained from a broken HDCP, because it is an encryption measure applied only on the raw video signal sent to the display — i.e., over HDMI, DVI, or DisplayPort cabling.

At that stage, the original source media has been decompressed from its delivery format into an audio stream and a sequence of full-resolution video frames. The bandwidth requirements for the current generation of high-definition content are very high (1920 by 1080 pixels, 24 bits per pixel, 30 frames per second, or approximately 1.49 Gbps for video alone). The open source projects that include video capture, such as MythTV, VLC, VDR, and Freevo, focus either on the capture of standard MPEG-based broadcasts or on supporting embedded hardware that performs MPEG-conversion or other compression of analog signals via a dedicated chip.
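The bandwidth figure quoted above follows directly from the frame parameters:

```python
# Uncompressed 1080p video bandwidth, using the parameters cited above.
width, height = 1920, 1080
bits_per_pixel = 24
frames_per_second = 30

bits_per_second = width * height * bits_per_pixel * frames_per_second
gbps = bits_per_second / 1e9
print(f"{gbps:.2f} Gbps")  # prints "1.49 Gbps"
```

At roughly 186 MB per second, even a short capture overwhelms commodity storage, which is why the capture devices discussed next all compress in hardware.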

One of those devices, the Hauppauge HD PVR, does capture full-resolution, high-definition raw video over component inputs. In theory it would be possible to build a similar device that accepted HDCP-locked HDMI input instead, but such a device would either perform the same hardware compression the current devices do (in which case the "bit perfect" copy is lost), or have extremely large, extremely fast storage attached. MythTV's Robert McNamara described the possibility as infeasible.

Doing the same thing with generic PC hardware would not be much easier; there are a few HDMI video capture devices on the market, but the only manufacturer with any Linux driver support at the moment, Blackmagic Design, supplies only binary drivers that do not allow capturing HDCP-copy-protected content.

More importantly, the ability to capture full-resolution, uncompressed video from the HDMI output of a high-definition video player is a moot point considering that the content scrambling schemes employed on the compressed contents of optical discs like HD DVD and Blu-ray are broken as well.

The initial scheme deployed on HD DVD and Blu-ray is called Advanced Access Content System (AACS), and it has suffered numerous key discoveries that allow its decryption. AACS incorporates a key revocation scheme that can lock up new releases with new keys, and which is currently believed to be in either the 17th or 18th round of revocation and key replacement.

Some newer Blu-ray discs are encrypted with a different system called BD+, centered around a small virtual machine in the player, which runs VM code included on the disc. The VM code can perform integrity checks to make sure that the player has not been tampered with, force player firmware upgrades, and handle other security tasks. Nevertheless, at least two proprietary companies sell BD+-stripping software, and there is an open source effort to reverse-engineer the BD+ VM, spearheaded by developers at the Doom9 forums.

High-definition cable and satellite transmissions are protected by other schemes sold by proprietary vendors, including DigiCipher 2, PowerVu, and Nagravision. There appears to be no large-scale interest in reverse-engineering any of these schemes in open source software.

Legal threats

When it verified publicly that the Pastebin key was in fact the HDCP secret master key, Intel spokesman Tom Waldrop levied ominous-sounding threats of legal action against anyone who incorporated the master key into a product, saying "There are laws to protect both the intellectual property involved as well as the content that is created and owned by the content providers, [...] Should a circumvention device be created using this information, we and others would avail ourselves, as appropriate, of those remedies."

Which laws those are was not specified. The key itself could probably be considered a trade secret under US law, and if anyone with access to it disclosed it, he or she could face a civil breach-of-contract lawsuit. Both Waldrop and independent cryptographer Paul Kocher have publicly opined that the key was probably calculated through reverse engineering as Ferguson and Crosby predicted, however.

Nevertheless, any hardware manufacturer that currently produces HDCP equipment has purchased a license from DCP, which would presumably prohibit it from producing a competing product using the leaked master key. What remains unclear at this stage is whether DCP asserts any patents on HDCP, which could be used to mount a legal challenge to any HDCP-bypassing device even from a non-licensee. DCP's web site and the license agreements offered there mention patents among other broad "intellectual property" claims, but do not specify any particular patent grants. The opacity of patent filings and the difficulties of performing an adequate patent search are but two of the flaws in the US patent system already familiar to most readers.

The anti-circumvention provisions of the DMCA are yet another possible legal avenue; section 103 states that "No person shall circumvent a technological measure that effectively controls access to a work protected under this title." Whether or not the completely broken HDCP scheme would be ruled as "effectively" controlling access to a work is a matter of speculation. In recent years, the copyright office has expanded the regulatory list of allowed exceptions to section 103, including specific examples of copying CSS-protected DVD content, but individual court cases continue to rule both ways on whether fair use permits circumvention.

The future

Many people are speculating that the broken-beyond-repair HDCP scheme will lead to new hardware devices, perhaps monitors or video switches that can connect to HDCP content sources but ignore the restrictions imposed from the other end of the cable. That is certainly a possibility, though it could be a while before such products reach the market, and they may initially come from overseas suppliers far from the reach of DCP's legal threats.

From the software angle, however, it is difficult to come up with a scenario in which sidestepping HDCP constitutes a major gain. For video capture applications, it occurs way too close to the final display to be valuable — working around the on-disc scrambling schemes is far faster, and the raw output that might be captured over HDMI must immediately be compressed again to be practically stored. Given that no content sources (cable, satellite, or optical disc) originate in uncompressed formats, this would be a "recompression" anyway, not likely to provide any discernible quality improvement. Perhaps playback applications could fake being a licensed HDCP source, but what good is that, when HDCP is broken? In addition, display devices are all considered downstream from content sources; adding HDCP encryption would not make a signal more widely viewable, only less. Nevertheless, on September 29, two developers posted some BSD-licensed code implementing HDCP in software, so time will tell if the global software community finds it useful.

In conclusion, as the world says goodbye to HDCP, it is probably worth noting that the technology did little or nothing to actually prevent the unauthorized copying of digital audio and video content, so it is fitting that its passing will probably have little effect either. Whether the consumer electronics and entertainment industries learn a lesson from its brief lifespan or not is another matter entirely. DCP is already promoting a newer product called HDCP 2.0, which it advertises as being based on public key RSA authentication and AES 128 encryption, targeting wireless audio/video transmission standards. I have not yet found any serious cryptanalysis of HDCP 2.0 (there are several white papers promoting the standard, however), but then again the technologies that implement it — Digital Interface for Video and Audio (DiiVA), NetHD, Wireless Home Digital Interface (WHDI), and Wireless HD (WiHD) — have yet to reach the mass market.

Comments (11 posted)

GSM security testing: where the action is

By Jonathan Corbet
September 27, 2010
Over the years, there has been a lot of interest in the security of the TCP/IP protocol suite. But there is another set of protocols - the GSM mobile telephony suite - which is easily as widely deployed as TCP/IP and for which security is just as important, yet far fewer people have ever taken a deep look at it. Harald Welte, along with a small group of co-conspirators, is out to change that; in a fast-paced Linux-Kongress talk (slides [PDF]), he outlined what they have been up to and how far they have gotten.

While they may be hard to find, the specifications for the GSM protocols are available. But the industry around GSM is very closed, Harald says, and closed-minded as well. There are only about four implementations (closed, naturally) of the GSM stack; everybody else licenses one of them. There are also no documents released for GSM hardware - at least, none which have been released intentionally. There are very few companies making GSM or 3G chips, and they buy their software from elsewhere. Only the biggest of handset manufacturers get to buy these chips directly, and even they don't get comprehensive documentation or source code.

On the network side, there are, once again, just a few companies making GSM-related equipment. Beyond the major manufacturers, there are a couple of nano/femtocell companies, and a shadowy group of firms making equipment for law-enforcement agencies. These companies have a small number of customers - the cellular carriers - and the quantities sold are low. So, in other words, prices for this equipment are very high. That means that anybody wanting to do GSM protocol research needs to set up a network dedicated to that purpose, and that is an expensive proposition.

Even the cellular operators don't know all that much about what is going on under the hood; they outsource almost everything they do to others. These companies, Harald says, are more akin to banks than technology companies; the actual operation of the network equipment is outsourced to the companies which sold that equipment in the first place. As a result, there are very few people who know much about the protocols or the equipment which implements them.

This state of affairs has some significant implications. Protocol knowledge is limited to the small number of manufacturers out there. There is almost no protocol-level security research happening; most of what is being done is very theoretical and oriented around cryptographic technology. The only other significant research is at the application level, which is several layers up the stack from the area that Harald is interested in. There are also no open-source protocol implementations, which is a problem: these implementations are needed to help people learn about the protocols. The lack of open reference implementations also restricts innovation in the GSM space to the manufacturers.

So how should an aspiring GSM security researcher go about it? One possibility is to focus on the network side, but, as was mentioned before, that is an expensive way to go. The good news is that the protocols on the network side are relatively well documented; that has helped the OpenBSC and OpenBTS projects to make some progress in this area. If, instead, one wanted to look at GSM from the handset side, there is a different set of obstacles to deal with. The firmware and protocol code used in handset baseband processors is, naturally, closed and proprietary. The layer-1 and signal-processing hardware and software is equally closed. There is also a complete lack of documented interfaces between these layers; we don't even know how they talk to each other. There have been some attempts to make things better - the TSM30 and MADos projects were mentioned - but things are still in an early state.

Nonetheless, the handset side is where Harald and company decided to work. The bootstrap process was a bit painful; it involved wading through over 1000 documents (full documents - not pages) to gradually learn about the protocols and how they interact with each other. Then it was necessary to get some equipment and start messing with it.

Harald gave a whirlwind tour of the protocols and acronyms found in cellular telephony. On the network side, there is the BTS (the cell tower), which talks with the base station controller (BSC), which can handle possibly hundreds of towers. The BSC, in turn, talks to the network subsystem (NSS), which is responsible for most of the details of making mobile telephony work. The protocol for talking with the handsets is called Um. It breaks down into several layers, starting with layer 1 (the radio layer, TS 04.04), up to layer 2 (LAPDm, TS 04.06), and layer 3, with names like "radio resource," "mobility management," and "call control." The layer 3 specification is TS 04.08 - the single most important spec, Harald says, for people interested in how mobile telephony works.
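For readers keeping score of the acronyms, the Um layering Harald described can be collected into a small reference table (a Python sketch for convenience; the contents simply restate the talk as reported above):

```python
# The Um air-interface layers and their GSM specification numbers,
# as described in the talk.  This is a reference table, not a
# protocol implementation.
UM_LAYERS = {
    1: ("radio layer", "TS 04.04"),
    2: ("LAPDm", "TS 04.06"),
    3: ("radio resource / mobility management / call control", "TS 04.08"),
}

for layer, (name, spec) in sorted(UM_LAYERS.items()):
    print(f"layer {layer}: {name} ({spec})")
```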

Various people, looking at the specifications, have already turned up a few security problems. There is, for example, no mutual authentication between the handset and the cellular tower, making tower-in-the-middle attacks possible. Cryptographic protocols are weak - and optional at that - and there is no way for the user to know what kind of encryption, if any, is in use. And so on.

On the handset side, these protocols are handled by a dedicated baseband processor; it is usually some sort of ARM7 or ARM9 processor running a real-time operating system. Evidently L4 microkernels are in use on a number of these processors. The CPU has no memory protection, and the software is written in C or assembly. There are no security features like stack protection, non-executable memory, or address-space layout randomization. It's a huge amount of software running in an unforgiving environment; Harald has written up a description of how this processor works in this document [PDF].

What an aspiring GSM security researcher needs is a baseband processor under his or her control. There are a couple of approaches which could be taken to get one of those, starting with simply building one from generic components. With a digital signal processor and a CPU, one would eventually get there, but it would be a lot of work. The alternative is to use an existing baseband chipset, working from information gained from reverse engineering or leaked documentation. That approach might be faster, but it still leads to working with custom, expensive hardware.

So the OsmocomBB hackers took neither of those approaches, choosing instead the "alternative, lazy approach" of repurposing an existing handset. There is a clear advantage to working this way: the hardware is already known to work. There is still a fair amount of reverse engineering to be done, and hardware drivers need to be written, but the job is manageable. The key is to find the right phone; a good candidate would be as cheap as possible, readily available, old and simple, and, preferably, contain a baseband chipset with at least some leaked information.

The team settled on the TI Calypso chipset, which actually has an open-source GSM stack available for it. Actually, it's not open source, but it did sit on SourceForge for four years until TI figured out it was there; naturally, the code is still available for those who look hard enough. The chipset is past the end of its commercial life, but phones built on this chipset are easy to find on eBay. As an added bonus, the firmware is not encrypted, so there are no DRM schemes to bypass.

With these devices in hand, the OsmocomBB project started in January of 2010 with the goal of creating a GSM baseband implementation from scratch. At this point, they have succeeded, in that they have almost everything required to run the phone. Their current approach involves running as little code as possible on the phone itself - debugging is much easier when the code is running on a normal computer. So the drivers and layer 1 code run on the phone; everything else is on the PC. Eventually, most of the rest of the code will move to the handset, but there seems to be no hurry in that regard.

The firmware load currently has a set of hardware drivers for the radio, screen, and other parts of the phone. The GSM layer 1 code runs with no underlying operating system - there really is no need for one. It is a relatively simple set of event-driven routines. The OsmocomBB developers have created a custom protocol, called l1ctl, for talking with the layer 1 code. Layers 2 and 3 run on the host computer, using l1ctl to talk to the phone; they handle tasks like cell selection, SIM card emulation, and various "applications" like making calls.
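The split architecture depends on a well-delimited message stream between host and phone. As a rough illustration (the message types and header layout below are invented for this sketch; the real l1ctl format is defined in the OsmocomBB source), such a protocol length-prefixes each message so the serial link can frame it:

```python
import struct

# Hypothetical framing in the spirit of a host<->phone control protocol
# like l1ctl.  Each message carries a type byte and a payload; a 16-bit
# big-endian length prefix covers the type byte plus the payload.
MSG_RESET_REQ = 0x01   # hypothetical type codes, not OsmocomBB's
MSG_DATA_IND = 0x02

def pack_msg(msg_type, payload):
    return struct.pack(">HB", len(payload) + 1, msg_type) + payload

def unpack_msg(frame):
    length, msg_type = struct.unpack_from(">HB", frame)
    payload = frame[3:2 + length]   # header is 3 bytes; length counts type + payload
    return msg_type, payload

frame = pack_msg(MSG_DATA_IND, b"\xab\xcd")
assert unpack_msg(frame) == (MSG_DATA_IND, b"\xab\xcd")
```

Keeping layer 1 behind a framing protocol like this is what lets everything above it run, and be debugged, on an ordinary PC.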

The actual phones used come from the Motorola C1xx family, with the C123 and C155 models preferred for development and testing. One nice feature of these phones is that they contain the same GSM modem as the OpenMoko handset; that made life a lot easier. These phones also have a headset jack which can, under software control, be turned into an RS-232 port; this jack is how software is loaded onto the phone.

At this point, the hardware drivers for this phone are complete; the layer 1-3 implementations are also "quite complete." The OsmocomBB stack is now able to make voice calls, working with normal cellular operators. The user interface is not meant for wider use - tasks like cellular registration and dialing are command-line applications - but it all works. The code is also nicely integrated with wireshark; there are dissectors for the protocols in the wireshark mainline now.

Things which are not working include reading SIM cards, automatic power control (the phone always operates with fixed transmit power), and data transmission with GPRS. Getting GPRS working is evidently a lot of work, and there does not seem to be anybody interested in doing it, so Harald thinks there is "not much of a future" for GPRS support. Also not supported is 3G, which is quite different from GSM and which will not be happening anytime soon. There is also, naturally enough, no official approval for the stack as a whole. Even so, it's a capable system at this point; it is, Harald says, "an Ethernet card for GSM." With OsmocomBB, developers who want to build something on top of GSM have a platform they can work with.

The developers have already discovered a few "wild things" which can be done. It turns out, for example, that there is no authentication of deregistration messages. So it is possible to kick any other phone off the cellular network. There are some basic fuzzing tools available for those who would like to stress the protocols; their usefulness is limited, though, by the fact that the developers can't get any feedback from the cellular carriers.

The GSM industry, Harald says, is making security analysis difficult. So it should not be surprising that the security of existing GSM stacks is quite low. Things are going to have to change in the future; Harald hopes that OsmocomBB will help to drive that change. It is, however, up to the security community to make use of the tools which have been created for them. He hopes that community will step up to the challenge. At this point, TCP/IP security is a boring area; GSM security is where the interesting action is going to be.

Comments (43 posted)

Page editor: Jonathan Corbet


BruCON: How to take over the world by breaking into embedded systems

September 29, 2010

This article was contributed by Koen Vervloesem

On September 24 and 25, the community-oriented security conference BruCON made its second appearance in Brussels. Just like last year, the organizers succeeded in gathering a diverse mix of presentation topics and speakers: from overview talks about GSM security, mobile malware and social engineering, to highly technical talks about how to find backdoors in code, mapping the "malicious web", and analyzing malicious PDF files.

Paul Asadoorian, who is currently Product Evangelist for Tenable Network Security (the creators of the vulnerability scanning program Nessus), gave a talk with the provocative title "Embedded System Hacking and My Plot To Take Over The World" (slides [PDF]). His premise is simple: we depend on more and more embedded systems in our daily lives, and because security is largely an afterthought for embedded systems manufacturers, these systems can be used to take over the world.

Indeed, each time we use our home network, print a document, watch a DVD, and so on, there's an embedded system involved. Because these are mass-produced products that have to be manufactured as cheaply as possible, many manufacturers only think about security after the device has been designed—if they think about it at all. This makes embedded systems an attractive vehicle for mounting a large-scale attack on world-wide society. In his talk, Paul looked at some common vulnerabilities in embedded systems, how you can find these vulnerable systems, and what you could gain by exploiting them. His message to device manufacturers was clear: fix this, because the problem is huge!

Before you read further, an obvious warning: much of what Paul suggests may be illegal in some jurisdictions. These are just examples to point out what criminals could do. Don't try this at home unless you are sure you know what you're doing.

How to take over the world

What do you need to take over the world? According to Paul, three things: money, power, and stealth. First, money is needed to get resources: buying weapons, paying armies, and so on. So how can embedded systems help you make money? By exploiting devices that have the user's credit card linked to them, such as an entertainment system or a video game console. Another possibility is to break into the user's router and snoop on the network traffic: by capturing passwords for online banking accounts, PayPal, or eBay, an attacker can get access to the user's money. Think also about the sensitive information that users print or fax.

Second, embedded systems can also be used to influence and control people, or in other words: gain power. For starters, just think about the adage "information = power": by sniffing people's networks and manipulating what the users see, you have a lot of control over their online life. Just by manipulating a single router, you may be able to influence multiple computers. But it goes even further: embedded systems are integral to many important services, like the power grid, water utilities, and so on. It doesn't take much inspiration to come up with some nasty attack scenarios. Paul referred to research from Josh Wright and Travis Goodspeed, along with the paper Advanced Metering Infrastructure Attack Methodology [PDF] from InGuardians.

The third essential element for world domination is stealth: even if you have all the money and power, people will stop you as soon as they know your plans, so your plan is doomed if you don't work in stealth mode. According to Paul, embedded systems are perfect for this purpose:

No one pays attention to embedded systems until they are broken, because no one is interacting with them directly, e.g. with a keyboard and mouse. I have even encountered people who didn't know where their router was when I asked them about it: they didn't even know what a router is.

Combine this practical invisibility with the fact that device vendors focus on profit and leave out security to save resources, and you have an explosive cocktail: a lot of unnoticed vulnerabilities, ready to be exploited, but hidden from view.

Millions of vulnerable devices

The challenge is now to find all these vulnerable devices, Paul says: "Most of the vulnerabilities in embedded systems go unnoticed for a long time because everyone looking for them has just a couple of devices." Of course you can use the internet to find devices. Paul showed the web site WiGLE (Wireless Geographic Logging Engine), which collects statistics about wireless networks. Every practitioner of wardriving can add their data to the web site.

The interesting thing is that you can use the statistics on WiGLE to select possible targets. You can see which are the most popular vendors, and use this information to find vulnerabilities in routers of these vendors to maximize the damage. For example, the statistics show that Linksys is the most popular wireless router vendor, with 10.5% of the routers, or more than 2.7 million routers in the WiGLE database. All these routers are also drawn on a map. Just look up your home town to see how many routers there are in your neighborhood, and take into account that many of them are vulnerable to some attack.

And vulnerable they are. Paul pointed to a study from last year in which researchers from the Columbia University Intrusion Detection Systems Lab scanned the internet and found nearly 21,000 routers, webcams, and VoIP products with an administrative web interface viewable from anywhere on the internet and a default password. Linksys routers had the highest percentage of vulnerable devices in the United States: 45 percent of the 2,729 accessible Linksys routers still had the manufacturer's default administrative password. An attacker who finds such a router can do anything with it, including altering the router's DNS settings or reflashing the firmware.

The researchers provided ISPs with their findings, in the hope that the ISPs would do something to protect their vulnerable customers, e.g. stop shipping these devices with a default password and a publicly accessible administrative interface. But in general, ISPs are not responsive to these kinds of vulnerabilities.

How to find vulnerable devices

So there are a lot of vulnerable routers out there, but how do you find them? Paul gave some tips. First, just use Google: try to find the popular ISPs that provide cable modem routers to their users, and try to find out which model it is. Then use the ARIN (American Registry for Internet Numbers) database to discover the IP address ranges assigned to those ISPs. After that, you can use the port scanner Nmap to discover all devices that have port 80 open, and try to identify the HTTP banner.

Of course scanning big IP address ranges is slow, even if you limit it to one port, but with the right tuning of Nmap parameters it is doable: Paul showed a scan of 2.7 hours for half a million IP addresses and a scan of 37.5 hours for 2.2 million IP addresses. You can then manually poke through the results or write a script to find vulnerabilities, exploit them, or upload custom configurations and firmware.
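As a quick sanity check on those numbers (a back-of-the-envelope sketch by your editor, not from the talk), the reported scan times work out to a few dozen hosts per second:

```python
# Average scan rates implied by the timings Paul reported for
# single-port (port 80) Nmap sweeps of large address ranges.

def scan_rate(hosts, hours):
    """Average number of hosts scanned per second."""
    return hosts / (hours * 3600)

# 2.7 hours for half a million addresses:
print(round(scan_rate(500_000, 2.7)))     # prints 51
# 37.5 hours for 2.2 million addresses:
print(round(scan_rate(2_200_000, 37.5)))  # prints 16
```

Tens of hosts per second is modest, which is why tuning Nmap's timing parameters matters for scans of this size.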

It's not always necessary to scan a whole IP address range to find computers. NTP can be used to identify devices, as has been shown by Metasploit creator HD Moore. For example, by executing:

    ntpdc -c monlist <ntpserver>
you get a list of recent clients of that NTP server. Querying Apple's NTP server, for example, yields a list of Apple devices.

Paul also gave the example of Netgear routers that shipped in 2003 with a hardcoded NTP server. This was eventually patched, but if you use HD Moore's trick on that particular NTP server today, you can still find Netgear routers that query it and thus lack the firmware fix. That's an easy way to find outdated routers, which probably have a lot of other vulnerabilities; the open source penetration testing framework Metasploit includes this test.

Or you can brute-force DNS subdomains. Paul referred to a method to hunt for Linksys IP cameras on the net. Some IP cameras can use dynamic domain names, and by using the tool dnsmap an attacker can brute-force subdomains to discover these cameras. Of course this can be enhanced with an automatic check for default credentials or the ability to anonymously view the video stream.
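The brute-forcing approach is simple to sketch. Here is a minimal, dnsmap-style illustration: expand a wordlist against a parent domain and resolve each candidate. The domain and wordlist are hypothetical examples, not the actual names used in the Linksys camera hunt:

```python
# Sketch of subdomain brute-forcing: build candidate hostnames from a
# wordlist, then check which ones actually resolve in DNS.
import socket

def candidates(domain, words):
    """Yield candidate hostnames built from a wordlist."""
    for word in words:
        yield f"{word}.{domain}"

def resolve(name):
    """Return the IP address for a name, or None if it does not resolve."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

names = list(candidates("example.com", ["cam1", "cam2", "office"]))
# A real scan would call resolve() on each candidate and keep the hits,
# then probe each hit for default credentials or an open video stream.
```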

Another interesting resource is SHODAN, a search engine to find computers. You can search for computers or routers running specific software or filtered by geographic location. If you want to attack the internet infrastructure of a specific country, this is the place to begin your search. Google is also useful for this purpose: just query content that is unique to a target device.

Example vulnerabilities

For the rest of the talk, Paul ran through a lot of example vulnerabilities he has encountered and how easy it is to exploit them. For example, too many wireless routers have just default, weak, or even missing passwords. Paul even found a Zyxel router that had the password already filled in on the publicly accessible web interface. He only had to click "Login" to gain administrative access.

Paul also found publicly accessible multifunction printers that didn't use authentication. He showed how he got access to the printed documents on a Lanier printer: he could download all recently printed documents, without any authentication. The espionage enabled by this vulnerability is perfect for social engineering purposes: he found a person's name, company, department, what applications they run, and so on. The same printer allowed anyone to copy data from an SD card that was accidentally left in the SD card slot.

HP scanners were especially nasty: they have a webscan feature that is turned on by default with no security whatsoever. Anyone can scan a confidential document that has been left on the scanner and retrieve it via a web browser, because the URLs used for scanned documents are completely predictable. This is a perfect tool for corporate espionage.
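The core problem is easy to illustrate: if retrieval URLs follow a sequential pattern, enumerating them is trivial. A minimal sketch, using a purely hypothetical URL scheme (the talk did not disclose the actual HP paths):

```python
# Why predictable URLs are dangerous: if scanned documents are stored
# under sequential numbers, an attacker can simply enumerate the URLs.
# The host name and path pattern below are hypothetical.

def scan_urls(host, start, count):
    """Generate a run of candidate URLs for sequentially numbered scans."""
    return [f"http://{host}/scans/{n}.jpg" for n in range(start, start + count)]

urls = scan_urls("printer.example.com", 100, 3)
# An attacker would fetch each URL and keep whichever ones return a document.
```

Unguessable (randomized) identifiers, or simply requiring authentication, would defeat this kind of enumeration.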

More recently, HD Moore discovered several flaws in the VxWorks embedded operating system, scanned 3.1 billion IP addresses and found 250,000 vulnerable systems accessible on the internet. And then there's the DNS rebinding attack that Craig Heffner discovered in several routers, allowing attackers to gain control of the administrative interface.

Luckily, some vendors are learning from their vulnerabilities. The Linksys WET610N wireless router's setup program forces the user to change the default password "admin" to something different at first login. However, Paul's happiness ended quickly when he saw the next screen, where Linksys recommended saving the password in a text file.

How to fix this

Paul didn't talk about all these security exploits to spoon-feed the bad guys. He wants to convince embedded systems vendors to create safer devices. They could start just by implementing some elementary, but too often ignored, security measures: don't use a default password ("Why does the concept of a default password even exist?") but force the user to choose one, allow the user to disable protocols, and enable only secure management protocols like HTTPS and SSH by default. Moreover, Paul wants ISPs to block inbound port 80 (though that makes life hard for anyone wanting to run a web server) and to take responsibility for keeping their users' devices secure.

To raise awareness of obvious security failures and to try to push the industry toward implementing better security on devices, Paul has started a public wiki where people can point out the ways in which their devices are not secure. It's a promising initiative, but your author fears that it is not sufficient to change the industry: as Bruce Schneier has been saying for years, vendors will not improve their software's security until it is in their financial interest. A wiki will not change that, so it looks like we'll remain in the situation where anyone with enough dedication can take over the world.

Comments (7 posted)

Brief items

Zombie cookie wars: evil tracking API meant to "raise awareness" (ars technica)

Ars technica looks at evercookie, a way for web applications to store multiple cookies that can be rather difficult to get rid of. "So, when you delete the cookie in one, three, or five places, evercookie can dip into one of its many other repositories to poll your user ID and restore the data tracking cookies. It works cross-browser, too—if the Local Shared Object cookie is intact, evercookie can spread to whatever other browsers you choose to use on the same machine. Since most users are barely aware of these storage methods, it's unlikely that users will ever delete all of them."

Comments (26 posted)

U.S. Tries to Make It Easier to Wiretap the Internet (New York Times)

The New York Times reports on a bill (proposed law) that the Obama administration plans to submit to Congress that would require communication providers be able to decrypt and provide the data they carry on demand, presumably after a court order. "Essentially, officials want Congress to require all services that enable communications — including encrypted e-mail transmitters like BlackBerry, social networking Web sites like Facebook and software that allows direct "peer to peer" messaging like Skype — to be technically capable of complying if served with a wiretap order. The mandate would include being able to intercept and unscramble encrypted messages." As one might guess, the EFF is particularly worried about the bill: "The crypto wars are back in full force, and it's time for everyone who cares about privacy to stand up and defend it: no back doors and no bans on the tools that protect our communications."

Comments (40 posted)

New vulnerabilities

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2010-2938 CVE-2010-2943
Created:September 29, 2010 Updated:March 28, 2011
Description: From the Red Hat advisory:

A flaw was found in the Xen hypervisor implementation when running a system that has an Intel CPU without Extended Page Tables (EPT) support. While attempting to dump information about a crashing fully-virtualized guest, the flaw could cause the hypervisor to crash the host as well. A user with permissions to configure a fully-virtualized guest system could use this flaw to crash the host. (CVE-2010-2938)

A flaw was found in the Linux kernel's XFS file system implementation. The file handle lookup could return an invalid inode as valid. If an XFS file system was mounted via NFS (Network File System), a local attacker could access stale data or overwrite existing data that reused the inodes. (CVE-2010-2943)

openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
SUSE SUSE-SA:2011:012 kernel 2011-03-08
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Ubuntu USN-1072-1 linux 2011-02-25
Ubuntu USN-1057-1 linux-source-2.6.15 2011-02-03
Ubuntu USN-1041-1 kernel 2011-01-10
Red Hat RHSA-2010:0723-01 kernel 2010-09-29
CentOS CESA-2010:0723 kernel 2010-09-30

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):Linux CVE #(s):CVE-2010-3084 CVE-2010-2955 CVE-2010-3298 CVE-2010-3296 CVE-2010-3297 CVE-2010-2946
Created:September 23, 2010 Updated:April 21, 2011

Description: From the openSUSE advisory:

CVE-2010-3084: A buffer overflow in the ETHTOOL_GRXCLSRLALL code could be used to crash the kernel or potentially execute code.

CVE-2010-2955: A kernel information leak via the WEXT ioctl was fixed.

CVE-2010-3298: Fixed a kernel information leak in the net/usb/hso driver.

CVE-2010-3296: Fixed a kernel information leak in the cxgb3 driver.

CVE-2010-3297: Fixed a kernel information leak in the net/eql driver.

CVE-2010-2946: The 'os2' xattr namespace on the jfs filesystem could be used to bypass xattr namespace rules.

Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Red Hat RHSA-2011:0421-01 kernel 2011-04-07
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Mandriva MDVSA-2011:051 kernel 2011-03-18
Ubuntu USN-1083-1 linux-lts-backport-maverick 2011-03-03
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Ubuntu USN-1072-1 linux 2011-02-25
SUSE SUSE-SA:2011:008 kernel 2011-02-11
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
Ubuntu USN-1057-1 linux-source-2.6.15 2011-02-03
Red Hat RHSA-2011:0007-01 kernel 2011-01-11
Ubuntu USN-1041-1 kernel 2011-01-10
MeeGo MeeGo-SA-10:38 kernel 2010-10-09
Fedora FEDORA-2010-18983 kernel 2010-12-17
SUSE SUSE-SA:2010:060 kernel 2010-12-14
Red Hat RHSA-2011:0017-01 kernel 2011-01-13
Debian DSA-2126-1 linux-2.6 2010-11-26
Red Hat RHSA-2010:0842-01 kernel 2010-11-10
SUSE SUSE-SA:2010:052 kernel 2010-11-03
openSUSE openSUSE-SU-test-2010:36579-1 Kernel Module Packages 2010-11-03
openSUSE openSUSE-SU-2010:0895-2 Kernel 2010-11-03
SUSE openSUSE-SU-2010:0895-1 kernel 2010-10-27
Red Hat RHSA-2010:0771-01 kernel-rt 2010-10-14
openSUSE openSUSE-SU-2010:0720-1 kernel 2010-10-13
SUSE SUSE-SA:2010:050 kernel 2010-10-13
SUSE SUSE-SA:2010:045 kernel 2010-09-23
SUSE SUSE-SA:2010:044 kernel 2010-09-23
openSUSE openSUSE-SU-2010:0655-1 kernel 2010-09-23
openSUSE openSUSE-SU-2010:0664-1 Linux 2010-09-23
Ubuntu USN-1000-1 kernel 2010-10-19

Comments (none posted)

lib3ds: code execution

Package(s):lib3ds CVE #(s):CVE-2010-0280
Created:September 27, 2010 Updated:May 19, 2014
Description: From the CVE entry:

Array index error in Jan Eric Kyprianidis lib3ds 1.x, as used in Google SketchUp 7.x before 7.1 M2, allows remote attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via crafted structures in a 3DS file, probably related to mesh.c.

Gentoo 201405-23 lib3ds 2014-05-18
Fedora FEDORA-2010-17621 mingw32-OpenSceneGraph 2010-11-11
Fedora FEDORA-2010-14632 lib3ds 2010-09-15
Fedora FEDORA-2010-14644 lib3ds 2010-09-15

Comments (none posted)

php: multiple vulnerabilities

Package(s):php5 CVE #(s):CVE-2010-1860 CVE-2010-1862 CVE-2010-1864 CVE-2010-2093 CVE-2010-2094 CVE-2010-2097 CVE-2010-2100 CVE-2010-2101 CVE-2010-2191 CVE-2010-3062 CVE-2010-3063 CVE-2010-3064 CVE-2010-3065
Created:September 29, 2010 Updated:January 11, 2011
Description: From the CVE entries:

The html_entity_decode function in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allows context-dependent attackers to obtain sensitive information (memory contents) or trigger memory corruption by causing a userspace interruption of an internal call, related to the call time pass by reference feature. (CVE-2010-1860)

The chunk_split function in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allows context-dependent attackers to obtain sensitive information (memory contents) by causing a userspace interruption of an internal function, related to the call time pass by reference feature. (CVE-2010-1862)

The addcslashes function in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allows context-dependent attackers to obtain sensitive information (memory contents) by causing a userspace interruption of an internal function, related to the call time pass by reference feature. (CVE-2010-1864)

Use-after-free vulnerability in the request shutdown functionality in PHP 5.2 before 5.2.13 and 5.3 before 5.3.2 allows context-dependent attackers to cause a denial of service (crash) via a stream context structure that is freed before destruction occurs. (CVE-2010-2093)

Multiple format string vulnerabilities in the phar extension in PHP 5.3 before 5.3.2 allow context-dependent attackers to obtain sensitive information (memory contents) and possibly execute arbitrary code via a crafted phar:// URI that is not properly handled by the (1) phar_stream_flush, (2) phar_wrapper_unlink, (3) phar_parse_url, or (4) phar_wrapper_open_url functions in ext/phar/stream.c; and the (5) phar_wrapper_open_dir function in ext/phar/dirstream.c, which triggers errors in the php_stream_wrapper_log_error function. (CVE-2010-2094)

The (1) iconv_mime_decode, (2) iconv_substr, and (3) iconv_mime_encode functions in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allow context-dependent attackers to obtain sensitive information (memory contents) by causing a userspace interruption of an internal function, related to the call time pass by reference feature. (CVE-2010-2097)

The (1) htmlentities, (2) htmlspecialchars, (3) str_getcsv, (4) http_build_query, (5) strpbrk, and (6) strtr functions in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allow context-dependent attackers to obtain sensitive information (memory contents) by causing a userspace interruption of an internal function, related to the call time pass by reference feature. (CVE-2010-2100)

The (1) strip_tags, (2) setcookie, (3) strtok, (4) wordwrap, (5) str_word_count, and (6) str_pad functions in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allow context-dependent attackers to obtain sensitive information (memory contents) by causing a userspace interruption of an internal function, related to the call time pass by reference feature. (CVE-2010-2101)

The (1) parse_str, (2) preg_match, (3) unpack, and (4) pack functions; the (5) ZEND_FETCH_RW, (6) ZEND_CONCAT, and (7) ZEND_ASSIGN_CONCAT opcodes; and the (8) ArrayObject::uasort method in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 allow context-dependent attackers to obtain sensitive information (memory contents) or trigger memory corruption by causing a userspace interruption of an internal function or handler. NOTE: vectors 2 through 4 are related to the call time pass by reference feature. (CVE-2010-2191)

mysqlnd_wireprotocol.c in the Mysqlnd extension in PHP 5.3 through 5.3.2 allows remote attackers to (1) read sensitive memory via a modified length value, which is not properly handled by the php_mysqlnd_ok_read function; or (2) trigger a heap-based buffer overflow via a modified length value, which is not properly handled by the php_mysqlnd_rset_header_read function. (CVE-2010-3062)

The php_mysqlnd_read_error_from_line function in the Mysqlnd extension in PHP 5.3 through 5.3.2 does not properly calculate a buffer length, which allows context-dependent attackers to trigger a heap-based buffer overflow via crafted inputs that cause a negative length value to be used. (CVE-2010-3063)

Stack-based buffer overflow in the php_mysqlnd_auth_write function in the Mysqlnd extension in PHP 5.3 through 5.3.2 allows context-dependent attackers to cause a denial of service (crash) and possibly execute arbitrary code via a long (1) username or (2) database name argument to the (a) mysql_connect or (b) mysqli_connect function. (CVE-2010-3064)

The default session serializer in PHP 5.2 through 5.2.13 and 5.3 through 5.3.2 does not properly handle the PS_UNDEF_MARKER marker, which allows context-dependent attackers to modify arbitrary session variables via a crafted session variable name. (CVE-2010-3065)

Gentoo 201110-06 php 2011-10-10
Mandriva MDVSA-2011:004 php-phar 2011-01-10
CentOS CESA-2010:0919 php 2010-12-01
CentOS CESA-2010:0919 php 2010-11-30
Red Hat RHSA-2010:0919-01 php 2010-11-29
openSUSE openSUSE-SU-2010:0678-1 php5 2010-09-29
SUSE SUSE-SR:2010:018 samba libgdiplus0 libwebkit bzip2 php5 ocular 2010-10-06

Comments (1 posted)

php-nusoap: cross-site scripting

Package(s):php-nusoap CVE #(s):CVE-2010-3070
Created:September 27, 2010 Updated:January 3, 2011
Description: From the Red Hat bugzilla:

Bogdan Calin at Acunetix discovered an XSS vulnerability in NuSOAP 0.9.5.

Fedora FEDORA-2010-14098 php-nusoap 2010-09-04
Fedora FEDORA-2010-14100 php-nusoap 2010-09-04
Fedora FEDORA-2010-15080 mantis 2010-09-22
Fedora FEDORA-2010-15082 mantis 2010-09-22

Comments (none posted)

quassel: denial of service

Package(s):quassel CVE #(s):
Created:September 24, 2010 Updated:September 29, 2010
Description: From the Ubuntu advisory:

Jima discovered that quassel would respond to a single privmsg containing multiple CTCP requests with multiple NOTICEs, possibly resulting in a denial of service against the IRC connection.

Ubuntu USN-991-1 quassel 2010-09-23

Comments (none posted)

roundup: cross-site scripting

Package(s):roundup CVE #(s):CVE-2010-2491
Created:September 23, 2010 Updated:September 29, 2010

Description: From the Red Hat bugzilla entry:

A deficiency was found in the way Roundup, a simple and flexible issue-tracking system, processed PageTemplate templates for named pages. A remote attacker could use this flaw to conduct cross-site scripting (XSS) attacks by tricking a local, authenticated user into visiting a specially-crafted web page.

Fedora FEDORA-2010-12261 roundup 2010-08-07
Fedora FEDORA-2010-12269 roundup 2010-08-07

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.36-rc6, which was released on September 28. "Nothing here strikes me as particularly interesting. I'd like developers to take a look at Rafael's latest regression list (subject line of "2.6.36-rc5-git7: Reported regressions from 2.6.35" on lkml and various other mailing lists), as it's reasonably short. That said, for some reason I don't have that "warm and fuzzy" feeling, possibly because there's still been more commits in these -rc's than I'd really like at this stage (and no, the one extra day isn't enough to account for it)." The short-form changelog is in the announcement, or see the full changelog for all the details.

Stable updates: and were released on September 27. A typo fix that only affected Xen users necessitated the release of, which was done live on-stage at LinuxCon Japan on September 29.

Comments (none posted)

Quotes of the week

I'm beginning to think we need to have an entry in the kernel newbie's FAQ warning people that the output of various scripts such as checkpatch and get_maintainer are not authoritative, and are heuristics intended to be supplemented by human intelligence.
-- Ted Ts'o

Comments (2 posted)

Kernel development news

Maintaining a stable kernel on an unstable base

By Jonathan Corbet
September 29, 2010
Greg Kroah-Hartman launched his LinuxCon Japan 2010 keynote by stating that the most fun thing about working on Linux is that it is not stable; it is, in fact, the fastest-moving software project in the history of the world. This claim was justified with a number of statistics on development speed, all of which will be quite familiar to LWN readers. In summary, over the last year, the kernel has been absorbing 5.5 changes per hour, every hour, without a break. How, he asked, might one try to build a stable kernel on top of such a rapidly-changing base?

The answer began with a history lesson. Fourteen years ago, the 2.0.0 kernel came out, and things were looking good. We had good performance, SMP support, a shiny new mascot, and more. After four months of stabilization work, the 2.1.0 tree was branched off, and development of the mainline resumed. This was, of course, the days of the traditional even/odd development cycle, which seemed like the right way to do things at the time.

It took 848 days and 141 development releases to reach the 2.2.0 kernel. There was a strong feeling that things should go faster than that, so when, four months later, the 2.3.0 kernel came out, there was a hope that this development cycle would be a little bit shorter. To an extent, we succeeded: it only took 604 days and 58 releases to get to 2.4.0. But people who were watching at the time will remember that 2.4 took a long time to really stabilize; it was a full ten months before Linus felt ready to create the 2.5 branch and go into development mode again.

This time around, the developers intended to do a short development cycle for real. There was a lot of new code which they wanted to get into the hands of users as soon as possible. In fact, the pressure to push features to users was so strong that the distributors were putting considerable resources into backporting 2.5 code into the 2.4 kernels they were shipping. The result was "a mess" at all levels: shipped 2.4 kernels were an unstable mixture of patches, and the developers ended up doing their feature work twice: once for 2.5, and once for the backport. It did not work very well.

As a result, the 2.5 development cycle ran for 1057 days, with 86 releases. It was painful in a number of ways, but the end result - the 2.6 kernel - was significantly better than 2.4. Various things happened over the course of this development cycle; the development community learned a number of lessons about how kernel development should be done. The advent of BitKeeper made distributed development work much better than it did in the past and highlighted the importance of breaking changes down into small, reviewable, debuggable pieces. The kernel community which existed at the 2.6.0 release was wiser and more experienced than what had existed before; we had figured out how to do things better.

This evolution led to the adoption of the "new" development model in the early 2.6 days. The separate development and stable branches were gone, replaced with a single, fast-moving tree with releases about every three months. This system worked well for development; it is still in use several years later. But it made life a bit difficult for distributors and users. Even three months can be a long time to wait for important fixes, and, if those fixes come with a new load of bugs, they may not be entirely welcome. So it became clear that there needed to be a mechanism to distribute fixes (and only fixes) to users more quickly.

The discussion led to Linus's classic email saying that it would not be possible to find somebody who could maintain a stable kernel over any period of time. But, still, he expressed some guidelines by which a suitable "sucker" could try to create such a tree. Within a few minutes, Greg had held up his hand as a potential sucker; Chris Wright followed thereafter. Greg has been doing it ever since; Chris created about 50 stable releases before eventually moving back to "real work" and away from stable kernel work.

The stable tree has been in operation ever since. The model has changed little over that time; once a mainline release happens, it will receive stable updates for at least one development cycle. For most kernels, those updates stop after exactly one cycle. This is an important part of how the stable tree works; it puts an upper bound on the number of trees which must be maintained, and it encourages users to move forward to more current kernels.

Greg presented the rules which apply to submissions to the stable tree: they must fix real bugs, be small and easily verified, etc. The most important rule, though, is the one stating that any patches must appear in the mainline before they can be applied to the stable tree. That rule ensures that important fixes get into both trees and increases assurance that the fixes have been properly reviewed.

Some kernels receive longer stable support than others; one example is 2.6.32. A number of distribution kernel maintainers got together around 2.6.30 to see if they could all settle on a single kernel to maintain for a longer period; they settled on 2.6.32. That kernel has since been incorporated into SLES11 SP1, RHEL6, Debian Squeeze, Ubuntu 10.04 LTS, and Oracle's recently-announced enterprise kernel update. It has received over 2000 fixes to date, with contributions from everybody involved; 2.6.32 is a great example of inter-distribution contribution. It is also, as the result of all those fixes, a high-quality kernel at this point.

Greg pointed out one other interesting thing about 2.6.32: two enterprise distributions (SLES and Oracle's offering) have moved forward to this kernel for an existing distribution. That is a bit of a change in an area where distributors have typically stuck with their original kernel versions over the lifetime of a release. There are significant costs to staying with an ancient kernel, so it would be encouraging if these distributors were to figure out how to move to newer stable kernels without creating problems for their users.

The stable process is generally working well, with maintainers doing an increasingly good job of sending important fixes over. Some maintainers are quite good, with dedicated repository branches for stable patches. Others are...not quite so good; SCSI maintainer James Bottomley was told in a rather un-Japanese manner that he and his developers could be doing better.

People who are interested in upcoming stable releases can participate in the review cycle as well. Two or three days before each release, Greg posts all of the candidate patches to the lists for review. Some people complain about the large number of posts, but he ignores them: the Linux community, he says, does its development in public. There are starting to be more people who are interested in helping with pre-release testing, a development which Greg described as "awesome."

The talk concluded with a demo: Greg packaged up and released a new stable kernel (code name "Yokohama") from the stage. It seems that the previous update - evidently released during Dirk Hohndel's MeeGo talk earlier in the week - contained a typo which made life difficult for Xen users. The new release, possibly the first kernel release done in front of a crowd, hopefully will not suffer from the same kind of problem.

Comments (4 posted)

Organizing kernel messages

By Jonathan Corbet
September 29, 2010
In a previous life, your editor developed Fortran code on a VAX/VMS system. Every message emitted by VMS came decorated with a unique identifier which could be used to look it up in a massive blue binder, yielding a few paragraphs of (hopefully) helpful text on what the message actually meant. Linux has no analogous mechanism, but that is not the result of a lack of attempts. A talk at LinuxCon Japan detailed a new approach to organized kernel messaging which, its authors hope, has a better chance of making it into the mainline.

Andrew Morton recently described the kernel's approach to messaging this way:

The kernel's whole approach to messaging is pretty haphazard and lame and sad. There have been various proposals to improve the usefulness and to rationally categorise things in ways which are more useful to operators, but nothing seems to ever get over the line.

At LinuxCon Japan, Hisashi Hashimoto described an effort which, he hopes, will get over the line. To that end, he and others have examined previous attempts to bring order to kernel messaging. Undeterred, they have pushed forward with a new project; he then introduced Kazuo Ito who discussed the actual work.

Attempts to regularize kernel messaging usually involve either attaching an identifier to kernel messages or standardizing the message format in some way. One thing that Ito-san noted at the outset is that any scheme requiring wholesale changes to printk() lines is probably not going to get very far. There are over 75,000 such lines in the kernel, many of them buried within macros; there is no practical way to change them all. Other wrapper functions, such as dev_printk(), complicate the situation further. So any change will have to be effected in a way which works with the existing mass of printk() calls.

A few approaches were considered. One would be to create a set of wrapper macros which would format message identifiers and pass them to printk(); the disadvantage of this method, of course, is that it still requires changing all of the printk() call sites. It's also possible to turn printk() into a macro which would assemble a message identifier from the available file name and line number information; those identifiers, though, would be too volatile for the intended use. So the approach which the developers favored was hooking into printk() itself to add message identifiers to messages as they find their way to the console and the logs.

These message identifiers (also called "message-locating helper tokens") must be assigned in some sort of automatic manner; asking the development community to maintain a list of identifiers and attach them to messages seems like a sure road to disappointment. So one must immediately think of how those identifiers will be generated; the two main concerns are uniqueness and stability. It turns out that Ito-san is not concerned with absolute uniqueness; if, on occasion, two or three kernel messages end up with the same identifier, the administrator should still be able to sort things out without a great deal of pain.

Stability is important, though; if message identifiers change frequently between releases - not to mention between boots - their value will be reduced. For that reason, using preprocessor symbols like __FILE__ and __LINE__ to generate the identifiers at compile time, while easy, is not sufficient. One could also use the virtual address of the printk() call site, which is guaranteed to be unique, but that can change from one system boot to the next, depending on things like the order in which modules are loaded. So a different approach needs to be found.

What this group has settled on is generating a CRC32 hash of the message format string at run time. There is a certain runtime cost to that which would have been nice to avoid, but it's not that high and, if printk() calls are a bottleneck to system performance, there are other problems. If the system has been configured to output message identifiers, this hash value will be prepended (with a "(%08x):" format) to the message before it is printed. A CRC32 hash is not guaranteed to produce a unique identifier for each message (though it is better than CRC16, which is guaranteed to have collisions with 75,000 messages), but it will be close enough.
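The scheme is easy to model outside the kernel. This Python sketch is a hypothetical illustration of the idea, not the actual patch: hash the format string with CRC32 and prepend the result in the "(%08x):" form described above.

```python
import zlib

def message_id(fmt):
    """Model of the proposed scheme: derive the identifier by hashing
    the printk() format string with CRC32. Because only the format
    string is hashed, the ID is stable across boots and (usually)
    across kernel releases."""
    return "(%08x):" % (zlib.crc32(fmt.encode()) & 0xffffffff)

def emit(fmt, *args):
    """Prepend the identifier to the formatted message, roughly as a
    patched printk() would when identifier output is enabled."""
    return message_id(fmt) + " " + (fmt % args)

print(emit("%s: link is not ready", "eth0"))
```

Note that two call sites sharing one format string collapse to a single identifier in this model - which is exactly the dev_printk() problem discussed below.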

Discussion of the current implementation during the talk revealed that there are some remaining problems. Messages printed with dev_printk() will all end up with the same identifier, which is an undesirable result. The newly-added "%pV" format directive (which indicates the passing of a structure containing a new format string and argument list) also complicates things significantly by adding recursive format string processing. So the implementation will require some work, but there was not a lot of disagreement over the basic approach.

It was only toward the end of the talk that there was some discussion of what the use cases for this feature are. The initial goal is simply to make it easier to find where a message is coming from in the kernel code. The use of macros, helper functions, etc. can make it hard to track down a message with a simple grep operation. But, with a message ID and a supporting database (to be maintained with a user-space tool), developers should be able to go directly to the correct printk() call. Vinod Kutty noted that, in large installations, automatic monitoring systems could use the identifiers to recognize situations requiring some sort of response. There are also long-term goals around creating databases of messages translated to other languages and help information for specific messages.

So there are real motivations for this sort of work. But, as was noted back at the beginning, getting any kind of message identifier patch through the process has always been a losing proposition so far. It is hoped that, this time around, the solution will be sufficiently useful (even to kernel developers) and sufficiently nonintrusive that it might just get over the line. We should find out soon; once the patch has been fixed, it will be posted to the mailing list for comments.

Comments (26 posted)

Namespace file descriptors

By Jake Edge
September 29, 2010

Giving different groups of processes their own view of global kernel resources—network environments and filesystem trees for example—is one of the goals of the kernel container developers. These views, or namespaces, are created as part of a clone() with one of the CLONE_NEW* flags and are only visible to the new process and its children. Eric Biederman has proposed a mechanism that would allow other processes, outside of the namespace-creator's descendants, to see and access those namespaces.

When we looked at an earlier version back in March, Biederman had proposed two new system calls, nsfd() and setns(). Since that time, he has eliminated the nsfd() call by adding a new /proc/<pid>/ns directory with files that can be opened to provide a file descriptor for the different kinds of namespaces. That removes the need for a dedicated system call to find and return an fd to a namespace.
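With that directory in place, grabbing a namespace reference is just a file open. The sketch below is hypothetical user-space code, not part of the patch set: it probes for /proc/<pid>/ns and opens whatever namespace files it finds, returning an empty result on kernels (or systems) that lack the directory.

```python
import os

def namespace_fds(pid="self"):
    """Open each namespace file under /proc/<pid>/ns.

    Returns a dict mapping namespace name (e.g. "net") to an open file
    descriptor; holding the fd keeps the namespace object referenced.
    On systems without /proc/<pid>/ns the dict is empty.
    """
    ns_dir = "/proc/%s/ns" % pid
    fds = {}
    if not os.path.isdir(ns_dir):
        return fds
    for name in os.listdir(ns_dir):
        try:
            # An open fd on one of these files is what setns() consumes.
            fds[name] = os.open(os.path.join(ns_dir, name), os.O_RDONLY)
        except OSError:
            pass  # e.g. insufficient privileges for another process
    return fds

print(sorted(namespace_fds()))
```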

Currently, there must be a process running in a namespace to keep it around, but there are use cases where it is rather cumbersome to have a dedicated process for keeping the namespace alive. With the new patches, doing a bind mount of the proc file for a namespace:

    mount --bind /proc/self/ns/net /some/path
for example, will keep the namespace alive until it is unmounted.

The setns() call is unchanged from the earlier proposal:

    int setns(unsigned int nstype, int nsfd);
It sets the namespace of the calling process to the one indicated by the file descriptor nsfd, which should refer to an open namespace /proc file. nstype is either zero or the name of the namespace type the caller is trying to switch to ("net", "ipc", "uts", and "mnt" are implemented); if it is non-zero, the call will fail when the namespace referred to by nsfd is not of that type. The call will also fail unless the caller has the CAP_SYS_ADMIN capability (root privileges, essentially).

For this round, Biederman has also added something of a convenience function, in the form of the socketat() system call:

    int socketat(int nsfd, int family, int type, int protocol);
The call parallels socket(), but takes an nsfd parameter for the namespace to create the socket in. As pointed out in the discussion of that patch, socketat() could be implemented using setns():
    setns(0, nsfd);
    sock = socket(...);
    setns(0, original_nsfd);
Biederman agrees that it could be done in user space, but is concerned about race conditions in an implementation of that kind. In addition, unlike for the other namespace types, he has some specific use cases in mind for network namespaces:

The use case are the handful of networking applications that find that it makes sense to listen to sockets from multiple network namespaces at once. Say a home machine that has a vpn into your office network and the vpn into the office network runs in a different network namespace so you don't have to worry about address conflicts between the two networks, the chance of accidentally bridging between them, and so you can use different dns resolvers for the different networks.

But he also realized that it might be a somewhat controversial addition. Overall, there has been relatively little discussion of the patchset on linux-kernel, and Biederman said that it had received positive reviews on the containers mailing list. He posted the patches so that other kernel developers could review the ABI additions, and there seem to be no complaints with setns() and the /proc filesystem additions.

Changes for the "pid" namespace were not included in these patches as there is some work needed before that namespace can be safely unshared. That work doesn't affect the ABI, though. Once the pid namespace is added in, it seems likely we will see these patches return, perhaps without socketat(), sometime soon. Allowing suitably privileged processes to access others' namespaces will be a useful addition, and one that may not be too far off.

Comments (5 posted)


Page editor: Jonathan Corbet


CloudUSB 1.1: Good idea, flawed execution

September 29, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

"Cloud" is easily the most overused term of the year for the computing industry. Case in point is the CloudUSB distribution, a project that promises to provide automatic backups and data along with privacy protection. The cloud name is a stretch and the security is far less than promised.

CloudUSB is a USB-based Linux distribution based on Ubuntu 10.04 LTS. The idea is that you can carry your own Linux distribution with you for use anywhere, thus allowing anyone to use Linux on any computer and keep their data safe in the event the USB key is lost. In practice, this is more limited than suggested by the Web site.

To use CloudUSB, you need to download a 950MB ISO image and a script to copy the image to your USB key. The script makes use of UNetbootin to copy the image to the USB key and set it up correctly. It takes about 10 minutes to copy CloudUSB over and get it ready. You'll need at least a 4GB key to use it effectively, and larger is better if you have a significant amount of data. I used an 8GB Seagate "puck" USB drive. The configuration routine allows you to decide how much room to allocate for data, so I split it evenly between 4GB for the OS and 4GB for personal data.

After booting, CloudUSB lets you log in using "cloudusb" as both the user name and the password. That default is set when the USB key is created, so you'll want to change the password on first boot. To finish the CloudUSB setup, you need to run the script on the desktop to configure Dropbox.

Encryption and data sync

CloudUSB uses the Dropbox service to synchronize data, so users who don't already have a Dropbox account will need to set up an account before being able to use the synchronization service. CloudUSB sets up a data and private-data folder for keeping sensitive files in. The private-data folder is encrypted, and requires a password that matches the user login.

Or it's supposed to be encrypted, anyway. After running through the instructions and setting everything up, I removed the USB key and mounted it on another Ubuntu Linux system. There, under the Dropbox directory, I was able to see the private-data directory and view all of its contents. I had followed the configuration instructions to the letter, so it didn't appear to be user error.

The script that comes with the distribution uses encfs to set up an encrypted directory. It appears the script isn't properly encrypting the directory, though. When the system is rebooted, it does use encfs to mount the Dropbox/private-data directory as Desktop/.private-data. However, if you use fusermount to unmount that directory, the mount disappears but Dropbox/private-data is not encrypted. I've contacted the developer, Gianluca Mora, and he's looking at the problem.

The CloudUSB Web site points to the Edubuntu wiki as the source of instructions for creating an encrypted home directory. Users may want to simply use UNetbootin to create their own USB key and configure an encrypted directory on their own rather than relying on the CloudUSB project.

Even if the setup works properly, the only thing being encrypted is what's stored under the Dropbox/private-data directory, and all that's being synced is the material under Dropbox. Any user configuration, bookmarks, and so on will only be synced if the user takes the time to file them under the Dropbox directories.

Aside from Dropbox, though, CloudUSB isn't very "cloudy" at all. CloudUSB includes the standard desktop fat-client applications, without mixing in 'cloud' apps as Peppermint Linux does. On the Web site, there's not much indication that there are any plans to go beyond Dropbox synchronization and making it slightly easier to set up a distribution on a USB key. The scripts to create the CloudUSB ISO are available, so users who want to work on customizing their own USB distro might start there.

There's very little to say about the distribution itself outside of its encryption abilities, or lack thereof. It's largely a package-for-package clone of Ubuntu 10.04 LTS, though it does have a couple of packages you won't find in the standard Ubuntu install. Specifically, CloudUSB includes Dropbox, Skype, Wireshark, UNetbootin, and Emacs. If you've used Ubuntu, though, you've pretty much used CloudUSB.

The rationale behind the project is a good one, but the execution is flawed on a number of levels. It's also limited by the choice of Dropbox to some extent. Some users will not want to use Dropbox because it is in part proprietary software. On a practical level, a Dropbox account's contents may be difficult to squeeze onto a USB key: some accounts hold more data than you'll find storage on most USB thumb drives, and Dropbox doesn't provide a way to synchronize only a few folders from an account, so it's easy to see users with larger Dropbox folders running out of space almost immediately with CloudUSB.

The final verdict is that CloudUSB needs some work. Even when the setup problems are addressed, it doesn't offer all that much over a standard Ubuntu install with Dropbox added.

Comments (5 posted)

Brief items

Distribution quotes of the week

My package made it into Debian-main because it looked innocuous enough; no one noticed "locusts" in the dependency list.
-- xkcd

I like minimal server installs, a lot. --nobase tastes like candy to me. They don't get many updates, it's easy to read through what updates are needed. Easy to test that things won't break when there is an update. They're just lovely. If my wife had said no when I asked her to marry me, I'd have married a server with a minimal install.

Yes, I understand this is just reality, it's no one's fault, but when I decide to take my minimal install and add something, I just want to add those packages and no more. So the sysadmin in me cringes when I try to install kvm, libvirt, and python-virtinst and see alsa-lib, libogg, and esound (among others) pulled in. As if this server will ever make a sound of any kind.

-- Mike McGrath

Comments (13 posted)

Announcing the release of Fedora 14 Beta

Dennis Gilmore has announced the release of Fedora 14 Beta. "The beta release is the last important milestone of Fedora 14. Only critical bug fixes will be pushed as updates leading up to the general release of Fedora 14, scheduled to be released in early November. We invite you to join us and participate in making Fedora 14 a solid release by downloading, testing, and providing your valuable feedback."

Full Story (comments: 14)

Distribution News

Debian GNU/Linux

Delegation for FTP Masters

Debian Project Leader Stefano Zacchiroli has announced the delegations for FTP masters. "FTP Masters, commonly referred to as "ftpmaster", oversee and maintain the well-being of Debian's official package repositories."

Full Story (comments: none)


Fedora Board Recap 2010-09-27

Click below for a recap of the September 27 meeting of the Fedora Board. Topics include FUDCon Zurich, Fedora 14 beta, and the Fedora vision statement.

Full Story (comments: none)

Ubuntu family

Ubuntu 9.04 reaches end-of-life on October 23, 2010

Ubuntu 9.04 aka "Jaunty Jackalope" will not be supported after October 23, 2010. "The supported upgrade path from Ubuntu 9.04 is via Ubuntu 9.10. Instructions and caveats for the upgrade may be found online. Note that upgrades to version 10.04 LTS and beyond are only supported in multiple steps, via an upgrade first to 9.10, then to 10.04 LTS."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Carrez: The real problem with Java in Linux distros

Ubuntu server technical lead Thierry Carrez has a good summary of the problems with distributing Java applications on Linux. "The problem is that Java open source upstream projects do not really release code. Their main artifact is a complete binary distribution, a bundle including their compiled code and a set of third-party libraries they rely on. If you take the Java project point of view, it makes sense: you pick versions of libraries that work for you, test that precise combination, and release the same bundle for all platforms. It makes it easy to use everywhere, especially on operating systems that don't enjoy the greatness of a unified package management system." (Thanks to Torsten Werner.)

Comments (130 posted)

Ubuntu 10.10 Preview: Steady Progress for Maverick

Nathan Willis reviews the soon-to-be-released Ubuntu 10.10 (Maverick Meerkat). "For starters, 10.10 ships with Linux kernel 2.6.35, up from the 2.6.32 that shipped with Ubuntu 10.04. The list of improvements this brings is long. It includes the open source Nouveau video drivers for NVIDIA graphics cards, the ability to switch between onboard and graphics card GPUs on laptops, a host of new WiFi and WiMAX network adapters, improved KVM virtualization, better RAID management, and better power management for both CPUs and storage devices - most notably support for AMD's "Turbo Core" feature, which can speed up just one core of a multi-core chip while letting the others sleep. Although BTRFS did not make the cut, two new filesystems did make it into the release: the distributed Ceph filesystem targeting clusters, and LogFS, which uses logging structures to reduce wear on flash drives."

Comments (none posted)

GeeXboX: Lightweight Media System (Linux Journal)

Linux Journal has a review of GeeXboX. "GeeXboX is a live distribution that can quickly turn a PC into a straight-forward media playback solution. It can be installed to a hard disk, but it works quite well when booted from a CDROM or other removable media. I'm going to examine the existing, stable 1.x series and also take a look at what the forthcoming (but already usable) 2.x series has lined up."

Comments (none posted)

Spotlight on Linux: SliTaz GNU/Linux 3.0 (Linux Journal)

Susan Linton takes a quick look at SliTaz 3.0. "In the world of small size distributions, SliTaz is one of the most remarkable. Not only does it have one of the smallest download images, but it can also run on modest hardware while offering graphical applications with familiar interfaces. It's one of the wonders of the Linux world."

Comments (none posted)

Richard Hillesley charts the trials and tribulations of PCLinuxOS (ITPro)

Richard Hillesley takes a look at PCLinuxOS. "The inspiration behind PCLinuxOS, also known as PCLOS, is Bill Reynolds, who is known to fans of PCLinuxOS as Texstar. PCLinuxOS began as an offshoot of Mandrake/Mandriva, to which Texstar had been a long time contributor of third-party packages. The objective was to build a fast, reliable distribution of Linux that was both a Live distribution on the model of Knoppix and a fully installable and flexible Linux desktop, driven by Reynolds' passion to make the perfect software package."

Comments (none posted)

Page editor: Rebecca Sobol


OHS: Open hardware legal issues

September 29, 2010

This article was contributed by Jeff & Victoria Osier-Mixon

Free software projects sometimes struggle with licensing issues. The open hardware community is beginning to face the same kinds of difficulties with copyrights and patents that the free software community has grappled with. A presentation by John Wilbanks at the Open Hardware Summit (OHS) in New York discussed how these concepts apply to open hardware, and how to surmount the issues using open-source tenets and a "commons" of patent tools and free licenses.

Open hardware is a relatively new concept. The OpenCores project and website started in 1999, primarily as a collecting point for Verilog IP description files, but produced no actual hardware. SPARC enthusiasts remember when Sun "open-sourced" the SPARC implementation, leading to the OpenSPARC processors in 2006. Since then, a large spectrum of devices ranging from basic electronic designs to microcontrollers up to entire FPGAs has been created as open source. This means that the designs, in the form of design documents, CAD files, etc., are open in terms of shared ownership, community development, ready availability, low or zero cost, and permissive licensing, just as open-source software describes source code with those same characteristics.

The legal issues, however, are tricky, as anyone involved in open-source software can attest. This is particularly true in a field in which an invention can be protected by either a copyright or a patent, or even a combination of both, and that copyright or patent can be licensed in a number of ways. Hardware has traditionally been covered under patents, although that is changing as hardware is increasingly described with documentation rather than physical evidence.

Obtaining and defending a patent is beyond the financial reach of most individuals - that has normally been the domain of corporations with lawyers. Even if a creator wants to open up a design, the cost of getting a patent is prohibitive and the legal issues around patents are daunting. Copyright, on the other hand, is free, applies to any creative work as soon as it is published, and can sometimes be applied to hardware design documents.


John Wilbanks is the VP for Science at Creative Commons, a non-profit corporation whose stated goal is "making it easier for people to share and build upon the work of others, consistent with the rules of copyright." It accomplishes this by providing a set of free licenses and legal tools to enable creators of hardware, software, documents, and pretty much anything else copyrightable to "share, remix, use commercially, or any combination thereof," using the concept of a commons, a traditional public meeting ground in which resources are collectively owned and/or shared. Creative Commons is addressing the challenges of open hardware by creating a commons of patent tools to realize those same goals for patents.

Wilbanks spoke at the OHS about legal issues when open source concepts are applied to patents. He started by describing the problems with patent protection and restrictive licenses on "not-open-source hardware" — the traditional method for protecting hardware inventions:

  • incrementalism — incremental innovation based on prior art as patents expire and are renewed on new versions
  • artificial scarcity — the goal of restrictive licenses in general: to restrict access to something of value in order to make it appear more valuable, such as Internet access at an expensive hotel
  • "thickets of patents" — sets of restrictive patents built on intertwining layers, making them very difficult and expensive to change or challenge

At OHS, Wilbanks described an alternative method for both protection and ensuring the ability to share by creating "zones of prenegotiated patent rights" in the form of licenses based on language that is agreeable to both the licensor and the licensee. "The whole issue of commons is coming to open source hardware" and other fields in terms of a patent tools commons, a set of licensing tools being developed by Creative Commons. CC has set up two new patent tools to help hardware creators who use patents to navigate through the maze in order to open their designs in a safe way. The Creative Commons site describes the tools as follows:

  • a Research Non-Assertion Pledge, to be used by patent owners who wish to promote basic research by committing not to enforce patents against users engaged in basic non-profit research
  • a Model Patent License, to be used by patent owners who wish to make a public offer to license their patents on standard terms

"Untested licenses are like uncompiled code," Wilbanks says. "The idea is to create zones of prenegotiated patent rights, first to begin cutting through thickets [of patents], second to eliminate problems of scarcity, and third to really start to get to the point where if you're trying to figure out what you want to do, you're not exposed" to patent issues. The Model Patent License in draft form provides sample language for a public license offer that is "capable of being accepted by anyone on a non-discriminatory basis and without additional negotiation," offering benefits to both parties. These tools enable individuals to approach hardware patents similarly to the ways corporations have traditionally done, but without the high associated costs.

Wilbanks believes that open hardware can succeed "when you do legal licensing around these systems [to] lower the transactional cost, increase the transparency, and get the need to have a highly skilled lawyer negotiating on your behalf out of the way." As with software, open hardware licensing boils down to the creator's intent. Rather than depending on safety "from the barrel of a license," Wilbanks says that to be adequately protected, open hardware projects should follow some solid basic open-source principles: start open (as opposed to starting proprietary and open-sourcing later), publish early and often (to create prior art and encourage sharing, respectively), and use prenegotiated patent rights to protect both ownership and the right to share, rather than depending on licenses for protection. Using licenses and prenegotiated rights lowers the cost of making hardware open, which also lowers the barrier to entry. Those ideas will be refined further as the open hardware movement gains traction.

Comments (none posted)

Brief items

Development quotes of the week

The Foundation will be the cornerstone of a new ecosystem where individuals and organisations can contribute to and benefit from the availability of a truly free office suite. It will generate increased competition and choice for the benefit of customers and drive innovation in the office suite market. From now on, the community will be known as "The Document Foundation".
-- The Document Foundation announces itself

It feels a bit weird, but I'm glad that this is the last major release of the 2.x era. For real, this time ;-) We'll get a maintenance release in November, but it will only provide a few bug fixes, since most contributors are already completely focused on making 3.0 rock!
-- Vincent Untz on GNOME 2.32

Comments (none posted)

Clutter 1.4.0 released

Version 1.4.0 of the Clutter graphical toolkit has been released. It is the first stable release in the 1.4 cycle and adds new base classes and objects along with various performance improvements. "Clutter is an open source software library for creating portable, fast, compelling and dynamic graphical user interfaces."

Full Story (comments: none)

Crabgrass 0.5.3 released

The Crabgrass project has released version 0.5.3. The biggest new feature is WYSIWYG wiki editing. "Crabgrass is a social networking, group collaboration, and network organizing Web application. It consists of a solid suite of group collaboration tools such as private wikis, task lists, a file repository, and decision making tools. Work is currently being done on a large user interface overhaul, better social networking tools, blogs, and event calendars, as well as better support for collaboration and decision making among independent groups."

Full Story (comments: none)

GNOME 2.32 has been released

The last major release in the GNOME 2.x series, 2.32, has been released. The release comes with a long list of new features for the free, multi-platform desktop environment including better contact organization in Empathy, better accessibility for the Evince PDF viewer, full support for Python and Vala in the Anjuta IDE, and more. "GNOME 2.32 is the last planned major release in the GNOME 2.x series, with only maintenance releases for GNOME 2.x planned going forward. GNOME 2.32 features a limited set of new features in some applications, as the community continues to focus on the upcoming GNOME 3.0 release scheduled for April, 2011."

Full Story (comments: none)

Propose new modules for inclusion in GNOME 3.0

Proposals for new modules to be added into GNOME 3 are now being accepted. "How to proceed? All the information is on: We expect module discussions to heat up about those proposals around the beginning of November and to reach a decision by the second week of November."

Full Story (comments: none)

Pylint 0.21.3 will be the last to support Python 2.3

Pylint, a Python source code analyzer, has been updated to version 0.21.3. This is likely to be the last version supporting Python 2.3 (and possibly 2.4): "At the time of porting pylint to py3k, this will much probably be the latest set of versions to use to get pylint working with python 2.3 code. And maybe, unless you people think it would be a shame, also for python 2.4, so we can drop support for the old compiler module."

Full Story (comments: none)

Tahoe, the Least-Authority File System v1.8.0 released

The Tahoe "Least-Authority Filesystem" (LAFS) team has released version 1.8.0 with greatly improved performance and fault-tolerance of downloads along with improved Windows support. "Tahoe-LAFS is the first distributed storage system to offer "provider-independent security" - meaning that not even the operators of your storage servers can read or alter your data without your consent." LWN looked at Tahoe back in 2008.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Page editor: Jonathan Corbet


Non-Commercial announcements

OpenOffice.org community members launch the Document Foundation

A group of developers has announced the creation of The Document Foundation, intended to carry forward development of OpenOffice.org as an independent free software project. "Oracle, who acquired OpenOffice.org assets as a result of its acquisition of Sun Microsystems, has been invited to become a member of the new Foundation, and donate the brand the community has grown during the past ten years. Pending this decision, the brand 'LibreOffice' has been chosen for the software going forward." Supporting organizations include Canonical, Google, Novell, and Red Hat.

Full Story (comments: none)

XtreemOS Announces Public Access to Open Test Bed

The XtreemOS consortium has announced the opening of a publicly accessible test bed for the XtreemOS grid operating system. "XtreemOS is a set of technologies developed on top of Mandriva Linux to enable ease of use on clusters and grids. XtreemOS has been developed over the past 4 years by an international consortium of 19 academic and industrial partners. When asked why the consortium decided to open their test bed, Dr. Christine Morin, a research director at INRIA (Institut National de Recherche en Informatique et Automatique) replied, "We wanted to concretely demonstrate our efforts over the past four years, as well as to seek interest from other computer researchers and scientists to participate in the future development of XtreemOS." Dr. Morin organized and has been the scientific coordinator for the XtreemOS project since its inception in 2006."

Comments (none posted)

Commercial announcements

Red Hat Reports Second Quarter Results

Red Hat has announced financial results for its fiscal year 2011 second quarter ended August 31, 2010. "Total revenue for the quarter was $219.8 million, an increase of 20% from the year ago quarter. Subscription revenue for the quarter was $186.2 million, up 19% year-over-year."
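As a back-of-the-envelope check, the reported growth rates imply the year-ago figures. This is a quick sketch only; the announcement does not give the prior-year quarter's numbers, so these are rounded estimates derived by treating the stated percentages as exact:

```python
# Rough back-calculation of year-ago revenue from the reported
# figures and growth rates (assumes the percentages are exact).
total_q2_fy2011 = 219.8         # $ million, total revenue
subscription_q2_fy2011 = 186.2  # $ million, subscription revenue

total_year_ago = total_q2_fy2011 / 1.20               # up 20% year-over-year
subscription_year_ago = subscription_q2_fy2011 / 1.19 # up 19% year-over-year

print(f"Implied year-ago total revenue: ~${total_year_ago:.1f}M")
print(f"Implied year-ago subscription revenue: ~${subscription_year_ago:.1f}M")
```

That puts the year-ago quarter at roughly $183M total and $156M subscription revenue, consistent with the growth the company describes.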

Comments (none posted)

Legal Announcements

Red Hat Responds to U.S. Patent and Trademark Office Request for Guidance on Bilski

The Red Hat legal team has submitted comments to the U.S. Patent and Trademark Office. "The submission was made in response to the PTO's request for public comments to assist it in determining how to apply the Supreme Court's decision in Bilski. Although the Bilski decision did not expressly address the standards for refusing to allow software patents, interpretation of the decision will determine whether certain patents are granted. Thus the PTO's approach to examining patent applications will have a substantial effect on the patent landscape. Red Hat previously submitted an amicus brief in Bilski. Its comments to the PTO indicate its continuing commitment to reform of the patent system to address the harms caused by poor quality software patents." (Thanks to Davide Del Vento)

Comments (110 posted)

Articles of interest

The Defenders of Free Software (New York Times)

Here's a profile of Armijn Hemel and his GPL enforcement work in the New York Times. "Lawsuits are typically settled out of court. For example, after Cisco was sued over the software included in its home routers, it agreed to make the software code available, tapped a point person to be responsible for open-source issues and paid an undisclosed amount to settle the case. Representatives of companies dealing with these complaints say it's a far more amicable engagement than one would find in the proprietary software world, where hard-charging lawyers come seeking serious payback for what they view as violations of intellectual property rules."

Comments (none posted)

Can academia "release early, release often"? (opensource.com)

Over at opensource.com, Fedora project member and Red Hat intern Ian Weller reflects on what was learned in a project to create an open source textbook, Practical Open Source Software Exploration. In particular, he notes the difference between the release cycles in open source and academia. "Here's where we again saw the gap between the release cycles of open source and those in academia. The open source release cycle is 'release early, release often'—even though your goals are known, milestones are often unknown, and it's possible that even the end result is unknown. Professors know when every milestone—every semester—begins and ends, and students know exactly what they're going to learn in a given semester, but they don't always know what the end goal is."

Comments (4 posted)

Project Porting MeeGo OS To Android Phones Starting To Yield Results (Android Police)

The Android Police site reports that a group of developers is getting to the point where it is possible to run MeeGo on a Nexus One handset. "Right now, it’s unclear exactly how functional the OS is, and, as usual, the development is suffering from issues with driver cross-compatibility and other similar obstacles. One particular stumbling block appears to be the closed-source nature of Qualcomm 3D drivers, preventing hardware acceleration of the user interface and resulting in painfully slow response times."

Comments (15 posted)

CodePlex Foundation becomes Outercurve Foundation (The H)

The H reports that the CodePlex Foundation has changed its name to the Outercurve Foundation. "Since its launch, the CodePlex Foundation has been confused by some with Microsoft's project hosting platform, despite the fact that the former is a non-profit organisation dedicated to promoting open source collaboration between corporations and the community and the latter is a Microsoft-owned site for hosting open source projects." LWN covered the launch of the CodePlex Foundation in September 2009.

Comments (6 posted)

New Books

The Agile Samurai--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "The Agile Samurai" by Jonathan Rasmusson.

Full Story (comments: none)

REST in Practice--New from O'Reilly

O'Reilly has released "REST in Practice" by Jim Webber, Savas Parastatidis and Ian Robinson.

Full Story (comments: none)

Calls for Presentations

Southern Plumbers Miniconf CFP (LCA 2011 - Brisbane)

There will be a Southern Plumbers Miniconf at linux.conf.au 2011 on January 24, 2011, in Brisbane, Australia. The call for submissions will be open until October 29, 2010.

Full Story (comments: none)

Call for proposals -- PyCon 2011

PyCon 2011 will be held March 9-17, 2011 in Atlanta, Georgia. "The PyCon conference days will be March 11-13, preceded by two tutorial days (March 9-10), and followed by four days of development sprints (March 14-17)." The call for proposals for the formal presentation tracks is open until November 10, 2010.

Full Story (comments: none)

Upcoming Events

Open Video Conference streamed live in WebM

Flumotion will be streaming the Open Video Conference live using the open formats WebM and Ogg / Vorbis / Theora. "The OVC is an annual summit organized and hosted by the Open Video Alliance (OVA), a coalition of organizations and individuals committed to the idea that the power of moving image should belong to everyone. This year's OVC takes place between 1-2 October in New York City, with the conference drawing a broad and unique audience of connected creatives and change makers, who all share the vision of an open video ecosystem."

Full Story (comments: none)

Events: October 7, 2010 to December 6, 2010

The following event listing is taken from the Calendar.

October 7-9: Utah Open Source Conference, Salt Lake City, UT, USA
October 8-9: Free Culture Research Conference, Berlin, Germany
October 11-15: 17th Annual Tcl/Tk Conference, Chicago/Oakbrook Terrace, IL, USA
October 12-13: Linux Foundation End User Summit, Jersey City, NJ, USA
October 12: Eclipse Government Day, Reston, VA, USA
October 16: FLOSS UK Unconference Autumn 2010, Birmingham, UK
October 16: Central PA Open Source Conference, Harrisburg, PA, USA
October 18-21: 7th Netfilter Workshop, Seville, Spain
October 18-20: Pacific Northwest Software Quality Conference, Portland, OR, USA
October 19-20: Open Source in Mobile World, London, United Kingdom
October 20-23: openSUSE Conference 2010, Nuremberg, Germany
October 22-24: OLPC Community Summit, San Francisco, CA, USA
October 25-27: GitTogether '10, Mountain View, CA, USA
October 25-27: Real Time Linux Workshop, Nairobi, Kenya
October 25-27: GCC & GNU Toolchain Developers' Summit, Ottawa, Ontario, Canada
October 25-29: Ubuntu Developer Summit, Orlando, Florida, USA
October 26: GStreamer Conference 2010, Cambridge, UK
October 27: Open Source Health Informatics Conference, London, UK
October 27-29: hack.lu 2010, Parc Hotel Alvisse, Luxembourg
October 27-28: Embedded Linux Conference Europe 2010, Cambridge, UK
October 27-28: Government Open Source Conference 2010, Portland, OR, USA
October 28-29: European Conference on Computer Network Defense, Berlin, Germany
October 28-29: Free Software Open Source Symposium, Toronto, Canada
October 30-31: Debian MiniConf Paris 2010, Paris, France
November 1-2: Linux Kernel Summit, Cambridge, MA, USA
November 1-5: ApacheCon North America 2010, Atlanta, GA, USA
November 3-5: Linux Plumbers Conference, Cambridge, MA, USA
November 4: 2010 LLVM Developers' Meeting, San Jose, CA, USA
November 5-7: Free Society Conference and Nordic Summit, Gothenburg, Sweden
November 6-7: Technical Dutch Open Source Event, Eindhoven, Netherlands
November 6-7: HackFest 2010, Hamburg, Germany
November 8-10: Free Open Source Academia Conference, Grenoble, France
November 9-12: OpenStack Design Summit, San Antonio, TX, USA
November 11: NLUUG Fall conference: Security, Ede, Netherlands
November 11-13: 8th International Firebird Conference 2010, Bremen, Germany
November 12-14: FOSSASIA, Ho Chi Minh City (Saigon), Vietnam
November 12-13: Japan Linux Conference, Tokyo, Japan
November 12-13: Mini-DebConf in Vietnam 2010, Ho Chi Minh City, Vietnam
November 13-14: OpenRheinRuhr, Oberhausen, Germany
November 15-17: MeeGo Conference 2010, Dublin, Ireland
November 18-21: Piksel10, Bergen, Norway
November 20-21: OpenFest - Bulgaria's biggest Free and Open Source conference, Sofia, Bulgaria
November 20-21: Kiwi PyCon 2010, Waitangi, New Zealand
November 20-21: WineConf 2010, Paris, France
November 23-26: DeepSec, Vienna, Austria
November 24-26: Open Source Developers' Conference, Melbourne, Australia
November 27: Open Source Conference Shimane 2010, Shimane, Japan
November 27: 12. LinuxDay 2010, Dornbirn, Austria
November 29-30: European OpenSource & Free Software Law Event, Torino, Italy
December 4: London Perl Workshop 2010, London, United Kingdom

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds