Weekly Edition for January 26, 2012

LCA: Jacob Appelbaum on surveillance and censorship

By Jonathan Corbet
January 25, 2012
Talks at linux.conf.au often cover a wider range of topics than those held at many other Linux-related events, and LCA 2012 was no exception. The final keynote at this conference was from Jacob Appelbaum, a lead developer of the Tor project. This no-holds-barred session took an uncompromising look at surveillance and censorship and the people behind them. It was a strong call for action - and for more free software - from a courageous man who clearly lives by the words written on his T-shirt: "be the trouble you want to see in the world."

We live, Jacob said, in a surveillance society. We don't really live in independent states anymore; instead, we live in different surveillance cones on a surveillance planet. Increasingly, the world resembles the Panopticon, a prison designed in 1786. Anybody who thinks otherwise need only look at, for example, the widespread warrantless wiretapping of US citizens with AT&T's help under (at least) the Bush administration. We are, indeed, being watched.

There are a number of coping strategies that we all adopt in the face of this kind of surveillance, starting with the specious claim that "I have nothing to hide." The fact that the attendees decided to put clothes on before going to the conference that morning (a decision your editor, at least, much appreciates) demonstrates otherwise. Or we say that yes, people are watching, but bad things will never happen to us personally. Which is a fine position until something does happen.

The problem with this kind of surveillance structure, according to Jacob, is that "it attracts assholes." Once this machinery is put into place, it will be put to bad uses regardless of its original intent. For example, the "Echelon" spy network was put into place during the Cold War, but it has also allegedly been used to funnel information to Boeing to help it win aircraft orders.

Many (or most) countries allow for "lawful" interception of some communications by governments without a warrant. Traffic data for phone calls or text messaging, for example, falls under this umbrella. It's said not to be "content" that requires a warrant to access, but it still tells a story about a person and will be abused by governments. We need to make it harder for governments to get at this data.

But it gets worse. The switches at the core of the phone system and the Internet all have governmental backdoors built into them. Sometimes those backdoors are more widely used than intended; Jacob recommended reading The Athens Affair, an IEEE article about the use of surveillance backdoors to spy on the Greek government (and many others). These backdoors are an attractive target, to the point that the operators of these systems should think hard about what their lives are worth; the man in charge of planning the Greek Vodafone-Panafon network died suspiciously as the compromise of that network was discovered.

Jacob played a video advertisement for the "FinFly" device, meant to be installed in an Internet provider's equipment rack. The FinFly is a highly capable man-in-the-middle attack device, able to pick out traffic associated with specific targets, record it, and even install malware on the target's systems. This device, sadly, is built on top of the BackTrack Linux distribution. Among its customers was the former government of Egypt, which used it against pro-democracy activists there. Jacob does not want to live in a world where governments can do things like that.

FinFly is just the beginning; there is a whole range of products designed to meet the needs of the surveillance state. Quite a bit of information about this particular area of commerce can be found in the Spyfiles release from WikiLeaks. There is a lot of money to be made in surveillance equipment, but the companies involved should be held culpable for the uses to which that equipment is put.

Pervasive surveillance allows the government to put together a picture about almost anybody. That picture is based on facts, but may still not be true. But it is useful for the purposes of control, enforcement of power structures, and harassment. Jacob knows that latter aspect well, having been detained several times, threatened with jail, and subjected to seizures of his electronic equipment.

Along with surveillance goes censorship - the determination by people in power that there are things they do not want others to know. Practices like Internet filtering are designed to promote ignorance and retain power. It's done in a lot of different ways. There is the famous Great Firewall of China, which, he said, is more of a spider web catching those who try to stray beyond the boundaries. In the US, censorship is accomplished through "legal threats and illegal tactics." In Lebanon, the national firewall uses a version of Squid - a good thing, Jacob said, since they haven't gotten around to patching it for a long time. In Syria, off-the-shelf products are used. And so on.

Not all censorship is equal, and it is often easy to bypass. But censorship, combined with surveillance, often leads to self-censorship. The net was not built to make us fear our own state, but that is what is happening. When a company like Google is frightened by a law like SOPA, we should all be scared; Richard Stallman's The Right to Read was not meant to be a manual. History has shown us over and over again that people with power will turn into thugs. The Stanford prison experiment also demonstrated that quite clearly. With so much experience in this area, why is it that we keep repeating the experiment?

The good news, according to Jacob, is that we have the power to change things. And, in particular, we can challenge surveillance and censorship with anonymity. The American revolution was fueled by anonymous pamphlets that could be circulated without their authors ending up in prison. We need the ability to distribute anonymous pamphlets in this century as well.

So what can we do? We need to reframe the issues so that freedom and openness come first. We need to observe - and report on - surveillance and censorship on the net. We should write more free software and get more people to use it, and everybody writing software should be thinking about their users' freedom and security. Free software needs to be free as in freedom, though; "open source for business" is not the same thing. He looks forward to the day when the only binary blob running on his system is the government rootkit.

Tor is one piece of the puzzle, certainly, but there are others. Jacob mentioned TextSecure, which allows encrypted text messaging between Android phones, as an important piece of freedom-related technology. He also called out FreedomBox, the GNOME project, the Ada Initiative (what does freedom mean, he asked, if half of our population is oppressed?), and the Electronic Frontier Foundation.

In the end, he said, it comes down to freedom for everybody - no exceptions. But that is not how the surveillance state works. Securing that freedom will require a dedication to open standards, open designs, free software, free hardware, and decentralization. We can, he said, push back the surveillance state and create for ourselves an accountable government and freedom for all.

[Your editor would like to thank the LCA 2012 organizers for assisting with his travel to the event.]

Comments (27 posted)

An LCA 2012 summary

By Jonathan Corbet
January 25, 2012
Your editor is still recovering from his journey home from Ballarat, Australia, where the 2012 linux.conf.au was held. The richness of this event can be seen in the numerous articles already published here; this article will attempt to cover a variety of talks that, for various reasons, did not turn into articles of their own.

Freedom is always a strong theme at LCA, and the 2012 version was no exception. That emphasis tends to be especially strong in the keynote talks, as should be clear from the reports on the keynotes by Bruce Perens and Jacob Appelbaum. Karen Sandler's keynote was just as concerned with freedom and just as noteworthy; it would have merited an article of its own had we not covered some of her topics from another talk back in 2010. Karen retold her chilling story of trying to get access to the source code for an implanted device designed to protect her from abrupt heart failure. Not only is that source not available to her; even the regulatory agency in the US (the FDA) charged with approving the device normally does not review the code. The presence of catastrophic bugs seems guaranteed.

In addition to simple worries about whether the device will work as needed, there is another concern: these devices are increasingly given wireless communications capabilities that allow them to be reconfigured and controlled remotely. To the extent that the security associated with that access can be verified, it seems to be notable mostly in its absence. In other words, implanted medical devices would appear to be open to a variety of denial of service attacks with extreme consequences. Given that some of them are implanted into important people (she named Dick Cheney who, as a result of his implanted device, no longer has a pulse), it only seems like a matter of time until somebody exploits one of these vulnerabilities in a high-profile way. Karen noted dryly that, given the type of people she hangs around with, it would be unwise to expose herself to such attacks; she went out of her way to get an older device with no wireless connectivity.

She pointed out that a lot of other safety-critical devices - automotive control systems were mentioned in particular - have similar problems. The solution to the problem is clear: we need more free software in safety-critical applications so that we can all review the code and convince ourselves that we can trust our lives to it. And that, she said, is why she made the move to the GNOME Foundation. GNOME's work to make free software usable and attractive in current and future systems is, she said, an important part of getting free software adopted in places where we need it to be.

Another theme at LCA has always been represented by the maker contingent: whether it's Arduino, rockets, robots, or home automation, the people who make their own toys always turn out in force at LCA. Notable among a strong set of maker-oriented talks was "Rescuing Joe" by Andrew "Tridge" Tridgell. The challenge here is to make an autonomous aircraft that can search a defined area for a lost bushwalker ("hiker," in US dialect), drop a water bottle nearby (without hitting him), then return safely to its landing point. This challenge has been run for a few years, but nobody has yet fully achieved its goals; Tridge's team hopes to be the first to succeed.

Getting there requires the design of a complex system involving autonomous avionics, an independent failsafe mechanism that will crash the plane if it leaves the search area, computer vision systems to locate the hiker, mechanical systems to reliably drop the water bottle in the desired location, and high-bandwidth digital communications back to the launch base. The test systems currently run with a combination of Pandaboard and Arduino-based systems, but the limitations of the Arduino are becoming clear, so the avionics are likely to move to another Linux-based Pandaboard in the near future.

This project requires the writing of a lot of software, most of which is finding its way back upstream. The hardware requirements are also significant; Tridge noted that the team received a sophisticated phased-array antenna as a donation with a note reading "thanks for rsync." All told, "challenge" appears to not even begin to describe the difficulty of what this team has taken on. The whole talk, done in Tridge's classic informative and entertaining style, is well worth watching.

Rusty Russell and Matt Evans recently took a look at V6 Unix, as built for the PDP-11, and noted something obvious but interesting: it was a whole lot smaller than the systems we are running now. The cat binary on that system was all of 152 bytes - in an era when everything was statically linked - while cat on Ubuntu 11.10 weighs in at 47,696 bytes - and that is with dynamic linkage. We have seen similar growth in grep (2,190 bytes to 151,056) and ls (4,920 bytes to 105,776). So they asked: where is all this growth coming from, and what did we get for it?
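The growth factors implied by those figures are striking; a quick back-of-the-envelope calculation using the byte counts quoted above:

```python
# Byte counts quoted above: (V6 Unix, statically linked) vs.
# (Ubuntu 11.10, dynamically linked).
sizes = {
    "cat":  (152, 47_696),
    "grep": (2_190, 151_056),
    "ls":   (4_920, 105_776),
}

for name, (v6, ubuntu) in sizes.items():
    print(f"{name}: {ubuntu / v6:.0f}x larger")
# cat has grown roughly 314-fold, grep 69-fold, and ls 21-fold
```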

What followed was an interesting look into how Unix-like systems have changed over the years; the video is well worth watching. Their first observation was that contemporary binaries could be reduced in size by about 30% by using the GCC -Os option, which causes the compiler to optimize for size. In other words, we are paying a 30% size penalty in order to gain some speed; the actual speed benefit they measured was about 9%. But there is a lot more to it than that.

A simple program consisting of a single "return 42;" line will, when built statically on Ubuntu, weigh in at about 500,000 bytes. Rusty and Matt determined that this program, which makes no direct C library calls, was pulling in about 17% of glibc anyway. These days, even the simplest program must make provisions for dynamic loading, atexit() handling, proper setuid behavior, and more. So the program gets huge but, in this case, only about 2% of the pulled-in code actually gets run. In general, they found, most of the code dragged in by contemporary programs is simply wasted space. That waste can be reduced considerably by linking against dietlibc instead of glibc, though.

How much does 64-bit capability cost? An amusing exercise in porting the V6 code to the imaginary 64-bit "PDP-44" architecture increased its size by about 50%; the size difference between 32-bit and 64-bit Ubuntu programs is rather smaller, at about 9%. Use of "modern infrastructure" (that, for example, forces malloc() to be used instead of sbrk() in all programs) bloats things by about 120%. The large growth in features (ls has 60 options) leads to a massive 440% increase in size; they also measured a 20% time overhead caused by rarely-used features in ls. It's worth noting that half of that time cost goes away when running with LANG=C, leading to the conclusion that locales and other flexibility built into contemporary systems have a large cost. In the end, though, these appear to be costs that we are willing to pay.

David Rowe gave a fascinating talk on the development of Codec2, a speech-oriented codec that is able to produce surprisingly good voice quality at data rates as low as 1,400 bits/second. To understand the math involved, one should watch the video. But even without following that aspect of things, the talk is an interesting discussion of the open development of a patent-free codec with interesting real-world applications - sufficiently interesting that it risked being classified as a munition and treated like cryptographic code.

In summary, LCA remains unique in its combination of strongly technical talks, freedom-oriented and hands-on orientation, wide variety of topics covered, and infectious Australian humor. There is a reason some of us seem to end up there every year despite the painful air-travel experiences required. Linux Australia has put together a structure that allows the conference to be handed off to a new team in a new city every year, bringing a fresh view while upholding the standards set in previous years. The LCA 2012 team upheld those standards in a seemingly effortless manner - a job well done indeed. They have set a high bar for the 2013 team (Canberra, January 28 to February 2) to live up to.

[ Conference videos can be found on YouTube, in Ogg format, and in WebM. Your editor would like to thank the LCA 2012 organizers for assisting with his travel to the event. ]

Comments (16 posted)

Robots rampage (in a friendly way) at SCALE 10X

January 25, 2012

This article was contributed by Nathan Willis

"World domination" is a less prevalent theme in Linux and open source discussions these days than it was some time ago, but it still comes up regularly in one field of study: robots. At the 2012 Southern California Linux Expo (SCALE) in Los Angeles, Willow Garage's Tully Foote described the Robot Operating System (ROS) project, an open source stack for state-of-the-art robotics. ROS is in use by industry and academic research projects, often on hardware that runs in the hundreds-of-thousands of dollars range, but it is capable of running on low end and homebrew robots, too. Naturally, the ultimate homebrew device automation option is the open source Arduino, which was also on display at SCALE thanks to Akkana Peck and her flying robot shark....

ROS's universal robots

In his talk, Foote illustrated the need for a common, open source platform like ROS by describing his own background in robotics. As a graduate student, he often lamented the fact that interesting developments described in research papers were nearly impossible to re-implement at a different institution because there was no baseline framework on which to build the new bits. As a result, every research group ended up re-building the same infrastructure in a separate silo, slowing down the pace of useful research. The same productivity hit was evident in the DARPA Grand Challenge; Foote and his teammates saw that virtually every successful entrant ran Linux, but they shared no code. The difference between the teams that completed the challenge and those that failed was "a couple of percent" more efficiency in key algorithms, he said — a tiny amount of time compared to the hours spent constructing and debugging the predictable, underlying base layer.

[Tully Foote]

ROS grew out of those concerns. The idea is to provide a reusable meta-framework that researchers can run on their own robot hardware, taking care of low-level services like hardware abstraction, device control (for sensors, actuators, motors, and other robo-building-blocks), and message-passing, and to offer reusable modules for common services like 2D and 3D object recognition and navigation. Developers can incorporate the libraries they need, then write their own code focused on the area of research interest.

Rather than simply design the system and post the code, however, the ROS team has taken an active interest in cultivating an active community of robotics researchers and code contributors. There is a thorough documentation site and a StackOverflow-style question-and-answer site, plus a bug tracker, IRC channel, and mailing lists. The code is licensed BSD-style, in order to encourage adoption by commercial research groups in addition to academics.

The plan seems to be successfully attracting developers; the wiki lists more than 400 "stacks," which are installable ROS variants tuned for a specific task, hardware configuration, or research project. Many are marked as being maintained by institutions other than Willow Garage. Willow Garage itself is a "long-term incubator," as Foote described it. The team does its own robotics research as well as developing ROS, in the hopes of making a breakthrough someday that will warrant spinning off a startup company.

Architecture and marathons

Structurally, ROS is a collection of processes that communicate via the ros_comm middleware layer. This communication framework is abstracted at the network level, with C++, Java, Python, and Lisp all equally supported for developing modules. There are several message-passing paradigms supported, including synchronous RPC-style services, asynchronous publish-subscribe, and simple data acquisition. Under the hood, ROS is designed to run on Ubuntu (and the official downloads for the current release, "Electric Emys," are packaged only for Ubuntu), although experimental packages are available for other distributions, as well as Mac OS X and Windows.

Robotics research is centered on compute-intensive subjects like computer vision, object detection, and decision making (rather than simply programming repetitive tasks like car assembly) - workloads that are rarely manageable by a single CPU. Thus, ROS supports grid-like designs it calls "compute graphs" in which multiple slave compute nodes can talk to each other in peer-to-peer style, as well as report back to a master node. Foote described his team's DARPA Grand Challenge vehicles, which in different generations used everything from six rack-mount servers to a trunk full of Mac Minis. ROS can also run on remote compute nodes that communicate wirelessly to a robot — Foote mentioned that an Android application is in development that will allow users to run ROS on their phones to control small, Roomba-like robots.
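ROS's actual middleware (rospy/roscpp on top of the ros_comm transport) is considerably more involved, but the publish-subscribe paradigm it supports can be sketched in a few lines of plain Python. This is not the real ROS API; the topic name and message contents below are invented for illustration:

```python
# Toy in-process publish-subscribe bus, illustrating the decoupled
# messaging pattern that ROS middleware provides between nodes.
# NOT the real rospy API - just the paradigm.
from collections import defaultdict
from typing import Callable

class Bus:
    """Maps topic names to lists of subscriber callbacks."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message) -> None:
        # Real middleware delivers over the network, possibly
        # asynchronously; here delivery is a synchronous call.
        for callback in self._subs[topic]:
            callback(message)

# Two "nodes": a sensor publisher and a navigation subscriber.
bus = Bus()
readings = []
bus.subscribe("/scan", readings.append)   # navigation node listens
bus.publish("/scan", {"range_m": 1.7})    # sensor node publishes
print(readings)  # [{'range_m': 1.7}]
```

Publisher and subscriber never reference each other directly - only the topic name - which is what lets ROS modules from different research groups be composed freely.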

A Roomba and an Android phone do not make for a cutting-edge proof of concept, however, so Willow Garage set out to develop a state-of-the-art research robot for working with ROS. The result is the PR2, a mobile (wheeled) robot with stereo vision, two manipulator arms with five pressure-sensing fingers on each, and an assortment of rangefinders and other sensors for interacting with the world. The PR2 is powered by sixteen i7 cores, and has a $400,000 price tag (although Foote said the company offers some level of discount for open source projects).

Foote played videos of several demonstration projects that Willow Garage undertook, which are available at the bottom of the PR2 page. They included training the PR2 to "run" a 26.2 mile marathon (a problem that seemed to focus largely on teaching the PR2 to notice when it was running low on battery power, find an outlet, and plug itself in to recharge), play pool, and fetch and open bottled beer (determining the brand by visually scanning and interpreting the labels it finds in the fridge).

Nearly half a million bucks is a hefty bill by most standards, but Foote said there are around twenty PR2 units in the field — and that together they result in the publication of more than 100 research papers every year. The ROS community as a whole has expanded the number of software packages available from the initial 200 to more than 3000. Luckily, ROS runs on a range of other robot hardware, including several inexpensive options like the LEGO NXT and Willow Garage's new offering, the TurtleBot. TurtleBot is based on an iRobot Create, augmented with an ASUS netbook, a Microsoft Kinect, and various other sensors. TurtleBot kits are available for between $400 and $1400 (US), depending on how many of the off-the-shelf components are included.

The DIY approach

[Akkana Peck]

Foote's talk and PR2 videos wowed the crowd (which strained the capacity of the session room), but so did Akkana Peck's live demonstration of Arduino development in the "Fun With Linux and Devices" talk. Peck described herself as a circuit-building novice, emphasizing that Arduino was a simple way to get into the world of programming hardware devices and robots even for those inexperienced with IDEs and soldering irons.

Peck began with an overview of the Arduino itself, including digital and analog I/O, programming via USB cable, and the various options for power (USB-supplied, battery, and AC adapter). She then explained the Arduino software environment and how to compile and upload code. "The first project is always making the LED blink," she said, "which is a lot more exciting than it sounds when you finally get yours to blink."

From the blinking LED, she gradually increased the complexity of the projects, including how to interface with devices that draw more power than an Arduino can safely handle, how to read from sensors, and how to write Python code that interfaces with the Arduino's I/O pins — thus allowing the user to monitor and control devices. Along the way, these examples included several live demonstrations, including linking the blinking-LED signal into a series of desk lamps and Christmas tree lights, creating a functioning oscilloscope, and an echolocation rangefinder.

But those projects were only the lead-up to the main event, Sharkduino: controlling an "Air Swimmers" flying shark from a Linux box. Air Swimmers are helium-filled balloon toys with motorized tails and adjustable ballasts, and can be flown using an infrared remote controller. They are also inexpensive, which makes them a tempting target for an Arduino automation project. In addition, the infrared controller is more limited in functionality than a typical radio-controlled airplane or helicopter, making it ripe for hacking. Peck said she considered attaching a small Arduino directly to the shark, but ultimately chose to connect to it via the remote in the interest of making flying-shark-run-amok a less likely outcome.

The project involved dissecting the infrared remote, getting help from the Arduino community on what sort of circuits to attach to the remote control's switches, and writing a Python application to run on the desktop. The end result allows the user to fly the shark by moving the mouse up, down, left, or right. It may not shoot lasers yet (to Dr. Evil's certain dismay), but "Bruce" (as the shark is evidently named) was still a hit with young and old alike in the audience.
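Peck's actual code is available on her site (see below); as a flavor of the host-side approach, here is a hypothetical sketch of the mouse-to-shark mapping. The command names, deadzone value, and overall protocol are invented for illustration and are not taken from the real Sharkduino application:

```python
def shark_command(dx: int, dy: int, deadzone: int = 5) -> str:
    """Map a mouse movement (dx, dy) to a flying-shark command.

    Hypothetical example: command names and the deadzone threshold
    are illustrative, not from Peck's Sharkduino code. In a real
    setup the chosen command would be written to the Arduino over
    the USB serial port, toggling circuits wired into the shark's
    infrared remote.
    """
    if abs(dx) >= abs(dy):          # dominant axis wins
        if dx > deadzone:
            return "TAIL_RIGHT"
        if dx < -deadzone:
            return "TAIL_LEFT"
    else:
        if dy < -deadzone:          # mouse up = negative dy
            return "CLIMB"
        if dy > deadzone:
            return "DIVE"
    return "IDLE"                   # small motions are ignored

print(shark_command(20, 3))    # TAIL_RIGHT
print(shark_command(0, -12))   # CLIMB
```

A deadzone like this keeps jittery mouse input from constantly twitching the tail motor - one plausible reason to put the smarts on the desktop rather than on the shark itself.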

Along the way, Peck included tips on where to find Arduino components, how to ask for help from the Arduino community, and how the platform compares to BeagleBoard, Raspberry Pi, and other rapid-prototyping products. All of the code demonstrated in the session (including the Sharkduino application) is available on Peck's personal web site, along with the session slides.

Penguin overlords not far behind

In some ways, ROS and Arduino could not be further apart. ROS is developed (although not exclusively) on expensive specialty hardware, while Arduino boards come as cheap as seven dollars (thanks to the open hardware plans). ROS is designed for complex, compute-intensive tasks, while Arduino is designed to be as simple as possible. But in the more fundamental sense, they serve the same goal: to provide a reliable, common platform on which others can experiment and innovate. Neither could accomplish that as effectively if they were proprietary, closed-source projects.

Willow Garage's TurtleBot promises to bring brainy robotics to the hobbyist market (while giving those hobbyists the same reliability and service that research institutions already enjoy), which should have an interesting effect on the ROS project and its community. And if the thought already occurred to you that a low-cost ROS robot outfitted with an Arduino-controlled sensor board sounds like an intriguing platform for exploration — why yes, there are several modules to do exactly that.

Comments (4 posted)

Page editor: Jonathan Corbet


Security processes and the X server flaw

January 25, 2012

This article was contributed by Michael Gilbert

A recent X server security flaw (CVE-2012-0064) was, by many measures, handled well by those involved: the issue discloser, the developers, and various distribution security response teams. In fact, the issue was fixed in less than a day by most distributions, which helps demonstrate the progress that the open source community has made in terms of security processes and practices.

On January 19th, Gu1 (a member of the Consortium of Pwners computer security war gaming group) published details of a flaw he happened to come across in the latest X server release. By pressing a particular combination of keys when sitting locally at any machine running X server 1.11 or greater (and a subset of release candidates), he found that he could terminate any application with a current screen grab (i.e. screensavers). This meant that he, or anyone else with knowledge of that particular "code", would be able to gain local access to machines for which they did not have appropriate credentials. Some readers may be tempted to jump to the conclusion that such a simple "code" is a sign of a maliciously placed back-door, but the actual explanation is far more mundane. This particular key combination simply happens to be a debugging feature — with known and documented security implications — that, by default, was appropriately disabled in the past.

Fortunately, 1.11 is currently so new that it hasn't yet shipped in most distributions. Of the most common GNU/Linux distributions, the only stable release affected was Fedora 16. Also affected were Debian testing and unstable, as well as Arch, all of which are either rolling or experimental releases. All Ubuntu, Red Hat (including CentOS, of course), and openSUSE releases were unaffected. So, first of all, there isn't much for most users to worry about with respect to this particular problem. However, the events leading up to and following publication of the flaw paint an interesting picture. In one sense, this flaw was handled well by the security teams of the affected distributions, but that doesn't mean there isn't room for improvement.

Note that a comprehensive discussion on the technical details of the flaw itself will not be included here. Peter Hutterer has already written an excellent blog entry on the matter, and readers are encouraged to visit his site for more information. Succinctly, the screen grab debugging key-press combinations have now been removed from the default XKB keymap configuration files. It is still possible to re-enable them, but that requires a determined user that presumably knows what they are doing.
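As for the re-enabling path: it goes through XKB options, and in current xkeyboard-config the relevant option appears to be named grab:break_actions. A determined user could, for example, set it server-wide with an xorg.conf.d snippet along these lines (treat the exact option name as an assumption to verify against the installed xkeyboard-config):

```
Section "InputClass"
        Identifier "keyboard with grab-break debugging"
        MatchIsKeyboard "on"
        Option "XkbOptions" "grab:break_actions"
EndSection
```

The point of the fix is precisely that this now requires an explicit, documented configuration step rather than being active by default.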

Timeline of the flaw

In the beginning (1984), X was written. At some point, developers recognized a need to be able to debug screen-grabbing applications, so they wrote some code to be able to break such grabs. A screen grab (in X speak) is simply a top-level overlay on the screen that prevents events (key and mouse presses) from reaching the windows underneath. The grab breaks were assigned to the Control+Alt+KeypadMultiply and Control+Alt+KeypadDivide key-press combinations. At the time, the X developers recognized the security implications and made the feature a non-default option. They even documented the problem to make it very clear to users.

Many years passed...

In 2008, there was a great purge of xf86misc (a code cleanup effort that removed various unused X code that had accumulated over many years), which, along with many other things, excised those particular debugging options in a pair of commits by Daniel Stone. Recently, Daniel has been working on multi-pointer X. In that process, he encountered quite a few situations where screen grab debugging would be helpful. So, he dusted off that code and pushed for its re-inclusion. In June of 2011, Peter Hutterer reviewed and applied said patch.

However, lost in translation/communication (and to the passage of time) was the fact that the code did indeed have security implications. That fact was not picked up on until around January 5th, on a day when Gu1 found himself rather bored. On that day, he decided to read some older documentation; in particular, he came across "AllowClosedownGrabs", which documented the Control-Alt-KeypadMultiply key combination. He tried it with the latest X server, expecting nothing, but to his surprise it worked. So, part of the problem was that the documentation warning about the security considerations of the code was not brought back along with the code itself. That documentation still doesn't appear to have returned; an important takeaway is that both code and documentation should be restored when a feature returns, and that the discussion in that documentation should be taken into consideration when doing so. One solution could be to remove documentation in the same commit as the code it describes; that way, if the commit is ever reverted, the documentation automatically comes back as well.

Not content with only finding the issue, Gu1 took the time to write a rather detailed blog entry, and published it two weeks later on January 19th. He even went so far as to research, bisect, and identify the commit introducing the problem. This is an example of a well-written disclosure. It made it possible for security teams to take rapid action to close the issue. In an email interview, Gu1 stated that his motivation to do this was not entirely selfless: he was also interested in obtaining a discount to the Hackito Ergo Sum 2012 conference, which is provided to those attendees who have disclosed CVE issues. It may be interesting to think more about providing these kinds of simple incentives in the future, to reduce the number of issues that are currently sat on by those without motivation to disclose.

Note that one could argue that Gu1's decision to fully disclose the issue with no advance notice to those involved was less than ideal. The delayed disclosure (often framed as "responsible disclosure") camp believes that vendors need some time to be able to do appropriate analysis and testing of fixes, and thus disclosers should give those vendors some time (though how much time is often a question). This issue demonstrates a case where that preparation time didn't matter. The issue was fully disclosed and hours later security teams had the problem solved. That is because Gu1's research was comprehensive enough to be able to isolate and fix the problem right away. This kind of detailed analysis should be sought as the norm. Whether that analysis is shared with the vendor or project before being made public typically depends on which camp (full or responsible disclosure) the researcher is in.

In terms of affected releases, X server 1.11 was originally shipped in June 2011. Shortly thereafter, distribution development branches started picking it up: Debian unstable got it in August, Debian testing in September, and the Fedora 16 stable release in November. A final timeline of the issue demonstrates how impressively quickly the issue was resolved after disclosure by those distributions affected by it:

Date/Time (UTC)     Event
01/05/2012          Gu1 discovers issue
01/19/2012 00:03    Gu1 discloses issue on blog and oss-security
01/19/2012 05:49    workaround posted
01/19/2012 10:19    fixed in Debian unstable
01/19/2012 22:01    fixed in Fedora 16
01/19/2012 23:48    upstream fixed (actually in XKB)
01/22/2012 16:39    fixed in Debian testing (delay due to testing's 2-day minimum migration policy)

For the set of distributions actually affected by this issue, their security teams reacted with admirable speed. The table below lists the time it took to release a fix after Gu1's disclosure. Note that the "underground potential" entry is the length of time that the underground side of the computer security community may have been able to exploit the problem. That said, there is no way of ever knowing if or when it was actually discovered before the disclosure. We do know at least that Gu1 knew about the issue two weeks prior to publishing it.

Distribution      Vulnerability window   Underground potential
Debian unstable   ~10 hours              ~5 months
Fedora 16         ~22 hours              ~2 months
upstream (XKB)    ~23 hours              ~6 months
Debian testing    ~64 hours              ~4 months
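The vulnerability-window column is simple date arithmetic on the timeline above; as a sketch, the same-day windows can be recomputed with a few lines of Python (Debian testing is omitted here, since its fix was gated by the two-day migration policy):

```python
from datetime import datetime

# Disclosure and fix times (UTC), taken from the timeline above.
disclosure = datetime(2012, 1, 19, 0, 3)
fixes = {
    "Debian unstable": datetime(2012, 1, 19, 10, 19),
    "Fedora 16":       datetime(2012, 1, 19, 22, 1),
    "upstream (XKB)":  datetime(2012, 1, 19, 23, 48),
}

def window_hours(fixed):
    """Vulnerability window: time from public disclosure to the fix."""
    return (fixed - disclosure).total_seconds() / 3600

for name, fixed in fixes.items():
    print(f"{name}: ~{window_hours(fixed):.1f} hours")
```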


This particular case raises some questions about the prevailing wisdom that it's always best to be running the latest and greatest software releases. Each new release involves code modifications with varying levels of risk and, interestingly, in this case users were safer if they chose slower-moving releases. As seen above, the fast-moving Debian unstable release had a roughly five-month potential for underground abuse, whereas Debian testing, which moves a bit slower, had a smaller four-month potential. Fedora 16 was caught by this; Ubuntu wasn't, since it played it a bit safer and stuck with 1.10 for the 11.10 release. Distributions have to make their choices about which new releases to include based on their interest in delivering "bleeding edge" packages to their users. Sometimes that means that undiscovered security bugs come along for the ride.

By all measures, Daniel and Peter have an extensive background working on X. Daniel has been working on various aspects of X (including DRM/KMS drivers, GStreamer, and kernel input drivers) for 9 years, and Peter for 6 years as well (he is the input subsystem maintainer and has worked on libXi). Even with this extensive experience, X is such a complex system that there is always the potential for mistakes. We're all human after all. Daniel had this to say:

Oh, at this stage I don't think we can say with a straight face that we're able to create perfectly resilient and secure systems. The best we can do is admit that failures will occur, try to pre-emptively limit the damage they can do before they're found, and then make sure our procedures for dealing with problems as they're found are best-of-class. Even if all your components are extensively documented, noting their various restrictions, requirements and limitations, as well as being extensively tested, the reality is that people are human so either your implementation will be subtly broken in ways you don't expect, or one of your users will just use it wrong. Saying that we have perfect security is just hubris.

I've got a lot of time for the school of thought that argues that as complex systems are inherently less secure than simple ones, the best thing to do is to build less complex software. Understanding the flow of events between X and its myriad clients, and the effects even a simple change will have, is really not an easy thing to do. I find the setuid vs. capabilities issue that's been cropping up recently a pretty entertaining example of the law of unintended consequences.

One could argue that Wayland is the simplification needed to eliminate the complexities of X, and it's good that most distributions are now on a long-term path toward that goal. But even so, Wayland is not necessarily going to be the magic bullet that some have claimed. It too will have its share of complexity, and there is always the possibility of writing flaws into the new code, which will only be discovered given time, interest, and motivation. Computer security is always a matter of vigilance.

[ The author would like to thank Daniel Stone, Peter Hutterer, and Gu1 for taking the time to answer interview questions for this article. ]

Comments (11 posted)

Brief items

Security quotes of the week

Sure, ASLR helps, but I want a basic browser capable of running Javascript securely in a thread-safe jail without crashing on double frees, running out of memory, and selling more cookies than the Girl Scouts, that somehow manages to maintain more hidden access logs than a Swiss bank on MY personal computer, regardless of the privacy settings I choose.
-- John Doe (Thanks to Daniel Dickman.)

DARPA is funding research into new forms of biometrics that authenticate people as they use their computer: things like keystroke patterns, eye movements, mouse behavior, reading speed, and surfing and e-mail response behavior. The idea -- and I think this is a good one -- is that the computer can continuously authenticate people, and not just authenticate them once when they first start using their computers.
-- Bruce Schneier

One attack I hadn't seen before was to try a large number of usernames, and parts of the hostname as password. For a hostname of the style MACHINE.DEPARTMENT.DOMAIN, the attack tried DOMAIN, DOMAIN.DEPARTMENT, MACHINE, then MACHINE.DOMAIN. This clearly isn't a dictionary but a bit of custom code which did a reverse DNS lookup on this host then generated some possible passwords. Using the hostname as a password for a host isn't a good idea, but I can imagine some sysadmins doing so. The fact that some attackers are taking this approach might merit some explicit statement in password selection guidance.
-- Steven J. Murdoch continues his SSH brute force research
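The hostname-derived guessing Murdoch describes is simple to model. A minimal sketch (the function name and the MACHINE.DEPARTMENT.DOMAIN label layout are assumptions for illustration, not Murdoch's code) of the candidate generation:

```python
def hostname_passwords(fqdn):
    """Generate password guesses from a hostname of the assumed form
    machine.department.domain, in the order Murdoch observed:
    DOMAIN, DOMAIN.DEPARTMENT, MACHINE, then MACHINE.DOMAIN."""
    machine, department, domain = fqdn.split(".")[:3]
    return [domain, f"{domain}.{department}", machine, f"{machine}.{domain}"]

# An attacker would first obtain the name via reverse DNS, e.g.:
#   import socket; fqdn = socket.gethostbyaddr(ip)[0]
print(hostname_passwords("www.sales.example"))
# ['example', 'example.sales', 'www', 'www.example']
```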

Comments (none posted)

X.org screensaver bypass found

A debugging feature introduced in X server 1.11 can be used by someone with physical access to the system to bypass the screensaver. It was first reported by "Gu1" on their blog and on the oss-security mailing list. The key sequence Ctrl-Alt-KeypadMultiply will bypass any screensaver. A workaround has been posted, but one would expect an update from upstream before long.

Comments (30 posted)

Linux Local Privilege Escalation via SUID /proc/pid/mem Write (zx2c4)

The "zx2c4" weblog has a detailed writeup of a local root vulnerability in /proc introduced in 2.6.39 and just fixed on January 17. "In 2.6.39, the protections against unauthorized access to /proc/pid/mem were deemed sufficient, and so the prior #ifdef that prevented write support for writing to arbitrary process memory was removed. Anyone with the correct permissions could write to process memory. It turns out, of course, that the permissions checking was done poorly. This means that all Linux kernels >=2.6.39 are vulnerable, up until the fix commit for it a couple days ago. Let’s take the old kernel code step by step and learn what’s the matter with it." As of this writing, distributors do not yet appear to have begun shipping updates for this vulnerability.

Comments (107 posted)

New vulnerabilities

bip: code execution

Package(s):bip CVE #(s):CVE-2012-0806
Created:January 25, 2012 Updated:April 9, 2013
Description: The bip IRC proxy contains a buffer overflow that may be exploitable for code execution by a remote attacker.
Mandriva MDVSA-2013:063 bip 2013-04-08
Mageia MGASA-2012-0265 bip 2012-09-13
Fedora FEDORA-2012-0916 bip 2012-02-04
Fedora FEDORA-2012-0941 bip 2012-02-04
Gentoo 201201-18 bip 2012-01-30
Debian DSA-2393-1 bip 2012-01-25

Comments (none posted)

bugzilla: multiple vulnerabilities

Package(s):bugzilla CVE #(s):CVE-2011-3657 CVE-2011-3667 CVE-2011-3668 CVE-2011-3669
Created:January 19, 2012 Updated:January 25, 2012

From the Red Hat bugzilla entry:

CVE-2011-3657: Tabular and graphical reports, as well as new charts have a debug mode which displays raw data as plain text. This text is not correctly escaped and a crafted URL could use this vulnerability to inject code leading to XSS.

CVE-2011-3667: The User.offer_account_by_email WebService method ignores the user_can_create_account setting of the authentication method and generates an email with a token in it which the user can use to create an account. Depending on the authentication method being active, this could allow the user to log in using this account. Installations where the createemailregexp parameter is empty are not vulnerable to this issue.

CVE-2011-3668, CVE-2011-3669: The creation of bug reports and of attachments is not protected by a token and so they can be created without the consent of a user if the relevant code is embedded in an HTML page and the user visits this page. This behavior was intentional to let third-party applications submit new bug reports and attachments easily. But as this behavior can be abused by a malicious user, it has been decided to block submissions with no valid token starting from version 4.2rc1.

Fedora FEDORA-2012-0301 bugzilla 2012-01-19
Fedora FEDORA-2012-0328 bugzilla 2012-01-19

Comments (none posted)

dhcp: denial of service

Package(s):dhcp CVE #(s):CVE-2011-4868
Created:January 23, 2012 Updated:January 25, 2012
Description: From the CVE entry:

The logging functionality in dhcpd in ISC DHCP before 4.2.3-P2, when using Dynamic DNS (DDNS) and issuing IPv6 addresses, does not properly handle the DHCPv6 lease structure, which allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via crafted packets related to a lease-status update.

Gentoo 201301-06 dhcp 2013-01-09
Slackware SSA:2012-237-01 dhcp 2012-08-24
Fedora FEDORA-2012-0490 dhcp 2012-01-22

Comments (none posted)

emacs: privilege escalation

Package(s):emacs CVE #(s):CVE-2012-0035
Created:January 24, 2012 Updated:January 27, 2014
Description: From the CVE entry:

Untrusted search path vulnerability in EDE in CEDET before 1.0.1, as used in GNU Emacs before 23.4 and other products, allows local users to gain privileges via a crafted Lisp expression in a Project.ede file in the directory, or a parent directory, of an opened file.

Gentoo 201403-05 emacs 2014-03-20
Gentoo 201401-31 cedet 2014-01-27
Mandriva MDVSA-2013:076 emacs 2013-04-08
Ubuntu USN-1586-1 emacs23 2012-09-27
Mageia MGASA-2012-0261 emacs 2012-09-09
Fedora FEDORA-2012-0462 emacs 2012-01-23
Fedora FEDORA-2012-0494 emacs 2012-01-23

Comments (none posted)

glibc: denial of service

Package(s):glibc CVE #(s):CVE-2011-4609
Created:January 25, 2012 Updated:January 25, 2012
Description: The glibc remote procedure call implementation allows remote attackers to open large numbers of connections, causing the target application to use excessive amounts of CPU time.
Ubuntu USN-1396-1 eglibc, glibc 2012-03-09
Scientific Linux SL-glib-20120214 glibc 2012-02-14
Scientific Linux SL-glib-20120214 glibc 2012-02-14
Oracle ELSA-2012-0126 glibc 2012-02-14
Oracle ELSA-2012-0125 glibc 2012-02-14
CentOS CESA-2012:0126 glibc 2012-02-14
CentOS CESA-2012:0125 glibc 2012-02-14
Red Hat RHSA-2012:0125-01 glibc 2012-02-13
Red Hat RHSA-2012:0126-01 glibc 2012-02-13
CentOS CESA-2012:0058 glibc 2012-01-30
Scientific Linux SL-glib-20120125 glibc 2012-01-25
Red Hat RHSA-2012:0058-01 glibc 2012-01-24

Comments (none posted)

kernel: denial of service

Package(s):linux CVE #(s):CVE-2012-0044
Created:January 24, 2012 Updated:February 7, 2012
Description: From the Ubuntu advisory:

Chen Haogang discovered an integer overflow that could result in memory corruption. A local unprivileged user could use this to crash the system.

openSUSE openSUSE-SU-2012:1439-1 kernel 2012-11-05
Ubuntu USN-1556-1 linux-ec2 2012-09-06
Ubuntu USN-1555-1 linux 2012-09-05
openSUSE openSUSE-SU-2012:0799-1 kernel 2012-06-28
Red Hat RHSA-2012:1042-01 kernel 2012-06-26
Oracle ELSA-2012-0743 kernel 2012-06-21
Ubuntu USN-1394-1 Linux kernel (OMAP4) 2012-03-07
Ubuntu USN-1387-1 linux-lts-backport-maverick 2012-03-06
Ubuntu USN-1386-1 linux-lts-backport-natty 2012-03-06
Red Hat RHSA-2012:0333-01 kernel-rt 2012-02-23
Ubuntu USN-1362-1 linux 2012-02-13
Ubuntu USN-1361-1 linux 2012-02-13
Scientific Linux SL-kern-20120619 kernel 2012-06-19
CentOS CESA-2012:0743 kernel 2012-06-19
Red Hat RHSA-2012:0743-01 kernel 2012-06-18
Ubuntu USN-1356-1 linux-ti-omap4 2012-02-07
Ubuntu USN-1340-1 linux-lts-backport-oneiric 2012-01-23

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2012-0056
Created:January 23, 2012 Updated:January 30, 2012
Description: Jüri Aedla discovered that the kernel incorrectly handled /proc/<pid>/mem permissions. A local attacker could exploit this and gain root privileges.

See the "zx2c4" weblog and this LWN article for additional details.

Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Oracle ELSA-2012-0862 kernel 2012-07-02
Ubuntu USN-1364-1 linux-ti-omap4 2012-02-13
Ubuntu USN-1342-1 linux-lts-backport-oneiric 2012-01-25
Scientific Linux SL-kern-20120125 kernel 2012-01-25
Oracle ELSA-2012-2001 kernel-uek 2012-01-25
Oracle ELSA-2012-2001 kernel-uek 2012-01-25
Oracle ELSA-2012-0052 kernel 2012-01-25
Red Hat RHSA-2012:0061-01 kernel-rt 2012-01-24
Fedora FEDORA-2012-0861 kernel 2012-01-24
CentOS CESA-2012:0052 kernel 2012-01-24
Fedora FEDORA-2012-0876 kernel 2012-01-24
Red Hat RHSA-2012:0052-01 kernel 2012-01-23
Ubuntu USN-1336-1 linux 2012-01-23

Comments (6 posted)

krb5: denial of service

Package(s):mit-krb5 CVE #(s):CVE-2011-0283 CVE-2011-4151
Created:January 24, 2012 Updated:January 25, 2012
Description: From the CVE entries:

The Key Distribution Center (KDC) in MIT Kerberos 5 (aka krb5) 1.9 allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via a malformed request packet that does not trigger a response packet. (CVE-2011-0283)

The krb5_db2_lockout_audit function in the Key Distribution Center (KDC) in MIT Kerberos 5 (aka krb5) 1.8 through 1.8.4, when the db2 (aka Berkeley DB) back end is used, allows remote attackers to cause a denial of service (assertion failure and daemon exit) via unspecified vectors, a different vulnerability than CVE-2011-1528. (CVE-2011-4151)

Gentoo 201201-13 mit-krb5 2012-01-23

Comments (none posted)

logsurfer: arbitrary code execution

Package(s):logsurfer CVE #(s):CVE-2011-3626
Created:January 23, 2012 Updated:January 25, 2012
Description: From the Gentoo advisory:

Logsurfer log files may contain substrings used for executing external commands. The prepare_exec() function in src/exec.c contains a double-free vulnerability.

A remote attacker could inject specially-crafted strings into a log file processed by Logsurfer, resulting in the execution of arbitrary code with the permissions of the Logsurfer user.

Gentoo 201201-04 logsurfer 2012-01-20

Comments (none posted)

nxserver-freeedition: privilege escalation

Package(s):nxserver-freeedition CVE #(s):CVE-2011-3977
Created:January 23, 2012 Updated:January 25, 2012
Description: From the Gentoo advisory:

NX Server Free Edition and NX Node use a setuid script containing an unspecified vulnerability.

A local attacker could gain escalated privileges.

Gentoo 201201-07 nxserver-freeedition 2012-01-23

Comments (none posted)

openssl: denial of service

Package(s):openssl CVE #(s):CVE-2012-0050
Created:January 23, 2012 Updated:February 17, 2012
Description: From the CVE entry:

OpenSSL 0.9.8s and 1.0.0f does not properly support DTLS applications, which allows remote attackers to cause a denial of service via unspecified vectors. NOTE: this vulnerability exists because of an incorrect fix for CVE-2011-4108.

openSUSE openSUSE-SU-2013:0336-1 openssl 2013-02-25
SUSE SUSE-SU-2012:0674-1 openssl 2012-05-30
Gentoo 201203-12 openssl 2012-03-05
openSUSE openSUSE-SU-2012:0266-1 openssl 2012-02-17
Ubuntu USN-1357-1 openssl 2012-02-09
Mandriva MDVSA-2012:011 openssl 2012-01-29
Oracle ELSA-2012-0059 openssl 2012-01-25
Oracle ELSA-2012-0060 openssl 2012-01-25
Fedora FEDORA-2012-0702 openssl 2012-01-24
Debian DSA-2392-1 openssl 2012-01-23
Fedora FEDORA-2012-0708 openssl 2012-01-22

Comments (none posted)

phpmyadmin: cross-site scripting

Package(s):phpmyadmin CVE #(s):CVE-2011-1940
Created:January 23, 2012 Updated:January 25, 2012
Description: From the Debian advisory:

Cross site scripting was possible in the table tracking feature, allowing a remote attacker to inject arbitrary web script or HTML.

Debian DSA-2391-1 phpmyadmin 2012-01-22

Comments (none posted)

qemu-kvm: code execution

Package(s):qemu-kvm CVE #(s):CVE-2012-0029
Created:January 24, 2012 Updated:August 20, 2012
Description: From the Ubuntu advisory:

Nicolae Mogoreanu discovered that QEMU did not properly verify legacy mode packets in the e1000 network driver. A remote attacker could exploit this to cause a denial of service or possibly execute code with the privileges of the user invoking the program.

Gentoo 201210-04 qemu-kvm 2012-10-18
SUSE SUSE-SU-2012:1320-1 qemu 2012-10-09
Mageia MGASA-2012-0222 qemu 2012-08-18
Fedora FEDORA-2012-8592 qemu 2012-06-07
Fedora FEDORA-2012-8604 qemu 2012-06-07
openSUSE openSUSE-SU-2012:0548-1 xen 2012-04-23
Scientific Linux SL-xen-20120321 xen 2012-03-21
openSUSE openSUSE-SU-2012:0347-1 Xen 2012-03-09
Oracle ELSA-2012-0370 xen 2012-03-08
Oracle ELSA-2012-0149 kvm 2012-03-07
Red Hat RHSA-2012:0370-01 xen 2012-03-07
Fedora FEDORA-2012-1539 xen 2012-02-19
Fedora FEDORA-2012-1375 xen 2012-02-19
openSUSE openSUSE-SU-2012:0267-1 qemu-kvm 2012-02-17
openSUSE openSUSE-SU-2012:0207-1 kvm 2012-02-09
Debian DSA-2404-1 xen-qemu-dm-4.0 2012-02-05
Debian DSA-2396-1 qemu-kvm 2012-01-27
Scientific Linux SL-qemu-20120125 qemu-kvm 2012-01-25
CentOS CESA-2012:0051 kvm 2012-01-24
Scientific Linux SL-kvm-20120124 kvm 2012-01-24
CentOS CESA-2012:0050 qemu-kvm 2012-01-24
Oracle ELSA-2012-0051 kvm 2012-01-23
Oracle ELSA-2012-0050 qemu-kvm 2012-01-23
Red Hat RHSA-2012:0051-01 kvm 2012-01-23
Red Hat RHSA-2012:0050-01 qemu-kvm 2012-01-23
Ubuntu USN-1339-1 qemu-kvm 2012-01-23

Comments (none posted)

rsyslog: denial of service

Package(s):rsyslog CVE #(s):CVE-2011-4623
Created:January 24, 2012 Updated:July 10, 2012
Description: From the Ubuntu advisory:

Peter Eisentraut discovered that Rsyslog would not properly perform input validation when configured to use imfile. If an attacker were able to craft messages in a file that Rsyslog monitored, an attacker could cause a denial of service. The imfile module is disabled by default in Ubuntu.

Gentoo 201412-35 rsyslog 2014-12-24
CentOS CESA-2012:0796 rsyslog 2012-07-10
Scientific Linux SL-rsys-20120709 rsyslog 2012-07-09
Oracle ELSA-2012-0796 rsyslog 2012-07-02
Mandriva MDVSA-2012:100 rsyslog 2012-06-25
Red Hat RHSA-2012:0796-04 rsyslog 2012-06-20
Ubuntu USN-1338-1 rsyslog 2012-01-23

Comments (none posted)

tomcat: denial of service via hash collision

Package(s):tomcat CVE #(s):CVE-2011-4858
Created:January 19, 2012 Updated:February 2, 2012

From the Novell CVE entry:

Apache Tomcat before 5.5.35, 6.x before 6.0.35, and 7.x before 7.0.23 computes hash values for form parameters without restricting the ability to trigger hash collisions predictably, which allows remote attackers to cause a denial of service (CPU consumption) by sending many crafted parameters.
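The underlying problem is that parameter names were hashed with a predictable function, so an attacker can precompute many distinct names that land in the same hash bucket, degrading lookups to linear scans. Java's String.hashCode() makes this easy to see; a small sketch (reimplemented in Python for illustration):

```python
def java_string_hash(s):
    """Java's String.hashCode(): h = 31*h + c, kept to 32 bits."""
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return h

# "Aa" and "BB" hash identically, so concatenations of them yield
# 2**n colliding strings for the cost of almost no computation.
assert java_string_hash("Aa") == java_string_hash("BB")
colliding = [a + b for a in ("Aa", "BB") for b in ("Aa", "BB")]
assert len({java_string_hash(s) for s in colliding}) == 1
print(len(colliding), "colliding parameter names")
```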

Mageia MGASA-2012-0189 tomcat6 2012-08-02
Gentoo 201206-24 tomcat 2012-06-24
Oracle ELSA-2012-0474 tomcat5 2012-04-12
Scientific Linux SL-tomc-20120411 tomcat6 2012-04-11
Scientific Linux SL-tomc-20120411 tomcat5 2012-04-11
CentOS CESA-2012:0475 tomcat6 2012-04-11
CentOS CESA-2012:0474 tomcat5 2012-04-11
Red Hat RHSA-2012:0475-01 tomcat6 2012-04-11
Red Hat RHSA-2012:0474-01 tomcat5 2012-04-11
Ubuntu USN-1359-1 tomcat6 2012-02-13
Debian DSA-2401-1 tomcat6 2012-02-02
openSUSE openSUSE-SU-2012:0103-1 tomcat 2012-01-19

Comments (none posted)

torque: impersonation vulnerability

Package(s):torque CVE #(s):
Created:January 23, 2012 Updated:January 25, 2012
Description: Torque allows one user to impersonate another within a batch system. Fixed in version 3.0.3.
Fedora FEDORA-2012-0372 torque 2012-01-21

Comments (none posted)

wireshark: multiple vulnerabilities

Package(s):wireshark CVE #(s):CVE-2012-0041 CVE-2012-0042 CVE-2012-0043
Created:January 23, 2012 Updated:January 27, 2012
Description: From the Red Hat bugzilla [1], [2], [3]:

Laurent Butti discovered that Wireshark failed to properly check record sizes for many packet capture file formats. It may be possible to make Wireshark crash by convincing someone to read a malformed packet trace file. This is corrected in upstream 1.4.11 and 1.6.5.

Wireshark was improperly handling NULL pointers when displaying packet information which could lead to a crash. It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. This is corrected in upstream 1.4.11 and 1.6.5.

The RLC dissector could overflow a buffer. It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. This is corrected in upstream 1.4.11 and 1.6.5.

Oracle ELSA-2013-1569 wireshark 2013-11-26
Gentoo GLSA 201308-05:02 wireshark 2013-08-30
Gentoo 201308-05 wireshark 2013-08-28
Oracle ELSA-2013-0125 wireshark 2013-01-12
Scientific Linux SL-wire-20130116 wireshark 2013-01-16
CentOS CESA-2012:0509 wireshark 2012-04-24
Oracle ELSA-2012-0509 wireshark 2012-04-23
Scientific Linux SL-wire-20120423 wireshark 2012-04-23
Red Hat RHSA-2012:0509-01 wireshark 2012-04-23
openSUSE openSUSE-SU-2012:0295-1 wireshark 2012-02-23
Debian DSA-2395-1 wireshark 2012-01-27
Fedora FEDORA-2012-0440 wireshark 2012-01-24
Fedora FEDORA-2012-0435 wireshark 2012-01-22

Comments (none posted)

xkeyboard-config: screensaver lock bypass

Package(s):xkeyboard-config CVE #(s):CVE-2012-0064
Created:January 20, 2012 Updated:January 30, 2012
Description: From the Red Hat bugzilla:

It was found that XKB actions for debugging clients were enabled by default. This could cause a screen locking application such as gnome-screensaver to be killed when those key combinations were triggered.

Gentoo 201201-16 xkeyboard-config 2012-01-27
Fedora FEDORA-2012-0709 xkeyboard-config 2012-01-24
Fedora FEDORA-2012-0712 xkeyboard-config 2012-01-19

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.3-rc1, released on January 19; the 3.3 merge window is now closed. "Anyway, it's out now, and I'm taking off early for a weekend of beer, skiing and poker (not necessarily in that order: 'don't drink and ski'). No email." See our merge window summaries (part 1, part 2) for details on the features merged for the 3.3 release.

Stable updates: The 3.0.18 and 3.2.2 stable updates were released on January 25.

Comments (none posted)

Quotes of the week

This is digressing a bit, but the binary nvidia driver is the best way that I see that we can support our users with a feature set compatible to that available to other operating systems. For technical reasons, we've chosen to leverage a lot of common code written internally, which allows us to release support for new hardware and software features much more quickly than if those of us working on the Linux/FreeBSD/Solaris drivers wrote it all from scratch. This means that we share a lot with other NVIDIA drivers, but we for better or worse can't share much infrastructure like DRI.
-- Robert Morell

For a Linux kernel containing any code I own the code is under the GNU public license v2 (in some cases or later), I have never given permission for that code to be used as part of a combined or derivative work which contains binary chunks. I have never said that modules are somehow magically outside the GPL and I am doubtful that in most cases a work containing binary modules for a Linux kernel is compatible with the licensing, although I accept there may be some cases that it is.
-- Alan Cox

Comments (none posted)

Kernel development news

A /proc/PID/mem vulnerability

By Jake Edge
January 25, 2012

A privilege escalation in the kernel is always a serious threat, one that leads kernel hackers and distributions to scramble to close the hole quickly. That's exactly what happened after a January 17 report from Jüri Aedla to the closed kernel security mailing list. Most people didn't learn of the hole from Aedla (since he posted to a closed list), though, but instead from Jason Donenfeld (aka zx2c4), who posted a detailed look at the flaw on January 22. The fix was made by Linus Torvalds and went into the mainline on January 17, though with a commit message that obfuscated the security implications, something that didn't sit well with some.

The problem and exploit

The problem itself stems from the removal of the restriction on writes to /proc/PID/mem that was merged for the 2.6.39 kernel. It was part of a patch set that was specifically targeted at allowing debuggers to write to the memory of processes easily via the /proc/PID/mem file. Unfortunately, it left open a hole that Aedla and Donenfeld (at least) were able to exploit.

The posting by Donenfeld is worth a read for those interested in how exploits of this sort are created. The problem starts with the fact that the open() call for /proc/PID/mem does no additional checking beyond the normal VFS permissions before returning a file descriptor. That will prove to be a mistake, and one that Torvalds's fix remedies. Instead of checking at open() time, the code checked in write(), and only allowed writing if the process being written to was the same as the process doing the writing (i.e. task == current).

That restriction seems like it would make an exploit difficult, but it can be avoided by calling exec() and coercing the newly run program into doing the writing itself. That is dangerous if the newly run program is a setuid executable, for example. But there is another test that is meant to block that particular path: checking that current->self_exec_id has the same value as it did at open() time. self_exec_id is incremented every time a process does an exec(), so it will definitely be different after executing the setuid binary. But, since it is simply incremented, one can arrange (via fork()) to have a child process with the same self_exec_id as the main process after the setuid exec() is done.

The child with the "correct" self_exec_id value (which it gets by doing an exec()) can then open the parent's /proc/PID/mem file (since there are no extra checks on the open()) and pass the descriptor back to the parent via Unix sockets. The parent then needs to arrange that the setuid executable writes to that file descriptor once a seek() to the proper address has been done. Finding that proper address and getting the binary to write to the fd are the final pieces of the puzzle.
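To see why the fork() trick defeats the self_exec_id comparison, it helps to model the counter: every exec() bumps it by one, and the kernel only compares the value recorded at open() time with the writer's value at write() time. A toy Python simulation of the logic described above (the class and helper names are invented for illustration; this is not the kernel code):

```python
class Process:
    """Toy model of the kernel's per-process self_exec_id counter."""
    def __init__(self, exec_id=0):
        self.self_exec_id = exec_id

    def do_exec(self):
        self.self_exec_id += 1       # bumped on every exec()

def mem_open(opener):
    # The (vulnerable) open() did no extra checks; it just remembered
    # the opener's self_exec_id for later comparison.
    return {"recorded_id": opener.self_exec_id}

def mem_write_allowed(fd, writer, target):
    # The write()-time checks: a process may only write to itself, and
    # its self_exec_id must match the value recorded at open() time.
    return writer is target and writer.self_exec_id == fd["recorded_id"]

parent = Process()                    # the exploit binary: id == 0
child = Process(parent.self_exec_id)  # fork(): child inherits id == 0
child.do_exec()                       # child re-execs itself: id == 1
fd = mem_open(child)                  # child opens the parent's mem file,
                                      # recording id == 1, and passes it back
parent.do_exec()                      # parent execs the setuid binary: id == 1
# The setuid program (running in the parent's task) writes: both checks pass.
print(mem_write_allowed(fd, parent, parent))   # True
```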

Donenfeld's example uses su because it is not compiled as a position-independent executable (PIE) for most distributions, which makes it easier to figure out which address to use. He exploits the fact that su prints an error message when it is passed an unknown username and the error message helpfully prints the username passed. That allows the exploit to pass shellcode (i.e. binary machine language that spawns a shell when executed) as the argument to su.

After printing the error message, su calls the exit() function (really exit@plt), which is what Donenfeld's exploit overwrites. It finds the address of the function using objdump, subtracts the length of the error message that gets printed before the argument, and seeks the file to that location. It uses dup2() to connect stderr to the /proc/PID/mem file descriptor and execs su "shellcode".

In pseudocode, it might look something like this:

    if (!child && fork()) {  /* child flag set based on -c */
        /* first invocation: this is the parent; wait for the fd from the child */
        fd = recv_fd();              /* get the fd from the child */
        dup2(2, 15);                 /* duplicate the original stderr as fd 15 */
        dup2(fd, 2);                 /* make fd be stderr */
        lseek(fd, offset);           /* seek to the overwrite location */
        exec("/bin/su", shellcode);  /* su will have self_exec_id == 1 */
    } else if (!child) {
        /* this is the child from the fork(); re-exec with the child flag */
        exec("thisprog", "-c");      /* this program with -c (child) */
    } else {
        /* child after exec, will have self_exec_id == 1 */
        fd = open("/proc/PPID/mem", O_RDWR);  /* open the parent's mem file */
        send_fd(fd);                 /* send the fd to the parent */
    }
Of course, Aedla's proof-of-concept and Donenfeld's exploit code are likely to be even more instructive.

It's obviously a complicated multi-step process, but it is also a completely reliable way to get root privileges. Updates to Donenfeld's post show exploits for distributions like Fedora that do build su as a PIE, or for Gentoo where the read permissions on setuid binaries have been removed so objdump can't be used to find the address of the exit function. For Fedora, gpasswd can be substituted as it is not built as a PIE, while on Gentoo, ptrace() can be used to find the needed address. While it was believed that address space layout randomization (ASLR) for PIEs would make exploitation much more difficult, that proved to be only a small hurdle, at least on 32-bit systems.
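The dup2() step that connects the setuid program's stderr to the target descriptor is ordinary POSIX file-descriptor plumbing, and can be demonstrated harmlessly against a regular file instead of /proc/PID/mem. A small sketch:

```python
import os
import tempfile

# Redirect fd 2 (stderr) into an ordinary file, the same way the
# exploit pointed it at the /proc/PID/mem descriptor.
target = tempfile.NamedTemporaryFile(delete=False)
saved = os.dup(2)             # keep the real stderr around
os.dup2(target.fileno(), 2)   # fd 2 now refers to the file

os.write(2, b"error text lands in the file\n")  # su-style stderr output

os.dup2(saved, 2)             # restore the real stderr
os.close(saved)

with open(target.name, "rb") as f:
    data = f.read()
os.unlink(target.name)
print(data)
```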

The fix and reactions

The fix hit the mainline without any coordination with Linux distributions. Kees Cook, who works on ChromeOS security (and formerly was a member of the Ubuntu security team), told LWN that Red Hat has a person on the closed kernel security mailing list, so it was aware of the problem but did not share that information on the Linux distribution security list. "I've been told this will change in the future, but I'm worried it will be overlooked again", he said. The first indication that other distributions had was likely from Red Hat's Eugene Teo's request for a CVE on the oss-security mailing list.

As Cook points out, the abrupt public disclosure of the bug (via a mainline commit) runs counter to the policy described in the kernel's Documentation/SecurityBugs file, where the default policy is to leave roughly seven days between a report to the mailing list and public disclosure, to allow time for vendors to fix the problem. Cook is concerned that bugs reported to security@kernel.org are not being handled reasonably:

The current behavior of security@kernel.org harms end users, harms distros, and harms security researchers, all while ignoring their own published standards of notification. I have repeatedly seen the list hold a double-standard of "it is urgent to publish this security fix" and "it's just a bug like any other bug". If it were just a bug, there should be no problem in delaying publication. If it were an urgent security fix, all the distros should be notified.

The "just a bug" refers to statements that Torvalds has made over the years about security bugs being no different than any other kind of bug. In email, Torvalds described it this way:

To me, a bug is a bug. Nothing more, nothing less. Some bugs are critical, but it's not about some random "security" crap - it could be because it causes a machine to crash, or it could be because it causes some user application to misbehave.

In keeping with that philosophy, Torvalds does not disclose the security relevance of a fix in the commit message: "I think the whole 'mark this patch as having security implications' is pure and utter garbage". Even if there is a known security problem that is being fixed, his commit messages do not reflect that, as with the message for the /proc/PID/mem fix:

Jüri Aedla reported that the /proc/<pid>/mem handling really isn't very robust, and it also doesn't match the permission checking of any of the other related files.

This changes it to do the permission checks at open time, and instead of tracking the process, it tracks the VM at the time of the open. That simplifies the code a lot, but does mean that if you hold the file descriptor open over an execve(), you'll continue to read from the _old_ VM.

Torvalds's commit message stands in pretty stark contrast to Aedla's report (linked above):

I have found a privilege escalation vulnerability, introduced by making /proc/<pid>/mem writable. It is possible to open /proc/self/mem as stdout or stderr before executing a SUID. This leads to SUID writing to it's own memory.

This "masking" of the actual reason for a commit doesn't sit well with either Cook or Teo (who also responded to an email query). Cook "cannot overstate how much I am against this kind of masking", while Teo pointed out that this particular bug is in no way unique:

There are many kernel vulnerabilities that were fixed silently in the upstream kernel. This is not the first one, nor will be the last one I'm afraid.

Both Teo and Cook were in agreement that disclosing what is known about a fix at the time it is applied can only help distributions and others trying to track kernel development. Torvalds, on the other hand, is concerned about attackers reading commit messages, which could lead to more attacks against Linux systems. He has a well-known contempt for security "clowns" that seems to also factor into his reasoning:

So I just ignore the idiots, and go "fix things asap, but try not to help black hats". No games, no crap, just get the damn work done and don't make a circus out of it.

Both the security camps hate me. The full disclosure people think I try to hide things (which is true), while the embargo people think I despise their corrupt arses (which is also true).

The strange thing is that by explicitly not putting the known security implications of a patch into the commit message, Torvalds is treating security bugs differently. They are no longer "just bugs" because some of the details of the bug are being purposely omitted. That may make it difficult for "black hats"—though it would be somewhat surprising if it did—but it definitely makes it more difficult for those who are trying to keep Linux users secure. Worse yet, it makes it more difficult down the road when someone is looking at a commit (or reversion) in isolation because they may miss out on some important context.

Silent security fixes are a hallmark of proprietary software, and Torvalds's policy resembles that to some extent. It could be argued (and presumably would be by Torvalds and others) that the fixes aren't silent since they go into a public repository and that is true—as far as it goes. By deliberately omitting important information about the bug, which is not done for most or all other bugs, perhaps they aren't so much silent as they are "muted" or, sadly, "covered up". There is definitely a lot of validity to Torvalds's complaints about the security "circus", but his reaction to that circus may not be in the best interests of the kernel community either.

Comments (32 posted)

The zsmalloc allocator

By Jonathan Corbet
January 25, 2012
The kernel cannot be said to lack for memory allocation mechanisms. At the lowest level, "memblock" handles chunks of memory for the rest of the system. The page allocator provides memory to the rest of the kernel in units of whole pages. Much of the kernel uses one of the three slab allocators to get memory blocks in arbitrary sizes, but there is also vmalloc() for situations where large, virtually-contiguous regions are needed. Add in various other specialized allocation functions and other allocators (like CMA) and it starts to seem like a true embarrassment of choices. So what's to be done in this situation? Add another one, of course.

The "zsmalloc" allocator, proposed by Seth Jennings, is aimed at a specific use case. The slab allocators work by packing multiple objects into each page of memory; that works well when the objects are small, but can be problematic as they get larger. In the worst case, if a kernel subsystem routinely needs allocations that are just larger than PAGE_SIZE/2, only one object will fit within a page. Slab allocators can attempt to allocate multiple physically-contiguous pages in order to pack those large objects more efficiently, but, on memory-constrained systems, those allocations can become difficult - or impossible. So, on systems that are already tight on memory, large objects will need to be allocated one-per-page, wasting significant amounts of memory through internal fragmentation.

The zsmalloc allocator attempts to address this problem by packing objects into a new type of compound page where the component pages are not physically contiguous. The result can be much more efficient memory usage, but with some conditions:

  • Code using this allocator must not require physically-contiguous memory,

  • Objects must be explicitly mapped before use, and

  • Objects can only be accessed in atomic context.

Code using zsmalloc must start by creating an allocation pool to work from:

    struct zs_pool *zs_create_pool(const char *name, gfp_t flags);

Where name is the name of the pool, and flags will be used to allocate memory for the pool. It is not entirely clear (to your editor, at least) why multiple pools exist; the zs_pool structure is relatively large, and a pool is really only efficient if the number of objects allocated from it is also large. But that's how the API is designed.

A pool can be released with:

    void zs_destroy_pool(struct zs_pool *pool);

A warning (or several warnings) will be generated if there are objects allocated from the pool that have not been freed; those objects will become entirely inaccessible after the pool is gone.

Allocating and freeing memory is done with:

    void *zs_malloc(struct zs_pool *pool, size_t size);
    void zs_free(struct zs_pool *pool, void *obj);

The return value from zs_malloc() will be a pointer value, or NULL if the object cannot be allocated. It would be a fatal mistake, though, to treat that value as if it were actually a pointer; it is, instead, a magic cookie that represents the allocated memory indirectly. It might have been better to use a non-pointer type, but, again, that is how the API is designed. Getting a pointer that can actually be used is done with:

    void *zs_map_object(struct zs_pool *pool, void *handle);
    void zs_unmap_object(struct zs_pool *pool, void *handle);

The return value from zs_map_object() will be a kernel virtual address that can be used to access the actual object. The return address is essentially a per-CPU object, so the calling code will be in atomic context until the object is unmapped with zs_unmap_object(). Note that the handle passed to zs_unmap_object() is the original cookie obtained from zs_malloc(), not the pointer from zs_map_object(). Note also that only one object can be safely mapped at a time on any given CPU.

Internally, zsmalloc divides allocations by object size much like the slab allocators do, but with a much higher granularity - there are 254 possible allocation sizes all less than PAGE_SIZE. For each size, the code calculates an optimum number of pages (up to 16) that will hold an array of objects of that size with minimal loss to fragmentation. When an allocation is made, a "zspage" is created by allocating the calculated number of individual pages and tying them all together. That tying is done by overloading some fields of struct page in a scary way (that is not a criticism of zsmalloc: any additional meanings overlaid onto the already heavily overloaded page structure are scary):

  • The first page of a zspage has the PG_private flag set. The private field points to the second page (if any), while the lru list structure is used to make a list of zspages of the same size.

  • Subsequent pages are linked to each other with the lru structure, and are linked back to the first page with the first_page field (which is another name for private, if one looks at the structure definition).

  • The last page has the PG_private_2 flag set.

Within a zspage, objects are packed from the beginning, and may cross the boundary between pages. The cookie returned from zs_malloc() is a combination of a pointer to the page structure for the first physical page and the offset of the object within the zspage. Making that object accessible to the rest of the kernel at mapping time is a matter of calculating its location, then either (1) mapping it with kmap_atomic() if the object fits entirely within one physical page, or (2) assigning a pair of virtual addresses if the object crosses a physical page boundary.

The primary users of zsmalloc are the zcache and zram mechanisms, both of which are currently in staging. These subsystems use the transcendent memory abstraction to store compressed copies of pages in memory. Those compressed pages can still be a substantial fraction of the (uncompressed) page size, so the fragmentation issue addressed by zsmalloc can be a real problem. Given the specialized use case and the limitations imposed by zsmalloc, it is not clear that it will find users elsewhere in the kernel, but one never knows.

Comments (3 posted)

XFS: the filesystem of the future?

By Jonathan Corbet
January 20, 2012
Linux has a lot of filesystems, but two of them (ext4 and btrfs) tend to get most of the attention. In his LCA 2012 talk, XFS developer Dave Chinner served notice that he thinks more users should be considering XFS. His talk covered work that has been done to resolve the biggest scalability problems in XFS and where he thinks things will go in the future. If he has his way, we will see a lot more XFS around in the coming years.

XFS is often seen as the filesystem for people with massive amounts of data. It serves that role well, Dave said, and it has traditionally performed well for a lot of workloads. Where things have tended to fall down is in the [benchmark plot] writing of metadata; support for workloads that generate a lot of metadata writes has been a longstanding weak point for the filesystem. In short, metadata writes were slow, and did not really scale past even a single CPU.

How slow? Dave put up some slides showing fs-mark results compared to ext4. XFS was significantly worse (as in half as fast) even on a single CPU; the situation just gets worse up to eight threads, after which ext4 hits a cliff and slows down as well. For I/O-heavy workloads with a lot of metadata changes - unpacking a tarball was given as an example - Dave said that ext4 could be 20-50 times faster than XFS. That is slow enough to indicate the presence of a real problem.

Delayed logging

The problem turned out to be journal I/O; XFS was generating vast amounts of journal traffic in response to metadata changes. In the worst cases, almost all of the actual I/O traffic was for the journal - not the data the user was actually trying to write. Solving this problem took multiple attempts over years, one major algorithm change, and a lot of other significant optimizations and tweaks. One thing that was not required was any sort of on-disk format change - though that may be in the works in the future for other reasons.

Metadata-heavy workloads can end up changing the same directory block many times in a short period; each of those changes generates a record that must be written to the journal. That is the source of the huge journal traffic. The solution to the problem is simple in concept: delay the journal updates and combine changes to the same block into a single entry. Actually implementing this idea in a scalable way took a lot of work over some years, but it is now working; delayed logging will be the only XFS journaling mode supported in the 3.3 kernel.

The actual delayed logging technique was mostly stolen from the ext3 filesystem. Since that algorithm is known to work, a lot less time was required to prove that it would work well for XFS as well. Along with its performance benefits, this change resulted in a net reduction in code. Those wanting details on how it works should find more than they ever wanted in filesystems/xfs-delayed-logging.txt in the kernel documentation tree.

Delayed logging is the big change, but far from the only one. The log space reservation fast path is a very hot path in XFS; it is now lockless, though the slow path still requires a global lock at this point. The asynchronous metadata writeback code was creating badly scattered I/O, reducing performance considerably. Now metadata writeback is delayed and sorted prior to writing out. That means that the filesystem is, in Dave's words, doing the I/O scheduler's work. But the I/O scheduler works with a request queue that is typically limited to 128 entries while the XFS delayed metadata writeback queue can have many thousands of entries, so it makes sense to do the sorting in the filesystem prior to I/O submission. "Active log items" are a mechanism that improves the performance of the (large) sorted log item list by accumulating changes and applying them in batches. Metadata caching has also been moved out of the page cache, which had a tendency to reclaim pages at inopportune times. And so on.

[benchmark plot]

How the filesystems compare

So how does XFS scale now? For one or two threads, XFS is still slightly slower than ext4, but it scales linearly up to eight threads, while ext4 gets worse, and btrfs gets a lot worse. The scalability constraints for XFS are now to be found in the locking in the virtual filesystem layer core, not in the filesystem-specific code at all. Directory traversal is now faster for even one thread and much faster for eight. These are, he suggested, not the kind of results that the btrfs developers are likely to show people.

Space-allocation scalability in XFS is now "orders of magnitude" better than what ext4 offers. That changes a bit with the "bigalloc" feature added in 3.2, which improves ext4 space allocation scalability by two orders of magnitude if a sufficiently large block size is used. Unfortunately, it also increases small-file space usage by about the same amount, to the point that 160GB are required to hold a kernel tree. Bigalloc does not play well with some other ext4 options and requires complex configuration questions to be answered by the administrator, who must think about how the filesystem will be used over its entire lifetime when the filesystem is created. Ext4, Dave said, is suffering from architectural deficiencies - using bitmaps for space tracking, in particular - that are typical of an 80's era filesystem. It simply cannot scale to truly large filesystems.

Space allocation in Btrfs is even slower than with ext4. Dave said that the problem was primarily in the walking of the free space cache, which is CPU intensive currently. This is not an architectural problem in btrfs, so it should be fixable, but some optimization work will need to be done.

The future of Linux filesystems

Where do things go from here? At this point, metadata performance and scalability in XFS can be considered to be a solved problem. The performance bottleneck is now in the VFS layer, so the next round of work will need to be done there. But the big challenge for the future is in the area of reliability; that may require some significant changes in the XFS filesystem.

Reliability is not just a matter of not losing data - hopefully XFS is already good at that - it is really a scalability issue going forward. It just is not practical to take a petabyte-scale filesystem offline to run a filesystem check and repair tool; that work really needs to be done online in the future. That requires robust failure detection built into the filesystem so that metadata can be validated as correct on the fly. Some other filesystems are implementing validation of data as well, but that is considered to be beyond the scope of XFS; data validation, Dave said, is best done at either the storage array or the application levels.

"Metadata validation" means making the metadata self-describing to protect the filesystem against writes that are misdirected by the storage layer. Adding checksums is not sufficient - a checksum only proves that what is there is what was written. Properly self-describing metadata can detect blocks that were written in the wrong place and assist in the reassembly of a badly broken filesystem. It can also prevent the "reiserfs problem," where a filesystem repair tool is confused by stale metadata or metadata found in filesystem images stored in the filesystem being repaired.

Making the metadata self-describing involves a lot of changes. Every metadata block will contain the UUID of the filesystem to which it belongs; there will also be block and inode numbers in each block so the filesystem can verify that the metadata came from the expected place. There will be checksums to detect corrupted metadata blocks and an owner identifier to associate metadata with its owning inode or directory. A reverse-mapping allocation tree will allow the filesystem to quickly identify the file to which any given block belongs.

[Dave Chinner] Needless to say, the current XFS on-disk format does not provide for the storage of all this extra data. That implies an on-disk format change. The plan, according to Dave, is to not provide any sort of forward or backward format compatibility; the format change will be a true flag day. This is being done to allow complete freedom in designing a new format that will serve XFS users for a long time. While the format is being changed to add the above-described reliability features, the developers will also add space for d_type in the directory structure, NFSv4 version counters, the inode creation time, and, probably, more. The maximum directory size, currently a mere 32GB, will also be increased.

All this will enable a lot of nice things: proactive detection of filesystem corruption, the location and replacement of disconnected blocks, and better online filesystem repair. That means, Dave said, that XFS will remain the best filesystem for large-data applications under Linux for a long time.

What are the implications of all this from a btrfs perspective? Btrfs, Dave said, is clearly not optimized for filesystems with metadata-heavy workloads; there are some serious scalability issues getting in the way. That is only to be expected for a filesystem at such an early stage of development. Some of these problems will take some time to overcome, and the possibility exists that some of them might not be solvable. On the other hand, the reliability features in btrfs are well developed and the filesystem is well placed to handle the storage capabilities expected in the coming few years.

Ext4, instead, suffers from architectural scalability issues. According to Dave's results, it is not the fastest filesystem anymore. There are few plans for reliability improvements, and its on-disk format is showing its age. Ext4 will struggle to support the storage demands of the near future.

Given that, Dave had a question of sorts to end his presentation with. Btrfs will, thanks to its features, soon replace ext4 as the default filesystem in many distributions. Meanwhile, ext4 is being outperformed by XFS on most workloads, including those where it was traditionally stronger. There are scalability problems that show up on even smaller server systems. It is "an aggregation of semi-finished projects" that do not always play well together; ext4, Dave said, is not as stable or well-tested as people think. So, he asked: why do we still need ext4?

One assumes that ext4 developers would have a robust answer to that question, but none were present in the room. So this seems like a discussion that will have to be continued in another setting; it should be interesting to watch.

[ Your editor would like to thank the organizers for their assistance with his travel to the conference. ]

Comments (278 posted)



Page editor: Jonathan Corbet


SCALE: The road ahead for automotive Linux and open source

January 25, 2012

This article was contributed by Nathan Willis

At SCALE 10X in Los Angeles, Alison Chaiken presented on the short-term future of automotive computing, and how open source is well-positioned to make a big impact on the direction that the carmakers take. Several major auto companies are gearing up to release new platforms and SDKs in 2012, most based on Linux. When they do so, Chaiken said, the open source community can show them far more interesting ideas than simple MP3 sales and bird-versus-pig gaming. The land rush has not started yet, but car-specific open source development tools are already available — if you know where to look.

The opportunity

[Alison Chaiken]

In the car industry, automotive computing platforms are called "in-vehicle infotainment" (IVI) systems, which distinguishes them from the engine-control unit (ECU) computers that manage ignition, fuel-injection, and other critical systems. The involuntary shudder triggered by the word "infotainment" is understandable, but Chaiken says that the term itself reveals everything that application developers need to understand about the situation.

The carmakers are interested in collecting their slice of the lucrative digital entertainment market that is dominating the television, e-reader, and mobile phone "app store" spaces. Some companies have attempted to write the entire IVI stack in-house — usually with disastrous results, such as Ford's broadly panned "MyFord Touch" platform. Fortunately, they have learned their lesson, and are prepared to open up their software platforms to third party developers.

But the carmakers may not be in 100% alignment with their customers. Although they are lured by the idea of an IVI application market with the prospect of a steady MP3, game, and movie rental revenue stream, Chaiken argued, that is not what car buyers want. A survey conducted by GigaOM instead showed that most drivers were interested in safety and security applications: blind-spot alerts, emergency roadside assistance, and other such practical utilities. Social-network integration simply is not important to them — nor should that be a surprise, since "connected" car-buyers are probably more comfortable using their smartphones to keep in touch with their friends.

In addition, Chaiken said, carmakers cannot be counted on to create applications that theoretically compete (even remotely) with their own business model — such as "casual carpooling" applications that encourage drivers to share rides. The open source community has a unique opportunity to get into the automotive application market in its infancy, and show the automobile industry the value of working with an open community by building the applications consumers really want, as well as applications carmakers have not even dreamed up.

The platforms

Chaiken next outlined the make-up and status of the IVI systems available in the wild or scheduled to launch in the near future. They include Renault's Android-based R-Link, Ford's Android-based OpenXC, General Motors' MontaVista Linux-based Cadillac User Experience (CUE), Fiat/Chrysler's Blue&Me (running on Windows Embedded Automotive), and a slew of QNX-based solutions. The major platforms listed in the talk are in varying stages of availability, but some Linux-based MeeGo IVI units are already on the road in China. Intriguingly, she commented, there are also several prominent Japanese car makers who have either participated in Linux Foundation automotive computing events or are members of Linux IVI efforts like GENIVI, but who have not yet unveiled their IVI plans.

Among the current players, Chaiken said that Renault is making the biggest effort to reach out and partner with the independent developer community, offering an SDK, an "app store" and even an incubator program. GM's CUE is by far the best-reviewed user interface, which makes it worth exploring. It features haptic feedback from the screen, gesture support, and both in-dash and windshield-projected display options. It debuted on Cadillacs, but is expected to roll out to GM's other makes in 2012 (although perhaps under a new name). Chaiken also gave the CUE high marks for MontaVista's security design, which is based on the cgroups-and-Linux-containers approach that LWN covered in November 2011. GM's SDK is currently in preview release, and is expected to go public in March or April.

Ford's OpenXC is a bit of a puzzle in Chaiken's estimation, primarily because it incorporates a "black box" hardware device rather than providing direct access to vehicle sensors and raw data: interested developers must contact Ford with the make and model of their car, and the company will send them a sealed interface dongle programmed with vehicle-specific filters. In the long run, Chaiken said, that grates against the spirit of open access that most developers will expect — but it is early enough in the program that Ford's plans may change. OpenXC is in a limited developer pre-release.

Of the non-Linux IVI platforms, Chaiken suggested watching QNX for further developments. The OS used to be open source, but its source was closed again, and it is currently owned by the troubled Research In Motion, parent company of the Blackberry mobile line. Considering RIM's troubles, it is plausible that the company will either sell or re-open QNX, which would be welcome news for Linux developers because of QNX's familiar, BSD-inspired design.

The tools

Separate from the carmakers' SDKs, there are also a number of community-built automotive tools that prospective IVI developers will want to investigate. The first is Gary Briggs' OBD GPS Logger, a data-logging framework for car computing. OBD GPS Logger uses inexpensive USB or Bluetooth dongles to connect to the industry standard OBD2 diagnostics port found on all recent vehicles, and logs the codes that are transmitted. OBD2 is a read-only interface to the industry-standard Controller Area Network (CAN) bus, and although only a subset of CAN codes are standardized across vehicle manufacturers, the bus sees them all and most have been reverse-engineered.

OBD GPS Logger is designed to log information for later analysis, and the project includes several options for plotting the data in three dimensions, complete with geolocation support. A worthy companion for OBD GPS Logger is NOBDy, a message-passing service that provides a generic interface to OBD events. The design is akin to GStreamer's source-and-sink model, Chaiken said, with a variety of event "providers" and "subscribers" implemented in separate modules. With NOBDy, developers can write event-driven applications that use TCP, D-Bus, or Bluetooth to communicate. NOBDy provides QML and HTML5 interfaces for rapid application development, and ships with plug-ins for common services like OpenStreetMap.

Automobile manufacturers are unlikely to push for OBD-aware applications, Chaiken said, but the technology offers many opportunities for developers to create compelling applications. On top of that, however, it is worth the open source community's time to get familiar with vehicle data-logging, because the possibility exists for it to be exploited in consumer-unfriendly ways — such as after-the-fact "surveillance" by carmakers or even law enforcement.

Also of note in the community tools arena is the Ubuntu IVI remix, an Ubuntu distribution designed for in-dash vehicle deployments. The remix is a stripped-down core OS with both Intel and ARM support (from Linaro) integrated, plus several automotive packages, including GENIVI-compliance libraries. Chaiken herself has packaged NOBDy for both Ubuntu and Debian.

The challenges

Although the automotive application market is spinning up in 2012, Chaiken also outlined the challenges that independent software developers face when writing code for IVI systems. The first is that nobody really knows what constitutes the best UI conventions. So far, all carmakers' deployments have met with negative feedback — although perhaps they were simply assuming too much commonality with the mobile handset market. The GENIVI Alliance is a proponent of voice-driven interaction, a choice that Chaiken finds suspect. "When the navigation system asks for a destination, all the kids in the back seat can simply yell out 'Toys-R-Us,'" she said. "How do you account for that?"

Fortunately, she continued, the fact that every UI available now is bad is actually "wonderful, because we have a chance. There aren't crap standards yet — but there will be in a few years." Chaiken is a fan of gesture-based user experiences (including the work showcased in GM's CUE) and believes that they may prove to be the winning paradigm for IVI, provided that developers make an effort to learn from the accessibility community. After all, arguably the most important feature of an in-dash IVI UI is that it not distract the driver, and the accessibility community already knows how to work with users with low or no visual contact.

If gestures do prove to be a winning formula, she continued, the Microsoft Kinect represents a real danger to open source software. It is becoming the de-facto standard for touchless gesture interaction — because of its well-done implementation — but if open source does not catch up on its own, the risk is that Microsoft could turn litigious and shut down rival projects, or simply change the Kinect protocols periodically just to break compatibility (a tactic the company employed against Samba in the past, she said).

Several other challenges arise solely from the unique conditions of IVI computing. There are different pieces required, such as new buses like CAN, unusual sensors like tire-pressure gauges, and different requirements for safety and security. Car "security" has evolved to its current state without any consideration for digital information residing in storage (including both vehicle data and user data), she noted. We typically leave our cars overnight — keys included — at repair shops, something we wouldn't dream of doing with our PCs.

Finally, a practical problem confronting the interested IVI developer is that in spite of the carmakers' interest in developing an IVI application ecosystem, information is still much harder to get from carmakers than from consumer electronics OEMs. That is likely to change with experience, but Chaiken related how her efforts to collect the information she presented on IVI SDKs sometimes involved comparatively "extreme measures" like writing and mailing physical letters, and how a developer outreach representative from LG was not even aware that the company made in-dash IVI products.

The solution is much the same formula as promoting open source in other arenas, Chaiken said — getting involved in open source IVI projects, downloading the SDKs, and asking the carmakers for more information. But asking car dealers about the IVI features in new models is worthwhile, too, as is buying a Linux-based car when shopping. There are a few groups of IVI hackers scattered around the highways and byways, including the Silicon Valley Automotive Open Source group. As IVI hits the showroom in 2012, hopefully it won't take the broader open source development community by surprise.

Comments (none posted)

Brief items

Distribution quote of the week

Good news on that front too. Gentoo Linux will be renamed to GNU/FDO/IBM/Oracle/Mozilla/KDE/Gnome/Linux as soon as we make sure we haven't left anyone out.
-- Ciaran McCreesh

Comments (1 posted)

Distribution News

Debian GNU/Linux

Bits from the Release Team

The Debian release team has an update on new team members, the freeze, release goals and more. "This is the first "Bits from the Release Team" of 2012. In the year ahead, we plan to freeze (and perhaps even release) Debian 7.0 "Wheezy", and we need your help to achieve this. Read on to find out what we've been up to recently and what to expect in the near future."

Full Story (comments: none)


openSUSE 11.3 has reached end of SUSE support

openSUSE 11.3 is officially discontinued. There will be no more support by SUSE. Click below for some security statistics.

Full Story (comments: none)

Ubuntu family

Ubuntu Developer Week: 31st Jan-2nd Feb

The next Ubuntu Developer Week takes place on IRC from January 31 to February 2, 2012. It includes tutorials and hands-on sessions all about Ubuntu development. "No matter if you are new to Ubuntu development or quite experienced already, we are sure going to have an interesting session for you." The list of sessions has been posted.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Shuttleworth: Introducing the HUD. Say hello to the future of the menu.

Here's a lengthy posting from Mark Shuttleworth describing the "heads-up display" concept that Ubuntu is pushing toward. "It’s smart, because it can do things like fuzzy matching, and it can learn what you usually do so it can prioritise the things you use often. It covers the focused app (because that’s where you probably want to act) as well as system functionality; you can change IM state, or go offline in Skype, all through the HUD, without changing focus, because those apps all talk to the indicator system. When you’ve been using it for a little while it seems like it’s reading your mind, in a good way."
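The fuzzy matching Shuttleworth describes can be illustrated with a short sketch. The scorer below is purely hypothetical — it is not Unity's actual algorithm — but it shows the idea: match the typed query against menu entries as an ordered subsequence, then boost entries the user invokes often (the function names and the usage-count weighting are invented for illustration):

```python
def fuzzy_score(query, entry):
    """Score how well `query` matches `entry` as an ordered subsequence.

    Returns None if the query's characters do not all appear in order;
    otherwise a smaller score means a tighter match.
    """
    pos, gaps = 0, 0
    entry_l = entry.lower()
    for ch in query.lower():
        found = entry_l.find(ch, pos)
        if found == -1:
            return None           # character missing: no match at all
        gaps += found - pos       # penalize skipped-over characters
        pos = found + 1
    return gaps

def hud_search(query, menu_items, usage_counts):
    """Rank matching menu entries, boosting frequently used ones."""
    scored = []
    for item in menu_items:
        s = fuzzy_score(query, item)
        if s is not None:
            # Frequently used items sort earlier (illustrative weighting).
            scored.append((s - usage_counts.get(item, 0), item))
    return [item for _, item in sorted(scored)]

items = ["File > New Window", "File > Close", "Edit > Preferences"]
print(hud_search("prf", items, {"Edit > Preferences": 3}))
# → ['Edit > Preferences']
```

Even this toy version shows why a HUD can feel like "reading your mind": "prf" never appears literally in any menu entry, yet the subsequence match still finds the intended one.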

Comments (45 posted)

Cinnamon fork of GNOME Shell gets stable release (The H)

The H covers the first stable release of Cinnamon. "Version 1.2 of Cinnamon, the Linux Mint project's fork of the GNOME Shell, has been released and the APIs and desktop interface have been declared fully stable by Mint Founder Clement Lefebvre. Created last year to streamline the Mint developers' changes to the GNOME 3 environment, the Cinnamon fork brings familiar GNOME 2 design elements to the GNOME 3 shell. Among the enhancements in the stable version is easier customisation through a "Cinnamon Settings" tool which includes, for example, the ability to set the date format for the calendar applet and change panel launchers' icons." The Cinnamon download page has instructions for installing Cinnamon on other distributions, including Ubuntu 11.10, Fedora 16, openSUSE 12.1, Arch Linux and Gentoo.

Comments (none posted)

Meet Bodhi's Bulky Brother: Bloathi (OStatic)

Bloathi is a community spin of Bodhi Linux aimed at providing a more fully featured "out-of-the-box" experience. OStatic takes a look at Bloathi. "Bloathi retains the Enlightenment desktop environment and comes with lots of themes and several hardware profiles. These are setup upon reaching the desktop through a pop-up configuration. The hard drive installer icon normally found on the desktop doesn't show up in a lot of themes, so check in the file manager under Desktop." LWN looked at Bodhi at the end of March 2011.

Comments (none posted)

Why Don’t Other Linux Distros Use Unity? A Few Thoughts (The Var Guy)

Christopher Tozzi notes the lack of distributions offering the Unity desktop shell. "It’s also telling that Unity has not been distributed outside of Ubuntu’s own channel. An effort to port it to Fedora fizzled out, and I couldn’t even find RPM packages of the software anywhere. Meanwhile, the only up-to-date Launchpad PPA for Unity currently supports Ubuntu 12.04 alone. In other words, even just installing Unity on a distribution that’s not Ubuntu remains a tall order, too difficult for most people to consider."

Comments (1 posted)

Page editor: Rebecca Sobol


Porting office suites to mobile platforms

January 25, 2012

This article was contributed by Bruce Byfield

At first, office suites might seem out of place on tablets and smart phones. Word processors and spreadsheets might seem more at home on workstations, or at least laptops. Yet free office suites are being ported to mobile devices — often belatedly, and usually with reduced feature sets as projects start to develop for these increasingly important platforms.

Currently, free software users have much the same choices on mobile platforms as they do on workstations. LibreOffice and Calligra Suite (formerly part of KOffice) are working on ports to Android, while AbiWord and Gnumeric have some aging Maemo ports from a few years ago. All share the difficulties of moving from the desktop to mobile devices, but each also faces its own advantages and disadvantages in making a port.

OpenOffice.org and LibreOffice

OpenOffice.org is currently an Apache Incubator project that is concentrating on an audit of the current code. Under these circumstances, it is not surprising that, according to long-time developer Jürgen Schmidt, "nobody is currently really working on such a port." Nor does Schmidt think that a port of OpenOffice.org to a mobile platform is the best approach:

I would not say that it is impossible, but it would be definitely a lot of work. OpenOffice.org is too heavy weight and I would focus more on new apps handling Open Document Format. Smaller apps with a modern fresh user interface and a subset of features would be probably much better.

By contrast, LibreOffice is actively working to cross-compile its core code to both Android and iOS because, according to Michael Meeks, "Android is a free software platform and iOS also has substantial market share. These two together should give the majority of the tablet market share, I hope." The effort has no firm deadline, beyond some time in 2012 or 2013.

Most of the work so far has been done by Tor Lillqvist of SUSE, another veteran. Recently, though, Meeks has also begun contributing patches. Currently, Lillqvist said, LibreOffice has a unit test for the Calc spreadsheet on Android, which is a GUI-less script that "exercises quite a lot of the LibreOffice application startup code, reading tons of configuration files, and then exercises a lot of Calc functionality."

So far, touchscreens are not supported, but Lillqvist suggested that may not be a pressing issue.

For some initial viewer-only app, we won't need that much touchscreen support. Just listening for and handling basic gestures like zooming and panning [or] flipping pages would be the first step. There is also work ongoing for improvement to the LibreOffice UI on desktop OSes that will also benefit touch-based OSes.

However, although LibreOffice is going ahead with the ports, its developers, like Schmidt of OpenOffice.org, are well aware of the difficulties involved. For one thing, although LibreOffice has been ported to over half a dozen hardware architectures and operating systems, and the system abstractions used for porting are, in Meeks's words, "pretty well tested," they are not particularly suited to the target mobile platforms. For instance, Lillqvist noted that, until recently, LibreOffice included

at least two different APIs as abstractions for file access. Clearly, it would be suboptimal to have to add such hooks into two different places. So I have the last [few] days been working on making one of these two APIs call the other one for actual file access.

Another challenge is the fact that LibreOffice and OpenOffice.org are, as Meeks put it, "infamous as a stress-test for your I/O subsystem and CPU." General efforts such as the ongoing code cleanup should help improve performance, and a port might be further streamlined by omitting legacy database or file format filters.

Yet another challenge is the user interface, which Meeks described as "pretty old and not particularly suitable for mobile devices." Meeks expects that LibreOffice's general move toward the GTK3 toolkit will improve general performance, but design issues remain:

The in-document WYSIWYG editing of existing graphical items should work reasonably well even on a smaller device, but the chrome — all those dialogs and options around the document — will require much more work.

Still, Meeks sounded optimistic about the interface challenges. The 3D support on modern mobile devices, he suggested, could allow LibreOffice to add transparency, so that more information could be presented on the reduced-size screens by layering it. He also mentioned experiments with a prototype for a formatting-style selector pane that includes "thumbnail previews of each style in a side-bar," which would give the mobile port a feature that the desktop version of LibreOffice lacks.

KOffice and Calligra

KOffice and Calligra have been developing for mobile platforms for several years. In 2009, KOffice, the KDE-centric office suite, released a viewer for Maemo and MeeGo. This application included support for KOffice's word processor, spreadsheet, and slide show applications. Yet another KOffice-based viewer, FreOffice, was developed by Nokia for the same mobile platforms, and included support for the word processor and slide show applications.

When KOffice and Calligra became separate projects in mid-2010, the mobile code went only into the Calligra repository, and the resulting application became known as Calligra Mobile. In April 2011, Calligra announced Calligra Active for use with Plasma Active, KDE's new interface designed for tablets, handsets, media centers, and more. Calligra Mobile and Calligra Active share most of the same code and functionality, but use different interfaces.

In the last year, these efforts became the basis for Harmattan Office, which consists of Calligra Active and a proprietary interface that is installed by Nokia on its MeeGo-based N9 phone. Calligra is also the foundation for KO GmbH's still-in-development ports of Calligra Mobile/Active to Windows and Android. In addition, KO GmbH has released WebODF, a service that allows users to view and edit Open Document Format (ODF) files in web browsers, under the Affero GPL.

On January 12, 2012, Calligra developer Marijn Kruisselbrink also announced that he had Calligra running on Android. According to Inge Wallin, Calligra's marketing coordinator, Kruisselbrink's port is "now more or less crash-free." Wallin suggested that the Android port will probably be given the Calligra Active interface, rather than Calligra Mobile's, because "that has a more modern look and feel."

According to Wallin, Calligra is well-suited for porting to mobile platforms. As with all KDE applications since the start of the KDE 4 release series, Calligra's architecture "separates the actual functions from the interface. It also uses plugins heavily, to the degree that some plugins actually have plugins of their own" — for instance ones that connect to web services like Google Docs.

Wallin added:

As a side effect of this modularity, it is very easy to create subsets of the whole suite. This is possible not only by choosing which applications to include, but also to choose which [plugin] modules to install. Another effect of this module design is that the code is easy to follow and quite fast to get familiar with. This means that new people often become productive in hours or days instead of weeks.

Some sense of this modularity can be had from the fact that, according to Wallin's estimate, Calligra as a whole contains some 1.2 million lines of code. Of that, some 12,000 lines are for Calligra Mobile, and 2500 lines for Calligra Active. Much of the remaining code affects mobile applications, but only so far as it affects the rest of Calligra as well. "The mobile ports are of great importance," Wallin said, "but our architecture lets us get away with not putting very much work into them."

However, like all mobile ports, Calligra struggles with the limited screen size of handsets compared to netbooks, laptops, and workstations. Calligra plans to begin porting the database application Kexi some time soon, but some of Calligra's applications, especially the graphics editors Karbon and Krita, may never be practical for working on some mobile devices:

What we have to keep in mind is that tablets and to an even greater extent smart phones are not well-suited to provide complex and/or large contents. Viewing is fine, as are minor edits. But in general, we are more concentrating on making the viewing experience perfect at this stage.

AbiWord and Gnumeric

While these plans for porting are going forward in LibreOffice and Calligra Suite, the earliest free software office ports to mobile devices have been more or less abandoned. Both the AbiWord word processor and the Gnumeric spreadsheet had Maemo ports by the start of 2007. These ports remain widely available, but active development on both of them ceased several years ago. They are now four or five versions behind the current desktop releases, with little or no work having been done to prepare them for Maemo 5, let alone its successor, MeeGo.

According to Gnumeric developer Morten Welinder, if any efforts at porting Gnumeric are happening, "the main Gnumeric team is not involved. We've asked for patches, but haven't received anything." A port using the Hildon framework was begun, but Welinder described it as "officially abandoned."

Similarly, when asked the current state of affairs at AbiWord, lead developer Hub Figuière said:

Maemo is mostly dead. I know there is one person making packages for AbiWord, but when I stopped maintaining it in 2009, I didn't even have a N900 [phone] to test on Maemo 5. So there is little happening on that front. The two other major mobile platforms are either impossible (iOS - the AppStore does not allow GPL) or hard (Android - the NDK and Java makes a port harder than can be afforded by the very scarce resources the project has).

This state of affairs seems unfortunate. AbiWord and Gnumeric have always been faster than LibreOffice or even Calligra Suite, as well as more pared down in features. Both are logical candidates for mobile ports. As things are, while I have heard unconfirmed reports that the existing ports are still usable, their main interest now is as case studies of the considerations that go into a port. About the closest thing to mobile support is AbiCollab.net, AbiWord's equivalent of Google Docs.

Making the difficult practical

Porting applications designed for workstations and laptops to mobile platforms is always going to be challenging. It means going from virtually unlimited memory to the memory limits of years ago. To a large extent, it is the art of deciding what to leave out: format support, templates, features, or even entire applications. It also means risking the whims of a rapidly developing market, as developers discovered when tablets took over much of the netbook's niche.

It seems a sign of the increasing importance of mobile devices that the ports are even attempted — despite their difficulties, the need for them is simply too big to ignore if developers hope to offer users the choice of free software on all their devices. Still, regardless of the success or popularity of the ports, they seem likely to have at least one long-term effect. Asked if the ports will have any benefit for workstation users, Meeks answered rhetorically: "Beyond small, faster, quicker starting, more usable and potentially prettier [applications]?"

The difficulties of porting LibreOffice's venerable code and Calligra's more recently revised code are obviously very different. Yet in attempting the ports, developers are rethinking and revising the work of the past — and that can only benefit all users, regardless of how the ports themselves are received.

Comments (3 posted)

Brief items

Quotes of the week

When a git user runs into a problem, they look at the tools they have on hand and ask, “how can I combine these ideas to solve my problem?” When a mercurial user runs into a problem, they look at the problem and ask, “what code can I write to work around this?” They are very different approaches that may end up at the same place, but follow alternate routes.
-- Jason Chu

I therefore suggest two responses:

(a) Either Perl 5 Porters (i.e. Rik as Pumpking) or TPF should contact Fedora/Redhat packagers and inform them of our concerns. I'm not saying that TPF should slap them with a "cease and desist" (though that would certainly be emotionally satisfying), but I do think we should "officially" raise concerns that splitting out core libraries is not viewed as acceptable by upstream and that we do not feel it is in the spirit of the license.

(b) p5p should finally bite the bullet and write the spec for "minimal perl" (whatever we finally think that is) and we should then offer that to packagers as a sanctioned minimal distribution as a compromise to response (a). We should also be clear about binary package naming -- i.e. a minimal perl should not be packaged as "perl".

-- David Golden

Comments (none posted)

GDB 7.4 released

Version 7.4 of the GDB debugger is out. New features include a Renesas RL78 simulator, a number of Python scripting improvements, several new debugging commands and options, and more.

Full Story (comments: 1)

Goptical 1.0

The first release of Goptical, the GNU optical design and simulation library, is available. "The Goptical library provides C++ model classes for optical components, surfaces and materials. It enables building optical systems by creating and placing various optical components in a 3d space and simulates light propagation through the system. Classical optical design analysis tools can be used on optical systems."

Full Story (comments: none)

KDE 4.8 released

The KDE project has announced the release of KDE Plasma Workspaces, KDE Applications, and KDE Platform 4.8. "KDE applications released today include Dolphin with its new display engine and semantic goodies, new Kate features and improvements, and Gwenview enhancements. Enjoy new Marble features such as interactive Elevation Profile, satellite tracking and Krunner integration."

Comments (13 posted)

Suricata 1.2

Version 1.2 of the Suricata intrusion detection system is out. The headline features appear to be HTTP file extraction and inspection; this release also features a number of performance improvements.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Poettering: systemd for Administrators, Part XII

The twelfth installment of systemd for administrators covers securing services. "In this iteration of the series we want to focus on a couple of these security features of systemd and how to make use of them in your services. These features take advantage of a couple of Linux-specific technologies that have been available in the kernel for a long time, but never have been exposed in a widely usable fashion. These systemd features have been designed to be as easy to use as possible, in order to make them attractive to administrators and upstream developers..."
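As a sketch of the kind of features the series covers, a service unit can opt into several of systemd's sandboxing directives. The directive names below are real systemd options of that era; the service itself (`example-daemon`) is invented for illustration:

```ini
[Unit]
Description=Example network service with systemd sandboxing

[Service]
ExecStart=/usr/bin/example-daemon
# Give the service a private /tmp, invisible to other processes
PrivateTmp=yes
# Hide /home entirely and make /usr read-only for this service
InaccessibleDirectories=/home
ReadOnlyDirectories=/usr
# Drop all capabilities except the one needed to bind low ports
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```

The point of the article series is that these one-line directives replace what would otherwise require custom privilege-dropping code in each daemon.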

Comments (164 posted)

Page editor: Jonathan Corbet


Brief items

HP: webOS to be fully released by September

HP has announced a roadmap for the open-sourcing of webOS that calls for the full code base to be released by September. The Apache 2.0 license will be used. "HP also announced it is releasing version 2.0 of webOS’s innovative developer tool, Enyo. Enyo 2.0 enables developers to write a single application that works across mobile devices and desktop web browsers, from the webOS, iOS and Android platforms to the Internet Explorer and Firefox browsers – and more. The source code for Enyo is available today, giving the open source community immediate access to the acclaimed application framework for webOS."

Comments (5 posted)

linux.conf.au 2012 videos available

Videos from the recently concluded linux.conf.au 2012 in Ballarat have been uploaded to YouTube.

Comments (19 posted)

Articles of interest

World IPv6 Launch: this time it's for real (ars technica)

A successor to last year's World IPv6 Day is the subject of an article over at ars technica. World IPv6 Launch will take place on June 6 and this time the plan is to leave things up and running on IPv6 after the day has ended. "Also new this year is that several Internet service providers will be participating by enabling IPv6 for at least one percent of their customers—with more to follow. These ISPs include not only those that have already put a toe in the IPv6 waters before, such as Comcast, Free Telecom in France, and XS4ALL in the Netherlands; but also Time Warner Cable and AT&T. Last but not least, Cisco/Linksys and D-Link will be enabling IPv6 support in the default configurations of their home routers."
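For application developers, IPv6 readiness mostly comes down to using address-family-agnostic APIs. The standard-library sketch below (hypothetical helper name) asks the resolver for IPv6 addresses and opens an IPv6 socket on the loopback address, which needs no network connectivity:

```python
import socket

def ipv6_addresses(host, port=80):
    """Return the IPv6 addresses the resolver reports for host."""
    try:
        infos = socket.getaddrinfo(host, port,
                                   socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return []                 # no AAAA records, or no IPv6 support
    return sorted({info[4][0] for info in infos})

# socket.has_ipv6 reports whether the platform was built with IPv6
# support; "::1" is the IPv6 loopback address.
if socket.has_ipv6:
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.close()
    print(ipv6_addresses("::1"))
```

Code written this way — resolving via `getaddrinfo()` rather than hard-coding `AF_INET` — keeps working whether a site is reached over IPv4 or IPv6, which is exactly what "leaving IPv6 on after the day ends" asks of applications.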

Comments (59 posted)

FOSDEM interviews, part 3

The third set of interviews with speakers from the upcoming FOSDEM conference has been posted; featured this time are Bdale Garbee, Finne Boonen, Guido Trotter, Wim Godden, Garrett Serack, and Renzo Davoli. "The central role of computers and interfaces has disappeared, services are the main focus now. The logical structure of the internet must change as a consequence of this. By the IoTh [Internet of Threads] we mean a structure where the addressable nodes of the internet are, or can also be, processes or even concurrent threads of a process. In the IoTh the definition of an independent networking stack, with its own virtual interfaces, addresses, routing is as simple as the creation of a PF_UNIX socket. It is an 'ordinary business' user-space operation, not a structural and dangerous change, for system administrators only."
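The "ordinary business" character of creating a PF_UNIX socket that Davoli alludes to can be seen in a few lines of Python: an unprivileged process creates an endpoint, binds it to a filesystem path, and exchanges data, with no administrator involvement (the path here is a throwaway temporary file):

```python
import os
import socket
import tempfile

# Creating a PF_UNIX/AF_UNIX socket is an unprivileged, everyday
# operation: no capabilities are needed, unlike configuring a kernel
# network interface.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)        # the endpoint appears as a filesystem object
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()

client.sendall(b"hello")
data = conn.recv(5)
print(data)              # b'hello'

for s in (client, conn, server):
    s.close()
os.unlink(path)
```

The IoTh argument is that bringing up an entire per-process networking stack should be this routine — a user-space operation rather than a structural change reserved for system administrators.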

Comments (none posted)

New Books

Arduino Cookbook, 2nd Edition--New from O'Reilly Media

O'Reilly Media has released "Arduino Cookbook, 2nd Edition" by Michael Margolis.

Full Story (comments: none)

Calls for Presentations

GNU Tools Cauldron 2012 - Call for Abstracts and Participation

GNU Tools Cauldron will take place July 9-11 in Prague, Czech Republic. The abstract submission deadline is January 31. "The purpose of this workshop is to gather all GNU tools developers, discuss current/future work, coordinate efforts, exchange reports on ongoing efforts, discuss development plans for the next 12 months, developer tutorials and any other related discussions."

Full Story (comments: none)

PgNext, The Next PostgreSQL Conference

The PostgreSQL Conference will take place June 26-29, 2012 in Denver, Colorado. The call for papers is open until April 15. "As always content is key and we have already secured the two of the four full day trainings, Practical PostgreSQL Administration from community member Jim Mlgodeski and the usual excellent Performance material from Major Contributor Greg Smith of 2nd Quadrant."

Full Story (comments: none)

Upcoming Events

LibreOffice DevRoom at FOSDEM 2012 in Brussels

The Document Foundation and LibreOffice will have a dedicated track on Saturday, February 4. There will also be a booth where it will be possible to meet developers and other volunteers and ask for information about contributing to the project.

Full Story (comments: none)

Paris: Debian bug Squashing Party (Wheezy)

There will be a Debian Bug Squashing Party in Paris, France February 17-19, 2012. "This event is also the opportunity for new potential contributors to meet Debian Developers or Maintainers. Numerous regular contributors will attend to this BSP and will help newcomers to fix their first bugs."

Full Story (comments: none)

Linaro Connect Q2.12

Linaro Connect Q2.12 will be held from May 28-June 1, 2012 in Hong Kong. "This will be Linaro's first major event in Asia, and quite possibly the largest Linux on ARM event to be hosted in that part of the world. As well as being a convenient location for many of our Member engineers, its also in good proximity to a number of ARM's leading Cortex-A licensees and reflects the growing importance of that region to ARM open source software development."

Full Story (comments: none)

Early, Early Bird Registration Closing Soon for POSSCON 2012

The Palmetto Open Source Software Conference (POSSCON) will take place March 28-29, 2012 in Columbia, South Carolina. The early, early bird registration ends January 31.

Full Story (comments: none)

Events: January 26, 2012 to March 26, 2012

The following event listing is taken from the LWN.net Calendar.

January 27-29: DebianMed Meeting Southport2012, Southport, UK
January 31-February 2: Ubuntu Developer Week, #ubuntu-classroom (IRC)
February 4-5: Free and Open Source Developers Meeting, Brussels, Belgium
February 6-10: Linux on ARM: Linaro Connect Q1.12, San Francisco, CA, USA
February 7-8: Open Source Now 2012, Geneva, Switzerland
February 10-12: Linux Vacation / Eastern Europe Winter session 2012, Minsk, Belarus
February 10-12: Skolelinux/Debian Edu developer gathering, Oslo, Norway
February 13-14: Android Builder's Summit, Redwood Shores, CA, USA
February 15-17: 2012 Embedded Linux Conference, Redwood Shores, CA, USA
February 16-17: Embedded Technology Conference 2012, San José, Costa Rica
February 17-18: Red Hat, Fedora, JBoss Developer Conference, Brno, Czech Republic
February 24-25: PHP UK Conference 2012, London, UK
February 27-March 2: ConFoo Web Techno Conference 2012, Montreal, Canada
February 28: Israeli Perl Workshop 2012, Ramat Gan, Israel
March 2-4: Debian BSP in Cambridge, Cambridge, UK
March 2-4: BSP2012 - Moenchengladbach, Mönchengladbach, Germany
March 5-7: 14. German Perl Workshop, Erlangen, Germany
March 6-10: CeBIT 2012, Hannover, Germany
March 7-15: PyCon 2012, Santa Clara, CA, USA
March 10-11: Open Source Days 2012, Copenhagen, Denmark
March 10-11: Debian BSP in Perth, Perth, Australia
March 16-17: Clojure/West, San Jose, CA, USA
March 17-18: Chemnitz Linux Days, Chemnitz, Germany
March 23-24: Cascadia IT Conference (LOPSA regional conference), Seattle, WA, USA
March 24-25: LibrePlanet 2012, Boston, MA, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds