
LWN.net Weekly Edition for March 7, 2013

ELC: Nano-copters!

By Jake Edge
March 6, 2013

The Embedded Linux Conference often has talks about interesting Linux-powered devices, which should come as no surprise, but talks about devices that fly tend to attract a larger audience. Gregoire Gentil's presentation on a video "nano-copter" was no exception. While there was no free-flying demo, there were several tethered demos that clearly showed some of the possibilities of the device.

Gentil started his talk by showing a YouTube video of the marketing-level pitch from his company, Always Innovating (AI), for the MeCam device. The MeCam is a small quad-copter that allows people to video their every move as it follows them around. The video streams to their mobile phone, from which it can be uploaded to Facebook, Twitter, and the like.

[MeCam]

The device itself, pictured at right, is a "flying Pandaboard" with "lots of modifications", Gentil said. The copter has four propellers, a camera, and can communicate via WiFi. He had one of the copters tethered to a stand so that it could go up and down, but not "kill anybody if something goes wrong", he said with a laugh.

The copter is built around an OMAP4 system-on-chip with two Cortex-A9 cores. It uses pulse-width modulation (PWM) to control each of the four propeller motors. The camera has 1080p resolution and uses CSI-2 for the data interface, as USB is not fast enough, he said. There are also a "bunch of sensors", including Euler angle sensors, altitude sensors, and wall-detection sensors.
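
On Linux, driving a motor that way can be as simple as writing to the kernel's sysfs PWM interface. Here is a minimal sketch; the chip and channel numbers, period, and duty cycle are illustrative guesses, not details from the MeCam design:

    #include <stdio.h>

    /* Write a single value to a sysfs attribute. */
    static int pwm_write(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%s", value);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Hypothetical chip/channel: expose it, then run a 20kHz signal
           (50,000ns period) at a 60% duty cycle. */
        pwm_write("/sys/class/pwm/pwmchip0/export", "0");
        pwm_write("/sys/class/pwm/pwmchip0/pwm0/period", "50000");
        pwm_write("/sys/class/pwm/pwmchip0/pwm0/duty_cycle", "30000");
        pwm_write("/sys/class/pwm/pwmchip0/pwm0/enable", "1");
        return 0;
    }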

There is a single battery powering both the CPU and the motors, which is different from some radio-controlled (RC) copters, he said. The motors are quite small and run at 10,000-15,000 RPM. That can create a lot of noise in the electrical system, so many RC devices use two separate batteries. Instead, AI has added a lot of filtering on the power, so that it can avoid the extra weight of an additional battery.

[Gregoire Gentil]

The MeCam is not an RC device; instead, it has an auto-pilot that is voice-controlled and uses facial recognition to position itself. AI considered three different possibilities for running the auto-pilot code. The first was to run a standard Linux kernel, which would give them a "mature full environment" in which to develop and run the code. The downside is the latency: there were times when the motors would not be serviced for 50ms, which was enough time to cause the MeCam to "crash against the wall", Gentil said.

The second option was to use the realtime Linux kernel, which has "much better latency", but is "less mature than the standard kernel". That is the approach being used now, but AI is pursuing another approach as well: writing a custom realtime operating system (RTOS) for the Cortex-M3 that is present on the OMAP4. That will allow the system to have "perfect latency", he said, but it "will be complex to develop and is not mainstream".

[Demo]

For the demos, and until the Cortex-M3 RTOS version is working, the MeCam is running a PREEMPT_RT kernel on the Cortex-A9s. The auto-pilot process is given a priority of 90 using the SCHED_FIFO scheduling class.
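
Putting a process into that class takes only a couple of system calls. A minimal sketch of a process placing itself into SCHED_FIFO at priority 90, along with the memory locking that is a common companion step on PREEMPT_RT systems (error handling kept to a minimum):

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 90 };

        /* Pin all current and future pages to avoid page-fault latency. */
        mlockall(MCL_CURRENT | MCL_FUTURE);

        /* Move the calling process into the SCHED_FIFO realtime class. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... the control loop runs here, preempting non-realtime work ... */
        return 0;
    }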

The auto-pilot uses Euler angles (i.e. roll, pitch, and yaw) but, due to the gimbal lock effect, those are not sufficient for navigating. The solution to that problem is to use quaternions, which represent orientation as four numbers in an extension of the complex numbers. That requires a math library and floating-point arithmetic, which is a problem for the Cortex-M3 version because that core has no floating-point unit. There are plans to use a fixed-point library to work around that.
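
As a concrete illustration, the standard conversion from roll/pitch/yaw to a unit quaternion looks like this; the formula is textbook math, not code from the MeCam:

    #include <math.h>

    struct quat { double w, x, y, z; };

    /* Convert Euler angles (in radians) to a unit quaternion. */
    static struct quat quat_from_euler(double roll, double pitch, double yaw)
    {
        double cr = cos(roll / 2),  sr = sin(roll / 2);
        double cp = cos(pitch / 2), sp = sin(pitch / 2);
        double cy = cos(yaw / 2),   sy = sin(yaw / 2);

        return (struct quat) {
            .w = cr * cp * cy + sr * sp * sy,
            .x = sr * cp * cy - cr * sp * sy,
            .y = cr * sp * cy + sr * cp * sy,
            .z = cr * cp * sy - sr * sp * cy,
        };
    }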

To control the movement once the desired direction has been calculated, the MeCam uses a proportional-integral-derivative (PID) controller. The PID controller uses a feedback loop that produces movement that smoothly converges on the goal position without overshooting. In addition, its "implementation is very straightforward", Gentil said. The PID algorithm depends on a handful of gain constants, which can either be derived experimentally or calculated theoretically using a program like MATLAB. AI chose the experimental approach, and he recommended the "PID without a PhD" article for those interested.
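
To see just how straightforward, here is a minimal single-axis PID step; the kp, ki, and kd members are the gain constants referred to above:

    struct pid {
        double kp, ki, kd;        /* experimentally tuned gains */
        double integral;          /* accumulated error */
        double prev_error;        /* for the derivative term */
    };

    /* One iteration of the control loop: returns the correction to apply
       given the target value, the measured value, and the time step (s). */
    static double pid_step(struct pid *p, double target,
                           double measured, double dt)
    {
        double error = target - measured;
        double derivative = (error - p->prev_error) / dt;

        p->integral += error * dt;
        p->prev_error = error;

        return p->kp * error + p->ki * p->integral + p->kd * derivative;
    }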

There is an ultrasonic altitude sensor, operating at frequencies above 40 kHz, that the copter uses to determine how far it is above the ground so that it can maintain a constant height; the height is derived from the time it takes an echo to return. Someone asked about it getting confused when flying past a cliff (or off the edge of a table), but Gentil said there is a barometer that is also used for coarser altitude information and that it would detect that particular problem.
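
The arithmetic behind the echo measurement is simple: the sound makes a round trip, so the height is half the echo time multiplied by the speed of sound:

    /* Convert an ultrasonic echo time to height above ground; the sound
       travels down and back, so halve the round trip. The 343 m/s figure
       assumes air at roughly 20°C. */
    static double height_m(double echo_seconds)
    {
        return 343.0 * echo_seconds / 2.0;
    }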

The OMAP4 has a "bunch of video coprocessing stuff" that is used by the MeCam. The camera data is routed to two different tasks, one for streaming to the phone, the other for face detection. It uses the Video4Linux2 (V4L2) media controller interface to control the camera and its output. He mentioned yavta (Yet Another V4L2 Test Application) as an excellent tool for testing and debugging.
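
Under the hood, tools like yavta are built from a handful of open() and ioctl() calls; a bare-bones probe of a video node looks something like the following (the device path is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_capability cap;
        int fd = open("/dev/video0", O_RDWR);

        /* Ask the driver to identify itself. */
        if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("v4l2");
            return 1;
        }
        printf("driver: %s, card: %s\n", cap.driver, cap.card);
        return 0;
    }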

The camera sensor provides multiple outputs, which are routed through resizers and then to the live-streaming and face-detection tasks. With the OMAP4 and V4L2, "you can definitely do crazy things on your system", he said. For streaming, the MeCam uses GStreamer to produce Real Time Streaming Protocol (RTSP) data.
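
Serving RTSP from GStreamer takes little code when the companion gst-rtsp-server library is used. The sketch below (written against the current gst-rtsp-server API) substitutes a test source for the real camera pipeline, which was not described in detail:

    #include <gst/gst.h>
    #include <gst/rtsp-server/rtsp-server.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GstRTSPServer *server = gst_rtsp_server_new();
        GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();

        /* videotestsrc stands in for the camera; clients would connect
           to rtsp://host:8554/stream */
        gst_rtsp_media_factory_set_launch(factory,
            "( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )");

        GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
        gst_rtsp_mount_points_add_factory(mounts, "/stream", factory);
        g_object_unref(mounts);

        gst_rtsp_server_attach(server, NULL);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
    }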

Gentil had various demos, including the copter operating from a four-way stand (tethered at each corner) and from a two-way stand (tethered at opposite corners) to show the PID algorithm recovering from his heavy-handed inputs, as well as video streaming to attendees' laptops if their browser supported RTSP. There is still plenty of work to do, it would seem, but quite a bit is functioning already. Current battery life is around 15 minutes, but "I think we can do better", Gentil said. One can imagine plenty of applications for such a device, well beyond the rather self-absorbed examples shown in the marketing video.

[ I would like to thank the Linux Foundation for travel assistance to attend ELC. ]

Comments (10 posted)

ELC: SpaceX lessons learned

By Jake Edge
March 6, 2013

On day two of the 2013 Embedded Linux Conference, Robert Rose of SpaceX spoke about the "Lessons Learned Developing Software for Space Vehicles". In his talk, he discussed how SpaceX develops its Linux-based software for a wide variety of tasks needed to put spacecraft into orbit—and eventually beyond. Linux runs everywhere at SpaceX, he said, on everything from desktops to spacecraft.

[Robert Rose]

Rose is the lead for the avionics flight software team at SpaceX. He is a former video game programmer, and said that some lessons from that work were valuable in his current job. He got his start with Linux in 1994 with Slackware.

SpaceX as a company strongly believes in making humans into a multi-planetary species. A Mars colony is the goal, but in order to get there, you need rockets and spaceships, he said. It is currently expensive to launch space vehicles, so there is a need to "drive costs down" in order to reach the goal.

The company follows a philosophy of reusability, which helps in driving costs down, Rose said. That has already been tried to some extent with the space shuttle program, but SpaceX takes it further. Not only are hardware components reused between different spacecraft, but the software is shared as well. The company builds its rockets from the ground up at its facility, rather than contracting out various pieces. That allows for closer and more frequent hardware-software integration.

One thing that Rose found hard to get used to early on in his time at SpaceX is the company's focus on the "end goal". Whenever decisions are being made, people will ask: "is this going to work for the Mars mission?" Mars doesn't always win, but that concern is always examined, he said.

Challenges

Some of the challenges faced by the company are extreme, because the safety of people and property is involved. The spacecraft are dangerous vehicles that could cause serious damage if their fuel were to explode, for example. There is "no undo", no second chance to get things right; once the rocket launches "it's just gonna go". Another problem that he didn't encounter until he started working in the industry is the effect of radiation in space, which can "randomly flip bits"—something that the system design needs to take into account.

There are some less extreme challenges that SpaceX shares with other industries, Rose said. Dealing with proprietary hardware and a target platform that is not the same as the development platform are challenges shared with embedded Linux, for example. In addition, the SpaceX team has had to face the common problem that "no one outside of software understands software".

SpaceX started with the Falcon rocket and eventually transitioned the avionics code to the Dragon spacecraft. The obvious advantage of sharing code is that bugs fixed on one platform are automatically fixed on the other. But there are differences in the software requirements for the launch vehicles and spacecraft, largely having to do with the different reaction times available. As long as a spacecraft is not within 250 meters of the International Space Station (ISS), it can take some time to react to any problem. For a rocket, that luxury is not available; it must react in short order.

False positives are one problem that needs to be taken into account. Rose mentioned the heat shield indicator on the Mercury-Atlas 6 mission (the first US manned orbital flight), which showed that the heat shield had separated. NASA tried to figure out a way to do a re-entry with no heat shield, but "eventually just went for it". It turned out to be a false positive. Once again, the amount of time available to react is different for launch vehicles and spacecraft.

Gathering data

Quoting Fred Brooks (of The Mythical Man-Month fame), Rose said "software is invisible". To make software more visible, you need to know what it is doing, he said, which means creating "metrics on everything you can think of". With a rocket, you can't just connect via JTAG and "fire up gdb", so the software needs to keep track of what it is doing. Those metrics should cover areas like performance, network utilization, CPU load, and so on.

The metrics gathered, whether from testing or real-world use, should be stored, since it is "incredibly valuable" to be able to go back through them, he said. For his systems, telemetry data is stored with the program metrics, as is the version of all of the code running, so that everything can be reproduced if needed.

SpaceX has programs to parse the metrics data and raise an alarm when "something goes bad". It is important to automate that, Rose said, because forcing a human to do it "would suck". The same programs run on the data whether it is generated from a developer's test, from a run on the spacecraft, or from a mission. Any failures should be seen as an opportunity to add new metrics. It takes a while to "get into the rhythm" of doing so, but it is "very useful". He likes to "geek out on error reporting", using tools like libSegFault and ftrace.

Automation is important, and continuous integration is "very valuable", Rose said. He suggested building for every platform all of the time, even for "things you don't use any more". SpaceX does that and has found interesting problems when building unused code. Unit tests are run from the continuous integration system any time the code changes. "Everyone here has 100% unit test coverage", he joked; in truth, running whatever tests are available, and creating new ones, is useful. When he worked on video games, there was a test that would "warp" the character to random locations in a level and have it look in all four directions, which regularly found problems.

"Automate process processes", he said. Things like coding standards, static analysis, spaces vs. tabs, or detecting the use of Emacs should be done automatically. SpaceX has a complicated process where changes cannot be made without tickets, code review, signoffs, and so forth, but all of that is checked automatically. If static analysis is part of the workflow, make it such that the code will not build unless it passes that analysis step.

When the build fails, it should "fail loudly", with a "monitor that starts flashing red" and email sent to everyone on the team. When that happens, you should "respond immediately" to fix the problem. In his team, a full-size Justin Bieber cutout gets placed facing the team member who broke the build. They found that "100% of software engineers don't like Justin Bieber", and will work quickly to fix the build problem.

Project management

In his transition to becoming a manager, Rose has had to learn to worry about different things than he did before. He pointed to the "Make the Invisible More Visible" essay from the 97 Things Every Programmer Should Know project as a source of inspiration. The integration state of hardware is obvious because you can look at it and see; that is not true for software. There is "no progress bar for software development". That has led his team to experiment with different methods of project planning.

Various "off the shelf" project management methodologies and ways to estimate how long projects will take do not work for his team. It is important to set something up that works for your people and set of tasks, Rose said. They have tried various techniques for estimating time requirements, from wideband delphi to evidence-based scheduling and found that no technique by itself works well for the group. Since they are software engineers, "we wrote our own tool", he said with a chuckle, that is a hybrid of several different techniques. There is "no silver bullet" for scheduling, and it is "unlikely you could pick up our method and apply it" to your domain. One hard lesson he learned is that once you have some success using a particular scheduling method, you "need to do a sales job" to show the engineers that it worked. That will make it work even better the next time because there will be more buy-in.

Some technical details

Linux is used for everything at SpaceX. The Falcon, Dragon, and Grasshopper vehicles use it for flight control, the ground stations run Linux, as do the developers' desktops. SpaceX is "Linux, Linux, Linux", he said.

Rose went on to briefly describe the Dragon flight system, though he said he couldn't give too many details. It is a fault-tolerant system in order to satisfy NASA requirements for when it gets close to the ISS. There are rules about how many faults a craft needs to be able to tolerate and still be allowed to approach the station. It uses triply redundant computers to achieve the required level of fault tolerance. The Byzantine generals' algorithm is used to handle situations where the computers do not agree. That situation could come about because of a radiation event changing memory or register values, for example.
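
Rose did not describe the voting scheme in detail, but the simplest flavor of the idea is bitwise majority voting across the three computers' outputs, where a single flipped bit is simply outvoted:

    #include <stdint.h>

    /* Each result bit takes whichever value at least two of the three
       computers agree on, so one radiation-induced bit flip is masked. */
    static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }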

For navigation, Dragon uses positional information that it receives from the ISS, along with GPS data it calculates itself. As it approaches the station, it uses imagery of the ISS and the relative size of the station to compute the distance to the station. Because it might well be in darkness, Dragon uses thermal imaging as the station is slightly warmer than the background.
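
That size-based ranging boils down to the familiar pinhole-camera relation: an object of known physical size that spans fewer pixels is proportionally farther away. A sketch, with parameters that are illustrative rather than Dragon's actual camera constants:

    /* Estimate range from the apparent size of an object of known
       physical size; focal_px is the camera focal length in pixels. */
    static double range_m(double known_size_m, double focal_px,
                          double apparent_px)
    {
        return known_size_m * focal_px / apparent_px;
    }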

His team does not use "off-the-shelf distro kernels". Instead, they spend a lot of time evaluating kernels for their needs. One of the areas they focus on is scheduler performance. They do not have hard realtime requirements, but do care about wakeup latencies, he said. There are tests they use to quantify the performance of the scheduler under different scenarios, such as while stressing the network. Once a kernel is chosen, "we try not to change it".
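
Such wakeup-latency measurements are typically made along the lines of the well-known cyclictest tool: sleep until an absolute deadline, then record how late the wakeup actually was. A toy version of that idea (not SpaceX's actual test suite):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec next, now;
        long worst_ns = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 1000; i++) {
            /* Advance the deadline by 1ms and sleep until it. */
            next.tv_nsec += 1000000;
            if (next.tv_nsec >= 1000000000) {
                next.tv_nsec -= 1000000000;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            long late = (now.tv_sec - next.tv_sec) * 1000000000L +
                        (now.tv_nsec - next.tv_nsec);
            if (late > worst_ns)
                worst_ns = late;
        }
        printf("worst wakeup latency: %ld ns\n", worst_ns);
        return 0;
    }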

The development tools they use are "embarrassingly non-sophisticated", Rose said. They use GCC and gdb, while "everyone does their own thing" in terms of editors and development environments. Development has always targeted Linux, but it was not always the desktop used by developers, so they have also developed a lot of their own POSIX-based tools. The main reason for switching to Linux desktops was because of the development tools that "you get out of the box", such as ftrace, gdb (which can be directly attached to debug your target platform), netfilter, and iptables.

Rose provided an interesting view inside the software development for a large and complex embedded Linux environment. In addition, his talk was more open than a previous SpaceX talk we covered, which was nice to see. Many of the techniques used by the company will sound familiar to most programmers, which makes it clear that the process of creating code for spacecraft is not exactly rocket science.

[ I would like to thank the Linux Foundation for travel assistance to attend ELC. ]

Comments (5 posted)

SCALE: Advocating FOSS at the DoD

By Nathan Willis
March 7, 2013

Law, government regulations, and public policy rarely pique the interest of free software developers, which is unfortunate considering the impact these topics have on the success of free software. At SCALE 11x in Los Angeles, Mario Obejas provided an interesting look into the "dark arts" of policy-making, with an account of his trip to Washington DC to speak at a public forum on open source software in the Department of Defense (DoD). Obejas went to the forum on his own initiative, not his employer's, and his session reported on his presentation as well as those from big defense contractors, lobbying groups—and some well-known open source software vendors.

Departments, sub-departments, and agendas

Obejas works in information systems and engineering for a large defense contractor, although he declined to say which one because he wished to emphasize that he was not appearing or speaking on the company's behalf. The DoD forum in question was held in January 2012 by the Defense Acquisition Regulations System (DARS), a directorate of the Defense Procurement and Acquisition Policy (DPAP). As the name suggests, that office sets procurement policy for the DoD, which includes software and IT contracts in addition to aircraft carriers and similar high-visibility items. The forum was announced through the DPAP web site in December 2011, along with three agenda items about which the DoD solicited public input.

The first topic asked "What are the risks that open source software may include proprietary or copyrighted material incorporated into the open source software without the authorization of the actual author," thus exposing the contractor and the government to copyright infringement liability (presumably, the concern was material copyrighted by parties other than the contractor). The second topic asked whether contractors were "facing performance and warranty deficiencies to the extent that the open source software does not meet contract requirements, and the open source software license leaves the contractors without recourse." The third item was the question of whether the Defense Federal Acquisition Regulation Supplement (DFARS) should be revised to "specify clearly the rights the Government obtains when a contractor acquires open source software for the Government."

Obejas submitted a presentation proposal to the DoD forum as an individual open source enthusiast, he said, and was accepted. In January, he joined the other participants in a decidedly non-fancy auditorium chamber. Looking around the room, he said, the crowd included "lawyer, lawyer, lawyer, VP of engineering for Boeing, lawyer, lawyer, lawyer," and so on. In fact, he estimated that he was one of only three or four non-lawyers in the room, "but this is how policy gets crafted."

Presentations

The first presenter at the forum was Black Duck, a vendor of license compliance and source code management software products. On the copyright infringement topic, the Black Duck speaker commented that the reverse situation was considerably more likely to happen—open source code wandering into a proprietary product—or that open source code with conflicting licenses would be merged into a combined work. On the performance deficiency topic, Black Duck noted that contractors have the option of contracting with third-party open source companies for support if they encounter deficiencies with an open source component. The company spokesperson did not offer an answer on the DFARS agenda item, but did comment that the benefits of open source are too great to ignore, and advised that the DoD ensure its contractors are open source savvy to manage the risks involved.

Obejas spoke second. He briefly addressed the three agenda items, beginning by reminding the attendees of the DoD's own FAQ asserting that open source software is commercial and off-the-shelf, so it should be treated like other commercial off-the-shelf (COTS) products. Consequently, it follows that the risks asked about in the first two topics should be addressed in the same way as they are with proprietary software. On the third question about revising DFARS for clarity, Obejas said "I must be too much of an engineer but I don't see the downside of the DFARS being more specific."

But Obejas then introduced a major point of his own, arguing that open source adoption in DoD projects is inhibited by fear stemming from the misconception that the GPL forces public disclosure of source code. In reality, of course, the GPL only requires a vendor to provide the corresponding source code to those who receive the binary, which in this case means the customer taking delivery. The customer here would be the DoD, which is not likely to redistribute the code to others. But the misconception that the source must be made available to the public persists, and contractors avoid GPL-licensed components as a result.

Obejas cited several factors contributing to this hangup, including the fact that the GNU project does not provide black-and-white rules on what constitutes a combined work under the GPL and what does not. Instead it uses "mushy" language like "communicate at arm's length" and "partly a matter of substance and partly a matter of form." Managers react to this nebulous language with fear, and open source adoption is impeded. Better education and more specifics are the remedy, he said. The DoD FAQ and other documents already advise contractors that ignoring open source software makes them less competitive, but there is a disconnect between this policy and reality on the ground. Furthermore, the DoD has consulted with groups like the Software Freedom Law Center and has written memoranda of understanding (MOUs), but they remain unpublished.

After Obejas, a representative from the OpenGeo project spoke about the copyright infringement and performance deficiency agenda topics, also advising that open source software does not carry different risks of copyright infringement or warranty liability than proprietary software does. There was also a paper submitted to the forum by the Aerospace Industries Association (AIA), a lobbying group. The paper was at times wishy-washy and at least once dead wrong, Obejas said. On the copyright infringement topic, it listed examples of potential copyright infringement risks, but provided no statistics on their prevalence. On the performance and warranty question, it said contractors are "absolutely without recourse" under open source licenses—but conceded that the same was true of proprietary software. It incorrectly claimed that open source licenses prohibit the use of the software in "hazardous" activities, he said, and incorrectly said that the GPL was at odds with US export law. It also requested more clarity on incorporating GPL code, and cited the 2012 dispute over the Apache License's indemnification clause as an example of where licensing can be tricky to understand.

The next speaker was Craig Bristol, an intellectual-property lawyer at DoD contractor Raytheon. He mostly discussed the aforementioned Apache license "debacle," and also stated publicly that he did not believe that open source and proprietary software should be treated the same. Obejas said that he respectfully disagreed with that viewpoint, but that it did concern him that a major contractor's IP lawyer took a public stance at odds with the DoD's written declarations.

The final speaker was from Red Hat, and did not address either of the first two agenda topics. Red Hat's speaker did say it was important to remove discrimination against open source, hailing the 2009 DoD memo on open source usage and the 2011 Office of Management and Budget (OMB) memo on software acquisition. Red Hat was wary of changing the DFARS, however, saying the DoD must make sure any change addresses both open source and proprietary software and does not build on legacy misconceptions.

Discussion

The DoD forum included a question-and-answer session, during which Obejas requested that the DoD publish its memos and accounts of its conversations with the Free Software Foundation (FSF) and related groups. Several others agreed that the complexity of current software development regulations is making matters worse, and that an effort should be made to reduce regulatory complexity.

The session ended with comments from Dan Risacher, who is the DoD's official "Developer Advocate" and was the primary author of the 2009 memo. Risacher said that the government has done the legal research and determined that open source software meets the definition of commercial computer software as defined by the DFARS—even though some "non believers" at the forum disagreed with that conclusion.

He then responded to two points from the AIA paper. First, he said that when contractors encounter any warranty problems with open source software components, those are the contractor's problem to deal with:

You have the source code so you can fix it, so the idea that there's not a warranty from the copyright holder ... is kind of irrelevant and I don't think there's a need for a DFARS change or any other sort of policy change, right? If your contract says you're responsible for warranting that software, you're responsible for ... open source, for proprietary, for all those components.

He also rejected the notion that the GPL requires public disclosure, calling it "completely a misunderstanding of the license."

Risacher's comments marked the end of the DoD forum recap, but Obejas also addressed two "weird edge cases" encountered during the event. The first was the Apache indemnification clause debate. The issue stemmed from clause 9 of the Apache License, which says that the licensee may choose to offer indemnity to its clients, but if it does so it must also extend that same indemnity to the upstream project. A US Army lawyer interpreted that clause incorrectly, Obejas said, and commented publicly that the Apache License loads unlimited legal liability onto the government in the form of liability guarantees to multiple (upstream) third parties. That would run afoul of regulations. The Apache project responded that the indemnification clause is only triggered if the licensee chooses to offer indemnity. Eventually the Army came around and dropped its objection in March 2012, Obejas said, but considerable time and effort were consumed in the process.

The second edge case he described was an unnamed government contractor who built a software product based on classified modifications to a GPL-licensed work. The contractor asked the government to give it a waiver from copyright law compliance, so that it would not be required to deliver the relevant source code. The notion that a company would knowingly violate copyright law is objectionable enough, Obejas observed, but the contractor's concern was also based on the misconception that the GPL requires public source code disclosure. In reality, the US government is the contractor's customer, and would not distribute the binary (or source code) to anyone itself.

In conclusion, Obejas said he can muster little sympathy for a company that starts building a product with components it acquired through a well-known open source license, then expects the government to relieve it from its legal obligation. Contractors are responsible for the components they deliver to the government, and should pick them prudently—including complying with open source licenses. The DoD already recognizes the value of open source software, he said. He wants contractors (including the company he works for) to recognize that value as well, and to utilize it.

The future

Moving forward, Obejas noted several practical obstacles to increasing open source adoption in DoD contract work. Contractors' Information Systems offices do not always agree with the DoD's interpretation of licensing issues, he said. Contractors have different incentives than their government customers, and they may decide that complying with open source licenses takes more work than sticking with proprietary software. Inertia is another real problem; defense contractors frequently want to see precedents before they adopt a new strategy. Consequently, very few contractors want to be the first to try something new, such as using open source software components.

On that last point, Obejas said that publishing more legal case studies and clearer guidelines on license compliance—in particular on how to combine proprietary and free software components in a single product—would attract more contractors to open source. On Sunday afternoon, Obejas raised the issue in Bradley Kuhn's talk about the AGPL. Kuhn's reply was twofold. First, he said that there were already sufficient public examples of how to combine GPL components with non-free components, including compliance actions and court cases.

But more importantly, providing clear guidelines about how to combine free and non-free components without running afoul of the license runs contrary to the FSF's (or other licensor's) goals. It would amount to unilaterally "disarming," Kuhn said. A guide to how to walk the line between compliance and non-compliance would essentially be a guide to how to skirt the license when making derivative works. In addition, any such published guidelines would be fodder for precedent in future court cases—including cases between third parties. Fundamentally, the goal of the copyleft movement is to expand the scope of software freedom, he said; it is unreasonable to expect copyleft proponents to weaken their position just to provide "clarity" for people not interested in furthering the principle of copyleft.

Of course, groups like the FSF and SFLC should be expected to take a hardened position on combining free and non-free software; they exist for the purpose of promoting free software. It is the defense contractors whose decisions are motivated by other factors (such as profitability), which will shape how they select components and which subcontractors they work with. But defense contractors are in an unusual position in one big respect: they have one large client (the government), and they can be fairly certain that the client will not give away their product to the public and cost them future revenue. It is impressive how far open source has worked its way into the DoD contractor world already—whichever direction it heads in the days to come. Or, as Obejas put it, whatever else one thinks about the government, it is pretty cool to stop and notice that the DoD even has a position like the one Dan Risacher occupies: developer advocate.

Comments (9 posted)

Page editor: Jonathan Corbet

Security

Oxford blocks Google Docs as a phishing countermeasure

By Nathan Willis
March 7, 2013

Google services are nearly ubiquitous these days. Although the most oft-repeated concern is that this ubiquity compromises user privacy, recent action by Oxford University illustrates that the search giant's omnipresence brings other risks as well, including security risks. Robin Stevens, from the University's IT department, posted a blog entry about the action on February 18, explaining that IT "recently felt it necessary to take, temporarily, extreme action for the majority of University users: we blocked Google Docs." University officials enforced the block for only two and a half hours, not to combat the security threat itself, but to get the attention of its own users.

Go phish

The issue at hand is phishing attacks delivered via Google Docs web forms. Phishing itself is not a new problem, Stevens noted, but historically phishing attacks were delivered as email messages asking the recipient to reply with account information (such as the password). The accounts of users who replied would then be taken over and used as a platform from which to send out thousands of spam emails through the university's email servers. As a large, established university, Oxford has servers that are implicitly trusted by many other email providers and ISPs, which raises the chance of the outgoing spam flood sneaking past filters. This type of email-based phishing attack would generally masquerade as an urgent request from some on-campus office (such as IT itself), warning the user of a policy violation, a full mailbox, or some other issue requiring rapid attention.

These days, however, direct-reply phishing is on the decline, and the more common approach is to trick users into visiting a legitimate-looking web form. Like the phishing email, this form masquerades as official communication, perhaps asking the user to log in (with his or her real password) to take care of some urgent account problem. The trouble is that Google Docs offers a free web form creation service—and it delivers it over SSL, thus making it harder for the university's anti-malware defenses to detect. Stevens reported that recent weeks had seen a "marked increase" in such phishing activity, and that although the majority of the university's users spotted the scams, a small proportion did not.

Now, we may be home to some of the brightest minds in the nation. Unfortunately, their expertise in their chosen academic field does not necessarily make them an expert in dealing with such mundane matters as emails purporting to be from their IT department. Some users simply see that there's some problem, some action is required, carry it out, and go back to considering important matters such as the mass of the Higgs Boson, or the importance of the March Hare to the Aztecs.

With even a small fraction of the tens of thousands of university email users falling for the phishing forms, a sizable number of accounts were compromised—and, presumably, could be used to mount spam floods at any time. That put the university at additional risk, Stevens said, because in the past there have been incidents where major email providers began rejecting Oxford email due to large-scale spam. The recent surge in Google Docs form-phishing attacks happened over a short period of time, but thanks to the potential for a site-wide rejection by other ISPs, it risked causing a major disruption to email service for university users.

Response

The straightforward response to phishing attacks delivered via Google Docs would seem to be reporting the incident to Google, but Stevens said that this approach proved futile. IT could report each phishing web form to Google's security team, but:

Unfortunately, you then need to wait for them to take action. Of late that seems typically to take a day or two; in the past it’s been much longer, sometimes on a scale of weeks. Most users are likely to visit the phishing form when they first see the email. After all it generally requires “urgent” action to avoid their account being shut down. So the responses will be within a few hours of the mails being sent, or perhaps the next working day. If the form is still up, they lose. As do you – within the next few days, you’re likely to find another spam run being dispatched from your email system.

Instead, the university decided to pull the plug on Google Docs from the university network, in the hopes that the outage would awaken users to the risk. "A temporary block would get users' attention and, we hoped, serve to moderate the 'chain reaction'."

Evidently the block did get users' attention—but IT failed to take into account how tightly Google Docs has become integrated with other Google services in recent years. The disruption to legitimate users was "greater than anticipated," causing Stevens's office to issue an apology and a detailed explanation of the problem.

On the other hand, Stevens did report that the temporary block accomplished its goal of short-circuiting the phishing attack. In the future, he said, the university would both search for a less disruptive way to counter Google Docs phishing attacks and pressure Google to be "far more responsive, if not proactive, regarding abuse of their services for criminal activities." Google's slow reaction to reports of criminal activity has severe consequences for the university, he said.

We have to ask why Google, with the far greater resources available to them, cannot respond better. [...] Google may not themselves be being evil, but their inaction is making it easier for others to conduct evil activities using Google-provided services.

The 800 pound gorilla

So far, Google has not issued a public response to the Oxford incident. But one does not need to be a major university to find lessons in the story. First, the existence of web forms in Google Docs provides a nearly worldwide-accessible platform for mounting phishing attacks. Google's ubiquity has turned it into a de facto "generic service" whose involvement many users may not even notice. In fact, Google Docs is widespread enough that many universities do use it to send out general polls, surveys, and other form-based questionnaires. Yes, the IT department is far less likely to employ a Google Docs form than is (for example) Human Resources, but that is the sort of detail that is all too easily missed by some small proportion of users on any particular email.

Second, Google's multi-day turnaround time for taking action against reported criminal activity is a problem in its own right. But while accurate reports of such criminal activity need to be acted on as soon as possible, the reality is that swift action raises the risk of false positives, too. Here again, Google services are so widespread now that it would be a challenge to police them all in real time. If, as Stevens suggested, Google were to automate any part of the form shutdown process, one nasty side effect would be that the automated process might turn into a vehicle for widespread denial of service instead.

Third, some will say that the sort of large-scale phishing attack seen at Oxford demonstrates that passwords alone are no longer sufficient for account security. But the university's tens of thousands of users present a daunting set of accounts to manage; supplying that many users with cryptographic security tokens, or supporting them all in a multi-factor authentication scheme, would cost substantially more than it would for most businesses, especially when one considers that the student population turns over regularly.

Of course, Oxford's experience is only one data point. In the Hacker News discussion of the event, commenter Jose Nazario pointed to a 2011 IEEE paper he co-authored (and provided a PDF link for those without IEEE library access) that examined the prevalence of form-based phishing attacks. Google Docs was the second-most popular host for form phishing attacks, and phishing forms based there lasted, on average, more than six days. The most widely used service for form-based phishing attacks was addaform.com, and there were several others with numbers somewhat close to those of Google Docs.

The prospect of intercepting all form-based phishing is a daunting one, to be sure. But regardless of the precise rankings, eliminating the threat from Google Docs is likely to be far more difficult since, like Big Brother, Google services are everywhere.

Comments (31 posted)

Brief items

Security quotes of the week

A knife is allowed if:
  • The blade is no longer than 2.36 inches or 6 centimeters in length
  • The blade width is no more than ½ inch at its widest point
  • ...
-- US Transportation Security Administration [PDF] on new rules governing knives on planes, using nice round numbers in two different measurement systems

Excommunication is like being fired, only it lasts for eternity.
-- Bruce Schneier

When conducting national security investigations, the U.S. Federal Bureau of Investigation can issue a National Security Letter (NSL) to obtain identifying information about a subscriber from telephone and Internet companies. The FBI has the authority to prohibit companies from talking about these requests. But we’ve been trying to find a way to provide more information about the NSLs we get—particularly as people have voiced concerns about the increase in their use since 9/11.

Starting today, we’re now including data about NSLs in our Transparency Report. We’re thankful to U.S. government officials for working with us to provide greater insight into the use of NSLs. Visit our page on user data requests in the U.S. and you’ll see, in broad strokes, how many NSLs for user data Google receives, as well as the number of accounts in question. In addition, you can now find answers to some common questions we get asked about NSLs on our Transparency Report FAQ.

-- Google shines a little light onto US government secrecy

This also goes for security people. If we had any sense we'd go live in the woods in a cabin and drink moonshine and go hunting. I'm still assigning CVE's for /tmp file vulns. That's just inexcusably stupid.
-- Kurt Seifried

Comments (4 posted)

New vulnerabilities

apache2: privilege escalation

Package(s): apache2    CVE #(s): CVE-2013-1048
Created: March 5, 2013    Updated: March 6, 2013
Description: From the Debian advisory:

Hayawardh Vijayakumar noticed that the apache2ctl script created the lock directory in an unsafe manner, allowing a local attacker to gain elevated privileges via a symlink attack. This is a Debian specific issue.

Alerts:
Debian DSA-2637-1 2013-03-04
Ubuntu USN-1765-1 2013-03-18

Comments (none posted)

cfingerd: code execution

Package(s): cfingerd    CVE #(s): CVE-2013-1049
Created: March 1, 2013    Updated: March 6, 2013
Description:

From the Debian advisory:

Malcolm Scott discovered a remote-exploitable buffer overflow in the rfc1413 (ident) client of cfingerd, a configurable finger daemon. This vulnerability was introduced in a previously applied patch to the cfingerd package in 1.4.3-3.

Alerts:
Debian DSA-2635-1 2013-03-01

Comments (none posted)

drupal7: denial of service

Package(s): drupal7    CVE #(s):
Created: March 6, 2013    Updated: March 6, 2013
Description: Drupal 7.20 resolves SA-CORE-2013-002, a denial of service vulnerability.
Alerts:
Fedora FEDORA-2013-2862 2013-03-05
Fedora FEDORA-2013-2872 2013-03-05

Comments (none posted)

dtach: information disclosure

Package(s): dtach    CVE #(s): CVE-2012-3368
Created: March 5, 2013    Updated: March 6, 2013
Description: From the Red Hat bugzilla:

A portion of memory (random stack data) disclosure flaw was found in the way dtach, a simple program emulating the detach feature of screen, performed client connection termination under certain circumstances. A remote attacker could use this flaw to potentially obtain sensitive information by issuing a specially-crafted dtach client connection close request.

Alerts:
Fedora FEDORA-2013-2923 2013-03-04

Comments (none posted)

ekiga: denial of service

Package(s): ekiga    CVE #(s): CVE-2012-5621
Created: March 4, 2013    Updated: March 6, 2013
Description: From the Red Hat bugzilla:

A denial of service flaw was found in the way Ekiga, a Gnome based SIP/H323 teleconferencing application, processed information from certain OPAL connections (UTF-8 strings were not verified for validity prior showing them). A remote attacker (other party with a not UTF-8 valid name) could use this flaw to cause ekiga executable crash.

Alerts:
Fedora FEDORA-2013-2998 2013-03-03
Fedora FEDORA-2013-2890 2013-03-03

Comments (none posted)

git: information disclosure

Package(s): git    CVE #(s): CVE-2013-0308
Created: March 4, 2013    Updated: March 18, 2013
Description: From the Red Hat advisory:

It was discovered that Git's git-imap-send command, a tool to send a collection of patches from standard input (stdin) to an IMAP folder, did not properly perform SSL X.509 v3 certificate validation on the IMAP server's certificate, as it did not ensure that the server's hostname matched the one provided in the CN field of the server's certificate. A rogue server could use this flaw to conduct man-in-the-middle attacks, possibly leading to the disclosure of sensitive information.

Alerts:
openSUSE openSUSE-SU-2013:0380-1 2013-03-01
openSUSE openSUSE-SU-2013:0382-1 2013-03-01
Fedora FEDORA-2013-2829 2013-03-02
Fedora FEDORA-2013-2763 2013-03-02
Red Hat RHSA-2013:0589-01 2013-03-04
Scientific Linux SL-git-20130304 2013-03-04
Oracle ELSA-2013-0589 2013-03-04
CentOS CESA-2013:0589 2013-03-09
Mageia MGASA-2013-0091 2013-03-16

Comments (none posted)

isync: information disclosure

Package(s): isync    CVE #(s): CVE-2013-0289
Created: March 4, 2013    Updated: March 6, 2013
Description: From the Red Hat bugzilla:

A security flaw was found in the way isync, a command line application to synchronize IMAP4 and Maildir mailboxes, (previously) performed server's SSL x509.v3 certificate validation, when performing IMAP protocol based synchronization (server's hostname was previously not compared for match the CN field of the certificate). A rogue server could use this flaw to conduct man-in-the-middle (MiTM) attacks, possibly leading to disclosure of sensitive information.

Alerts:
Fedora FEDORA-2013-2795 2013-03-03
Fedora FEDORA-2013-2758 2013-03-03

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel    CVE #(s): CVE-2013-0216 CVE-2013-0217
Created: March 1, 2013    Updated: March 22, 2013
Description:

From the Xen advisory:

The Xen netback implementation contains a couple of flaws which can allow a guest to cause a DoS in the backend domain, potentially affecting other domains in the system.

CVE-2013-0216 is a failure to sanity check the ring producer/consumer pointers which can allow a guest to cause netback to loop for an extended period preventing other work from occurring.

CVE-2013-0217 is a memory leak on an error path which is guest triggerable.

Alerts:
Oracle ELSA-2013-2507 2013-02-28
openSUSE openSUSE-SU-2013:0395-1 2013-03-05
openSUSE openSUSE-SU-2013:0396-1 2013-03-05
Ubuntu USN-1756-1 2013-03-06
Ubuntu USN-1760-1 2013-03-12
Ubuntu USN-1767-1 2013-03-18
Ubuntu USN-1769-1 2013-03-18
Ubuntu USN-1768-1 2013-03-18
Ubuntu USN-1774-1 2013-03-21
Fedora FEDORA-2013-3909 2013-03-22

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel    CVE #(s): CVE-2013-1767 CVE-2013-1774
Created: March 4, 2013    Updated: March 22, 2013
Description: From the Mageia advisory:

Linux kernel is prone to a local privilege-escalation vulnerability due to a tmpfs use-after-free error. Local attackers can exploit the issue to execute arbitrary code with kernel privileges or to crash the kernel, effectively denying service to legitimate users (CVE-2013-1767).

Linux kernel built with Edgeport USB serial converter driver io_ti, is vulnerable to a NULL pointer dereference flaw. It happens if the device is disconnected while corresponding /dev/ttyUSB? file is in use. An unprivileged user could use this flaw to crash the system, resulting DoS (CVE-2013-1774).

Alerts:
Mageia MGASA-2013-0079 2013-03-02
Mageia MGASA-2013-0080 2013-03-02
Mageia MGASA-2013-0081 2013-03-02
Mageia MGASA-2013-0082 2013-03-02
Mageia MGASA-2013-0083 2013-03-02
Fedora FEDORA-2013-3223 2013-03-02
Ubuntu USN-1767-1 2013-03-18
Fedora FEDORA-2013-3909 2013-03-22
Ubuntu USN-1781-1 2013-03-26
Ubuntu USN-1787-1 2013-04-02

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel    CVE #(s): CVE-2012-5374 CVE-2013-0160
Created: March 5, 2013    Updated: March 6, 2013
Description: From the CVE entries:

The CRC32C feature in the Btrfs implementation in the Linux kernel before 3.8-rc1 allows local users to cause a denial of service (extended runtime of kernel code) by creating many different files whose names are associated with the same CRC32C hash value. (CVE-2012-5374)

The Linux kernel through 3.7.9 allows local users to obtain sensitive information about keystroke timing by using the inotify API on the /dev/ptmx device. (CVE-2013-0160)

Alerts:
openSUSE openSUSE-SU-2013:0395-1 2013-03-05
openSUSE openSUSE-SU-2013:0396-1 2013-03-05

Comments (none posted)

kernel: privilege escalation/information leak

Package(s): kernel linux    CVE #(s): CVE-2013-0349 CVE-2013-1773
Created: March 6, 2013    Updated: March 6, 2013
Description: From the Ubuntu advisory:

An information leak was discovered in the Linux kernel's Bluetooth stack when HIDP (Human Interface Device Protocol) support is enabled. A local unprivileged user could exploit this flaw to cause an information leak from the kernel. (CVE-2013-0349)

A flaw was discovered on the Linux kernel's VFAT filesystem driver when a disk is mounted with the utf8 option (this is the default on Ubuntu). On a system where disks/images can be auto-mounted or a FAT filesystem is mounted an unprivileged user can exploit the flaw to gain root privileges. (CVE-2013-1773)

Alerts:
Ubuntu USN-1756-1 2013-03-06
Red Hat RHSA-2013:0566-01 2013-03-06
Ubuntu USN-1760-1 2013-03-12
Ubuntu USN-1767-1 2013-03-18
Ubuntu USN-1769-1 2013-03-18
Ubuntu USN-1768-1 2013-03-18
Ubuntu USN-1775-1 2013-03-22
Ubuntu USN-1776-1 2013-03-22
Ubuntu USN-1778-1 2013-03-22
Ubuntu USN-1781-1 2013-03-26

Comments (none posted)

libxml2: denial of service

Package(s): libxml2    CVE #(s): CVE-2013-0338
Created: March 1, 2013    Updated: March 28, 2013
Description:

From the Red hat advisory:

A denial of service flaw was found in the way libxml2 performed string substitutions when entity values for entity references replacement was enabled. A remote attacker could provide a specially-crafted XML file that, when processed by an application linked against libxml2, would lead to excessive CPU consumption.

Alerts:
Red Hat RHSA-2013:0581-01 2013-02-28
Oracle ELSA-2013-0581 2013-02-28
CentOS CESA-2013:0581 2013-03-01
Scientific Linux SL-libx-20130228 2013-02-28
Oracle ELSA-2013-0581 2013-03-01
Mageia MGASA-2013-0085 2013-03-03
Mandriva MDVSA-2013:017 2013-03-05
CentOS CESA-2013:0581 2013-03-09
Debian DSA-2652-1 2013-03-26
openSUSE openSUSE-SU-2013:0552-1 2013-03-27
openSUSE openSUSE-SU-2013:0555-1 2013-03-27
Ubuntu USN-1782-1 2013-03-28

Comments (none posted)

nginx: world accessible directories

Package(s): nginx    CVE #(s): CVE-2013-0337
Created: March 5, 2013    Updated: March 6, 2013
Description: From the Red Hat bugzilla:

Agostino Sarubbo reported on the oss-security mailing list that, on Gentoo, /var/log/nginx is world-accessible and the log files inside the directory are world-readable. This could allow an unprivileged user to read the log files.

Alerts:
Fedora FEDORA-2013-2974 2013-03-04
Fedora FEDORA-2013-2955 2013-03-04

Comments (none posted)

openafs: multiple vulnerabilities

Package(s): openafs    CVE #(s): CVE-2013-1794 CVE-2013-1795
Created: March 5, 2013    Updated: March 6, 2013
Description: From the Scientific Linux advisory:

By carefully crafting an ACL entry an attacker may overflow fixed length buffers within the OpenAFS fileserver, crashing the fileserver, and potentially permitting the execution of arbitrary code. To perform the exploit, the attacker must already have permissions to create ACLs on the fileserver in question. Once such an ACL is present on a fileserver, client utilities such as 'fs' which manipulate ACLs, may be crashed when they attempt to read or modify the ACL. (CVE-2013-1794)

The ptserver accepts a list of unbounded size from the IdToName RPC. The length of this list is then used to determine the size of a number of other internal data structures. If the length is sufficiently large then we may hit an integer overflow when calculating the size to pass to malloc, and allocate data structures of insufficient length, allowing heap memory to be overwritten. This may allow an unauthenticated attacker to crash an OpenAFS ptserver. (CVE-2013-1795)

Alerts:
Scientific Linux SL-open-20130304 2013-03-04
Debian DSA-2638-1 2013-03-04

Comments (none posted)

openjdk-6: code execution

Package(s): openjdk-6    CVE #(s): CVE-2013-0809 CVE-2013-1493
Created: March 6, 2013    Updated: March 20, 2013
Description: From the CVE entries:

Unspecified vulnerability in the 2D component in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 15 and earlier, 6 Update 41 and earlier, and 5.0 Update 40 and earlier allows remote attackers to execute arbitrary code via unknown vectors, a different vulnerability than CVE-2013-1493. (CVE-2013-0809)

The color management (CMM) functionality in the 2D component in Oracle Java SE 7 Update 15 and earlier, 6 Update 41 and earlier, and 5.0 Update 40 and earlier allows remote attackers to execute arbitrary code or cause a denial of service (crash) via an image with crafted raster parameters, which triggers (1) an out-of-bounds read or (2) memory corruption in the JVM, as exploited in the wild in February 2013. (CVE-2013-1493)

Alerts:
Ubuntu USN-1755-1 2013-03-05
Red Hat RHSA-2013:0600-01 2013-03-06
Red Hat RHSA-2013:0601-01 2013-03-06
Red Hat RHSA-2013:0603-01 2013-03-06
Red Hat RHSA-2013:0602-01 2013-03-06
Red Hat RHSA-2013:0604-01 2013-03-06
Red Hat RHSA-2013:0605-01 2013-03-06
CentOS CESA-2013:0604 2013-03-06
CentOS CESA-2013:0603 2013-03-06
Fedora FEDORA-2013-3467 2013-03-06
Oracle ELSA-2013-0603 2013-03-07
Oracle ELSA-2013-0602 2013-03-06
Oracle ELSA-2013-0604 2013-03-07
Oracle ELSA-2013-0605 2013-03-06
Scientific Linux SL-java-20130307 2013-03-07
Ubuntu USN-1755-2 2013-03-07
Mandriva MDVSA-2013:021 2013-03-08
CentOS CESA-2013:0605 2013-03-09
CentOS CESA-2013:0602 2013-03-09
Mageia MGASA-2013-0088 2013-03-09
Mageia MGASA-2013-0089 2013-03-09
Red Hat RHSA-2013:0624-01 2013-03-11
Red Hat RHSA-2013:0625-01 2013-03-11
Red Hat RHSA-2013:0626-01 2013-03-11
openSUSE openSUSE-SU-2013:0430-1 2013-03-12
openSUSE openSUSE-SU-2013:0438-1 2013-03-12
SUSE SUSE-SU-2013:0434-1 2013-03-12
Fedora FEDORA-2013-3468 2013-03-14
openSUSE openSUSE-SU-2013:0509-1 2013-03-20

Comments (none posted)

openstack-packstack: multiple vulnerabilities

Package(s): openstack-packstack    CVE #(s): CVE-2013-0261 CVE-2013-0266
Created: March 6, 2013    Updated: March 6, 2013
Description: From the Red Hat advisory:

A flaw was found in PackStack. During manifest creation, the manifest file was written to /tmp/ with a predictable file name. A local attacker could use this flaw to perform a symbolic link attack, overwriting an arbitrary file accessible to the user running PackStack with the contents of the manifest, which could lead to a denial of service. Additionally, the attacker could read and potentially modify the manifest being generated, allowing them to modify systems being deployed using OpenStack. (CVE-2013-0261)

It was discovered that the cinder.conf and all api-paste.ini configuration files were created with world-readable permissions. A local attacker could use this flaw to view administrative passwords, allowing them to control systems deployed and managed by OpenStack. (CVE-2013-0266)

Alerts:
Red Hat RHSA-2013:0595-01 2013-03-05

Comments (none posted)

PackageKit: installs old package versions

Package(s): PackageKit    CVE #(s):
Created: March 4, 2013    Updated: March 6, 2013
Description: From the openSUSE advisory:

PackageKit was fixed to add a patch to forbid update to downgrade (bnc#804983)

As the update operation is allowed for logged in regular users, they could install old package versions which might have been still affected by already fixed security problems.

Alerts:
openSUSE openSUSE-SU-2013:0381-1 2013-03-01

Comments (none posted)

php: two vulnerabilities

Package(s): php    CVE #(s): CVE-2013-1635 CVE-2013-1643
Created: February 28, 2013    Updated: April 3, 2013
Description:

From the Mandriva advisory:

PHP does not validate the configuration directive soap.wsdl_cache_dir before writing SOAP wsdl cache files to the filesystem. Thus an attacker is able to write remote wsdl files to arbitrary locations (CVE-2013-1635).

PHP allows the use of external entities while parsing SOAP wsdl files, which allows an attacker to read arbitrary files. If a web application unserializes user-supplied data and tries to execute any method of it, an attacker can send a serialized SoapClient object initialized in non-wsdl mode, which will make PHP automatically parse the remote XML document specified in the location option parameter (CVE-2013-1643).

Alerts:
Mandriva MDVSA-2013:016 2013-02-28
Debian DSA-2639-1 2013-03-05
Ubuntu USN-1761-1 2013-03-13
Slackware SSA:2013-081-01 2013-03-23
Mageia MGASA-2013-0101 2013-04-02
Fedora FEDORA-2013-3891 2013-04-03
Fedora FEDORA-2013-3927 2013-04-03

Comments (none posted)

ruby: denial of service

Package(s):ruby CVE #(s):
Created:March 6, 2013 Updated:March 6, 2013
Description: From the Ruby advisory:

Unrestricted entity expansion can lead to a DoS vulnerability in REXML. (The CVE identifier will be assigned later.) We strongly recommend upgrading Ruby.

When reading text nodes from an XML document, the REXML parser can be coerced into allocating extremely large string objects which can consume all of the memory on a machine, causing a denial of service.

Alerts:
Fedora FEDORA-2013-3037 2013-03-05
Fedora FEDORA-2013-3038 2013-03-05

Comments (none posted)

rubygem-devise: unauthorized account access

Package(s):rubygem-devise CVE #(s):CVE-2013-0233
Created:March 4, 2013 Updated:March 6, 2013
Description: From the Novell bugzilla:

Using a specially crafted request, an attacker could trick the database type conversion code to return incorrect records. For some token values this could allow an attacker to bypass the proper checks and gain control of other accounts.

Alerts:
openSUSE openSUSE-SU-2013:0374-1 2013-03-01

Comments (none posted)

rubygem-ruby_parser: insecure file creation

Package(s):openshift CVE #(s):CVE-2013-0162
Created:March 1, 2013 Updated:March 6, 2013
Description:

From the Red Hat advisory:

It was found that ruby_parser from rubygem-ruby_parser created a temporary file in an insecure way. A local attacker could use this flaw to perform a symbolic link attack, overwriting arbitrary files accessible to the application using ruby_parser.

Alerts:
Red Hat RHSA-2013:0582-01 2013-02-28

Comments (none posted)

sudo: privilege escalation

Package(s):sudo CVE #(s):CVE-2013-1775
Created:February 28, 2013 Updated:March 20, 2013
Description:

From the Ubuntu advisory:

Marco Schoepl discovered that Sudo incorrectly handled time stamp files when the system clock is set to the epoch. A local attacker could use this issue to run Sudo commands without a password prompt.

Alerts:
Ubuntu USN-1754-1 2013-02-28
Mageia MGASA-2013-0078 2013-03-01
Slackware SSA:2013-065-01 2013-03-06
Debian DSA-2642-1 2013-03-09
Mandriva MDVSA-2013:026 2013-03-18
Fedora FEDORA-2013-3297 2013-03-16
Fedora FEDORA-2013-3270 2013-03-19
openSUSE openSUSE-SU-2013:0495-1 2013-03-20
openSUSE openSUSE-SU-2013:0503-1 2013-03-20

Comments (none posted)

sudo: privilege escalation

Package(s):sudo CVE #(s):CVE-2013-1776
Created:March 4, 2013 Updated:March 20, 2013
Description: From the Mageia advisory:

Sudo before 1.8.6p7 allows a malicious user to run commands via sudo without authenticating, so long as there exists a terminal the user has access to where a sudo command was successfully run by that same user within the password timeout period (usually five minutes).

Alerts:
Mageia MGASA-2013-0078 2013-03-01
Slackware SSA:2013-065-01 2013-03-06
Debian DSA-2642-1 2013-03-09
Mandriva MDVSA-2013:026 2013-03-18
Fedora FEDORA-2013-3297 2013-03-16
Fedora FEDORA-2013-3270 2013-03-19
openSUSE openSUSE-SU-2013:0495-1 2013-03-20
openSUSE openSUSE-SU-2013:0503-1 2013-03-20

Comments (none posted)

yum: denial of service

Package(s):yum CVE #(s):
Created:March 4, 2013 Updated:March 18, 2013
Description: From the Fedora advisory:

Fix a DoS attack (maybe more) by a bad Fedora mirror via repository metadata.

Alerts:
Fedora FEDORA-2013-2799 2013-03-02
Fedora FEDORA-2013-2789 2013-03-18

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.9-rc1, released on March 3. Linus said: "I don't know if it's just me, but this merge window had more 'Uhhuh' moments than I'm used to. I stopped merging a couple of times, because we had bugs that looked really scary, but thankfully each time people were on them like paparazzi on Justin Bieber." See the article below for a summary of the final changes merged during the 3.9 merge window.

Stable updates: 3.8.1, 3.4.34, and 3.0.67 were released on February 28; 3.8.2, 3.4.35, and 3.0.68 followed on March 4. The 3.2.40 update was released on March 6. All of them contain the usual mix of important fixes. Also released on March 4 was 3.5.7.7.

Comments (none posted)

Kernel development news

The conclusion of the 3.9 merge window

By Jonathan Corbet
March 5, 2013
By the time that Linus released the 3.9-rc1 kernel prepatch and closed the merge window for this cycle, he had pulled a total of 10,265 non-merge changesets into the mainline repository. That is just over 2,000 changes since last week's summary. The most significant user-visible changes merged at the end of the merge window include:

  • The block I/O controller now has full hierarchical control group support.

  • The NFS code has gained network namespace support, allowing the operation of per-container NFS servers.

  • The Intel PowerClamp driver has been merged; PowerClamp allows the regulation of a CPU's power consumption through the injection of forced idle states.

  • The device mapper has gained support for a new "dm-cache" target that is able to use a fast drive (like a solid-state device) as a cache in front of slower storage devices. See Documentation/device-mapper/cache.txt for details.

  • RAID 5 and 6 support for the Btrfs filesystem has been merged at last.

  • Btrfs defragmentation code has gained snapshot awareness, meaning that sharing of data between snapshots will no longer be lost when defragmentation runs.

  • Architecture support for the Synopsys ARC and ImgTec Meta architectures has been added.

  • New hardware support includes:

    • Systems and processors: Marvell Armada XP development boards, Ralink MIPS-based system-on-chip processors, Atheros AP136 reference boards, and Google Pixel laptops.

    • Block: IBM RamSan PCIe Flash SSD devices and Broadcom BCM2835 SD/MMC controllers.

    • Display: TI LP8788 backlight controllers.

    • Miscellaneous: Kirkwood 88F6282 and 88F6283 thermal sensors, Marvell Dove thermal sensors, and Nokia "Retu" watchdog devices.

Changes visible to kernel developers include:

  • The menuconfig configuration tool now has proper "save" and "load" buttons.

  • The rework of the IDR API has been merged, simplifying code that uses IDR to generate unique integer identifiers. Users throughout the kernel tree have been updated to the new API.

  • The hlist_for_each_entry() iterator has lost the unused "pos" parameter.

At this point, the stabilization period for the 3.9 kernel has begun. If the usual pattern holds, the final 3.9 release can be expected sometime around the beginning of May.

Comments (42 posted)

LC-Asia: A big LITTLE MP update

By Jonathan Corbet
March 6, 2013
The ARM "big.LITTLE" architecture pairs two types of CPU — fast, power-hungry processors and slow, efficient processors — into a single package. The result is a system that can efficiently run a wide variety of workloads, but there is one little problem: the Linux kernel currently lacks a scheduler that is able to properly spread a workload across multiple types of processors. Two approaches to a solution to that problem are being pursued; a session at the 2013 Linaro Connect Asia event reviewed the current status of the more ambitious of the two.

LWN recently looked at the big.LITTLE switcher, which pairs fast and slow processors and uses the CPU frequency subsystem to switch between them. The switcher approach has the advantage of being relatively straightforward to get working, but it also has a disadvantage: only half of the CPUs in the system can be doing useful work at any given time. The switcher code has also not yet been posted for review or merging into the mainline, though that posting is said to be planned for the near future, after products using the code begin to ship.

The alternative approach has gone by the name "big LITTLE MP". Rather than play CPU frequency governor games, big LITTLE MP aims to solve the problem directly by teaching the scheduler about the differences between processor types and how to distribute tasks between them. The big.LITTLE switcher patch touches almost no files outside of the ARM architecture subtree; the big LITTLE MP patch set, instead, is focused almost entirely on the core scheduler code. At Linaro Connect Asia, developers Vincent Guittot and Morten Rasmussen described the current state of the patch set and the plans for getting it merged in the (hopefully) not-too-distant future.

[Morten and Vincent]

The big LITTLE MP patch set has recently seen a major refactoring effort. The first version was strongly focused on the heterogeneous multiprocessing (HMP) problem but, among other things, it is hard to get developers for the rest of the kernel interested in HMP. So the new patch set aims to improve scheduling results on all systems, even traditional SMP systems where all CPUs are the same. There is a patch set that is in internal review and available on the Linaro git server. Some parts have been publicly posted recently; soon the rest should be more widely circulated as well.

The new patches are working well; for almost all workloads, their performance is similar to that achieved with the old patch set. The patches were developed with a view toward simplicity: they affect a critical kernel path, so they must be both simple and fast. Some of the patches, fixes for the existing scheduler, have already been posted to the mailing lists. The rest try to augment the kernel's scheduler with three simple rules:

  • Small tasks (those that only use small amounts of CPU time for brief periods) are not worth the trouble to schedule in any sophisticated way. Instead, they should just be packed onto a single, slow core whenever they wake up, and kept there if at all possible.

  • Load balancing should be concerned with the disposition of long-running tasks only; it should simply pass over the small tasks.

  • Long-running tasks are best placed on the faster cores.

Implementing these policies requires a set of a half-dozen patches. One of them is the "small-task packing" patch that was covered here in October, 2012. Another works to expand the use of per-entity load tracking (which is currently only enabled when control groups and the CPU controller are being used) so that the per-task load values are always available to the scheduler. A further patch ensures that the "LB_MIN" scheduler feature is turned on; LB_MIN (which defaults to "off" in mainline kernels) causes the load balancer to pass over small tasks when working to redistribute the computing load on the system, essentially implementing the second policy objective above.
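
As an aside, on kernels built with CONFIG_SCHED_DEBUG, scheduler features like LB_MIN can be toggled at run time through debugfs; a quick illustration, assuming debugfs is mounted in the usual location:

    # Enable the LB_MIN feature; writing "NO_LB_MIN" would turn it off again
    echo LB_MIN > /sys/kernel/debug/sched_features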

After that, the patch set augments the scheduler with the concept of the "capacity" of each CPU; the unloaded capacity is essentially the clock speed of the processor. The load balancer is tweaked to migrate processes to the CPU with the largest available capacity. This task is complicated by the fact that a CPU's capacity may not be a constant value; realtime scheduling, in particular, can "steal" capacity away from a CPU to give to realtime-priority tasks. Scheduler domains also need to be tuned for the big.LITTLE environment with an eye toward reducing the periodic load balancing work that needs to be done.

The final piece is not yet complete; it is called "scheduling invariance." Currently, the "load" put on the system by a process is a function of the amount of time that process spends running on the CPU. But if some CPUs are faster than others, the same process could end up with radically different load values depending on which CPU it is actually running on. That is suboptimal; the actual amount of work the process needs to do is the same in either case, and varying load values can cause the scheduler to make poor decisions. For now, the problem is likely to be solved by scaling the scheduler's load calculations by a constant value associated with each processor. Processes running on a CPU that is ten times faster than another will accumulate load ten times more quickly.

Even then, the load calculations are not perfect for the HMP scheduling problem because they are scaled by the process's priority. A high-priority task that runs briefly can look like it is generating as much load as a low-priority task that runs for long periods, but the scheduler may want to place those processes in different ways. The best solution to this problem is not yet clear.

A question from the audience had to do with testing: how were the developers testing their scheduling decisions? In particular, was the Linsched testing framework being used? The answer is that no, Linsched is not being used. It has not seen much development work since it was posted for the 3.3 kernel, so it does not work with current kernels. Perhaps more importantly, its task representation is relatively simple; it is hard to present it with something resembling a real-world Android workload. It is easier, in the end, to simply monitor a real kernel with an actual Android workload and see how well it performs.

The plan seems to be to post a new set of big LITTLE MP patches in the near future with an eye toward getting them upstream. The developers are a little concerned about that; getting reviewer attention for these patches has proved to be difficult thus far. Perhaps persistence and a more general focus will help them to get over that obstruction, clearing the way for proper scheduling on heterogeneous multiprocessor systems in the not-too-distant future.

[Your editor would like to thank Linaro for travel assistance to attend this event.]

Comments (11 posted)

Simplifying RCU

March 6, 2013

This article was contributed by Paul McKenney

Read-copy update (RCU) is a synchronization mechanism in the Linux kernel that allows extremely efficient and scalable handling of read-mostly data. Although RCU is quite effective where it applies, there have been some concerns about its complexity. One way to simplify something is to eliminate part of it, which is what is being proposed for RCU.
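
For those who have not encountered the API, a minimal sketch of RCU usage may be helpful; the names here are illustrative, and the update side is assumed to be serialized by a lock held by the caller:

    struct foo {
            int value;
    };
    static struct foo __rcu *global_foo;   /* assumed already initialized */

    int read_value(void)
    {
            int v;

            rcu_read_lock();        /* begin read-side critical section */
            v = rcu_dereference(global_foo)->value;
            rcu_read_unlock();
            return v;
    }

    void update_value(struct foo *new_foo)
    {
            /* the "1" suppresses lockdep checking in this simplified sketch */
            struct foo *old = rcu_dereference_protected(global_foo, 1);

            rcu_assign_pointer(global_foo, new_foo);
            synchronize_rcu();      /* wait for all pre-existing readers */
            kfree(old);
    }

Readers run without locks or writes to shared memory, which is where RCU's efficiency and scalability come from; the price is that updaters must wait for a "grace period" before reclaiming old data.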

One source of RCU's complexity is that the kernel contains no fewer than four RCU implementations, not counting the three other special-purpose RCU flavors (sleepable RCU (SRCU), RCU-bh, and RCU-sched, which are covered here). The four vanilla implementations are selected by the SMP and PREEMPT kernel configuration parameters:

  1. !SMP && !PREEMPT: TINY_RCU, which is used for embedded systems with tiny memories (tens of megabytes).
  2. !SMP && PREEMPT: TINY_PREEMPT_RCU, for deep sub-millisecond realtime response on small-memory systems.
  3. SMP && !PREEMPT: TREE_RCU, which is used for high performance and scalability on server-class systems where scheduling latencies in milliseconds are acceptable.
  4. SMP && PREEMPT: TREE_PREEMPT_RCU, which is used for systems requiring high performance, scalability, and deep sub-millisecond response.
Quick Quiz 1: Since when is ten megabytes of memory small???
Answer

The purpose of these four implementations is to cover Linux's wide range of hardware configurations and workloads. However, although TINY_RCU, TREE_RCU, and TREE_PREEMPT_RCU are heavily used for their respective use cases, TINY_PREEMPT_RCU's memory footprint is not all that much smaller than that of TREE_PREEMPT_RCU, especially when you consider that PREEMPT itself expands the kernel's memory footprint. All of those preempt_disable() and preempt_enable() invocations now generate real code.
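
A simplified version of the relevant definitions (the kernel actually keys off CONFIG_PREEMPT_COUNT, and the helper names have varied over time) shows where that extra code comes from:

    #ifdef CONFIG_PREEMPT
    # define preempt_disable() \
            do { inc_preempt_count(); barrier(); } while (0)
    #else
    # define preempt_disable() barrier()   /* compiles to no instructions */
    #endif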

The size for TREE_PREEMPT_RCU compiled for x86_64 is as follows:

   text    data     bss     dec     hex filename
   1541     385       0    1926     786 /tmp/b/kernel/rcupdate.o
  18060    2787      24   20871    5187 /tmp/b/kernel/rcutree.o

That for TINY_PREEMPT_RCU is as follows:

   text    data     bss     dec     hex filename
   1205     337       0    1542     606 /tmp/b/kernel/rcupdate.o
   3499     212       8    3719     e87 /tmp/b/kernel/rcutiny.o

If you really have limited memory, you will instead want TINY_RCU:

   text    data     bss     dec     hex filename
    963     337       0    1300     514 /tmp/b/kernel/rcupdate.o
   1869      90       0    1959     7a7 /tmp/b/kernel/rcutiny.o

This points to the possibility of dispensing with TINY_PREEMPT_RCU because the difference in size is not enough to justify its existence.

Quick Quiz 2: Hey!!! I use TINY_PREEMPT_RCU! What about me???
Answer

Of course, this needs to be done in a safe and sane way. Until someone comes up with that, I am taking the following approach:

  1. Poll LKML for objections (done: the smallest TINY_PREEMPT_RCU system had 128 megabytes of memory, which is enough that the difference between TREE_PREEMPT_RCU and TINY_PREEMPT_RCU is 0.01% of memory, namely, down in the noise).
  2. Update RCU's Kconfig to once again allow TREE_PREEMPT_RCU to be built on !SMP systems (available in 3.9-rc1 or by applying this patch for older versions).
  3. Alert LWN's readers to this change (you are reading it!).
  4. Allow time for testing and for addressing any issues that might be uncovered.
  5. If no critical problems are uncovered, remove TINY_PREEMPT_RCU, which is currently planned for 3.11.

Note that the current state of Linus's tree once again allows a choice of RCU implementation in the !SMP && PREEMPT case: either TINY_PREEMPT_RCU or TREE_PREEMPT_RCU. This is a transitional state whose purpose is to allow an easy workaround should there be a bug in TREE_PREEMPT_RCU on uniprocessor systems. From 3.11 forward, the choice of RCU implementation will be forced by the values selected for SMP and PREEMPT, once again adhering to the dictum of No Unnecessary Knobs.

If all goes well, this change will remove about 1,000 lines of code from the Linux kernel, which is a worthwhile reduction in complexity. So, if you currently use TINY_PREEMPT_RCU, please go forth and test TREE_PREEMPT_RCU on your hardware and workloads.

Acknowledgments

I owe thanks to Josh Triplett for suggesting this approach, and to Jon Corbet and Linus Torvalds for further motivating it. I am grateful to Jim Wasko for his support of this effort.

Answers to Quick Quizzes

Quick Quiz 1: Since when is ten megabytes of memory small???

Answer: As near as I can remember, Rip, since some time in the early 1990s.

Back to Quick Quiz 1.

Quick Quiz 2: Hey!!! I use TINY_PREEMPT_RCU! What about me???

Answer: Please download Linus's current git tree (or 3.9-rc1 or later) and test TREE_PREEMPT_RCU, reporting any problems you encounter. Alternatively, try disabling PREEMPT, thus switching to TINY_RCU for an even smaller memory footprint, relying on improvements in the non-realtime kernel's latencies. Either way, silence will be interpreted as assent!

Back to Quick Quiz 2.

Comments (none posted)

Namespaces in operation, part 6: more on user namespaces

By Michael Kerrisk
March 6, 2013

In this article, we continue last week's discussion of user namespaces. In particular, we look in more detail at the interaction of user namespaces and capabilities as well as the combination of user namespaces with other types of namespaces. For the moment at least, this article will conclude our series on namespaces.

User namespaces and capabilities

Each process is associated with a particular user namespace. A process created by a call to fork() or a call to clone() without the CLONE_NEWUSER flag is placed in the same user namespace as its parent process. A process can change its user-namespace membership using setns(), if it has the CAP_SYS_ADMIN capability in the target namespace; in that case, it obtains a full set of capabilities upon entering the target namespace.
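
In code, moving into another process's user namespace is a matter of opening the corresponding /proc file and passing the file descriptor to setns(). A minimal sketch, where the PID is a placeholder:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <unistd.h>

    int join_user_ns(void)
    {
            int fd = open("/proc/1234/ns/user", O_RDONLY);  /* 1234: placeholder PID */

            if (fd == -1)
                    return -1;
            if (setns(fd, CLONE_NEWUSER) == -1) {  /* fails without CAP_SYS_ADMIN there */
                    close(fd);
                    return -1;
            }
            close(fd);
            return 0;
    }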

On the other hand, a clone(CLONE_NEWUSER) call creates a new user namespace and places the new child process in that namespace. This call also establishes a parental relationship between the two namespaces: each user namespace (other than the initial namespace) has a parent—the user namespace of the process that created it using clone(CLONE_NEWUSER). A parental relationship between user namespaces is also established when a process calls unshare(CLONE_NEWUSER). The difference is that unshare() places the caller in the new user namespace, and the parent of that namespace is the caller's previous user namespace. As we'll see in a moment, the parental relationship between user namespaces is important because it defines the capabilities that a process may have in a child namespace.

Each process also has three associated sets of capabilities: permitted, effective, and inheritable. The capabilities(7) manual page describes these three sets in some detail. In this article, it is mainly the effective capability set that is of interest to us. This set determines a process's ability to perform privileged operations.

User namespaces change the way in which (effective) capabilities are interpreted. First, having a capability inside a particular user namespace allows a process to perform operations only on resources governed by that namespace; we say more on this point below, when we talk about the interaction of user namespaces with other types of namespaces. In addition, whether or not a process has capabilities in a particular user namespace depends on its namespace membership and the parental relationship between user namespaces. The rules are as follows:

  1. A process has a capability inside a user namespace if it is a member of the namespace and that capability is present in its effective capability set. A process may obtain capabilities in its effective set in a number of ways. The most common reasons are that it executed a program that conferred capabilities (a set-user-ID program or a program that has associated file capabilities) or it is the child of a call to clone(CLONE_NEWUSER), which automatically obtains a full set of capabilities.
  2. If a process has a capability in a user namespace, then it has that capability in all child (and further removed descendant) namespaces as well. Put another way: creating a new user namespace does not isolate the members of that namespace from the effects of privileged processes in a parent namespace.
  3. When a user namespace is created, the kernel records the effective user ID of the creating process as being the "owner" of the namespace. A process whose effective user ID matches that of the owner of a user namespace and which is a member of the parent namespace has all capabilities in the namespace. By virtue of the previous rule, those capabilities propagate down into all descendant namespaces as well. This means that after creation of a new user namespace, other processes owned by the same user in the parent namespace have all capabilities in the new namespace.
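
These rules can be expressed compactly in code. The following sketch is loosely modeled on the logic of the kernel's cap_capable() function, but the types and helper names are invented here for illustration:

    bool has_cap_in_ns(struct task *t, struct user_ns *target, int cap)
    {
            struct user_ns *ns;

            for (ns = target; ns != NULL; ns = ns->parent) {
                    /* Rule 1 (and, via the walk up the hierarchy, rule 2):
                       a member of this namespace has the capability if it
                       is in the task's effective set */
                    if (ns == t->user_ns)
                            return in_effective_set(t, cap);

                    /* Rule 3: a process in the parent namespace whose
                       effective UID owns this namespace has all
                       capabilities in it (and in its descendants) */
                    if (ns->parent == t->user_ns && ns->owner_uid == t->euid)
                            return true;
            }
            return false;
    }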

We can demonstrate the third rule with the help of a small program, userns_setns_test.c. This program takes one command-line argument: the pathname of a /proc/PID/ns/user file that identifies a user namespace. It creates a child in a new user namespace and then both the parent (which remains in the same user namespace as the shell that was used to invoke the program) and the child attempt to join the namespace specified on the command line using setns(); as noted above, setns() requires that the caller have the CAP_SYS_ADMIN capability in the target namespace.

For our demonstration, we use this program in conjunction with the userns_child_exec.c program developed in the previous article in this series. First, we use that program to start a shell (we use ksh, simply to create a distinctively named process) running in a new user namespace:

    $ id -u
    1000
    $ readlink /proc/$$/ns/user       # Obtain ID for initial namespace
    user:[4026531837]
    $ ./userns_child_exec -U -M '0 1000 1' -G '0 1000 1' ksh
    ksh$ echo $$                      # Obtain PID of shell
    528
    ksh$ readlink /proc/$$/ns/user    # This shell is in a new namespace
    user:[4026532318]

Now, we switch to a separate terminal window, to a shell running in the initial namespace, and run our test program:

    $ readlink /proc/$$/ns/user       # Verify that we are in parent namespace
    user:[4026531837]
    $ ./userns_setns_test /proc/528/ns/user
    parent: readlink("/proc/self/ns/user") ==> user:[4026531837]
    parent: setns() succeeded

    child:  readlink("/proc/self/ns/user") ==> user:[4026532319]
    child:  setns() failed: Operation not permitted

The following diagram shows the parental relationships between the various processes (black arrows) and namespaces (blue arrows) that have been created:

[A user namespace hierarchy]

Looking at the output of the readlink commands at the start of each shell session, we can see that the parent process created when the userns_setns_test program was run is in the initial user namespace (4026531837). (As noted in an earlier article in this series, these numbers are i-node numbers for symbolic links in the /proc/PID/ns directory.) As such, by rule three above, since the parent process had the same effective user ID (1000) as the process that created the new user namespace (4026532318), it had all capabilities in that namespace, including CAP_SYS_ADMIN; thus the setns() call in the parent succeeds.

On the other hand, the child process created by userns_setns_test is in a different namespace (4026532319)—in effect, a sibling namespace of the namespace where the ksh process is running. As such, the second of the rules described above does not apply, because that namespace is not an ancestor of namespace 4026532318. Thus, the child process does not have the CAP_SYS_ADMIN capability in that namespace and the setns() call fails.

Combining user namespaces with other types of namespaces

Creating namespaces other than user namespaces requires the CAP_SYS_ADMIN capability. On the other hand, creating a user namespace requires (since Linux 3.8) no capabilities, and the first process in the namespace gains a full set of capabilities (in the new user namespace). This means that that process can now create any other type of namespace using a second call to clone().

However, this two-step process is not necessary. It is also possible to include additional CLONE_NEW* flags in the same clone() (or unshare()) call that employs CLONE_NEWUSER to create the new user namespace. In this case, the kernel guarantees that the CLONE_NEWUSER flag is acted upon first, creating a new user namespace in which the to-be-created child has all capabilities. The kernel then acts on all of the remaining CLONE_NEW* flags, creating corresponding new namespaces and making the child a member of all of those namespaces.

Thus, for example, an unprivileged process can make a call of the following form to create a child process that is a member of both a new user namespace and a new UTS namespace:

    clone(child_func, stackp, CLONE_NEWUSER | CLONE_NEWUTS, arg);

We can use our userns_child_exec program to perform a clone() call equivalent to the above and execute a shell in the child process. The following command specifies the creation of a new UTS namespace (-u), and a new user namespace (-U) in which both user and group ID 1000 are mapped to 0:

    $ uname -n           # Display hostname for later reference
    antero
    $ ./userns_child_exec -u -U -M '0 1000 1' -G '0 1000 1' bash

As expected, the shell process has a full set of permitted and effective capabilities:

    $ id -u              # Show effective user and group ID of shell
    0
    $ id -g
    0
    $ cat /proc/$$/status | egrep 'Cap(Inh|Prm|Eff)'
    CapInh: 0000000000000000
    CapPrm: 0000001fffffffff
    CapEff: 0000001fffffffff

In the above output, the hexadecimal value 1fffffffff represents a capability set in which all 37 of the currently available Linux capabilities are enabled.

We can now go on to modify the hostname—one of the global resources isolated by UTS namespaces—using the standard hostname command; that operation requires the CAP_SYS_ADMIN capability. First, we set the hostname to a new value, and then we review that value with the uname command:

    $ hostname bizarro     # Update hostname in this UTS namespace
    $ uname -n             # Verify the change
    bizarro

Switching to another terminal window—one that is running in the initial UTS namespace—we then check the hostname in that UTS namespace:

    $ uname -n             # Hostname in original UTS namespace is unchanged
    antero

From the above output, we can see that the change of hostname in the child UTS namespace is not visible in the parent UTS namespace.

Capabilities revisited

Although the kernel grants all capabilities to the initial process in a user namespace, this does not mean that process then has superuser privileges within the wider system. (It may, however, mean that unprivileged users now have access to exploits in kernel code that was formerly accessible only to root, as this mail on a vulnerability in tmpfs mounts notes.) When a new IPC, mount, network, PID, or UTS namespace is created via clone() or unshare(), the kernel records the user namespace of the creating process against the new namespace. Whenever a process operates on global resources governed by a namespace, permission checks are performed according to the process's capabilities in the user namespace that the kernel associated with that namespace.

For example, suppose that we create a new user namespace using clone(CLONE_NEWUSER). The resulting child process will have a full set of capabilities in the new user namespace, which means that it will, for example, be able to create other types of namespaces and be able to change its user and group IDs to other IDs that are mapped in the namespace. (In the previous article in this series, we saw that only a privileged process in the parent user namespace can create mappings to IDs other than the effective user and group ID of the process that created the namespace, so there is no security loophole here.)

On the other hand, the child process would not be able to mount a filesystem. The child process is still in the initial mount namespace, and in order to mount a filesystem in that namespace, it would need to have capabilities in the user namespace associated with that mount namespace (i.e., it would need capabilities in the initial user namespace), which it does not have. Analogous statements apply for the global resources isolated by IPC, network, PID, and UTS namespaces.

Furthermore, the child process would not be able to perform privileged operations that require capabilities that are not (currently) governed by namespaces. Thus, for example, the child could not do things such as raising its hard resource limits, setting the system time, setting process priorities, loading kernel modules, or rebooting the system. All of those operations require capabilities that sit outside the user namespace hierarchy; in effect, those operations require that the caller have capabilities in the initial user namespace.

By isolating the effect of capabilities to namespaces, user namespaces thus deliver on the promise of safely allowing unprivileged users access to functionality that was formerly limited to the root user. This in turn creates interesting possibilities for new kinds of user-space applications. For example, it now becomes possible for unprivileged users to run Linux containers without root privileges, to construct Chrome-style sandboxes without the use of set-user-ID-root helpers, to implement fakeroot-type applications without employing dynamic-linking tricks, and to implement chroot()-based applications for process isolation. Barring kernel bugs, applications that employ user namespaces to access privileged kernel functionality are more secure than traditional applications based on set-user-ID-root: with a user-namespace-based approach, even if an application is compromised, it does not have any privileges that can be used to do damage in the wider system.

The author would like to thank Eric Biederman for answering many questions that came up as he experimented with namespaces during the course of writing this article series.

Comments (23 posted)

Patches and updates

Kernel trees

Build system

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Architecture-specific

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Ubuntu unveils its next-generation shell and display server

By Nathan Willis
March 6, 2013

Ubuntu publicly announced its plan for the future of its Unity graphical shell on March 4, a plan that includes a new compositing window manager designed to run on the distribution's device platforms as well as on desktop systems. Under the plan, the Unity shell will be reimplemented in Qt, and Compiz will be replaced with a new display stack called Mir that will incorporate a compositor, input manager, and several other pieces. Mir is not designed to use the Wayland display protocol (although the Ubuntu specification suggests it could be added later), a decision that raised the ire of developers in several other projects.

Announcements, announcements, announcements

Oliver Ries made the announcement on the ubuntu-devel mailing list, saying it was timed to coincide with the start of the distribution's Ubuntu Developer Summit, where more detail and discussion would follow. Ries said the changes were necessary "in order to implement the vision of converged devices"—namely the Ubuntu Touch project to build an interface compatible with phones, tablets, and smart TVs. The plan involves porting Unity from its current OpenGL-based implementation to Qt and implementing the Mir server. There are descriptions available on the Ubuntu wiki for both Mir and "Unity Next."

In a blog post, Ries elaborated further on the "overhaul" and the reasons behind it. It was already clear that the current implementation of the Unity shell (which runs as a plugin to Compiz) would eventually need to go; handling multiple monitors is problematic, as is implementing the global menu bar. The Ubuntu Touch devices are expected to add to the complexity by relying on different pointing devices and text input methods. In addition, Compiz itself was put into maintenance mode by its lead developer in December 2012.

In evaluating the options, Canonical decided that the X server needed replacing, and that Wayland was not viable for running on handheld devices. The new solution, Mir, is designed to run both on the system-on-chip (SoC) hardware found in phones and on standard desktop graphics hardware. The port of Unity from the OpenGL-based Nux toolkit is a reversal of sorts, Ries noted, but when Ubuntu halted work on its previous Qt-based implementation of Unity (the "fallback" mode for systems without sufficient graphics power to run Nux-based Unity) it did so in large part because Qt's future was uncertain. The project was in the midst of a hand-off from corporate owner Nokia to a community-governed council, and it was not clear that the Qt-based Unity would offer a comparable feature set to the OpenGL version. Now that Qt is in a stable and healthy position, Ubuntu has resumed work on the Qt-based Unity.

Thomas Voß wrote a blog post of his own, which lists several other rationales for Mir, including the ability to leverage existing Android drivers, and the desire for an input system suitable for mobile device usage. In addition, Weston, the reference implementation of Wayland, suffered from a "lack of a clearly defined driver model as well as the lack of a rigorous development process in terms of testing driven by a set of well-defined requirements." For a short-term solution, the project borrowed Android's SurfaceFlinger, but will replace it with Mir in time for the release of Ubuntu 14.04. Builds of both Mir and Unity Next will be available starting in a few months, although Ubuntu 13.04 and 13.10 are not expected to use them.

Specifics and protocols

The Mir wiki page goes into considerably more detail about the architecture of the system. It consists of a server-side library called libmir-server, a matching client communication library called libmir-client, and the unity-system-compositor. Other re-written Unity components include the Unity shell, Unity Greeter, and bindings for GUI toolkits (initially Qt, with GTK+ and SDL to follow). The Unity Next page further details how the changes will affect applications, such as the environment's launchers and notification system.

But the majority of the public discussion about the announcement has centered around the Mir display server—and in particular why it will not simply be an implementation of the Wayland protocol. For now, the Mir page lists a few reasons why Ubuntu decided Wayland did not fit the bill, including input event handling, input methods (that is, flexible mechanisms for text input, which are a more complicated issue for logographic writing systems like Chinese), and the manner in which shells and sessions are treated distinctly from normal client applications. On the other hand, the page does emphasize that Mir is designed to be "protocol-agnostic" and that Wayland support could be added in the future.

Not everyone found the reasons listed compelling, of course, particularly developers working on Wayland and Weston. Kristian Høgsberg started a discussion thread about it on Google Plus, in which several took issue with the Mir wiki page's description of Wayland's input event system. Most notably, the wiki page had initially said that Wayland's input events duplicated insecure semantics from X11's input system. Canonical's Christopher James Halse Rogers ("RAOF") later visited the Wayland IRC channel, and was asked about the security issue, which Høgsberg said was incorrect. Halse Rogers said he was unaware that the wiki mentioned the security issue, and subsequently removed it from the page.

The IRC discussion log makes for interesting reading, once one wades through the less compelling flame-like comments. Høgsberg also took issue with the Mir wiki page's comments about Wayland's distinction between normal client applications and special processes like the shell and session manager. The wiki page said that the shell-integration parts of the Wayland protocol were privileged, a design that the Ubuntu team disagreed with because it would require additional security measures. Høgsberg argued that the APIs provided were unprivileged, and that Ubuntu could replace any of them that it did not like without altering the core interfaces. In particular, the interfaces in question (wl_shell and wl_shell_surface) are actually optional extensions. In an interesting wrinkle, Wayland developer Tiago Vignatti posted a blog entry on March 5 describing the special protocol available to user shells, although he, too, said that it was not a privileged protocol.

In the IRC discussion, Halse Rogers countered that removing and replacing multiple interfaces would result in a display server that was not really Wayland anyway (and would require Ubuntu to maintain separate integration code for the GUI toolkits in particular). He added that Mir also uses a different (server-side) buffer allocation scheme in order to support ARM devices. Høgsberg replied that Wayland could add support for that as well, noting "I realize that this all isn't really documented, but it's not like Wayland only works with client side allocated buffers."

Divergent or convergent projects

Two other people in the IRC discussion raised a non-technical complaint, commenting that Ubuntu should have brought its issues with Wayland's design to the Wayland development mailing list, rather than develop a separate project. That does sound ideal, but on the other hand it is not easy to demonstrate that such a discussion would have guaranteed that Wayland evolved into the protocol that Ubuntu wanted. After all, at one point in the past, Ubuntu was planning on adopting Wayland; Mark Shuttleworth announced that intention in November 2010.

Canonical's Chase Douglas subsequently did join the Wayland mailing list, and on at least one occasion he weighed in on a design issue. The topic was touch input support in particular, in February 2012, and Douglas did not seem pleased with Wayland's touch event handling, noting specifically that it would not work when the user was sending touch events to more than one application. Most mobile platforms do not support having multiple foreground applications, but the tablet builds of Ubuntu Touch do.

Touch event handling is an interesting case to consider. The Mir wiki page cites input event handling as one of the project's points of disagreement with Wayland. It goes into frustratingly little detail on the subject, but Ubuntu is clearly interested in multi-touch and gesture recognition support due to its push on tablets and handheld devices. It debuted a multi-touch and gesture input stack with the release of Ubuntu 10.10, which it still maintains. Wayland, meanwhile, has stayed focused primarily on the desktop. In August 2012, there was an effort to revive the dormant weston-tablet-shell, although based on the commit logs it has been receiving less attention subsequently.

Certainly multi-touch and gesture-recognition are critical for phone and tablet user interfaces. Perhaps if Ubuntu is dead-set on implementing a touch-and-gesture-aware input event system that it can ship within the year, then the argument could be made that Wayland is not ready. There are few alternatives to Ubuntu's gesture framework; GTK+ gained multi-touch support in 3.4, but GNOME has only recently started working on its own touch event implementation. One might also make the case that no distributions have moved to Wayland itself, either, and it is not clear when Mutter or other window managers will implement it in a form ready for end users. There are other potential incompatibilities, such as licensing—Wayland and Weston are MIT-licensed; Mir is GPLv3. So Ubuntu could merge in Weston code, but submitting Mir patches upstream would mean adopting the other project's non-copyleft license.

None of those arguments are likely to sway the opinions of any Wayland developers toward Mir, of course. The best hope for peace probably lies in getting the two projects together to discuss their differences. On that point, Halse Rogers offered a tantalizing possibility in the IRC discussion, noting on line 215 that someone (possibly Voß) is attempting to organize such a meeting. In the meantime, however, most end users will simply have to sit still and wait to see what happens next. Ubuntu has declared its intention to ship Mir and Unity Next with Ubuntu 14.04; at the moment there is not a large distribution with a public launch date for its Wayland implementation, so it will be interesting to see which arrives first.

But one thing the Mir announcement and subsequent debate have made crystal clear is that both projects have fallen far short on the documentation front. The Mir wiki page and its associated Launchpad blueprints are all that currently exists, and they are short on detail. Then again, the Mir page is only a few days old at this stage. Wayland has been in active development for years, and its documentation, too, is sparse, to put it mildly. Høgsberg admitted as much in the IRC discussion, but who knows how many points of disagreement about Mir and Wayland compatibility could have been avoided entirely with more thorough documentation of the core and extensions. Neither project does itself—much less users—any favors when the only way to learn implementation details is to track down the lead developer and ask specific questions over email or IRC.

Ubuntu says it will begin making builds of Mir and Unity Next available in May 2013. Where both projects head over the course of the following year remains to be seen—as is also true with Wayland. A year from now, perhaps the two teams will have found common ground. If not, a head-to-head comparison of the software will surely make for a more interesting debate than does this week's strictly hypothetical discussion.

Comments (46 posted)

Brief items

Distribution quotes of the week

You must be thinking: “What do you mean by refreshing storage? I didn’t think you could drink storage?” No, sad to say, this blog post isn’t about the type of refreshment you get from a crisp cold glass of Anaconda Cola (yum!)
-- Máirín Duffy

Personally, I prefer the approach where we figure out what kind of tires we need on the next car and plan for them when we buy the car over an approach where we try to change the tires while the car is in motion.
-- Scott Kitterman

If the "rolling releases" really aren't intended for end-users, then we should just drop the fiction, say the change is from a 6-month cadence to a 2-year cadence, and be done with it.

Yes, it has all the problems we've come to know-and-hate with stale applications. So, either allow SRU exceptions for more applications like we do for Firefox, or start really supporting Backports for the LTS.

It's a waste of everyone's time and effort to rework the whole project around talk of "rolling releases" when it's really just the same old development release on a slower schedule. (Remember how we used to call monthly images alphas and betas? That was ages ago, like 4 whole months.)

-- Allison Randal

If like Martin Owens you're feeling the lack of Ubuntu community and wanting an Ubuntu community that cares about everyone's contribution, doesn't make random announcements every couple of days that have obviously been made behind closed doors and cares about a community made upstream desktop (and err.. whole graphics stack), you'd be very welcome here at Kubuntu. Join us in #kubuntu-devel
-- Jonathan Riddell

Comments (4 posted)

Ubuntu discussing moving to LTS + rolling release model

Rick Spencer, Canonical's VP of Ubuntu Engineering, has put out a call to discuss dropping the "interim" Ubuntu releases, which are those that are not long-term support (LTS) releases, and switching to a rolling release model in between LTS releases. Spencer's "tl;dr":

Ubuntu has an amazing opportunity in the next 7-8 months to deliver a Phone OS that will be widely adopted by users and industry while also putting into place the foundation for a truly converged OS.

To succeed at this we will need both velocity and agility. Therefore, I am starting a discussion about dropping non-LTS releases and move to a rolling release plus LTS releases right now.

The ubuntu-devel mailing list thread is already getting fairly long, as might be guessed. The idea will also be discussed at the upcoming online Ubuntu Developer Summit, March 5-6.

Comments (48 posted)

openSUSE 12.3 preview available for AArch64

The openSUSE ARM team has a 12.3 AArch64 preview available. "This is a huge achievement and milestone for us, thanks to lots of helpful hands in openSUSE. Just to put this into context: This is not a minimal system with a couple of toolchain packages. It is also not an embedded variant of a Linux environment. No, this is the full featured, standard openSUSE distribution as you’re used to, ported to AArch64, up and running. We have built it based on (slightly newer versions of) standard openSUSE 12.3 packages, and the changes are mostly already merged back into openSUSE Factory." (Thanks to Mattias Mattsson)

Comments (none posted)

Debian Edu 6.0.7+r1 (aka "Debian Edu Squeeze") updated

Debian Edu has released an update to its stable 6.0 "squeeze" distribution. "Debian Edu 6.0.7+r1 is an incremental update to Debian Edu 6.0.4+r0, containing all the changes between Debian 6.0.4 and 6.0.7 as well Debian Edu specific bugfixes and enhancements."

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Discover Debian's hassle-free trademarks

The Debian Project has announced that Debian logos and marks may now be used freely for both non-commercial and commercial purposes, under the terms of the new trademark policy. "Stefano Zacchiroli, current Debian Project Leader and one of the main promoters of the new trademark policy, said "Software freedoms and trademarks are a difficult match. We all want to see well-known project names used to promote free software, but we cannot risk they will be abused to trick users into downloading proprietary spyware. With the help of SPI and SFLC, we have struck a good balance in our new trademark policy. Among other positive things, it allows all sorts of commercial use; we only recommend clearly informing customers about how much of the sale price will be donated to Debian."" (Thanks to Paul Wise)

Comments (none posted)

Debian Project Leader Elections 2013: Call for nominations

The call for nominations for the next Debian Project Leader is open. "Prospective leaders should be familiar with the constitution, but just to review: there's a one week period when interested developers can nominate themselves and announce their platform, followed by a three week period intended for campaigning, followed by two weeks for the election itself."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Canonical reveals plans to launch Mir display server (The H)

Persistent rumors of a Canonical-developed display server have been confirmed over at The H. Instead of X, Wayland, or SurfaceFlinger, the Mir display server will be used in upcoming projects.
Currently, the developers are using the Android SurfaceFlinger to deliver the Phone and Tablet experiences which were recently released as developer previews. But Canonical says that, by May this year, it will be replaced by Mir. It added that eventually the tablet will migrate to the same infrastructure as the desktop system.

For the desktop, the plan is equally ambitious: a migration in May to a Mir/QMir/Unity Next shell and Mir/QtUbuntu/Qt/QML application stack running on top of the current free graphics driver stack. Closed source driver support is still being worked on, with Canonical saying it is talking with GPU vendors about creating distilled, reusable, cross-platform EGL-centric drivers in the future. The May milestone will, though, be the first point in actual shell development and giving developers something to work with.

Comments (340 posted)

A fresh litter of Puppy Linux releases: Wary, Racy and Quirky (The H)

Three new Puppy Linux releases are covered in The H. Puppy Linux comes in a variety of "puplets" which are all built using the Woof build system. "Wary is the edition of Puppy designed to be run on older hardware, whereas Racy has more features and needs more system resources but is based on Wary. For version 5.5, both editions had most of their underlying system libraries and some of the applications updated during the development phase; this took almost a year from the release of Wary 5.3 in April 2012." Lead developer Barry Kauler also has a new release of his experimental "puplet", Quirky.

Comments (none posted)

Second RC for openSUSE 12.3 brings bigger-than-CD live images (The H)

The H takes a look at openSUSE 12.3 rc 2. A final release will be available next week. "For openSUSE 12.3 RC2, the boot process on Secure-Boot-enabled systems includes a step where users will have to manually enable Secure Boot support in YaST. The developers are working to remove this additional step for the final release. The developers have also changed the size of the live media: it now exceeds the 800MB CD size limit, meaning that it will have to be booted from USB sticks instead of CDs. This allows openSUSE 12.3 RC2 to ship with a larger number of tools in the live images, including the GIMP, the entirety of LibreOffice 3.6 and the full OpenJDK environment. As part of the changes, Rhythmbox has replaced Banshee as the default audio player in the GNOME based live images."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Static site generators for building web sites

March 6, 2013

This article was contributed by Martin Michlmayr

There are many ways to create web sites. Possibilities include writing HTML files manually, employing a framework to create dynamic web sites, or adopting a fully-fledged content management system (CMS) that offers a central interface to create, edit, review, and publish content. There are also a number of tools called static site generators that can help with the creation of static web sites. They transform input (typically text in lightweight markup languages, such as Markdown or reStructuredText) to static HTML, employing templates and filters on the way.

While static site generators have been around for many years, it seems that their popularity is increasing. FOSDEM migrated from Drupal to nanoc for their 2013 edition and kernel.org just rolled out a new site based on Pelican. Static site generators are likely to appeal to LWN readers as they allow you to turn your web site into an open source project, approaching it like any software development project. This article explains the concept behind static site generators, highlighting their benefits and functionality. It refers to nanoc as one example of such a tool, but the functionality of other site generators is quite similar.

Benefits

Static site generators offer a number of advantages over dynamic web sites. One is high performance, as static HTML pages can immediately be served by the web server because there are no database requests or other overhead. Performance is further enhanced because browsers can easily cache static web pages based on the modification time. Security is also higher since web sites generated statically are just collections of HTML files and supplementary files, such as images. There is no database on the server and no code is being executed when the page is requested. As a side effect, no software upgrades of specific web frameworks have to be applied in order to keep your site up to date, making the site much easier to maintain. Finally, as the site is compiled on your machine instead of the server, the requirements for the hosting site are quite minimal: no special software needs to be installed, and the processing and memory requirements are low as it just needs to run a web server.

There are also a number of benefits in terms of workflow. One advantage is that static site generators allow you to follow whatever workflow you like. Input files are simple text files that can be modified with your editor of choice and there is a wide choice of input formats. Markdown seems particularly popular, but any input format that can be transformed to HTML can be used. Furthermore, input files can be stored in the version control system of your choice and shared with potential collaborators.

Static site generators also promote a smooth review and deployment process. Since you compile your content locally, you can check it before uploading. This can include a review of the diff of the generated content or more thorough checks, such as validating the HTML or checking for broken links. Once you're ready to deploy the site, updating your site is just a matter of running your static site generator to generate the new output and syncing your files to your hosting provider using rsync.
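
With nanoc, for instance, a deployment can be as simple as two commands; the destination below is, of course, a placeholder:

    nanoc compile
    rsync -avz --delete output/ user@example.com:/var/www/site/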

While static web sites are not suited for every use case, they are an attractive alternative to a CMS or a dynamic web framework in many situations.

Use software development processes

The use of static site generators makes the creation of your web site into a process akin to software development. You create input files along with rules that specify how these files should be transformed. Your static site generator of choice performs the compilation process to generate your site and make it ready for deployment. Dependencies between different files are tracked and pages are only regenerated when their contents or dependencies have changed.

As in every good software development project, content that is common to several pages can be split out. The most common approach is to create a template for the layout of your pages (consisting of HTML headers, the site layout, sidebars, and other information). Nanoc supports Ruby's templating system, ERB, as well as Haml, the HTML abstraction markup language. You can also split out commonly used snippets of HTML code, such as a PayPal or Flattr button. These can be included from other files and it's possible to pass parameters in order to modify their appearance.

A site generator like nanoc will compile individual items and combine them with a layout to produce the finished HTML document. Nanoc allows the creation of a Rules file which defines the operations that nanoc should perform on different items. Nanoc differentiates between compile rules, which specify the transformation steps for an item, and route rules, which tell nanoc where to put an item. A compile rule could specify that pages with the .md extension are to be rendered from Markdown to HTML with the pandoc filter. The rule would also specify a layout to use for the page. A route directive would be used to specify that the rendered output of foo.md should be stored as foo/index.html.
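
Concretely, the Rules file is plain Ruby. Here is a sketch of the rules just described, in nanoc 3.x syntax; the /papers/ pattern is illustrative, and the :pandoc filter assumes the pandoc-ruby gem is installed:

    compile '/papers/*' do
      filter :pandoc     # render Markdown to HTML with pandoc
      layout 'default'   # wrap the result in the "default" layout
    end

    # nanoc 3.x identifiers end in a slash, so an item /papers/foo/
    # is written out as /papers/foo/index.html
    route '/papers/*' do
      item.identifier + 'index.html'
    end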

There are many filters that can transform your input. Nanoc offers filters to transform text from a range of formats to HTML. It also allows you to embed Ruby code using ERB, which is useful to access information from other pages and to run code you've written. What I like about static site generators is that they make it really easy to write content: instead of writing HTML, you use a lightweight markup language and let the tool perform the transformation for you. Additionally, you can run filters to improve the typography of your page, such as converting --- to — or "foo" to “foo”. You could also write a filter to include images without manually specifying their height and width—why not let a filter do the boring work for you? While nanoc has a number of built-in filters, it's trivial to write your own—or to extend it in other ways.
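
To give an idea of how small such an extension can be, here is a sketch of a typography filter against the nanoc 3.x filter API; the filter name and the two substitutions are merely examples, not complete typographic handling:

    # lib/typography_filter.rb
    class TypographyFilter < Nanoc::Filter
      identifier :typography

      def run(content, params = {})
        content.gsub('---', '&mdash;')                 # --- becomes an em dash
               .gsub(/"([^"]+)"/, '&ldquo;\1&rdquo;')  # "foo" gets curly quotes
      end
    end

Adding "filter :typography" to a compile rule is all that is needed to enable it.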

Once you have written some input files and created a layout along with rules to specify how files should be compiled, the site generator will do the rest for you. The compilation process will take every changed item and produce output by running your specified transformations on the input. You can also configure the tool to deploy the site for you. However, as mentioned before, you should approach your web site like a software project—and who wants to ship code before testing it? Nanoc allows you to run checks on your output. It has built-in checks to validate CSS and HTML, to find stale files, and to find broken links (both internal and external). Further checks can be added with a few lines of code.
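
Custom checks follow a similar pattern. In recent nanoc releases they live in a file named Checks; this sketch flags HTML files that still contain a TODO marker (the check name and the marker are made up for illustration):

    check :no_todo_markers do
      @output_filenames.each do |filename|
        next unless filename.end_with?('.html')
        if File.read(filename).include?('TODO')
          add_issue('stray TODO marker', :subject => filename)
        end
      end
    end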

Some examples

Thinking of my own home page, I can see a number of ways that using a static site generator would make it easier to maintain. At the moment, my site relies on a collection of HTML files and a Perl script to provide basic template functionality ("hack" might be a more appropriate description). Migrating to a tool like nanoc would instantly give me dependency tracking and a proper templating system.

There are a number of ways I could further improve my site, though. I maintain a list of academic publications, consisting of a main index page along with a separate page for each paper. When adding a new paper, I have to duplicate a lot of information on two pages. Using nanoc and some Ruby libraries, I could simply generate both pages from a BibTeX file (LaTeX's bibliography system). This would not only reduce duplication but also automatically format the paper information according to my preferred citation style. Similarly, I maintain several HTML tables showing the status of Debian support for embedded devices. While updating these tables is not too much work, it would be much cleaner to store the information in a YAML or JSON file and generate the tables automatically.
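
The device tables, for example, could be produced by a page compiled with nanoc's ERB filter. A sketch, assuming a hypothetical devices.yaml containing a list of entries with name and status fields:

    <table>
      <tr><th>Device</th><th>Status</th></tr>
      <% require 'yaml' %>
      <% YAML.load_file('devices.yaml').each do |device| %>
        <tr>
          <td><%= device['name'] %></td>
          <td><%= device['status'] %></td>
        </tr>
      <% end %>
    </table>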

Another useful nanoc feature is the ability to create different representations from one input file. In addition to transforming your CV or résumé from the Markdown input to HTML, you could also generate a PDF from the same input. Similarly, you could create an ebook from a series of blog entries in addition to displaying the blog entries on your web site.
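
In nanoc, these alternative versions are item representations, selected with a :rep argument in the Rules file. A sketch follows; nanoc has no built-in PDF filter, so the :to_pdf filter below stands in for a custom or third-party one:

    # the default representation: HTML, as usual
    compile '/cv/' do
      filter :kramdown
      layout 'default'
    end

    # a second, PDF representation of the same input file
    compile '/cv/', :rep => :pdf do
      filter :to_pdf   # hypothetical filter producing PDF output
    end

    route '/cv/', :rep => :pdf do
      '/cv.pdf'
    end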

Static doesn't mean boring

One objection people might have to static site generators is that static sites are boring. However, this isn't necessarily the case, for a number of reasons. First, a static site can use JavaScript to provide dynamic and interactive elements on the site. Second, a statically generated web site doesn't have to be static—it can be updated many times per day. Nanoc, for example, allows you to use data from any source as input. You could periodically download a JSON file of your Twitter feed and render that information on your web site. An open source project could download the changelog from its version control system and automatically generate a list of releases for its web site.
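
With nanoc, one way to pull in such data is a preprocess block in the Rules file, which runs before compilation and can create items from arbitrary sources. A sketch, assuming a periodically downloaded tweets.json containing a list of objects with a "text" field (both the file and its structure are hypothetical):

    preprocess do
      require 'json'
      tweets = JSON.parse(File.read('tweets.json'))
      items << Nanoc::Item.new(
        tweets.map { |t| "<p>#{t['text']}</p>" }.join("\n"),  # raw content
        { :title => 'Recent tweets' },                        # attributes
        '/tweets/'                                            # identifier
      )
    end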

A good example is the FOSDEM web site: the FOSDEM organizers internally use the Pentabarf conference planning system to schedule talks. Information from Pentabarf is periodically exported and used as a data source to generate the schedule on the web site. The organizers had to write the code that transforms the Pentabarf data into a nice schedule only once; now that this functionality is in place, nanoc updates the web site whenever the data changes.

Another problem with static sites is the lack of support for comments and other discussion mechanisms. Fortunately, there are a number of solutions. One approach is demonstrated by a plug-in for Jekyll, which contains a PHP script that forwards comments by email. These can then be added by the web site owner (either automatically or after manual moderation) and the web site re-built. A more interactive, and commonly used, solution is Disqus, an online discussion and commenting service that can be embedded in web sites and blogs with the help of JavaScript. Juvia appears to be a viable open source alternative to Disqus, although I couldn't find many sites using it.

Conclusion

Static site generators are an attractive solution for many web sites and there is a wide range of tools to choose from. Since many site generators are frameworks that allow you to extend the software, a good way to select a tool is by looking at its programming language. There are solutions for Haskell (Hakyll), Perl (Templer), Python (Hyde, Pelican), Ruby (Jekyll, Middleman, nanoc) and many more. You can also check out Steve Kemp's recent evaluation of static site generators.

What's clear to me is that the time of routinely writing HTML by hand is definitely over. It's much nicer to write your content in Markdown and let the site generator do the rest for you. This allows you to spend more time writing content—assuming you can stop yourself from further and further enhancing the code supporting your site.

Comments (10 posted)

Brief items

Quotes of the week

I was blown away by the number of fixes and small enhancements that were committed. Right now I count a total of 56 bugs fixed through that initiative alone. Some of these include some of the most obvious bugs we’ve had in GNOME 3 since it was first released.
Allan Day, on GNOME's "Every Detail Matters" campaign.

So, to summarize: Google forces others to use open standards which they do not support themselves.
Roland Wolters

Comments (none posted)

Python moves to electronic contributor agreements

The Python project has announced that it is trying to ease the process of signing a contributor agreement through the use of Adobe's "EchoSign" service. "Faxes fail, mail gets lost, and sometimes pictures or scans turn out poorly. It was time to find a more user-friendly solution, and the Foundation is happy to finally offer this electronic form."

Comments (5 posted)

Buildroot 2013.02 released

Version 2013.02 of the buildroot tool for embedded Linux development is available. Changes include 66 new packages, Eclipse integration support, and the option to set the root password.

Full Story (comments: none)

10 years of PyPy

The PyPy project, which is working toward the creation of a highly-optimized interpreter for the Python language, is celebrating its tenth anniversary. "To make it more likely to be accepted, the proposal for the EU project contained basically every feature under the sun a language could have. This proved to be annoying, because we had to actually implement all that stuff. Then we had to do a cleanup sprint where we deleted 30% of codebase and 70% of features."

Comments (none posted)

Google releases a better compression algorithm

The Google Open Source Blog has announced the release of the "Zopfli" open source compression algorithm. Though compression cost is high, it could be a win for certain applications:
The output generated by Zopfli is typically 3–8% smaller compared to zlib at maximum compression, and we believe that Zopfli represents the state of the art in Deflate-compatible compression. Zopfli is written in C for portability. It is a compression-only library; existing software can decompress the data. Zopfli is bit-stream compatible with compression used in gzip, Zip, PNG, HTTP requests, and others.

Due to the amount of CPU time required, 2–3 orders of magnitude more than zlib at maximum quality, Zopfli is best suited for applications where data is compressed once and sent over a network many times — for example, static content for the web.

Comments (36 posted)

Upstart 1.7 available

James Hunt has released upstart 1.7, the latest version of the alternative init daemon. This version includes new D-Bus signals, new tests, an event bridge for proxying system-level events, plus the ability "to run with PID >1 to allow Upstart to manage a user session. Running Upstart as a 'Session Init' in this way provides features above and beyond those provided by the original User Jobs such that the User Job facility has been removed entirely: to migrate from a system using User Jobs, simply ensure the user session is started with 'init --user'."

Full Story (comments: none)

0install 2.0 released

Version 2.0 of Zero Install, the decentralised cross-platform software installation system, is now available. There is a new feed format, which is "100% backwards compatible with the 1.0 format (all software distributed for 1.0 will also work with 2.0), while supporting more expressive dependency requirements (optional, OS-specific, restriction-only dependencies and dependencies for native packages), more flexible version constraints, and executable bindings (dependencies on executable programs, not just on libraries)." Other changes include easier roll-back, improved diagnostics, and better support for headless systems.

Full Story (comments: none)

[ANNOUNCE] xorg-server 1.14.0

Keith Packard has released xserver 1.14.0, complete with fixes for the touch device and GPU hotplugging, plus software rendering speedups.

Full Story (comments: 1)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Firefox OS, Ubuntu and Jolla's Sailfish at MWC (The H)

The H briefly covers a panel session at the Mobile World Congress. The panel featured representatives of three Linux-based contenders in the mobile space: Mozilla Chair Mitchell Baker (Firefox OS), Canonical founder Mark Shuttleworth (Ubuntu for Phones), and Jolla CEO Marc Dillon (Sailfish OS). "Jolla CEO Dillon remarked at the panel that the time was right to give people alternatives, and like Shuttleworth, suggested that his company is doing its best to do so. The Sailfish SDK is based on QtCreator, the Mer project's build engine and an emulator for the operating system. The SDK is released under a combination of open source licences and the company states its goal with Sailfish 'is to develop an open source operating system in co-operation with the community', but it has not made clear what parts of the code, beyond the Mer underpinnings, it intends to open under which specific licences." There is a video of the panel session available as well.

Comments (none posted)

Michaelsen: One

On his blog, LibreOffice hacker Bjoern Michaelsen celebrates the conversion to make for LibreOffice builds. Michael Meeks congratulated Michaelsen and the others responsible for "killing our horrible, legacy, internal dmake". Michaelsen looks at the speed improvements that came with the new build system, which reduced the "null build" (nothing to do) from 5 minutes (30 minutes on Windows) to 37 seconds. "There are other things improved with the new build system too. For example, in the old build system, if you wanted to add a library, you had to touch a lot of places (at minimum: makefile.mk for building it, prj/d.lst for copying it, solenv/inc/libs.mk for others to be able to link to it, scp2 to add it to the installation and likely some other things I have forgotten), while now you have to only modify two places: one to describe what to build and one to describe where it ends up in the install. So while the old build system was like a game of jenga, we can now move more confidently and quickly."

Comments (219 posted)

Page editor: Nathan Willis

Announcements

Brief items

Software Freedom Conservancy publishes annual report and public filings

The Software Freedom Conservancy, which serves as the legal entity for 29 free software projects, has published its Fiscal Year 2011 annual report as well as its Federal and New York state public filings. The report includes statistics for number-crunchers (such as US $1,768,095 raised for member projects and 1,260 contributing developers), plus news highlights from a number of the member projects and associated events.

Comments (none posted)

Calls for Presentations

PyCon Australia 2013 Call for Proposals

PyCon Australia will take place July 5-7 in Hobart, Tasmania. The call for proposals will be open until April 5. "We’re looking for proposals for presentations and tutorials on any aspect of Python programming, at all skill levels from novice to advanced. Presentation subjects may range from reports on open source, academic or commercial projects; or even tutorials and case studies. If a presentation is interesting and useful to the Python community, it will be considered for inclusion in the program."

Full Story (comments: none)

2013 Android Microconference at Linux Plumbers: Call for Participation

There will be an Android microconference at the Linux Plumbers conference (LPC). LPC will take place September 18-20 in New Orleans, Louisiana. "I'd like to invite people to add topics to the Wiki. Please include a description of the topic you'd like to discuss. In general, topics that present work to close the gap between the mainstream kernel and the Android kernel are preferred as well as topics for future mechanisms that may be needed by Android and other mobile usecases. In addition, topics related to successful out-of-tree patch maintenance and challenges in commercialization would also be useful."

Full Story (comments: none)

Upcoming Events

Distro Recipes

Distro Recipes is a multi-distribution conference that will take place April 4-5, 2013, in Paris, France: "Two days of lectures, lightning talks and a round table dedicated to Linux distributions and their development process." Registration is free but limited to 100 people.

Full Story (comments: none)

Events: March 7, 2013 to May 6, 2013

The following event listing is taken from the LWN.net Calendar.

Date(s)        Event                                                Location
March 4-8      LCA13: Linaro Connect Asia                           Hong Kong, China
March 6-8      Magnolia Amplify 2013                                Miami, FL, USA
March 9-10     Open Source Days 2013                                Copenhagen, DK
March 13-21    PyCon 2013                                           Santa Clara, CA, US
March 15-16    Open Source Conference                               Szczecin, Poland
March 15-17    German Perl Workshop                                 Berlin, Germany
March 16-17    Chemnitzer Linux-Tage 2013                           Chemnitz, Germany
March 19-21    FLOSS UK Large Installation Systems Administration   Newcastle-upon-Tyne, UK
March 20-22    Open Source Think Tank                               Calistoga, CA, USA
March 23       Augsburger Linux-Infotag 2013                        Augsburg, Germany
March 23-24    LibrePlanet 2013: Commit Change                      Cambridge, MA, USA
March 25       Ignite LocationTech Boston                           Boston, MA, USA
March 30       Emacsconf                                            London, UK
March 30       NYC Open Tech Conference                             Queens, NY, USA
April 1-5      Scientific Software Engineering Conference           Boulder, CO, USA
April 4-5      Distro Recipes                                       Paris, France
April 4-7      OsmoDevCon 2013                                      Berlin, Germany
April 8        The CentOS Dojo 2013                                 Antwerp, Belgium
April 8-9      Write The Docs                                       Portland, OR, USA
April 10-13    Libre Graphics Meeting                               Madrid, Spain
April 10-13    Evergreen ILS 2013                                   Vancouver, Canada
April 14       OpenShift Origin Community Day                       Portland, OR, USA
April 15-17    Open Networking Summit                               Santa Clara, CA, USA
April 15-17    LF Collaboration Summit                              San Francisco, CA, USA
April 15-18    OpenStack Summit                                     Portland, OR, USA
April 17-18    Open Source Data Center Conference                   Nuremberg, Germany
April 17-19    IPv6 Summit                                          Denver, CO, USA
April 18-19    Linux Storage, Filesystem and MM Summit              San Francisco, CA, USA
April 19       Puppet Camp                                          Nürnberg, Germany
April 27-28    LinuxFest Northwest                                  Bellingham, USA
April 27-28    WordCamp Melbourne 2013                              Melbourne, Australia
April 29-30    2013 European LLVM Conference                        Paris, France
April 29-30    Open Source Business Conference                      San Francisco, USA
May 1-3        DConf 2013                                           Menlo Park, CA, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds