LWN.net Weekly Edition for September 26, 2013
Free drivers for ARM graphics
At the 2013 Linux Plumbers Conference in New Orleans, Rob Clark presented an update on the progress toward free software graphics drivers for ARM systems. Free drivers for desktop display hardware are far more actively developed; in the ARM system-on-chip (SoC) world, many users have simply resigned themselves to the use of binary-only drivers. But as Clark explained, the effort to provide free drivers has gained significant momentum in the past year alone.
A few years ago, Clark said, the situation looked very bleak, and there were no projects attempting to write free drivers for ARM SoCs. Then Luc Verhaegen started his Lima project, an attempt to reverse engineer the Mali GPU. That project "motivated the rest of us," Clark said, and now there are four active projects, each targeting a different GPU.
The first project Clark described is the Etnaviv project for GPUs made by Vivante. The Vivante GPUs are at the low end, usually found in mobile phones and other small devices like the nettop-class CuBox PC. These GPUs have a fairly straightforward design that is similar to many desktop GPUs. They use a unified shader architecture, meaning that there are not separate shader units designated for separate purposes (e.g., vertex shaders, pixel shaders, and geometry shaders). Most Vivante models have supported OpenGL ES 2.0, while the latest revision has moved to OpenGL ES 3.0 and OpenCL. The instruction set is all vector-based, and historically offered only floating-point data—although here, too, there is a recent change: the newest models now offer integer support as well.
Etnaviv is making very rapid progress, Clark said: work only started in late 2012. As of now, the project has produced a working Gallium3D driver, which is capable of playing some games. But it only supports the Linux framebuffer (fbdev) backend. The X backend needs a lot of help, he said: Direct Rendering Manager (DRM), Direct Rendering Infrastructure (DRI), and 2D X (DDX) support are all missing.
For NVIDIA's Tegra SoC series, there is the Grate project. Tegra is a widely-adopted SoC that can be found in a number of well-known products, such as Samsung Galaxy tablets, the (first-generation) Google Nexus 7, and Trim-Slice nettops. The GPU architecture is more sophisticated than Vivante's, with separate vertex and fragment shaders. The instruction set is "minimalist," Clark said, not even providing loops, but the GPUs tend to offer good performance by incorporating a massive number of cores. The devices support OpenGL ES 2.0.
Grate is still in the very early stages, he said, and is not yet usable. Grate can capture and replay a command stream, the basic GL state is well understood, and the project has reverse engineered the vertex shader instruction set architecture. But the vertex shader is straightforward; the challenge is the fragment shader, which "is more weird," he said. It uses three separate instruction streams: one for texture lookup, one for varying variable interpolation and the "Special Function Unit" or SFU (which implements out-of-the-ordinary functions like reciprocal square roots), and one for the standard arithmetic logic unit (ALU). But even the ALU is unusual; it accepts packets of three to four instructions and offers just four opcodes.
Clark next discussed the Lima project, which was the first such ARM GPU driver project to get started. Most of the Lima effort is directed at the Mali 200 and 400 series, although work has recently started on the Mali 600 series. The 200 and 400 are similar: both support OpenGL ES 2.0 and offer separate vertex and fragment shaders, and both support 2D and cubemap textures. The 600 series diverges considerably: it supports OpenGL ES 3.0 and OpenCL 1.1, uses a unified shader architecture (with different models using varying numbers of cores and ALUs of varying register widths), and supports 3D textures in addition to 2D and cubemap textures. Mali 200/400 chips are found in high-end phones, Allwinner devices, and some Samsung tablets, while the Mali 600 series is found in Google's Chromebooks and Nexus 10 tablet.
The 200/400 series vertex shader is a bit unusual, he said. It is single-threaded but deeply pipelined. It uses very long instruction words (VLIW), each of which can include two additions, two multiplications, one complex ALU operation, one pass-through ALU operation, one attribute load, one register load, one uniform load, and one varying store. In addition, there are no explicit output registers: the outputs from previous instructions are routed into the current instruction, a job which the shader compiler is responsible for sequencing correctly. The fragment shader is not quite as strange, he said, although it uses variable-length VLIW rather than the fixed-length VLIW design of the vertex shader.
As of now, the Lima driver is starting to reach usable status on Mali 200 and 400 series GPUs. There is a Mesa-based DRI driver that runs es2gears (the OpenGL ES version of the famous "glxgears" demo) as well as some other 3D demos. Connor Abbott has been developing a shader compiler, although it has not been hooked up to Lima yet. For the Mali 600 series, work has only recently begun, and so there is little progress to report.
Clark himself is the developer behind Freedreno, the driver project for Adreno GPUs. Adreno GPUs are found in Qualcomm Snapdragon SoCs, HP TouchPads, and several high-end Android phones. The GPU is available in two generations: the 200 series chips support OpenGL ES 2.0, while the 300 series chips support OpenGL ES 3.0 and OpenCL 1.1. Both use a unified shader architecture, although there are differences between them. The 200 series uses VLIW instructions on vectors with the ability to co-dispatch scalars, while the 300 series uses explicitly pipelined scalar instructions.
As of now, the Freedreno driver is the furthest along of the free driver projects. There is an initial DRM/KMS driver merged for the 3.12 kernel, and there is a working Gallium3D driver that supports both 200 and 300 series GPUs. The Gallium3D driver also works on the Kernel Graphics Support Layer (KGSL)/fbdev backend for Android. There is an X driver, too, which can utilize Adreno Z180 2D vector graphics cores. Freedreno currently implements support for OpenGL ES 1.0 and 2.0 and offers OpenGL 1.4 support on what Clark called "a best effort basis." That best effort is good enough to run GNOME Shell, XBMC, and several well-known 3D games like Xonotic and OpenArena. The DRM/KMS backend also supports the Weston compositor for Wayland.
Binary graphics drivers are a thorny issue: many free software supporters decry them, but they are a common sight—particularly on mobile Linux devices. Considering the rapid progress that has been made on free drivers for desktop GPUs in recent years, it can be easy to forget just how recently binary drivers were considered a necessary evil for that hardware as well. As Clark's talk illustrated, free drivers for ARM SoC systems still have a ways to go, but they are also making rapid progress, which should give hope to those who feel stuck with proprietary GPUs in their pockets.
A gathering of kernel developers
The now-traditional kernel panel, with yet another new set of participants, was held at LinuxCon North America (LCNA) in New Orleans, Louisiana on September 18. The questions ranged from the kernel development process to more personal queries about the kernel hackers on stage. It is a popular session (Linux Foundation Executive Director Jim Zemlin called it his favorite in the introduction) that helps give the audience a glimpse into the personalities of some of those who create and maintain the kernel that underlies their businesses and, perhaps, other parts of their lives as well.
![Panel](https://static.lwn.net/images/2013/lcna-panel-sm.jpg)
Red Hat's Ric Wheeler moderated the panel, asking his own questions as well as some from the audience. The panel consisted of Sarah Sharp from Intel, who works on USB 3; Tejun Heo from Red Hat, who is mostly working on control groups and resource management these days, he said, but has also done work on workqueues and per-CPU data structures; and Greg Kroah-Hartman of the Linux Foundation, who maintains several subsystems and is the stable kernel maintainer. Linus Torvalds rounded out the panel, noting that he no longer did any real work as he had "turned to the dark side": management. He just merges other people's patches these days, he said with a grin.
With much of the focus on embedded devices today, has the kernel moved too far in that direction and away from servers, Wheeler asked. Torvalds said that there was a good balance in the last merge window; while there were lots of commits for "wild wacky devices", there was also a lot of scalability work added. Heo mostly agreed, saying that the companies doing the kernel work are allocating their resources to generally maintain the balance. No one seemed to think there was a real problem or concern that servers would be ignored in favor of the latest smartphone.
Getting involved
Wheeler turned to the question of diversity within the community, asking if it had gotten easier for new developers to get involved. He noted that Sharp has been leading an Outreach Program for Women (OPW) effort for the kernel and turned to her first. Sharp said that the OPW kernel effort came out of an interest in helping women find a bigger project, with a mentor to assist them in getting up to speed. Seven internships were awarded, and several of the participants were presenting at the conference, she said.
Kroah-Hartman said that he was an OPW mentor this year and was pleased to see some 60 patches from participants in the last merge window, including some to the TTY drivers. He also noted that there is a professor in the Czech Republic who is making "get a patch into the kernel" an assignment for his students. While it was difficult for some, a handful (3-5) said it was "easy" and would continue working on the kernel.
Sharp also pointed out that the documentation for getting started has improved, partly because of the tutorial that she wrote for OPW on creating a first patch and interacting with the community to get it merged. The participants in the program "are doing real work", she said, including speeding up the x86 boot process by parallelizing it.
While the kernel can be difficult to get involved with because of its complexity, it can also be easier to do so because of all of the different kinds of contributions that can be made, Torvalds said. People can contribute drivers, bug fixes, documentation, and so on, which gives more opportunities to contribute than some other open source projects offer. "Just look at the numbers", he said; since there are patches from more than a thousand developers every release, "it can't be that hard" to contribute.
It really just takes two hours to go into a file and look at it, Heo said; you will likely find things that are "stupid" in the code. That provides plenty of opportunities for new developers. He also noted the rise in the number of Chinese developers, which has been noticeable recently and is good to see. Wheeler said that there are few places in the world that don't submit patches to the kernel these days. Kroah-Hartman chimed in that Antarctica should be included in the list, as patches had come from there in the past.
The first audience question was a bit whimsical. "Apparently there is an opening for a CEO at Microsoft", Wheeler said to laughter from both audience and panel, was there any interest in the position? The silence (or chuckles) from the panel members made it pretty clear what they thought of the idea.
The airplane seat-back entertainment system seems to be a popular choice for the "most embarrassing place you have seen Linux used". Normally, you only know that it runs Linux because it has crashed, as Sharp pointed out. Torvalds said that he hates seeing the kernel version numbers (like 2.2.18) that sometimes pop up when embedded Linux crashes.
![Ric Wheeler](https://static.lwn.net/images/2013/lcna-panel2-sm.jpg)
"What do you see for Linux beyond your lifetime?" was the next question up. Kroah-Hartman said that he wanted to see it continue to succeed, which means that it needs to continue to change. "If we stop our rate of change, we will die", he said, because hardware keeps on changing and Linux needs to keep adapting to it. "I can't argue with that", Torvalds said; he hopes that hardware innovation continues its pace.
On the other hand, Heo said that he has no long-term view because he "can't predict reality". He can't see much beyond getting the user-space interface to control groups into a usable form, "after that, who knows?". Sharp said she "would like to see a community that is welcoming to all" to applause.
Kernel and user space
Linux is an ecosystem, Wheeler said, so are there important user-space issues that concern the panel? "That's why we have the Plumbers conference", Kroah-Hartman said. That conference and LCNA overlapped on the day of the panel, and Plumbers is an opportunity to put the kernel and user-space developers together to resolve any issues. Torvalds would like to see more kernel developers work in user space. He wouldn't necessarily like to lose them, but it would be worth it to see some of the kernel culture spread to user space. In particular, he complained about APIs breaking with every release; "we know how to do these things" correctly, he said.
But Heo said that the kernel developers could learn from user space as well. There is a need for more communication "between those using our features and us", he said. It would be "beneficial if we talked more". Sharp pointed to power management as one place where collaboration could be better. Linux could have the best power management of any operating system, but it will take cooperation between the kernel and user space.
Another audience question was mostly targeted at Torvalds. It is known that he is a diver and likes conferences that are near good diving locations, so what would be good locations for upcoming conferences? He suggested that "more conferences in the Caribbean" would be desirable. Heo suggested Hawaii as a destination as well. Both were unsurprisingly met with widespread applause.
The only one who responded directly (most just laughed) to the question of "have any of you been approached by the US government for a backdoor?" was Torvalds. After a bit of a pause, he bobbed his head up and down in assent, while saying "NO!", which brought another laugh from the packed house.
How the panel members ended up involved in kernel programming was the next question on the agenda. Kroah-Hartman's girlfriend (now his wife) told him about sitting in on a talk with a "strange bearded guy" (Richard Stallman) about free software, which was all new to him. Some years went by and he eventually learned about Linux and got bored when his wife and daughter went away on a trip—so he wrote a driver. In that same vein, Sharp's boyfriend (now husband) was involved with open source rocketry, so she got involved with Linux to work on those rockets.
Torvalds said that lack of money is what drove him to kernel development—creation really. He couldn't afford to buy Unix, nor to buy games (he had to type them in), so he turned to kernel development out of necessity. Heo said that he was unsure why, but he always wanted to do operating system programming. He is from Korea, and universities there did not offer operating system development without pursuing a master's degree. That led him to Linux, where no one cares what degree you have: "if you can do it, you can do it".
The difficulties of being a maintainer were up next. Kroah-Hartman said that he had a whole talk on what makes his life hard, but it could be summed up as "read the documentation". One of Torvalds's major sources of stress is last-minute pull requests—"and I am looking at you James [Bottomley]". Those make him hurry, which he hates to do. If the code isn't ready, just wait until the next release rather than making him hurry, he said.
How to interact with Torvalds (and others, like Kroah-Hartman) is not really documented, Heo said, and it takes six months or a year to come up to speed on that. Beyond that, maintaining a piece of the kernel is "not that hard", he said. For Sharp, getting patches without a justification is one of the hardest problems she deals with. Submitters need to convince her that the patch is actually needed; "why should I care?" If it fixes a bug or adds a feature, the patch should say that. Torvalds emphatically agreed with Sharp; submitters should not just send patches in a "drive-by" fashion, but be ready to answer questions and justify their patch.
Combining two questions, Wheeler asked about what the panel members did in their non-software time and what, if anything, might draw them away from Linux development eventually. Kroah-Hartman joked that Linux was once his hobby, but then he got a Linux job and lost his hobby. Lately he has been building a kayak. Sharp listed several things she likes to do outside of kernel hacking including bicycling, gardening, and fantasy gaming (e.g. Magic: The Gathering and Dungeons & Dragons).
Diving and having a "regular life" occupy Torvalds. He doesn't see anything "coming along that would be more interesting than Linux". Nothing else would "fill that void in my life". Heo is also happy in his job and "could not imagine" doing anything else for work. He has recently moved to New York City, so he spends time "hanging out in the city" and "trying to have a life", he said with a grin.
Working with the community
The final question came out of what is sometimes called "The Greg and Jim show", which is when Kroah-Hartman and Zemlin travel to different companies to talk with them about how to get better at working with the kernel community. Zemlin retook the stage to ask the panel if they had advice for those companies' engineers. Getting involved with the community early in the hardware design phase is important, Kroah-Hartman said. Some companies understand that, to the point where code for an Intel chip that never shipped had to be ripped out of the kernel a few cycles ago, he said.
Torvalds said it goes beyond just looking at the problems that come from your company's new hardware; you need to look at the bigger picture. The "perfect solution" for a specific problem may not be useful for others with similar problems. Features need to be added "in a way that makes sense for other people".
Heo said that it is important for companies to budget extra time and resources to work with upstream. Some think they can "send it and forget about it", but it doesn't work that way. Sharp said that companies need to design their changes more openly; don't design a new API behind closed doors. Instead, working with the kernel developers on the design will help build up trust, which eases working with the community.
As a closing note, Zemlin not only thanked the panel and moderator, but also reported on a conversation that he recently had: he had spoken with the airline entertainment system company and reassured everyone that it would be upgrading its kernel "soon". With luck, perhaps, that means it will crash less often too.
[ I would like to thank LWN subscribers for travel assistance to New Orleans for LinuxCon North America. ]
A SPDX case study
At LinuxCon North America in New Orleans, Samsung's Young-taek Kim described his company's experience rolling out support for the Software Package Data Exchange (SPDX) standard in its product development tools. SPDX, of course, is a data format for tracking software components, licenses, and copyrights. The company was able to improve its efficiency regarding license compliance, but that was not the only benefit to the program. The implementation team also came away from the experience with feedback for several ways to improve the SPDX specification itself.
Why SPDX
Kim is an engineer in Samsung's Open Source Initiative (OSI) team. Like the open-source groups inside many large corporations, the team is charged (in addition to its development duties) with educating and guiding other units in the company about open source principles. Kim gave a quick overview of SPDX before describing the OSI team's task and where SPDX fit into Samsung's workflow.
The SPDX specification is designed to produce a standardized "bill of materials" for an open source software package, he said. It communicates the licenses and copyrights that make up a package—including, importantly, packages that are derived from multiple sources. A constant problem in business scenarios is making sure that one's company gets good information about these factors from software suppliers and subcontractors. It is common, he said, for a supplier to say simply "this is open source" and provide no further information. The package could be MIT-licensed or under the GPL, but if one does not know which of those licenses it is, one does not know how to comply with it.
In practice, Kim said, he has often manually vetted a package by looking through the source. He does not mind this process, but it clearly results in duplication of effort when multiple project teams in multiple divisions repeat the vetting for a package that is already in use elsewhere. Standardizing the license and copyright information with SPDX lets the company create a central database to unambiguously keep track of the packages it has already vetted, and it helps resolve complex compliance questions that arise from combining multiple packages. Both benefits were of interest to Samsung.
Samsung's pilot program
Kim explained that Samsung wanted to reduce the overhead of license compliance, so it charged the OSI team with deploying SPDX data interchange in a pilot program inside the company. He then described Samsung's existing open source compliance process. The company breaks the process into four steps: discovering an open source package, developing a product with the package, verifying the obligations imposed by the open source license, and releasing the appropriate material to satisfy that obligation.
The SPDX pilot program was charged with improving those final two steps. Before the program, the verification stage meant confirming the license on a package by having a human read through the source, which is time-consuming, often redundant (such as when the same package has already been verified by a different product team), and prone to error. Human beings, he said, can reach different conclusions when reading the same code. The obligation-satisfaction stage was also largely manual (e.g., a person having to post source code on a public Samsung web site, make it available to customers, or insert a copyright statement onto a product screen) and could be expensive (especially when printing a source code offer in a user manual was involved—and even more expensive when re-printing is necessary).
The pilot program's first goal was to reduce the time lost to re-verification. The OSI team developed a tool called AIRS to identify software packages and verify their license and copyright in SPDX format. AIRS started out with a command-line interface, but is also usable as a Java library. It uses the Protex code verifier from Black Duck to scan a package and pick out license and copyright information. It then exports this information as SPDX data, including the licenses and copyrights of all components and (perhaps most importantly) the "concluded license" that applies to the combined work as a whole. It identifies files by SHA1 checksum, which helps catch duplicates—meaning that files which have already been scanned and analyzed once do not need to be re-scanned even when directory structures have been rearranged.
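AIRS itself is an internal Samsung tool, so its details are not public, but the checksum-based deduplication Kim described can be sketched in a few lines. A minimal example using OpenSSL's SHA1 routines (the function name and buffer size here are illustrative assumptions):

```c
#include <stdio.h>
#include <openssl/sha.h>

/* Sketch: hash a file's contents so that identical files are recognized
 * no matter where they sit in the directory tree. A scan database keyed
 * on this digest avoids re-analyzing files that were already vetted. */
static int file_sha1(const char *path, unsigned char digest[SHA_DIGEST_LENGTH])
{
	unsigned char buf[8192];
	size_t n;
	SHA_CTX ctx;
	FILE *f = fopen(path, "rb");

	if (!f)
		return -1;
	SHA1_Init(&ctx);
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		SHA1_Update(&ctx, buf, n);
	SHA1_Final(digest, &ctx);
	fclose(f);
	return 0;
}
```

A lookup keyed on that digest is what makes re-scans of already-vetted files unnecessary, even after a directory reshuffle.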
The eventual design is for AIRS to store this SPDX data in a central, company-wide database, which can then be queried whenever a new (or a duplicate) package is imported for testing. Right now, teams within the company exchange SPDX information internally using other tools. However, the chief benefit of AIRS is that it can identify the correct license and copyright of a package automatically. Even for a small development team, that demonstrably saves time.
The second goal of the pilot program was to simplify the obligation-satisfaction step, Kim said. For this, the OSI team developed a web tool (tied in to AIRS) that can automatically publish the appropriate license notice for a package on the company's web site. It generates the page for each package based on the stored SPDX data, and even generates a QR Code containing a link to the license page URL. Samsung intends to start putting these URLs on physical product packaging, perhaps as soon as October.
SPDX in the future
Overall, the company was quite happy with the pilot program, Kim said, so work is continuing. The AIRS centralized SPDX database is the first order of business, but there are several other to-do list items. One is support for verification engines other than Protex; another is the ability to identify the same code snippet even when the file checksum changes. The OSI team also wrote its own SPDX parser when developing AIRS, which Kim said he hopes to release as an open source project in its own right.
In reply to an audience question, Kim said that the company may start requiring external software suppliers to provide SPDX data on the packages that they supply. What makes that request tricky is that Samsung is still responsible for verifying that the information is correct, so it will probably have to use AIRS to process the suppliers' code anyway.
Despite its general satisfaction, Samsung ran into several problems with SPDX itself when running its pilot program. First was the "Artifact of Project" property (defined in sections 6.8 to 6.10 of the SPDX specification [PDF]), which is meant to indicate that the file in question belongs to a specific project. In the specification, the cardinality of this property is "one," so a given file can only be associated with a single project. Samsung found that insufficient to record projects that constitute combined works, and had to modify its SPDX output to list every project that a file belongs to.
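In SPDX's tag-value form, the limitation looks roughly like the following hypothetical file entry (the tag names follow the SPDX 1.x tag-value format; the second ArtifactOfProjectName line is Samsung's workaround, which the specification's cardinality rule disallows):

```
FileName: ./src/decoder.c
FileChecksum: SHA1: 85ed0817af83a24ad8da68c2b5094de69833983c
LicenseConcluded: LGPL-2.1
ArtifactOfProjectName: media-decoder-lib
ArtifactOfProjectName: combined-media-stack
```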
The property also requires parent projects to be described with the Description of a Project (DOAP) format, which duplicates the same RDF/XML data for every file in a project—a simple database reference would save space. In addition, Kim said, Samsung found it problematic that SPDX does not account for sub-projects within a project, which is a common situation when creating large products. It also ran into problems caused by the fact that SPDX does not enforce a common rule for the formatting of file paths; packages can reference files with relative path names, which makes it difficult to match them up for the purpose of determining the concluded license. Requiring that file paths be normalized would simplify things.
SPDX is often touted for its ability to ensure correctness in license-compliance efforts, so it is interesting to see that it can enable other benefits, too, such as reducing the amount of duplicated work undertaken by developers. Samsung is an enormous company, so even saving a small amount of time on a per-project basis can add up to a lot.
[The author would like to thank the Linux Foundation for assistance with travel to New Orleans.]
Security
Encouraging a wider view
For his keynote at the 2013 Linux Security Summit, Ted Ts'o wove current events and longstanding security problem areas together. He encouraged the assembled kernel security developers to look beyond the kernel and keep the "bigger picture" in mind. His talk kicked off the summit, which was co-located with the Linux Plumbers Conference (and others) in New Orleans, Louisiana.
Adversaries
Ts'o began by looking at the adversaries we face today, starting with the secret services of various governments—our own and foreign governments, no matter where we live. Beyond that, though, there are organized cyber-criminals who maintain botnets and other services for hire. He noted that there is a web service available for solving CAPTCHAs, where a rural farmer with no knowledge of English (or even Roman characters, perhaps) will solve one in real time. "Isn't capitalism wonderful?", he asked.
The historic assumptions made about the budgets of our adversaries may not be accurate, he said. Many in the room will know about the problems he is describing, but the general public does not. How do we get the rest of the world to understand these issues, he asked.
Beyond criminals, we have also seen the rise of cyber-anarchists recently. These folks are causing trouble "for the fun of it". They have different motivations than other kinds of attackers. No matter what you might think of their politics, he said, they can cause a lot of problems for "systems we care about".
Ts'o related several quotes from Robert Morris, who was the chief scientist at the US National Security Agency (NSA)—and father of Robert T. Morris of Morris worm "fame". Morris was also an early Multics and Unix developer, who was responsible for the crypt() function used for passwords. The upshot of Morris's statements was that there is more than one way to attack security and that underestimating the "time, expense, and effort" an adversary will expend is foolhardy. Morris's words were targeted at cryptography, but are just as applicable to security. In addition, it is fallible humans who have to use security software, so Morris's admonition to "exploit your opponent's weaknesses" can be turned on its head: our opponents may have vast resources, but developers need to "beware the stupid stuff", Ts'o said.
The CA problem
In May, Ts'o and his Google team were at a hotel in Yosemite for a team-building event where he encountered some kind of man-in-the-middle attack that highlighted the problems in the current SSL certificate system. While trying to update his local IMAP mail cache, which uses a static set of certificates rather than trusting the certificate authority (CA) root certificates, his fetch failed because the po14.mit.edu certificate had, seemingly, changed—to a certificate self-signed by Fortinet. That company makes man-in-the-middle proxy hosts to enable internet surveillance by companies and governments.
He dug further, trying other sites such as Gmail and Harvard University, but those were not being intercepted. In addition, requesting a certificate for the MIT host from elsewhere on the internet showed that the certificate had not actually changed. Something was targeting traffic from the hotel (and, perhaps, other places as well) to MIT email hosts for reasons unknown. The bogus certificate was self-signed, which would hopefully raise red flags in most email clients, but the problem persisted for the weekend he was there—at least.
As people in the room are aware, but, again, the rest of the world isn't, the CA system is broken, Ts'o said. He referred to a Defcon 19 presentation [YouTube] by Moxie Marlinspike about the problems inherent in SSL and the CA system. While Marlinspike's solution may not be workable, his description of the problem is quite good, Ts'o said.
It comes down to the problem that some certificate issuers are "too big to jail", so that punishing them by banning their root certificates is unworkable. Marlinspike estimated that banning Comodo (which famously allowed fraudulent certificates to be issued) would have caused 20-25% of HTTPS servers on the internet to go dark. Comodo got to that level of popularity by being one of the cheapest available providers, of course. There are some 650 root authorities that are currently blindly trusted to run a tight ship, with no way to punish them if they don't, Ts'o said.
There are some solutions like certificate pinning, which Google started and various browser vendors have adopted, but that approach doesn't scale. Many have claimed that DNSSEC is a solution, but Marlinspike has argued otherwise—the actors are different, but the economic incentives are the same. Rather than trusting a bunch of CAs, Verisign would simply have to be trusted.
Ts'o doesn't know how to solve the CA problem, but he did have a selfish request: he would like to see certificates be cached, with warnings issued when those certificates change. Unfortunately, that won't work for the average non-technical person, nor would it be all that easy to implement, because OpenSSL and the libraries that call it are typically disconnected from the user interface, but it would make him happier.
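The check he described amounts to caching a certificate and comparing fingerprints on the next connection. A minimal sketch, assuming certificates stored as PEM files (a real client would also need a policy for legitimate re-issuance, and proper cleanup):

```c
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

/* Sketch: warn when a server certificate differs from a cached copy by
 * comparing SHA1 fingerprints. Error handling and resource cleanup are
 * abbreviated for brevity. */
static int cert_changed(const char *cached_pem, const char *fresh_pem)
{
	unsigned char d1[EVP_MAX_MD_SIZE], d2[EVP_MAX_MD_SIZE];
	unsigned int n1 = 0, n2 = 0;
	FILE *f1 = fopen(cached_pem, "r");
	FILE *f2 = fopen(fresh_pem, "r");
	X509 *c1 = f1 ? PEM_read_X509(f1, NULL, NULL, NULL) : NULL;
	X509 *c2 = f2 ? PEM_read_X509(f2, NULL, NULL, NULL) : NULL;

	if (!c1 || !c2)
		return -1;	/* cannot verify; treat as suspicious */
	X509_digest(c1, EVP_sha1(), d1, &n1);
	X509_digest(c2, EVP_sha1(), d2, &n2);
	return n1 != n2 || memcmp(d1, d2, n1) != 0;
}
```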
Linux security solutions
A short program that just did setuid(0) and spawned a shell led to Ts'o's question of "when is a setuid program not a setuid program?". He showed that the program wasn't owned by root with the setuid bit set, yet it gave a root shell. It worked because the file had CAP_SETUID set in its file capabilities—something that all of the security scanning tools he looked at completely ignored. File capabilities have been around since 2.6.30, but no one is paying attention, which is "kind of embarrassing". Worse yet, there is no way to disable file capabilities in the kernel, he said.
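The trick is easy to reproduce; here is a sketch of the sort of demonstration program Ts'o showed (the details of his actual demo are an assumption). The binary need not be owned by root and carries no setuid bit:

```c
/* notsetuid.c: no setuid bit, not owned by root, yet it yields a root
 * shell if installed with file capabilities:
 *
 *     setcap cap_setuid+ep ./notsetuid
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	if (setuid(0) != 0) {		/* succeeds thanks to CAP_SETUID */
		perror("setuid");
		return 1;
	}
	execl("/bin/sh", "sh", (char *)NULL);
	perror("execl");
	return 1;
}
```

Because the real UID is 0 by the time the shell starts, nothing in the shell's own privilege-dropping logic intervenes.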
Linux capabilities are meant to split up root's powers into discrete chunks, but their adoption has been slow. The idea is that capabilities are by default not inherited by children, so parents need the right to pass on their capabilities, and the child executable has to have the right to accept them. But there is a "compatibility mode" that has been created where root-spawned processes inherit all of the parent's capabilities. This is done so that running shell scripts as root continues to work, but that mode leads to another problem.
Of the 30 or so powers granted by capabilities, over half can (sometimes) be used to gain full root privileges. You must be able to use those capabilities in an "unrestricted way", which may or may not be true depending on how the system is set up. But many would not be a privilege-escalation problem at all if it weren't for the compatibility mode.
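Concretely, the inheritance rules Ts'o was referring to are the execve() capability transition documented in capabilities(7); in rough pseudocode (this is the pre-ambient-capabilities formula of that era, with P the process's sets and F the file's sets):

```
P'(permitted)   = (P(inheritable) & F(inheritable)) |
                  (F(permitted) & bounding_set)
P'(effective)   = F(effective) ? P'(permitted) : 0
P'(inheritable) = P(inheritable)

/* Compatibility mode: when a UID-0 process executes a binary with no
 * file capabilities, F(permitted) and F(inheritable) are treated as
 * all-ones and F(effective) as set -- so root keeps everything. */
```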
So, why not use SELinux instead, he asked. It can do all of the things that capabilities were intended to do, although the policy has to be set up correctly. Unfortunately, the policy is several megabytes of source that is difficult to understand, change, or use.
As it turns out, though, things have "gotten a lot better" in the SELinux world, according to Ts'o. Every few years, he turns on SELinux to see how well it is working. "Usually, it screws me to the wall" and he has to immediately disable it. In one case, he even had to reinstall his system because of it. But when he tried it just prior to the summit, it mostly worked for him.
The audit2allow program, which looks at the SELinux denials and generates new policy, is "a huge win". On his system, it generated 400 lines of local policy to make things work. Overall, it is much better and he will probably leave it running on his system. There is still a ways to go, particularly in the area of documentation. There is plenty of beginner documentation and expert documentation (i.e. the source code), but information for intermediate users is lacking. That leads to those users just turning off SELinux. The problems he ran into (which were fewer than his earlier tries, but still present) may have been partly due to the SELinux policy packages for Debian testing; perhaps Fedora users would have had a better time, he said.
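What audit2allow emits is a type-enforcement policy module. A hypothetical example of what such generated local policy might look like (the type names here are invented for illustration):

```
module local 1.0;

require {
	type logwatch_t;
	type user_home_t;
	class file { read open getattr };
}

# allow rule generated from an observed AVC denial
allow logwatch_t user_home_t:file { read open getattr };
```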
His experiment with SELinux showed another problem, though. He now gets email every two hours from logcheck with a vast number of complaints. It is clear that his logcheck configuration files are out of sync with the SELinux installation. How to handle security policy and configuration with respect to various kinds of distribution packages is a difficult problem. Right now, the SELinux policy package maintainers and logcheck package maintainers would need to coordinate, but that doesn't scale well. Does logcheck need to coordinate with AppArmor as well, or should the policy packages be handling the configuration needed for logcheck? There is no obvious solution to that problem, but perhaps automated tools a la audit2allow might help, he said.
Wrapping up
Turning to the summit itself, Ts'o noted all of the different example topics listed in the call for participation, which included ideas like system hardening, virtualization, cryptography, and so on. The program committee did a good job on that list, he said, but what ended up on the schedule? An update to Linux Security Module (LSM) A, a change to LSM B, a new feature for LSM C, and composing (i.e. stacking) LSMs. That's not completely fair, Ts'o said, as there are other topics on the list like kernel address space layout randomization (ASLR) and embedded Linux security, but his point was clear.
He encouraged Linux security developers to think more widely. The program committee can only choose from the topics that are submitted and people submit what they can get funding to work on. The executives of the companies they work for only fund those things that users really care about, so how can we get users to care about security, he asked.
It turns out that perhaps "NSA" is part of the answer, he said—to widespread laughs. But the best outcome from the Snowden revelations is that people are talking about security again. According to Ts'o, US President Obama has been quoted as saying "never let a good crisis go to waste". Security researchers and developers should follow that advice, he said.
A business case needs to be made for better Linux security, Ts'o said. After the kernel.org compromise, some companies were interested in funding Linux security work, but after two months or so, that interest all dried up. It may be that the NSA surveillance story also dies away, but Glenn Greenwald is something of an expert at dribbling out the details from Snowden. That may give this particular crisis longer legs.
Security folks need to find a way for security countermeasures to take advantage of the power of scale, he said. Both Google and the NSA have figured out that if you can invest a large amount into fixed costs and bring the incremental costs way down, you can service a lot of users. Cyber-criminals have also figured this out; the security community needs to do so as well.
In the kernel developers' panel that had been held at LinuxCon the day before, Linus Torvalds suggested that he would be willing to lose some of the best kernel developers if they would export kernel culture to various user-space projects. The same applies to security, Ts'o said. The security of the libraries needs to improve, hardware support for random number generation needs to be more widely available, and so on. Though there have been concerns about the RDRAND instruction in Intel processors because it is not auditable, Ts'o said he would much rather have it available than not.
Similarly, the trusted platform module (TPM) available in most systems is generally not used. Some TPM implementations are suspect, but there is no incentive for manufacturers to improve them since they aren't really used. It is hard enough to get a manufacturer to add $0.25 to the bill of materials (BOM) for a device; without a business case (i.e. users), it is likely impossible.
Security technology is not useful unless it gets used. In fact, as the file capabilities example showed, it can be actively harmful if it isn't used.
Ts'o concluded by suggesting that the assembled developers think about a "slightly bigger picture" than LSMs and the composition of LSMs. Those topics are important, but there is far more out there that needs fixing. As he noted, though, it will take a push from users to get the needed funding to address many of these issues.
[ I would like to thank LWN subscribers for travel assistance to New Orleans for the Linux Security Summit. ]
Brief items
Security quotes of the week
We're not there yet, but already we've learned that both the DEA and the IRS use NSA surveillance data in prosecutions and then lie about it in court. Power without accountability or oversight is dangerous to society at a very fundamental level.
New vulnerabilities
apt-xapian-index: authorization bypass
Package(s): apt-xapian-index
CVE #(s): CVE-2013-1064
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that apt-xapian-index was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
chromium: multiple vulnerabilities
Package(s): chromium
CVE #(s): CVE-2012-5116 CVE-2012-5117 CVE-2012-5118 CVE-2012-5119 CVE-2012-5121 CVE-2012-5122 CVE-2012-5123 CVE-2012-5124 CVE-2012-5125 CVE-2012-5126 CVE-2012-5151 CVE-2013-0828 CVE-2013-0829 CVE-2013-0839 CVE-2013-0840 CVE-2013-0841 CVE-2013-0842 CVE-2013-0902 CVE-2013-0903 CVE-2013-0904 CVE-2013-0905 CVE-2013-0906 CVE-2013-0907 CVE-2013-0908 CVE-2013-0909 CVE-2013-0910 CVE-2013-0911 CVE-2013-0912 CVE-2013-0916 CVE-2013-0917 CVE-2013-0918 CVE-2013-0919 CVE-2013-0920 CVE-2013-0921 CVE-2013-0922 CVE-2013-0923 CVE-2013-0924 CVE-2013-0925 CVE-2013-0926 CVE-2013-2836 CVE-2013-2874
Created: September 25, 2013
Updated: September 25, 2013

Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in Chromium and V8. A context-dependent attacker could entice a user to open a specially crafted web site or JavaScript program using Chromium or V8, possibly resulting in the execution of arbitrary code with the privileges of the process or a Denial of Service condition. Furthermore, a remote attacker may be able to bypass security restrictions or have other, unspecified, impact.
freeswitch: code execution
Package(s): freeswitch
CVE #(s): CVE-2013-2238
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Mageia advisory:

In FreeSWITCH before 1.2.12, if the routing configuration includes regular expressions that don't constrain the length of the input, buffer overflows are possible. Since these regular expressions are matched against untrusted input, remote code execution may be possible.
glpi: improper sanitation of user input
Package(s): glpi
CVE #(s): CVE-2013-5696
Created: September 20, 2013
Updated: September 25, 2013

Description: From the Mageia advisory:

Security vulnerabilities due to improper sanitation of user input in GLPI before version 0.84.2 (CVE-2013-5696).
hplip: authorization bypass
Package(s): hplip
CVE #(s): CVE-2013-4325
Created: September 19, 2013
Updated: October 21, 2013

Description: From the Ubuntu advisory:

It was discovered that HPLIP was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
icedtea-web: code execution
Package(s): icedtea-web
CVE #(s): CVE-2013-4349
Created: September 23, 2013
Updated: October 7, 2013

Description: From the Red Hat bugzilla:

An off-by-one heap-based buffer overflow was found in IcedTeaScriptableJavaObject::invoke function. This problem was discovered in Oct 2012 and was assigned CVE-2012-4540. Version 1.4 released in May 2013 did not include the fix and is affected by the issue.
jockey: authorization bypass
Package(s): jockey
CVE #(s): CVE-2013-1065
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that Jockey was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2013-4350 CVE-2013-4343
Created: September 19, 2013
Updated: November 1, 2013

Description: From the Red Hat bugzilla [1; 2]:

Alan Chester reported an issue with IPv6 on SCTP that IPsec traffic is not being encrypted, whereas on IPv4 it is. (CVE-2013-4350)

Linux kernel built with the Universal TUN/TAP device driver (CONFIG_TUN) support is vulnerable to a potential privilege escalation via a use-after-free flaw. It could occur while doing an ioctl(TUNSETIFF) call. A privileged (CAP_NET_ADMIN) user/program could use this flaw to crash the kernel, resulting in DoS, or potentially escalate privileges to gain root access to a system. (CVE-2013-4343)
language-selector: authorization bypass
Package(s): language-selector
CVE #(s): CVE-2013-1066
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that language-selector was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
libvirt: multiple vulnerabilities
Package(s): libvirt
CVE #(s): CVE-2013-4311 CVE-2013-4296 CVE-2013-5651
Created: September 19, 2013
Updated: November 25, 2013

Description: From the Ubuntu advisory:

It was discovered that libvirt used the pkcheck tool in an unsafe manner. A local attacker could possibly use this flaw to bypass polkit authentication. In Ubuntu, libvirt polkit authentication is not enabled by default. (CVE-2013-4311)

It was discovered that libvirt incorrectly handled certain memory stats requests. A remote attacker could use this issue to cause libvirt to crash, resulting in a denial of service. This issue only affected Ubuntu 12.04 LTS, Ubuntu 12.10, and Ubuntu 13.04. (CVE-2013-4296)

It was discovered that libvirt incorrectly handled certain bitmap operations. A remote attacker could use this issue to cause libvirt to crash, resulting in a denial of service. This issue only affected Ubuntu 13.04. (CVE-2013-5651)
moodle: sql injection
Package(s): moodle
CVE #(s): CVE-2013-4313 CVE-2013-4341
Created: September 19, 2013
Updated: September 25, 2013

Description: From the CVE entries:

Moodle through 2.2.11, 2.3.x before 2.3.9, 2.4.x before 2.4.6, and 2.5.x before 2.5.2 does not prevent use of '\0' characters in query strings, which might allow remote attackers to conduct SQL injection attacks against Microsoft SQL Server via a crafted string. (CVE-2013-4313)

Multiple cross-site scripting (XSS) vulnerabilities in Moodle through 2.2.11, 2.3.x before 2.3.9, 2.4.x before 2.4.6, and 2.5.x before 2.5.2 allow remote attackers to inject arbitrary web script or HTML via a crafted blog link within an RSS feed. (CVE-2013-4341)
polarssl: denial of service
Package(s): polarssl
CVE #(s): CVE-2013-4623
Created: September 23, 2013
Updated: September 25, 2013

Description: From the polarssl advisory:

A bug in the logic of the parsing of PEM encoded certificates in x509parse_crt() can result in an infinite loop, thus hogging processing power.

While parsing a Certificate message during the SSL/TLS handshake, PolarSSL extracts the presented certificates and sends them on to be parsed. As the RFC specifies that the certificates in the Certificate message are always X.509 certificates in DER format, bugs in the decoding of PEM certificates should normally not be triggerable via the SSL/TLS handshake. Versions of PolarSSL prior to 1.1.7 in the 1.1 branch and prior to 1.2.8 in the 1.2 branch call the generic x509parse_crt() function for parsing during the handshake. x509parse_crt() is a generic function that wraps parsing of both PEM-encoded and DER-formatted certificates. As a result it is possible to craft a Certificate message that includes a PEM encoded certificate that triggers the infinite loop.

This bug and code path will only be present if PolarSSL is compiled with the POLARSSL_PEM_C option. This option is enabled by default.
policykit-1: privilege escalation
Package(s): policykit-1
CVE #(s): CVE-2013-4288
Created: September 19, 2013
Updated: October 7, 2013

Description: From the Ubuntu advisory:

It was discovered that polkit didn't allow applications to use the pkcheck tool in a way which prevented a race condition in the UID lookup. A local attacker could use this flaw to possibly escalate privileges.
proftpd: denial of service
Package(s): proftpd
CVE #(s): CVE-2013-4359
Created: September 24, 2013
Updated: October 22, 2013

Description: From the Red Hat bugzilla:

ProFTPd default installation comes with mod_sftp and mod_sftp_pam activated, which initiates this flaw. The bug is useful to trigger a large heap allocation and exhaust all available system memory of the underlying operating system.
rtkit: authorization bypass
Package(s): rtkit
CVE #(s): CVE-2013-4326
Created: September 19, 2013
Updated: October 29, 2013

Description: From the Ubuntu advisory:

It was discovered that RealtimeKit was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
rubygems: denial of service
Package(s): rubygems
CVE #(s): CVE-2013-4287
Created: September 23, 2013
Updated: February 25, 2014

Description: From the Fedora advisory:

A vulnerability was found on rubygems currently being shipped on Fedora in validating versions with a regular expression which leads to denial of service due to backtracking.
software-properties: authorization bypass
Package(s): software-properties
CVE #(s): CVE-2013-1061
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that Software Properties was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
spice-gtk: authorization bypass
Package(s): spice-gtk
CVE #(s): CVE-2013-4324
Created: September 20, 2013
Updated: January 1, 2014

Description: From the Red Hat advisory:

spice-gtk communicated with PolicyKit for authorization via an API that is vulnerable to a race condition. This could lead to intended PolicyKit authorizations being bypassed. This update modifies spice-gtk to communicate with PolicyKit via a different API that is not vulnerable to the race condition.
systemd: authorization bypass
Package(s): systemd
CVE #(s): CVE-2013-4327
Created: September 19, 2013
Updated: October 14, 2013

Description: From the Ubuntu advisory:

It was discovered that systemd was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
tiff: code execution
Package(s): tiff
CVE #(s): CVE-2013-4243
Created: September 24, 2013
Updated: June 23, 2014

Description: From the CVE entry:

Heap-based buffer overflow in the readgifimage function in the gif2tiff tool in libtiff 4.0.3 and earlier allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via crafted height and width values in a GIF image.
ubuntu-system-service: authorization bypass
Package(s): ubuntu-system-service
CVE #(s): CVE-2013-1062
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that ubuntu-system-service was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
usb-creator: authorization bypass
Package(s): usb-creator
CVE #(s): CVE-2013-1063
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Ubuntu advisory:

It was discovered that usb-creator was using polkit in an unsafe manner. A local attacker could possibly use this issue to bypass intended polkit authorizations.
wireshark: denial of service
Package(s): wireshark
CVE #(s): CVE-2013-5719 CVE-2013-5721
Created: September 19, 2013
Updated: September 25, 2013

Description: From the CVE entries:

epan/dissectors/packet-assa_r3.c in the ASSA R3 dissector in Wireshark 1.8.x before 1.8.10 and 1.10.x before 1.10.2 allows remote attackers to cause a denial of service (infinite loop) via a crafted packet. (CVE-2013-5719)

The dissect_mq_rr function in epan/dissectors/packet-mq.c in the MQ dissector in Wireshark 1.8.x before 1.8.10 and 1.10.x before 1.10.2 does not properly determine when to enter a certain loop, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-5721)
xen: privilege escalation
Package(s): xen
CVE #(s): CVE-2013-4329
Created: September 19, 2013
Updated: September 25, 2013

Description: From the Red Hat bugzilla:

With HVM domains, libxl's setup of PCI passthrough devices does the IOMMU setup after giving (via the device model) the guest access to the hardware and advertising it to the guest. If the IOMMU is disabled the overall setup fails, but after the device has been made available to the guest; subsequent DMA instructions from the guest to the device will cause wild DMA. An HVM domain, given access to a device which is bus-mastering capable in the absence of a functioning IOMMU, can mount a privilege escalation or denial of service attack affecting the whole system.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.12-rc2, released on September 23. Linus said: "Things have been fairly quiet, probably because lots of people were traveling for LinuxCon and Linux Plumbers conference last week. So nothing very exciting stands out. It's mainly driver updates/fixes (gpu drivers stand out, but there's networking too, and smaller stuff all over). Apart from drivers there's arch updates (tile/arm/mips) and some filesystem noise (mainly btrfs)."
Stable updates: no stable updates have been released in the last week. The 3.11.2, 3.10.13, 3.4.63, and 3.0.97 updates are in the review process as of this writing; they can be expected on or after September 27.
Quotes of the week
Garrett: Implementing UEFI Boot to Zork
Matthew Garrett has finally implemented what we all really wanted in the first place: direct boot into the Zork game from UEFI. "But despite having a set of functionality that makes it look much more like an OS than a boot environment, UEFI doesn't actually expose a standard C library. The EFI Application Development Kit solves this particular design decision."
NVIDIA to provide documentation for Nouveau
Nouveau is the reverse-engineered driver for NVIDIA GPUs; it has been developed for a number of years with no assistance from NVIDIA. Now, though, an NVIDIA developer has surfaced on the Nouveau list with an offer to help: "NVIDIA is releasing public documentation on certain aspects of our GPUs, with the intent to address areas that impact the out-of-the-box usability of NVIDIA GPUs with Nouveau. We intend to provide more documentation over time, and guidance in additional areas as we are able." This would appear to be a big step in the right direction.
Kernel development news
Split PMD locks
Once upon a time, the standard response to scalability problems in the kernel was the introduction of finer-grained locking. That approach has its problems, though: the cache-line bouncing that locking activity creates can be a scalability problem in its own right. So much of the scalability work in the kernel has, in recent years, been focused on lockless algorithms instead. But, sometimes, there is little alternative to the introduction of finer-grained locks; a current memory management patch set illustrates one of those situations, with some additional complications.
Page tables hold the mapping between a virtual address in some process's address space and the physical location of the memory behind that address. It is easy to think of the page table as a simple linear array indexed by the page frame number, but the reality is more complicated: page tables are implemented as a sparse tree with up to four levels, with various subfields of the virtual address used to select the entry of interest at each level of the tree.
Some systems do not have all four levels; no 32-bit system has the PUD ("page upper directory") level, for example, and some 32-bit systems may still get by with two-level page tables. Kernel code is written to deal with all four levels, though; the extra code is simply compiled away for configurations with fewer levels.
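For those who want to see the shape of that code, here is a minimal sketch (not taken from the article) of a descent through the tree using the kernel's level-walking helpers; on configurations with fewer levels, the unused steps compile away to nothing:

    #include <linux/mm.h>

    /* Sketch: walk from the top of a process's page table tree down to
     * the PTE for a given virtual address. */
    static pte_t *walk_to_pte(struct mm_struct *mm, unsigned long addr)
    {
            pgd_t *pgd = pgd_offset(mm, addr);  /* page global directory */
            pud_t *pud;
            pmd_t *pmd;

            if (pgd_none(*pgd))
                    return NULL;
            pud = pud_offset(pgd, addr);        /* page upper directory */
            if (pud_none(*pud))
                    return NULL;
            pmd = pmd_offset(pud, addr);        /* page middle directory */
            if (pmd_none(*pmd) || pmd_trans_huge(*pmd))
                    return NULL;    /* absent, or the PMD maps a huge page directly */
            return pte_offset_map(pmd, addr);   /* lowest level: the PTE */
    }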
Changes to page tables can be made frequently; every movement of a page into or out of RAM must be reflected there, as must changes to the virtual address space (such as those made via an mmap() call). If the page table is not shared across processes, there is little potential for contention (and, thus, for scalability problems), since only one process will be making changes there. Sharing of the page tables, as happens most frequently in threaded workloads, changes the picture, though; it is not uncommon for threads to be making concurrent page table changes. The more concurrently running threads there are, the higher the potential for contention becomes.
In some configurations, the entire page table is protected by a single spinlock (called page_table_lock) in the process's mm_struct structure. That lock was recognized as a scalability problem years ago; in response, locking for the lowest level of the page table tree (the PTE — "page table entry" — pages) was made per-PTE-page for multiprocessor configurations. But all of the other layers of the page table tree are still protected by page_table_lock; in general, changes at those levels are rare enough that more sophisticated locking is not worth the trouble.
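As a rough illustration (not from the patch set itself) of how the existing PTE-level scheme is used: code that needs to change one PTE calls pte_offset_map_lock(), which takes the lock associated with that particular PTE page rather than the global page_table_lock.

    /* Sketch: updating a single PTE under the per-PTE-page split lock. */
    static void update_one_pte(struct mm_struct *mm, pmd_t *pmd,
                               unsigned long addr)
    {
            spinlock_t *ptl;
            pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

            /* ... examine or modify *pte under the per-page lock ... */

            pte_unmap_unlock(pte, ptl);
    }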
There is only one problem: as Kirill A Shutemov has pointed out, that is not always true. When huge pages are in use, the PTE level of the page table tree is omitted. Instead, the entry in the next level up — the "page middle directory" or PMD — points directly to a much larger page. So, in effect, huge pages prune the page table tree back to three levels, with the PMD becoming the lowest level. The elimination of one level of translation is one of the reasons why huge pages can improve performance, though this effect is likely overshadowed by the large increase in the coverage of the translation lookaside buffer (TLB), which avoids a lot of address translations altogether.
What Kirill has noted is that highly threaded workloads slow down considerably when the transparent huge pages feature is in use. Given that huge pages are meant to increase performance, this result is seen as surprising and undesirable. The problem is contention for the page_table_lock; the use of lots of huge pages greatly increases the number of changes made at the PMD level and, thus, increases contention. To address this problem, Kirill has put together a patch set that pushes the locking down to the PMD level, eliminating much of that contention.
Locks are normally embedded within the data structures they protect, so one might be inclined to put a spinlock into the PMD. But the PMD is a hardware-defined structure; it is simply a page full of pointers to PTE pages or huge pages, with some status bits. There is no place there for an added spinlock, so that lock must go somewhere else. When fine-grained locking was implemented at the PTE level, the same problem was encountered; the solution was to shoehorn the lock into the already overcrowded struct page, which is the core structure for tracking the system's physical memory. (See this article for details on how struct page is put together). Kirill's patch replicates the approach used at the PTE level, putting the lock into struct page.
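Concretely, the idea looks something like the following sketch. The helper names here are illustrative rather than necessarily those used in Kirill's patch, but the core trick is the same: the spinlock protecting a PMD page is reached through that page's struct page, just as at the PTE level.

    /* Illustrative sketch of PMD-level split locking. */
    static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
    {
            /* The lock is stored in the struct page of the PMD page. */
            return &virt_to_page(pmd)->ptl;
    }

    static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
    {
            spinlock_t *ptl = pmd_lockptr(mm, pmd);
            spin_lock(ptl);
            return ptl;     /* the caller later calls spin_unlock(ptl) */
    }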
The results would appear to be reasonably convincing. A benchmark designed to demonstrate the problem runs in 36.5 seconds with transparent huge pages off. When transparent huge pages are turned on in an unmodified kernel, the number of page faults drops from over 24 million to 50,000, but the run time increases to 49.9 seconds — not the speed improvement that one might hope for. Adding the patch, though, cuts the run time to 33.9 seconds, significantly faster than an unmodified kernel without transparent huge pages. By getting rid of the cost of the locking contention at the PMD level, Kirill's patch allows the benchmark to enjoy the performance benefits that come from using huge pages.
There is one remaining problem, as pointed out by Peter Zijlstra: the patch as written will not work with the realtime preemption patch set. In the realtime world, spinlocks are sleeping locks; that makes them bigger, to the point that they will no longer fit into the tight space available in struct page. That structure will grow to accommodate the larger lock, but, given that there is one page structure for every page in the system, the memory cost of that growth is difficult to accept. The realtime developers resolved this problem at the PTE level by allocating the lock separately and putting a pointer into struct page.
Something similar can certainly be done for the PMD-level locking. But, as Peter pointed out, the separate lock allocation means that the initialization of PMD pages becomes subject to out-of-memory failures, complicating the code considerably. He hoped that the new code could be written, from the beginning, with the assumption that PMD construction could fail, so that the realtime tree would not have to carry a complicated add-on patch. Kirill is not required to cater to the needs of an out-of-tree patch set, but it is better to avoid making life difficult for the realtime developers when possible. So chances are that there will be another version of this patch set in the near future.
Beyond that, though, this work appears to be mostly complete and in good shape. It could, thus, find its way into a mainline kernel sometime in the relatively near future.
A perf ABI fix
It is often said that the kernel developers are committed to avoiding ABI breaks at almost any cost. But ABI problems can, at times, be hard to avoid. Some have argued that the perf events interface is particularly subject to incompatible ABI changes because the perf tool is part of the kernel tree itself; since perf can evolve with the kernel, there is a possibility that developers might not even notice a break. So the recent discovery of a perf ABI issue is worth looking at as an example of how compatibility problems are handled in that code.
The perf_event_open() system call returns a file descriptor that, among other things, may be used to map a ring buffer into a process's address space with mmap(). The first page of that buffer contains various bits of housekeeping information represented by struct perf_event_mmap_page, defined in <uapi/linux/perf_event.h>. Within that structure (in a 3.11 kernel) one finds this bit of code:
    union {
            __u64   capabilities;
            __u64   cap_usr_time            : 1,
                    cap_usr_rdpmc           : 1,
                    cap_____res             : 62;
    };
For the curious, cap_usr_rdpmc indicates that the RDPMC instruction (which reads the performance monitoring counters directly) is available to user-space code, while cap_usr_time indicates that the time stamp counter can be read with RDTSC. When these features (described as "capabilities," though they have nothing to do with the security-oriented capabilities implemented by the kernel) are available, code which is monitoring itself can eliminate the kernel middleman and get performance data more efficiently.
The intent of the above union declaration is clear enough: the developers wanted to be able to deal with the full set of capabilities as a single quantity, or to be able to access the bits individually via the cap_ fields. One need not look at it for too long, though, to see the error: each of the cap_ fields is a separate member of the enclosing union, so they will all map to the same bit. This interface, thus, has never worked as intended. But, in a testament to the thoroughness of our code review, it was merged for 3.4 and persisted through the 3.11 release.
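The aliasing is easy to demonstrate in user space. The following small program (a hedged illustration, not from the article; it must be built with a compiler like GCC that, like the kernel, permits 64-bit bitfields) sets one "capability" and reads back another:

    #include <stdio.h>

    /* In a union, each declarator is a separate member, so all three
     * bitfields below occupy bit 0 of the same word. */
    union caps {
            unsigned long long capabilities;
            unsigned long long cap_usr_time  : 1,
                               cap_usr_rdpmc : 1,
                               cap_____res   : 62;
    };

    int main(void)
    {
            union caps c = { .capabilities = 0 };

            c.cap_usr_rdpmc = 1;
            /* Both bitfields read back as 1: they are the same bit. */
            printf("time=%llu rdpmc=%llu raw=%#llx\n",
                   (unsigned long long)c.cap_usr_time,
                   (unsigned long long)c.cap_usr_rdpmc,
                   c.capabilities);
            return 0;
    }

Both bitfields print as 1, because every member of a union, bitfield or not, is allocated starting at the beginning of the object.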
Once the problem was noticed, Adrian Hunter quickly posted the obvious fix, grouping the cap_ fields into a separate structure. But it didn't take long for Vince Weaver to find a new problem: code that worked with the broken structure definition no longer does with the fixed version. The fix moved cap_usr_rdpmc from bit 0 to bit 1 (while leaving cap_usr_time in bit 0), with the result that binaries built for older kernels look for it in the wrong place. If a program is, instead, built with the newer definition, then run on an older kernel, it will, once again, look in the wrong place and come to the wrong conclusion.
After some discussion, it became clear that it would not be possible to fix this problem in an entirely transparent way or to hide the fix from newer code. At that point, Peter Zijlstra suggested that a version number field be used; applications could explicitly check the ABI version and react accordingly. But Ingo Molnar rejected that approach as "really fragile" and came up with a fix of his own. After a few rounds of discussion, the union came to look like this:
    union {
            __u64   capabilities;
            struct {
                    __u64   cap_bit0                : 1,
                            cap_bit0_is_deprecated  : 1,
                            cap_user_rdpmc          : 1,
                            cap_user_time           : 1,
                            cap_user_time_zero      : 1,
                            cap_____res             : 59;
            };
    };
In the new ABI, cap_bit0 is always zero, while cap_bit0_is_deprecated is always one. So code that is aware of the shift can test cap_bit0_is_deprecated to determine which version of the interface it is using; if it detects a newer kernel, it will know that the various cap_user_ (changed from cap_usr_) fields are valid and can be used. Code built for older kernels will, instead, see all of the old capability bits (both of which mapped onto bit 0) as being set to zero. (For the curious, the new cap_user_time_zero field was added in an independent 3.12 change).
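Monitoring code can thus sort out the layout at runtime. Here is a minimal sketch, assuming 3.12-era headers that declare the new field names; error handling is abbreviated and the choice of event is arbitrary:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
            struct perf_event_attr attr;
            long psz = sysconf(_SC_PAGESIZE);

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;

            /* perf_event_open() has no glibc wrapper. */
            int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0)
                    return 1;

            /* Map the metadata page plus one ring-buffer page. */
            struct perf_event_mmap_page *pc =
                    mmap(NULL, 2 * psz, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
            if (pc == MAP_FAILED)
                    return 1;

            if (pc->cap_bit0_is_deprecated)         /* fixed (3.12+) layout */
                    printf("rdpmc usable: %d\n", (int)pc->cap_user_rdpmc);
            else                                    /* old layout: bit 0 is ambiguous */
                    printf("old ABI; capability bits cannot be trusted\n");
            return 0;
    }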
One could argue that this change still constitutes an ABI break, in that older code may conclude that RDPMC is unavailable when it is, in fact, supported by the system it is running on. Such code will not perform as well as it would have with an older kernel. But it will perform correctly, which is the biggest concern here. More annoying to some might be the fact that code written for one version of the interface will fail to compile with the other; it is an API break, even if the ABI continues to work. This will doubtless be irritating for some users or packagers, but it was seen as being better than continuing to allow code to use an interface that was known to be broken. Vince Weaver, who has sometimes been critical of how the perf ABI is managed, conceded that "this seems to be about as reasonable a solution to this problem as we can get".
One other important aspect to this change is the fact that the structure itself describes which interpretation should be given to the capability bits. It can be tempting to just make the change and document somewhere that, as of 3.12, code must use the new bits. But that kind of check is easy for developers to overlook or forget, even in this simple situation. If the fix is backported into stable kernels, though, then simple kernel version number checks are no longer good enough. With the special cap_bit0_is_deprecated bit, code can figure out the right thing to do regardless of which kernel the fix appears in.
In the end, it would be hard to complain that the perf developers have failed to respond to ABI concerns in this situation. There will be an API shift in 3.12 (assuming Ingo's patch is merged, which had not happened as of this writing), but all combinations of newer and older kernels and applications will continue to work. The initial, incompatible fix went in during the 3.12 merge window but never found its way into a stable kernel release; by catching the issue at the beginning of the development cycle, Vince helped to ensure that it would be fixed by the time the stable release happened. The kernel developers do not want to create ABI problems, but extensive user testing of development kernels is a crucial part of the process that keeps ABI breaks from happening.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Security-related
Page editor: Jonathan Corbet
Distributions
Fedora 20 takes shape
The Fedora 20 alpha release was announced on September 24. This release provides the first opportunity for many outside the Fedora development community to see what is being planned for the next version of Fedora which, according to the current (revised) schedule, is due to be released on December 3. This seems like as good a time as any to take a look at what Fedora is up to and what Fedora users can expect in Fedora 20.
It's not given its own billing in the Fedora 20 change list, but, for many users, one of the headline features may well be the Wayland preview that is expected to be shipped as part of the GNOME 3.10 release. It may well be the first time that it will be possible to run a Wayland setup from within a major distribution. That said, it seems unlikely that most users will actually want to do so; as Christian Schaller cautioned, the preview is aimed at developers and testers rather than day-to-day use.
The hope is that the preview will help developers to find problems and stabilize things to the point that a shift to Wayland as the default could be considered for Fedora 21. Given all that has to happen, though, and given the developers' intent (as reiterated by Christian) to ensure that users don't even notice the change, a switch for Fedora 21 may be an overly ambitious goal. But it's worth a try, and it will be interesting to see how Wayland holds up if one tries to do real work with it.
The change that many people got worked up over, of course, was the dropping of sendmail from the default install. The project did decide, after some back-and-forth, to take sendmail out; those who need it can put it back with a single yum command. There has been a lot less noise in the wider community about the decision to drop rsyslog from the default install as well. Without rsyslog, the classic text system log in /var/log/messages will be no more; there will also be no support for the syslog network protocol. Instead, systemd's journal will be solely responsible for system logging. Once again, anybody who relies on syslog functionality can have it with a single yum command. But, doubtless, there will be some complaints from users who are unhappy to see Fedora taking another step away from traditional Unix practice.
The years-long effort to support ARM as a primary architecture took a big step forward when the Fedora 19 ARM release happened on the same day as the x86 release. With the Fedora 20 release, ARM as a primary architecture should be official. The user base for Fedora on ARM remains small, but it can be expected to grow as ARM processors find homes in laptops, servers, and other systems of interest.
There are, needless to say, numerous other additions beyond the usual upgrades to the latest versions of various packages. For example, the new GNOME Software application installer will be present. This tool intends to ease the task of installing and maintaining applications; it's not clear how many applications will be managed that way in the F20 release, though. Apache OpenOffice will be added to the distribution, though nobody seems to envision it replacing LibreOffice as the default Fedora office suite. There is a plan to add a snapshot and rollback facility to facilitate recovery from bad updates. And so on.
Interestingly, one feature that appears to have fallen off the list entirely is the proposed shift to Btrfs as the default filesystem. The new snapshot feature is, instead, built on LVM. Once upon a time (around Fedora 17) switching to Btrfs was an explicit release goal. Various difficulties with the filesystem, the departure of one of the key developers from Red Hat, and installer difficulties all seem to have pushed Btrfs off the radar for now; indeed, a recent discussion suggests that openSUSE will get there first.
Fedora's "Foundations" notwithstanding, being the first to ship every shiny new feature is not necessarily the best way to run a distribution, especially if, as some people still feel about Btrfs, a feature is not yet ready for production use. Even without Btrfs, Fedora 20 will clearly contain a large amount of new and interesting software. Needless to say, the quality of that release will be improved if more people download the alpha release, give it a try, and report any bugs that they find.
Brief items
Distribution quotes of the week
Ten years of Fedora
It has been ten years since Michael K. Johnson announced: "Red Hat and Fedora Linux are pleased to announce an alignment of their mutually complementary core proficiencies leveraging them synergistically in the creation of the Fedora Project, a paradigm shift for Linux technology development and rolling early deployment models." One decade and nearly twenty releases later, Fedora has clearly accomplished quite a bit; it will be interesting to see what the next ten years will bring.
openSUSE 13.1 Beta 1 ready
OpenSUSE 13.1 Beta 1 is ready for testing. "It's pretty solid as it received an extra amount of automated checks via openQA. Nevertheless there's still quite some work to be done to get the quality we need for the final release. So please help testing and file bug reports!"
Tails 0.20.1
The Amnesic Incognito Live System (Tails) has announced the release of Tails 0.20.1, which fixes numerous security issues. All users should upgrade as soon as possible.
Valve launches SteamOS
Valve has announced the launch of a new gaming-oriented operating system. "As we’ve been working on bringing Steam to the living room, we’ve come to the conclusion that the environment best suited to delivering value to customers is an operating system built around Steam itself. SteamOS combines the rock-solid architecture of Linux with a gaming experience built for the big screen. It will be available soon as a free stand-alone operating system for living room machines." There is little in the way of details available at this time.
Newsletters and articles of interest
Distribution newsletters
- This Week in CyanogenMod (September 21)
- DistroWatch Weekly, Issue 526 (September 23)
- Ubuntu Weekly Newsletter, Issue 335 (September 22)
Page editor: Rebecca Sobol
Development
On OpenGL debugging
During the Graphics and Display microconference at Linux Plumbers Conference (LPC) 2013, Ian Romanick presented a session detailing the state of the art in tools used for debugging and optimizing 3D graphics code. The state of said art is, clearly, poor—riddled with vendor-specific applications and options with limited functionality. The situation is not drastically better on proprietary operating systems than it is on Linux, but there is clearly more work that needs to be done to meet developers' needs.
Romanick's talk addressed 3D graphics in general terms, but the emphasis was primarily on 3D games—where achieving the maximum frame rate can both keep the players happier and sell more game licenses (to the game developer's benefit). On Linux systems, such games are written to the OpenGL API, while on Windows, the popular titles are split between OpenGL and Microsoft's Direct3D. Regardless of the API, however, the developer's needs tend to boil down to similar challenges: locating and fixing bugs in the shaders and GPU drivers, profiling the performance of the application, and tuning the code.
Debugging
The 3D game market on Windows dwarfs that of its Linux counterpart, so it is no surprise that there are far more 3D debugging tools for Windows. However, Romanick said, they are almost always proprietary and the majority of them are single-vendor. So developers must debug their code separately for AMD, NVIDIA, and Intel GPUs. These tools do offer a lot of 3D-specific debugging features, though, such as the ability to step through shader code, or the ability to inspect which on-screen pixels were drawn by which function calls. Several of the tools are what Romanick termed "driver assisted" debuggers, which let the GPU driver register callback functions and trigger a notification when "something the application should know about" happens—such as an error in shader code.
The Linux debugging landscape is far more sparse, he said. The best option at the moment is apitrace, which is free software and is under active development (including, he pointed out, with contributions from game maker Valve). But apitrace is not usable for debugging many kinds of application, particularly those that (like games) are concerned with interactivity. It requires running an application inside the apitrace environment, which captures and records the OpenGL commands so that they can be replayed and analyzed later. Another option, BuGLe, looked promising, but it has since stalled while the lead developer is off pursuing a PhD.
Other tools available for Linux tend to be proprietary, such as NVIDIA's Nsight and gDEBugger, which was originally developed by Graphic Remedy and later picked up by AMD. Both are vendor-specific, of course; in addition, gDEBugger has been discontinued. The only option for "driver assisted" debugging on Linux is to use the GL_ARB_debug_output OpenGL extension and add gdb breakpoints in the driver.
Performance tuning
While apitrace and the vendor-specific debuggers offer functionality that at least approaches that of the Windows tools, where the Windows products truly excel is in performance tuning. In addition to the vendor tools, Windows developers can get a good system-wide view with GPUView. Together, they offer copious system data much like one might find in a CPU profiler, which allows the developer to see whether delays are caused by the application, the driver, the display manager, or other factors. The vendor tools also allow introspection, showing when drawing commands are submitted, how long they take to reach the GPU, and when the pixels finally make it to the screen.
On this front, Romanick said, Linux lags quite a bit. Apitrace is "not that useful," he said, and Nsight offers less than it does on Windows. Intel has a utility called intel_gpu_top which he described as "of very, very mild usefulness," providing a little system data, which was akin to "trying to use top to do performance tuning." There are a handful of driver-assisted techniques which can offer some help, such as instrumenting one's code with GL_ARB_timer_query and GL_ARB_performance_monitor calls. Gallium also offers a heads-up display that shows performance information, but ultimately the tools on Linux need a lot of work to rival their Windows counterparts.
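As a hedged illustration of the driver-assisted route: code instrumented with GL_ARB_timer_query brackets a stretch of drawing commands and later reads back the elapsed GPU time. In this sketch, draw_scene() is a stand-in for real rendering and any loader exposing the extension will do.

    #include <stdio.h>
    #include <GL/glew.h>    /* or any other loader exposing ARB_timer_query */

    /* Sketch: measure the GPU time consumed by a block of draw calls. */
    static void timed_draw(void (*draw_scene)(void))
    {
            GLuint query;
            GLuint64 ns = 0;

            glGenQueries(1, &query);
            glBeginQuery(GL_TIME_ELAPSED, query);
            draw_scene();                   /* the commands being measured */
            glEndQuery(GL_TIME_ELAPSED);

            /* The result arrives asynchronously; this call blocks until the
             * GPU has finished, so production code would instead poll
             * GL_QUERY_RESULT_AVAILABLE a frame or two later. */
            glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
            printf("GPU time: %llu ns\n", (unsigned long long)ns);
            glDeleteQueries(1, &query);
    }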
Common problems
If there are so few good options, one might ask what 3D developers actually do in practice. The answer, Romanick said, is that they tend to use the vendor-specific tools where they can, and roll their own tools for everything else. But the vendor-specific tools are not much help on the development front, he said, since they often use undocumented ABIs and back-channels created solely for the vendor. Many of them are specific not just to one GPU vendor, but to a specific GPU, so that developers must repeat the debugging and tuning process for several generations of hardware for each vendor. In addition, the vendor-specific tools often target Direct3D ahead of OpenGL, and do not provide as many OpenGL features.
For rolling their own tools, developers tend to choose one of two routes. The first is to log data in their application, then export it for later analysis and visualization. For example, he said, at SIGGRAPH 2012 Valve described [PDF] Telemetry, the in-house tool it had developed to log and visualize OpenGL performance when it started its Linux porting effort. The second option is to collect data in the application and try to visualize it while the application is running. This, he said, usually takes the form of a "heads-up display" that shows the frame rate, where "hiccups" occur, and so on.
But both routes are awful, he said. They each require manual insertion of trace points in the application, which he described as putting "fancy rdtsc() calls everywhere." In the end, developers are faced with two unappealing alternatives: relying on generic, imprecise data collection methods hand-instrumented in the code, or using vendor-specific tools that must be run for each separate target system. "It's a fairly dire situation," he concluded, "I don't envy developers trying to make things faster."
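A rough sketch of what that hand-instrumentation looks like in practice (illustrative only; this is not Valve's Telemetry): a trace point logs a label and a monotonic timestamp, and the resulting log is exported for offline visualization.

    #include <stdio.h>
    #include <time.h>

    /* Sketch: a manually-inserted trace point of the "fancy rdtsc()
     * calls everywhere" variety. */
    static inline void trace_point(const char *label)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            fprintf(stderr, "TRACE %s %ld.%09ld\n",
                    label, (long)ts.tv_sec, ts.tv_nsec);
    }

    /* usage: trace_point("frame_start"); ... trace_point("frame_end"); */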
The $64,000 question
Considering the direness of the situation, Romanick said, the big question for LPC attendees is whether the Linux community can provide help—most likely through new interfaces at the kernel level. He described a couple of basic requirements. First, developers need more fine-grained data about GPU execution than can be provided by GL_ARB_timer_query. This means timing information that tracks the entire process from the submission of a draw command to its completion.
Second, this information needs to include semantic information about the system: for example, when the drawing commands are executed relative to system events like the display's vertical blanking interval, or what portions of time were taken up by the compositor and display server rather than the application itself. But the implementation must be careful not to leak system information that would constitute a security risk, he said. Also, any interfaces designed must be implementable by closed-source drivers as well as open source drivers, and must not provoke "too much rage" from driver maintainers. Romanick ended by appealing to the microconference audience to help him sort out the specific requirements. "I know who I can tell to go do this; I just need to know what to tell them to do."
The audience did provide some feedback at the end of the session, although the conversation had to move on to other presentations. One audience member asked what sort of events OpenGL programmers might be interested in that could be logged by the kernel's perf system. Romanick listed three: API usage patterns that cause the graphics driver to consume a lot of CPU, usage patterns in shaders that result in significantly different performance characteristics, and how specific drawing commands are pipelined through the GPU.
Another audience member asked if the architectural differences between GPUs would permit much in the way of a general-purpose solution. Romanick responded that different GPUs would of course generate different data, but that by comparison "GL_ARB_timer_query sucks"—reporting numerous operations as taking zero time, followed by one large time-chunk for the final command. In addition, the "coarse-grained stuff" would be similar for all GPUs, such as how long different shaders take to execute.
The holy grail is knowing what goes on inside of shaders, another participant said. Android has some tools (such as trace-viewer) that are currently only usable for Android Views, but they are open source, and do provide some of the semantic information Romanick referred to.
Improving the OpenGL debugging and optimization picture on Linux is likely to be a lengthy process. But it is clearly one that needs addressing; the topic frequently comes up whenever game development is discussed (as it did in Valve CEO Gabe Newell's keynote at LinuxCon North America before LPC began). Furthermore, when one considers how unpleasant the graphics debugging and tuning process is on Windows, Linux could surpass the proprietary offerings, which would benefit quite a few cross-platform developers at the same time.
Brief items
Quotes of the week
GTK+ 3.10 released
GTK+ 3.10 is now available. New are support for Wayland 1.2, scaled output on high-DPI screens, client-side window decorations, and several new widgets.
Mozilla Lightning 2.6 released
Version 2.6 of Mozilla's Lightning calendar component has been released as an add-on for Thunderbird 24 and Seamonkey 2.21. New in this release are support for Google Calendars over CalDAV using the most recent API revision, the ability to assign multiple categories to an event, the ability to schedule recurring events on the last day of the month, and updated time zone data.
WebRTC enabled in Firefox for Android
At the Mozilla Hacks blog, Maire Reavy announces that support for WebRTC has been switched on by default in Firefox for Android, starting with Firefox 24. This allows web applications to capture audio and video from Android devices, although Reavy notes that some caution is called for. "This is still a hard-hat area, especially for mobile. We’ve tested our Android support of 1:1 calling with a number of major WebRTC sites, including talky.io, apprtc.appspot.com, and codeshare.io."
GNOME Shell and Mutter-Wayland 3.10 available
Version 3.10.0 of GNOME Shell, the official GNOME Shell Extensions, and the Wayland-backed branch of the Mutter window manager have all been released. These releases preview the imminent release of GNOME 3.10 itself, but incorporate a number of changes on their own as well.
GStreamer 1.2 released
Version 1.2 of the GStreamer multimedia framework has been released; packages for GStreamer Core and GStreamer Plugins are available. The 1.2 release is API- and ABI-backward-compatible with GStreamer 1.0, but it introduces several new features. New plugins add support for DASH adaptive streaming, JPEG2000 images, and VP9 and Daala video, along with decoding-only support for WebP. There is also a new command-line playback tool called gst-play-1.0 (designed for testing purposes), as well as numerous bugfixes and improvements.
GNOME 3.10 Released
The GNOME Project has announced the release of GNOME 3.10. Many components in this release have initial support for Wayland. See the release notes for details.
Newsletters and articles
Development newsletters from the past week
- Caml Weekly News (September 24)
- What's cooking in git.git (September 20)
- What's cooking in git.git (September 23)
- GNU Toolchain Update (September 24)
- Haskell Weekly News (September 18)
- OpenStack Community Weekly Newsletter (September 20)
- Perl Weekly (September 23)
- PostgreSQL Weekly News (September 23)
- Ruby Weekly (September 19)
- Tor Weekly News (September 25)
Apache Foundation embraces real time big data cruncher 'Storm' (The Register)
The Register reports on the recent decision by the Apache Foundation to accept the Storm project into the Apache incubator program. "Storm aims to do for real time data processing what Hadoop did for batch processing: queue jobs and send them off to a cluster of computers, then pull everything back together into usable form."
Page editor: Nathan Willis
Announcements
Brief items
Studio Storti joins The Document Foundation Advisory Board
The Document Foundation has announced that Studio Storti is now a member of its Advisory Board. "Studio Storti is the largest provider of open source solutions to the Italian Public Administration, and is launching a LibreOffice Division to support migrations from Microsoft Office to LibreOffice."
Calls for Presentations
oSSum13
The openSUSE Summit 2013 (oSSum13) will take place November 15-17 in Lake Buena Vista, FL. The call for participation is open until October 4, 2013.
SCALE 12X ramps up
The 12th annual Southern California Linux Expo (SCALE 12X) will take place February 21-23, 2014 in Los Angeles, California. The call for papers is open until December 15.
CFP Deadlines: September 26, 2013 to November 25, 2013
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| October 1 | November 28 | Puppet Camp | Munich, Germany |
| October 4 | November 15–November 17 | openSUSE Summit 2013 | Lake Buena Vista, FL, USA |
| November 1 | January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia |
| November 4 | December 10–December 11 | 2013 Workshop on Spacecraft Flight Software | Pasadena, USA |
| November 15 | March 18–March 20 | FLOSS UK 'DEVOPS' | Brighton, England, UK |
| November 22 | March 22–March 23 | LibrePlanet 2014 | Cambridge, MA, USA |
| November 24 | December 13–December 15 | SciPy India 2013 | Bombay, India |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
linux.conf.au announces Jonathan Oxer as keynote speaker
The linux.conf.au 2014 team has announced that Jonathan Oxer will be a keynote speaker. "Recently he has been working on ArduSat, a satellite that aims to give hobbyists, students and space enthusiasts an opportunity to design and run their own experiments in space. By choosing a standardised platform based on the hugely popular Arduino hardware design, ArduSat allows anyone to develop and prototype experiments at home using readily accessible parts and all based on a simple open source software environment."
Events: September 26, 2013 to November 25, 2013
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 23–September 27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–September 26 | OpenNebula Conf | Berlin, Germany |
| September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–September 29 | EuroBSDcon | St Julian's area, Malta |
| September 27–September 29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–October 5 | Open World Forum 2013 | Paris, France |
| October 3–October 4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–October 5 | Open Source Developers Conference France | Paris, France |
| October 7–October 9 | Qt Developer Days | Berlin, Germany |
| October 12–October 14 | GNOME Montreal Summit | Montreal, Canada |
| October 12–October 13 | PyCon Ireland | Dublin, Ireland |
| October 14–October 19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–October 20 | PyCon PL | Szczyrk, Poland |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21–October 23 | KVM Forum | Edinburgh, UK |
| October 21–October 23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 21–October 23 | Open Source Developers Conference | Auckland, New Zealand |
| October 22–October 24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 22–October 23 | GStreamer Conference | Edinburgh, UK |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23–October 25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 23–October 24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 24–October 25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24–October 25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24–October 25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25–October 27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 25–October 27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 26–October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| October 26–October 27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 28–November 1 | Linaro Connect USA 2013 | Santa Clara, CA, USA |
| October 28–October 31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| October 29–November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| November 3–November 8 | 27th Large Installation System Administration Conference | Washington DC, USA |
| November 5–November 8 | OpenStack Summit | Hong Kong, Hong Kong |
| November 6–November 7 | 2013 LLVM Developers' Meeting | San Francisco, CA, USA |
| November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| November 8 | CentOS Dojo and Community Day | Madrid, Spain |
| November 8–November 10 | FSCONS 2013 | Göteborg, Sweden |
| November 9–November 11 | Mini DebConf Taiwan 2013 | Taipei, Taiwan |
| November 9–November 10 | OpenRheinRuhr | Oberhausen, Germany |
| November 13–November 14 | Korea Linux Forum | Seoul, South Korea |
| November 14–November 17 | Mini-DebConf UK | Cambridge, UK |
| November 15–November 16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| November 15–November 17 | openSUSE Summit 2013 | Lake Buena Vista, FL, USA |
| November 17–November 21 | Supercomputing | Denver, CO, USA |
| November 18–November 21 | 2013 Linux Symposium | Ottawa, Canada |
| November 22–November 24 | Python Conference Spain 2013 | Madrid, Spain |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol