LWN.net Weekly Edition for August 9, 2012
TXLF: TexOS teaching open source
The third annual Texas Linux Fest (TXLF) hosted sessions on a range of technical subjects in San Antonio August 3-4 — plus some lesser-known projects that also centered around open source software. An example of the latter was the report from TexOS, a volunteer project that not only provides school kids with computers, but teaches them about open source as well. The project is gearing up for its third instructional cycle, and is looking to expand what it offers.
What it does
The TexOS session was led by Brian Beck, a faculty development staffer at Angelo State University (ASU), and Dr. George Pacheco, Jr, an ASU communications professor. Beck has previously worked with the HeliOS Project, an Austin-based initiative that restores and donates Linux computers to economically disadvantaged children. TexOS is based in San Angelo, but it differs from HeliOS in another way, too. While HeliOS's mission is refurbishing and supporting the donated computers (and doing so in large numbers), TexOS works with smaller groups of students, providing a multi-week training class that introduces them to Linux and open source.
TexOS gets hardware donations from businesses and colleges in the area, and it receives referrals from teachers and local nonprofits about potential students. The students must apply to the program, however, and sign a contract. The terms boil down to "be respectful and show up", Beck said; the contract is primarily a means for the students to take ownership of their participation.
The first round of classes was held in September and October of 2011, and involved students in grades six through eight. The second round was held in February and March of 2012, and involved slightly older students, in grades eight through ten. That age range is a better fit for several reasons; Beck noted that the older students have greater need for a computer since their homework involves more research and writing. Pacheco also joked that the older students were "a little more receptive toward sitting still".
In both rounds, the curriculum was broken up into six sessions spread out over three consecutive Saturdays. An outline of the first round's curriculum is listed on the TexOS project site; it covers topics from installing Linux to using common desktop applications for homework, while exploring basic system administration and shell commands along the way. But there are also non-technical subjects on the agenda, such as collaborative learning, ethical use of root privileges, and fair use of copyrighted material.
About the curriculum, Beck said that it is a common misconception that schoolchildren of today are "digital natives" (meaning that they have been using technology practically since birth, and have no problems adopting it). While that might have been true ten years ago, he said he has found the kids of today function more like "digital zombies" instead: taking technology for granted, taking no interest in how it works, and not knowing what to do when flipping the "on" switch fails. Thus it is important to teach kids about the technology under the hood, hopefully encouraging them to start hacking on their own.
Extending the concept
Beck said that the project is currently setting up for its third round of classes, to be held in October. This time, the team hopes to bring back students from the 2011 sessions to act as mentors to the new class. It is also looking at expanding to four Saturdays, to cover more scientific applications in the lessons. He also asked for input from the audience about how else to expand the curriculum.
One area he would like to explore is introducing the students to programming, but, as he is not a software developer himself, he would prefer to find a quality curriculum developed elsewhere. A few members of the audience had suggestions for such material, but there are surprisingly few projects out there, particularly those designed for a classroom environment. The project would also like to find a viable solution for helping its students with Internet access, Beck said, given that most of the pupils are from lower-income households. As yet, TexOS has not found an affordable option.
Pacheco addressed the project's other new goal: developing a more formal way to assess the students' progress. He requires his own college students to participate in volunteer activities as part of the "service learning" model, and is interested in getting more of them to work with TexOS. Although the TexOS curriculum sets learning objectives for the complete course and for each session, Pacheco would like to see it develop a more structured method for measuring each student's success — both in learning his or her way around the operating system, and in the regular classroom.
Both metrics present challenges. It is difficult to measure a student's aptitude with technology, and privacy concerns mean that the project cannot simply look at a student's grades to assess academic success. But it is a critical step to take for the project, Pacheco said. The project needs to know where the students are and are not benefiting from the course. He did not yet have a plan for incorporating assessment into the curriculum, and was instead interested in hearing from audience members.
The question-and-answer portion of the session took up about a quarter of the allotted time. There were evidently more than a few educators in the audience, and many of them had suggestions for Beck and Pacheco. One of them suggested that teaching programming might be difficult to fit into a short-run weekend class, and that perhaps a continuing-education model might make more sense, with programming as a more advanced topic.
Price versus freedom
I have attended talks on open source in education at a variety of conferences in recent years, and while in some respects TexOS is pursuing a program akin to larger projects (such as HeliOS), it actually has a unique spin on the subject. Most of the projects that utilize refurbished computers fitted with Linux place the emphasis on access to the Internet and affordable software — in essence, making the argument for Linux and open source based on price.
But while it is certainly true that free software lowers the barrier to entry for software by eliminating (or almost eliminating) the price issue, making that the only selling point risks inadvertently teaching the student that open source is "okay for now," but can be dispensed with later when money is not as tight. The TexOS curriculum makes a stronger pitch, teaching the students about the principles behind open source.
Hopefully, in the coming years that curriculum will be expanded, perhaps even teaching programming. As Beck said, educating students about open source software is a "teach them to fish" operation: if they are trained not to hack on their technology, there are limits to what they will learn — whether Linux is under the hood or not.
Adobe ventures into open fonts
Adobe surprised many in open source circles with its August 2 release of Source Sans Pro, an open font made available under the standard SIL Open Font License (OFL). Adobe has not historically been an open source player (beyond its involvement with standard file formats like PDF or SVG), so Source Sans Pro is not only its first foray into open fonts, but may also herald an interest in adopting open source development methods.
Designer Paul Hunt announced the font in a post on the Adobe typography blog. The font is available in six weights, with regular and italic versions of each. The first release covers an extended Latin character set, but according to the comments, other writing systems are still to come. Downloads are hosted at SourceForge.net.
Hunt said Adobe created the new font to provide a user interface (UI) font for the company's open source software projects, including its Strobe media playback framework and Brackets code editor, both of which are web applications. An open font allows Adobe to control the UI by delivering the font to the user's browser via CSS's @font-face rule.
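A minimal @font-face declaration for that purpose might look like the following (the file path and format here are illustrative; a real deployment would host the font files alongside the application):

```css
@font-face {
  font-family: "Source Sans Pro";
  font-style: normal;
  font-weight: 400;
  /* illustrative path; the application ships its own copy of the font */
  src: url("fonts/SourceSansPro-Regular.woff") format("woff");
}

/* use it as the UI face, with a generic fallback */
body { font-family: "Source Sans Pro", sans-serif; }
```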
The design of the font is inspired by early-20th-century gothics from American Type Founders, such as News Gothic and Franklin Gothic, but it is the original work of Hunt and not a derivative of those originals. This distinction is a subtle one, but comparing Source Sans Pro to News Cycle (which is my own open font designed as a faithful revival of News Gothic), there are clear differences. In addition to miscellaneous differences between specific glyphs, Source Sans Pro is set wider, is a bit rounder, includes a bit more contrast, and incorporates a different approach to accents. Hunt said in the blog post that he intentionally paid attention to distinguishing between l (lower-case L), I (upper-case i), and 1 (the numeral), which was a less common concern a century ago.
Although the font covers "only" Latin characters, the implementation supports a wide array of languages that use the variations of the basic Latin alphabet (such as additional base characters and diacritic marks). Some of the languages supported, such as Vietnamese, Romanized Chinese, Navajo, and various Eastern European languages, are often under-served by even the commercial font industry. The font also includes some typographic features often omitted from open fonts, such as old-style or "text figure" numerals and alternate styles of various letters (such as variations of I (upper-case i) with and without horizontal top- and bottom-caps, which can further distinguish it from l and 1).
There are also Multiple Master (MM) versions of the fonts included in the release, which is unusual. MM fonts are a rarely-employed format developed at Adobe, in which a set of parameters (usually weight and width) can be adjusted at will to change the appearance of the font. For example, an MM font might ship with an Extra Light and an Extra Black version, representing the lightest and darkest ends of the weight spectrum. The user can then use MM to interpolate smoothly between these extremes to find the right look for the project at hand. It is a clever idea, and spares the designer the overhead of producing separate versions for Extra Light, Light, Demi Bold, Bold, Extra Bold, and so on, ad nauseam.
Similarly, the differences between Condensed and Extra Wide versions can be interpolated to produce various widths in between. Software could naively interpolate between two widths of a non-MM font, too, but the naive approach produces undesirable results (such as fattening or squeezing the line widths in addition to the open spaces of the characters). The MM format is designed to produce eye-pleasing output. In practice, though, most people rarely use more than one or two weight or width variations, so MM has not taken the world by storm.
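As a rough sketch of the idea (using hypothetical outline data, not Adobe's actual MM file format), interpolating between two masters is just a per-point linear blend of their outlines:

```python
def interpolate(light, black, t):
    """Blend two master outlines (lists of (x, y) points);
    t=0.0 gives the light master, t=1.0 the black master."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(light, black)]

# Two masters for (say) one stem of a glyph: the black master's
# stem is wider (these coordinates are invented for illustration).
extra_light = [(100, 0), (120, 0), (120, 700), (100, 700)]
extra_black = [(100, 0), (200, 0), (200, 700), (100, 700)]

# A mid-range "Demi" weight, halfway between the two extremes
demi = interpolate(extra_light, extra_black, 0.5)
```

A real MM engine does the same kind of blend across every point of every glyph, which is why the results stay eye-pleasing in a way that naive scaling does not.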
Building
The release itself is in the form of Zip archives, one of which contains the fonts themselves in both TrueType and OpenType CFF formats, and one of which contains the fonts plus the source files used to generate them. The contents of the source package will not be easy for Linux users to take advantage of, however. They consist of spline font sources (in PostScript .SFA format), sources for the proprietary Fontlab editor (in .VFB format), and a set of auxiliary text files used by Adobe's build tools. These text files contain information such as hinting, kerning pairs, and tables of characters composed out of other components (primarily accented letters). The auxiliary files are built for use with the Adobe Font Development Kit for OpenType (AFDKO), Adobe's "font SDK."
AFDKO implements the font-building portion of Adobe's font development workflow. The glyph outlines are developed in a separate application (such as Fontlab) in PostScript Type 1 format. AFDKO includes proofing and validation tools, plus scripts that add OpenType features (such as substitution rules or stylistic alternates) based on text configuration files like those included with the Source Sans Pro package. It also includes scripts to build installable font files. Although the documentation says several of the individual scripts in AFDKO are open source, the download as a whole is not; the license agreement forbids reverse-engineering. The auxiliary files themselves are not in a standard, documented format that other tools can utilize.
However, that does not mean the auxiliary files are of no value. Some of their information could be extracted with minimal fuss and the judicious application of scripting. Many of the same features can also be extracted from the font files themselves in an open source editor like FontForge. Vernon Adams, developer of KDE's Oxygen font, commented on the blog post that he was interested in extracting the horizontal spacing information from Source Sans Pro and adapting it to Oxygen.
In the purely-open-source font development workflow, adding OpenType features to a font is typically done in FontForge — although it is far from pleasant. FontForge hides the necessary options and tools remarkably well, and effectively dictates that building the final font files be done manually. Better command-line tools like those in AFDKO could help automate the procedure. Intriguingly enough, several commenters in the blog post discussion raised questions about AFDKO, and Hunt replied with interest asking what would be necessary to make the release buildable on Linux.
In reply, Hunt got advice not just on the build process, but on how to set up Source Sans Pro as a "real" project and not just a Zip-dump — including issue tracking, revision control, and a development mailing list. He gave a hopeful-sounding response.
Bug reports and fixes are already beginning to queue up, too. Several on the Open Font Library list noticed problems with the weight values of the fonts (numeric metadata used to sort the various "light" to "heavy" versions of the font). As John Haltiwanger put it, "And (finally) we are legally allowed to fix a broken element in an Adobe font!"
Fonts and project management
Adobe is not alone among open font projects in coming up short on bug tracking, revision control, and other development tools. Only a few large font projects tackle these challenges, and they do so in decidedly different ways. DejaVu, Liberation, SIL, and Ubuntu all employ different methods for tracking issues and feature requests, managing source code, merging patches, and making releases. Individuals working on a handful of personal font projects are even less likely to deploy such support utilities.
The lack of formal source code repositories and issue trackers generally means that distributions undertake the work of packaging and testing open fonts. Because Source Sans Pro relies on the non-free Fontlab and AFDKO, one might think it has scant chances of working its way into distribution packages, but Fedora's Ian Weller observed that Fedora's guidelines do not require that a font be buildable with open source software alone — they merely recommend it. A Fedora review request was opened on August 4. There is also a package request for the font in Debian, although Debian's guidelines dictate that a font with a non-free build path be packaged in contrib.
There are a few inconsistencies in the Zip files, such as which feature files are present in which directories, and which include .SFA versus .VFB source files. Those are problems that source code management would help quash. Hunt also teased the future release of a monospace version of the font, which would be of particular interest to developers. Seeing such ongoing work in the open would also be a nice touch, and would allow the community to contribute to the process. However, one should not lose sight of Source Sans Pro's importance even in Zip format: Adobe has released its first open font, its team seems well aware of the issues involved (licensing and tool support included), and is expressing interest in fitting the project into the expected conventions and procedures of open source.
GENIVI: moving an industry to open source
Given this editor's recent history (3 years working there), an article on the GENIVI Alliance was perhaps inevitable, and perhaps it's better done sooner while the experience is still fresh. However, GENIVI is more than just a matter of personal interest and experience: the development of GENIVI has some interesting lessons on the adoption of free and open source software, and the results of the consortium's work are soon likely to be directly visible to many readers of this article on a daily basis.
Goals and history
The first question of course is: what is GENIVI? The brief answer is that it is a consortium of companies whose goal is to define a standardized common software platform for developing in-vehicle infotainment (IVI) systems and to nurture a development ecosystem around that platform. IVI systems, known in the trade as head units, are the computer systems commonly found in high-end cars and commercial vehicles—and increasingly in mid-range cars and other vehicles—that provide a range of entertainment and other functions.
Typical IVI functions include control of the media system (e.g., music player, radio, rear-seat video), navigation assistance and location-based services, and display from the rear-view camera on vehicles that provide one. Input to a modern IVI system is via physical wheel devices and buttons or a touch-screen interface, and, commonly, voice recognition. Many modern IVI systems provide integration with consumer-electronics devices via technologies such as Bluetooth and USB. The most interesting such devices are of course smart phones, where integration with the IVI system allows functionality such as playback of media hosted on the phone and hands-free, voice-activated calls conducted via the head-unit audio system. This type of integration also allows conveniences such as automatically turning the volume of the radio down when an incoming caller rings the phone.
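That last convenience amounts to a small piece of audio-policy state handling; a toy sketch might look like this (the event names and API here are invented for illustration — a real head unit routes such decisions through a dedicated audio-management service):

```python
# Hypothetical head-unit audio ducking: lower the radio when a call
# comes in, and restore the previous level when the call ends.

class HeadUnitAudio:
    def __init__(self, radio_volume=70):
        self.radio_volume = radio_volume
        self._saved = None  # volume to restore after the call

    def on_phone_event(self, event):
        if event == "incoming_call" and self._saved is None:
            self._saved = self.radio_volume  # remember the listening level
            self.radio_volume = 10           # duck the radio for the call
        elif event == "call_ended" and self._saved is not None:
            self.radio_volume = self._saved  # restore the old level
            self._saved = None

audio = HeadUnitAudio()
audio.on_phone_event("incoming_call")  # radio ducked to 10
audio.on_phone_event("call_ended")     # radio restored to 70
```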
The formation of the consortium was announced at the start of 2009, although there is some prehistory to its foundation that we'll briefly return to in a moment. The founding membership consisted of eight companies that typify several categories of what is by now a much larger membership: automotive manufacturers (BMW Group, PSA Peugeot Citroën, General Motors), tier 1 automotive suppliers (Delphi, Magneti Marelli, Visteon), silicon vendors (Intel), and operating system vendors (Wind River Systems, then an independent company, now a subsidiary of Intel). During the subsequent three years, membership in each of the aforementioned categories has swelled (notably, a number of ARM silicon vendors now balance out the heavyweight Intel). In addition, ISVs (independent software vendors), middleware vendors, and software services companies with an interest in the automotive sector have also joined the consortium, with the result that the GENIVI membership has now grown to over 150 companies spread across Europe, America, and South and East Asia.
Above, I said GENIVI's goal is to define a standardized common software platform. That platform is not a complete IVI system. Rather, it is a packaging of operating system and middleware components that implement a range of non-differentiating functionalities that all IVI systems require. (Bluetooth connectivity is an example of what automotive manufacturers might consider non-differentiating functionality: manufacturers want a working implementation, but don't market their IVI systems to customers based on the Bluetooth implementation.) In effect, one of GENIVI's goals is to decrease the cost of developing the base system, so that developer resources can be devoted to innovating at higher levels in the software stack, such as the human-machine interface.
Linux and open source software were chosen as the basis for the GENIVI software platform during an evaluation project that predated the foundation of the consortium. That project (conducted by BMW, Magneti Marelli, Intel, and Wind River) was motivated by the desire to balance two opposing requirements. On one side stand ever-increasing demands on the development and scope of IVI systems: to drivers, an IVI system starts to look more and more like other consumer electronics devices, and drivers expect to see similar levels of functionality and rapid development cycles. Furthermore, there is market pressure to see IVI systems in all vehicle categories, rather than just the high end. On the other side, the costs of IVI system development have grown astronomical—a figure of $100 million to bring a solution from drawing board to dashboard is not unheard of, and such costs are becoming intolerable for all but the largest automotive manufacturers.
In the evaluation phase, a number of platform alternatives were considered, including proprietary systems such as Windows CE and QNX. However, it quickly became clear that a platform based on Linux and free software had the most to offer, based on factors such as the economies available from reuse of software components and, importantly, the realization that free software would allow the greatest degree of control of the content and development of the platform. On that basis, the evaluation project embarked (successfully) on a proof-of-concept implementation of a prototype head-unit system based on Linux and free software components.
GENIVI outputs
In addition to code projects worked on by members, the consortium produces two primary outputs: a compliance specification and a baseline software release.
The goal of the compliance specification is to ensure that compliant GENIVI products ease integration of third-party software components, rather than guaranteeing full API or ABI compatibility across implementations. (In other words, GENIVI doesn't set out to be a standardization body.) Compliance is currently based on self-certification, but in time the plan is to move to a more test-driven form of certification. The compliance specification is currently a members-only document.
The GENIVI baseline software release is essentially an internal proof-of-concept for the compliance specification. It is a packaging of the components required by a specific release of the compliance specification on top of a Linux distribution. The baseline isn't directly available outside the GENIVI membership, but is indirectly available via a number of GENIVI respins created by incorporating the baseline component set into an upstream distribution. These respins are created by GENIVI members and available to anyone for download. GENIVI respins are currently created for Ubuntu, Tizen, and Yocto.
What is the problem that GENIVI is trying to solve?
Technically speaking, implementing partial or complete IVI systems on Linux isn't fundamentally different from, or more difficult than, using the platforms traditionally employed in the automotive industry. The pre-GENIVI proof-of-concept work, the recent Cadillac CUE system, and a range of demonstrator systems that appear at the twice-yearly GENIVI member meetings provide ample evidence of that fact. This raises two questions: why don't we already have (more) Linux-based IVI systems on the road, and why is an alliance like GENIVI even necessary?
To answer those questions requires understanding that GENIVI's objective is not to solve a software technical problem, but rather to solve a software business problem. Escalating software costs mean that automotive manufacturers need to escape their traditional cycle of constantly reimplementing individually developed, tailor-made solutions for their IVI systems. The name of the game is to share development costs by collaborating on the development of a common, reusable software platform. The challenge then becomes: how does a diverse group of companies transform their traditional business and software development practices, stepping toward a new model to collaboratively define a common platform and bring it to reality? In practice that has proved to be quite a challenge.
The rocky road from prototype to product
To see why the path forward for GENIVI has been difficult requires some understanding of the traditional software development model in the automotive industry.
The traditional approach to IVI software development is rather waterfall in style: the automotive manufacturer develops a set of requirements and then, armed with a large checkbook, enters into a contract with a tier 1 supplier who does all of the software development to fulfill those requirements (in fact, the tier 1 supplier is often tasked with delivering the whole package of IVI hardware and software). Once development is completed, the manufacturer black-box tests the resulting software, and then ships it in vehicle head units. (In this traditional approach, it's worth noting that the manufacturer typically has few software engineers, and does little software development.)
Given their historical role as holders of the checkbooks, it's perhaps unsurprising that automotive manufacturers at first tried to remake GENIVI in the mold that was familiar to them. Thus, in its initial incarnation, although GENIVI's stated goal was to create a (largely) open source platform, the proposed development process was rather waterfall in style, driven from the top down by the automotive manufacturers. The proposed process consisted of traditional phases: gathering requirements, discovering an architecture, mapping the architecture to software components, then selecting (existing) open source software components, and implementing new components to fill the gaps. Waterfall-style development is prone to be complex and time consuming; what made it even worse in GENIVI's case was trying to scale the development process to handle multiple participating companies.
For many readers, it is probably no surprise that the results of trying to employ such a model to select and create open source software were not as successful as hoped: internal teams got bogged down in trying to define the process, and the alliance found it too unwieldy to implement in practice. Further complicating the problem was the fact that information was not open equally to all members of the alliance (there were restrictions on access to information such as draft specifications and other in-progress work according to the paid-for level of membership). The consequence of that differential access to information was to further impede participation in the work of the consortium.
What happened in response to the early low participation levels is something of a textbook lesson for any company, or, more particularly, any industry group trying to move to open source. Recognizing the problem, the consortium's Board of Directors implemented some simple but radical steps: membership-based restrictions to information inside the consortium were abolished and the top-down waterfall model described above was replaced by requirements gathering and implementation driven from the bottom, via domain-specific "Expert Groups" that any interested member company was free to participate in. The results of these changes became apparent quite rapidly: the level of mailing-list traffic, wiki activity, scheduled face-to-face meetings, and code contribution all increased dramatically.
Engaging with the open source community
Having read this far, the question you may be left with is: is GENIVI open?
From a process perspective, the answer is no. Access to various internal resources such as the wiki, issue tracker, and mailing lists is limited to the (paying) membership. Similarly, attendance at face-to-face meetings is limited to the membership. However, the boundary between members and nonmembers is already somewhat permeable. For example, a number of open source developers with relevant expertise have been invited to GENIVI meetings and provided valuable input—among them Kay Sievers and Lennart Poettering (systemd), Marcel Holtmann (BlueZ, ConnMan, oFono), Samuel Ortiz (ConnMan), and Kristian Hoegsberg (Wayland). In time, it can be expected that the boundary between members and nonmembers may become even more permeable; it's an ongoing process.
From a code perspective, GENIVI is not fully open source, but it's getting steadily closer. As noted above, the GENIVI baseline respins are publicly available, but the repositories of GENIVI-developed code are not (even though the code in those repositories is all under OSI-approved licenses). However, that situation is likely to change quite soon, as moves are afoot to open GENIVI work more widely to the outside world, so that individual GENIVI code projects have open code repositories, bug trackers, and mailing lists. At that point, it's likely that activity on GENIVI will notch up yet further, as outside developers start to take a closer interest in pieces of the code. (It should be noted that GENIVI's goal is, as far as possible, to reuse open source software components; new components are developed by GENIVI members only in cases where no suitable free software component can be found. Thus, there are to date relatively few GENIVI code projects; examples include an automotive-specific audio manager and a graphics layer-management system. The vast majority of the components in the GENIVI respins come directly from the open source ecosystem.)
Looking in the other direction, GENIVI is increasingly participating in upstream projects, with members getting involved via code or conversations in a number of open source projects, such as systemd and ConnMan. In recent times, GENIVI has even been getting involved with kernel development, sponsoring development of kernel patches to improve D-Bus performance. (As noted in an earlier LWN article, the attempt to upstream this work has not so far proved successful. However, D-Bus is viewed as a key component of GENIVI, and it's likely that further work will be done to come up with a kernel approach to improving D-Bus performance that is acceptable to the maintainer of the Linux networking stack.)
Challenges
There are a number of ongoing challenges for GENIVI, and one or two that remain unresolved. Some of the challenges can be easily guessed at, or can be deduced with a little reflection. For example, as with most open source projects, more contributors would always speed up development.
The process of adapting from closed software development in competition with peers to a model of collaboratively working and sharing ideas (so far, mainly within the membership) is ongoing. For companies that have not previously done so (which includes much of the GENIVI membership), contributing code under open source licenses involves educating both developers and company lawyers. But considering the heavily proprietary origins of automotive software, the progress has already been considerable.
A notable challenge for automotive manufacturers is that, by virtue of being distributors of open source software in their head units, they now need to ensure their engineers and lawyers are well educated about free software licenses. Furthermore, their code management processes need to be adapted to satisfy the obligations of those licenses, in particular, of course, the source code redistribution requirements of the GPL. By and large, the manufacturers seem to understand the challenge and are rising to it.
The GNU GPLv3 remains a so-far unresolved challenge for GENIVI. Already, a small but significant percentage of free software projects use this license, and over time more can be expected to do so. However, automotive manufacturers feel that they can't use software under this license in IVI systems. The problem hinges on the so-called anti-Tivoization clause of the GPLv3. In essence, this clause says that if GPLv3-licensed object code is placed on a computer system, then, either the system must prevent updates to that code by all users (i.e., no one, including the manufacturer, can perform updates) or, if the system does allow updates to the GPLv3-licensed software (e.g., so that the manufacturer can make updates), then the software recipient (i.e., the car driver) must likewise be permitted to update the software. The automotive manufacturers' position is that they need to be able to update the software in an IVI system, but they can't let the driver do so.
The issues for the manufacturers are driver safety, and manufacturer liability and reputation. Even if head-unit systems were fully isolated from the in-vehicle networks that control safety-critical functions such as the braking system (and in most automotive architectures they are not fully isolated), there are features of IVI systems that can be considered safety-impacting. It's easy to see that accidents could result if the navigation system directs the driver in the wrong direction up a one-way street or the picture from the rear-vision camera is delayed by 2 seconds. Consequently, the manufacturers' stance is that the only software that they can permit on the head unit is software that they've tested. Since the GPLv3 would in effect require manufacturers to allow drivers to perform software updates on the head unit, GPLv3-licensed software is currently considered a no-go area. (An oft-proposed resolution to the manufacturers' conundrum is the "solution" that the manufacturer should simply void the warranty if the driver wants to make software updates to the head unit. However, that's not a palatable option: such disclaimers may or may not hold up in court, and they don't protect reputations in the face of newspaper headlines along the lines of "4 killed following navigation failure in well-known manufacturer's car".)
The future
With respect to IVI systems, the future for GENIVI in particular, and Linux in general, looks bright. A number of manufacturers plan to have GENIVI-based head units in production within the next two years. In addition, at least one other Linux-based product (the Cadillac CUE) is already in production, and other Linux-based systems are rumored to be under development. Overall, a substantial body of automotive interest seems to be coalescing around Linux, so it's perhaps no surprise that the Linux Foundation is organizing the second Automotive Linux Summit next month in England. It seems likely that in a few years, we'll be living in a world where Linux in automotive IVI systems is as common as it is in today's consumer electronic devices.
Security
GUADEC: Imagining Tor built-in to GNOME
Jacob Appelbaum of the Tor project delivered the opening keynote at GUADEC 2012 in A Coruña, Spain, tackling better anonymity on the desktop. Appelbaum outlined the design of Tor, discussed statistics about the Tor network, and spoke about its future. One of his more interesting suggestions was that GNOME and other user environments could build in Tor support as a standard networking option. That would make Tor easier to use, and would provide the user with several peripheral benefits.
Tor, anonymity, and you
Tor is widely known these days, but Appelbaum gave a brief overview of the system's protocol and network design, highlighting some frequently-overlooked facets of the project. First, he said, Tor is larger than most people realize. It employs more than a dozen developers and receives additional help from around 100 volunteer coders. That developer power is critical to Tor's success, he said, as almost any bug in the code turns into a security bug. At any given moment, the network averages around 3,000 active relays and 400,000 users, and handles 1.2 GiB/s of traffic. The Tor Project is a non-profit organization, and may be unique in that it receives funding from both the Electronic Frontier Foundation (EFF) and the U.S. Department of Defense.
Tor's mission is often misunderstood, too. Although it provides a means of securing communication channels, its primary function is as an anonymity tool. Anonymity comes in a variety of types, he said, but the core idea is "trying to be free from surveillance and censorship". Tor gives you one thing off the bat, he said: an anonymous IP address. Everything else is your choice from there.
The WiFi at the venue blocked SSH connections, Appelbaum said, so he needed to tunnel over Tor to connect to his servers. That represents one type of anonymity: freedom from network administrators inspecting your traffic.
A different type of anonymity might be signing in to GMail over Tor in order to hide your geographic location. In that case, you still authenticate to Google, so the company knows who you are, but you do not have to reveal where you are at the same time. The US government asserts that individuals have no reasonable expectation of privacy when voluntarily interacting with a business, a position that extends to increasingly common web tracking techniques. Appelbaum showed an EFF diagram illustrating privacy risks from numerous angles, including "black hat" hackers, system administrators, lawyers, law enforcement, and even government agencies.
For each of those potential privacy foes, there are times when an activity that would otherwise be innocuous becomes risky because someone is monitoring your communication. The question for a project like GNOME, he said, is "how free is your desktop if you're not able to freely interact with others?" Although some assume that online anonymity is only the concern of "bad people", he said, that is "a bit of a white privilege issue". Censorship is quite widespread and in practice it affects "good" people as much as anyone else, a fact he illustrated with a collection of error-page screenshots from government and private networks that block access to Tor project sites.
The Tor project's solution is to build a network that offers "privacy by design" rather than by policy. Policies are hard to enforce and are subject to human error and bad actors. Tor makes network connections private in a number of ways. Once every hour, the project's trusted directory relays re-map the entire network; clients retrieve the latest version of the map (thus limiting the potential time window of a widespread attack). Once every ten minutes, clients select a new route through the Tor network for their traffic channels (thus helping to protect them against analysis from within the network). Traffic is encrypted separately between each pair of nodes along the route, so that the first node knows the originating address but not the destination, the exit node knows the destination but not the origin, and the intermediary nodes know neither.
A censor could attempt to block all access to Tor by retrieving the network directory and blocking the entry points by IP address, so the project also runs hidden "bridge relays" that are unlisted. Users can fetch a short list of bridge relays via email or through a CAPTCHA-protected web form. The email method requires using an address from gmail.com or yahoo.com, which the project says helps make it more difficult for attackers to discover a significant number of bridges.
Tor statistics
Tor's pervasive anonymity makes it difficult to profile or monitor the network as a whole, Appelbaum said, but the project uses data mining to take snapshots and keep an eye on performance. Tor's total bandwidth and latency have improved significantly since 2010, he said. Back then, the median time to complete a request was approximately 25 seconds. In 2012, it is down to 2.5 seconds. Total maximum bandwidth has increased in the same time period from 500 to 2500 MiB/s.
The primary reason for the increase has been a significant uptick in the number of volunteers serving as Tor nodes — a change that has corresponded with the "Arab spring" upheavals in the Middle East. Based on analysis of the Tor network, the events in the Middle East have been followed quickly by a spike in new participants, and the network does not taper back down to its pre-spike size.
That is not to say that the Tor network never sees downturns, however. The project can detect sudden acts of censorship by examining metrics of the Tor network as well as traffic to its own domain. For example, in February 2012, Kazakhstan deployed protocol inspection and began blocking access to Tor. It was without doubt an expensive operation, Appelbaum said, even though the total number of users in the country was only around 1,200.
Nevertheless, the project is actively working on ways to circumvent such censorship actions. There is already an "obfuscated bridge" option, in which the bridge relay and the Tor client fake what appears to be a standard Firefox-to-Apache handshake. There are other options still in development, including steganographic handshakes. But outright censorship is probably not the wave of the future, Appelbaum said. The government in Syria has learned that it is more effective to watch who accesses sites that it finds objectionable than it is to block access to them across the board, and the U.S. government prefers to use U.S. law, rather than purely technological measures, to suppress people.
The onion gnome?
The Tor network is healthy, Appelbaum said, but the tools to access it still need some work. Tor's own Vidalia application may have a dreadful UI, he said, but it is much better than it was five years ago. He highlighted several excellent projects, such as the Pidgin IM client (which has built-in support for Tor) and the TorBirdy extension for Mozilla Thunderbird, but argued that it would be better for the user if the functionality to use Tor was built into the operating system itself. After all, that option would require solving the anonymity problem once, rather than 50 times.
The option for GNOME would be to add support for Tor as a transport in Network Manager, much like VPNs are offered today. It might also be useful if an application could request a "private mode," which would activate the Tor connection and otherwise sandbox the process (both to protect against malicious content coming in, and to prevent the application from intentionally or accidentally leaking information about the local system over the connection). This would take some work to implement, he said, because Network Manager today does not "fail closed" — a fact that can be illustrated by its current VPN support. Applications using the VPN connection continue to function even when the VPN goes down, because Network Manager simply routes traffic through the existing network.
Built-in Tor functionality would come in handy in other ways, too, he said, such as with GNOME's "guest sessions." As it is now, anything a guest does while running in a guest session can be traced back to the computer — and the user needs to ask if that is something that he or she wants. It would be better if Tor automatically anonymized guest sessions for the user's protection.
He mentioned several other changes that GNOME could make to offer a more complete privacy-respecting environment for its users. One was allowing the user control over the Zeitgeist activity logger, which he said amounted to spyware if the user has not agreed to it. At the very least it should be encrypted and subject to user control. Zeitgeist developer Seif Lotfy is currently working on a "privacy panel" for GNOME, which Appelbaum suggested would be a good fit.
Appelbaum surveyed friends and colleagues about what to tell GUADEC attendees, and they provided three other suggestions. First, implement off-the-record (OTR) messaging in Empathy. Second, implement a fake-MAC-address generator, to keep a machine's real MAC address safe from monitoring on guest networks. Third, implement a Tor-based file transfer method in Telepathy.
Despite the list of feature requests, Appelbaum had plenty of good things to say about GNOME as well, in part because it has formed the basis for several good outside projects that offer anonymity and privacy tools. One example is the Tails live CD distribution, which is configured to use Tor for Internet connections out-of-the-box.
It remains to be seen whether GNOME will actually implement Tor as a Network Manager transport — it is clearly too late for inclusion in the 3.6 release currently in development. But over the course of the week, several GUADEC attendees were still discussing the idea, and it was mentioned in numerous personal blog posts about the event on Planet GNOME. Appelbaum certainly succeeded in raising the question of built-in privacy with the crowd, which could impact GNOME (and other open source projects) further down the line.
[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC.]
Brief items
Security quotes of the week
Walsh: SELinux Apache Security Study
On his blog, Dan Walsh writes about a study done by Kirill Ermakov about SELinux as applied to a vulnerable Apache web server. The study found that even with SELinux protections, an attacker could still read /etc/passwd. Walsh: "This points out what most people do not understand about SELinux. SELinux does not necessarily block errors in applications from happening. SELinux will just contain them. If you are able to subvert the Apache application then you can become the Apache application and will have the rights allowed to the apache application. In his examples he was able to take over the Apache server and do what an apache server needs to do, including reading the /etc/passwd file." Walsh goes on to list several other things that could have been tested as they would be blocked by the SELinux rules (e.g. connecting to the mail port, reading random user files). In addition, he points out some ways that administrators could increase the SELinux containment of a web server.
New vulnerabilities
auditlog-keeper: information disclosure
Package(s): auditlog-keeper
CVE #(s): CVE-2012-0421
Created: August 7, 2012
Updated: August 8, 2012
Description: From the SUSE advisory: /etc/auditlog-keeper.conf was world-readable and contains various passwords.
bind-dyndb-ldap: named assertion failure
Package(s): bind-dyndb-ldap
CVE #(s): CVE-2012-3429
Created: August 3, 2012
Updated: August 17, 2012
Description: From the Red Hat advisory: A flaw was found in the way bind-dyndb-ldap performed the escaping of names from DNS requests for use in LDAP queries. A remote attacker able to send DNS queries to a named server that is configured to use bind-dyndb-ldap could use this flaw to cause named to exit unexpectedly with an assertion failure.
dhcp: denial of service
Package(s): dhcp
CVE #(s): CVE-2012-3570
Created: August 2, 2012
Updated: August 8, 2012
Description: From the Red Hat bugzilla entry: An unexpected client identifier parameter can cause the ISC DHCP daemon to segmentation fault when running in DHCPv6 mode, resulting in a denial of service to further client requests. In order to exploit this condition, an attacker must be able to send requests to the DHCP server.
ecryptfs-utils: privilege escalation
Package(s): ecryptfs-utils
CVE #(s): CVE-2012-3409
Created: August 3, 2012
Updated: August 8, 2012
Description: From the Red Hat bugzilla: It was reported that the private ecryptfs mount helper (/sbin/mount.ecryptfs_private), which is setuid-root, could allow an unprivileged local user to mount user-controlled ecryptfs shares on the local system. Because the ecryptfs helper does not mount filesystems with the "nosuid" and "nodev" flags, it would be possible for a user to mount a filesystem containing setuid-root binaries and/or device files that could lead to the escalation of their privileges. This could be done via a USB device, if the user had physical access to the system.
fckeditor: cross-site scripting
Package(s): fckeditor
CVE #(s): CVE-2012-4000
Created: August 6, 2012
Updated: November 24, 2015
Description: From the Debian advisory: Emilio Pinna discovered a cross site scripting vulnerability in the spellchecker.php page of FCKeditor, a popular html/text editor for the web.
globus-gridftp-server: privilege escalation
Package(s): globus-gridftp-server
CVE #(s): CVE-2012-3292
Created: August 7, 2012
Updated: August 8, 2012
Description: From the Debian advisory: It was discovered that the GridFTP component from the Globus Toolkit, a toolkit used for building Grid systems and applications, performed insufficient validation of a name lookup, which could lead to privilege escalation.
glpi: multiple vulnerabilities
Package(s): glpi
CVE #(s): (none)
Created: August 6, 2012
Updated: August 8, 2012
Description: GLPI 0.83.4 fixes several issues. See the glpi changelog for details.
graphicsmagick: unspecified vulnerability
Package(s): graphicsmagick
CVE #(s): (none)
Created: August 3, 2012
Updated: August 8, 2012
Description: From the Mageia advisory: This update fixes a security issue in the SetImageAttribute function in magick/attribute.c related to translating comment and label attributes when loading images. It was fixed upstream in GraphicsMagick 1.3.16.
icinga: unintended database access
Package(s): icinga
CVE #(s): CVE-2012-3441
Created: August 8, 2012
Updated: August 8, 2012
Description: From the openSUSE advisory: icinga-create_mysqldb.sh granted icinga access to all dbs - so please check the permissions of your mysql icinga user
kernel: information disclosure
Package(s): kernel
CVE #(s): CVE-2012-3430
Created: August 6, 2012
Updated: October 3, 2012
Description: From the Red Hat bugzilla: Two similar issues:
1) Reported by Jay Fenlason and Doug Ledford: recvfrom() on an RDS socket can disclose sizeof(struct sockaddr_storage)-sizeof(struct sockaddr_in) bytes of kernel stack to userspace when receiving a datagram.
2) Reported by Jay Fenlason: recv{from,msg}() on an RDS socket can disclose sizeof(struct sockaddr_storage) bytes of kernel stack to userspace when other code paths are taken.
libreoffice: code execution
Package(s): libreoffice
CVE #(s): CVE-2012-2665
Created: August 2, 2012
Updated: August 14, 2012
Description: From the Red Hat advisory: Multiple heap-based buffer overflow flaws were found in the way LibreOffice processed encryption information in the manifest files of OpenDocument Format files. An attacker could provide a specially-crafted OpenDocument Format file that, when opened in a LibreOffice application, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application.
moodle: many vulnerabilities
Package(s): moodle
CVE #(s): CVE-2012-3387 CVE-2012-3388 CVE-2012-3389 CVE-2012-3390 CVE-2012-3391 CVE-2012-3392 CVE-2012-3393 CVE-2012-3394 CVE-2012-3395 CVE-2012-3396 CVE-2012-3397 CVE-2012-3398
Created: August 2, 2012
Updated: August 8, 2012
Description: From the Red Hat bugzilla entry:
CVE-2012-3387 Moodle: MSA-12-0039: File upload validation issue
CVE-2012-3388 Moodle: MSA-12-0040: Capabilities issue through caching
CVE-2012-3389 Moodle: MSA-12-0041: XSS issue in LTI module
CVE-2012-3390 Moodle: MSA-12-0042: File access issue in blocks
CVE-2012-3391 Moodle: MSA-12-0043: Early information access issue in forum
CVE-2012-3392 Moodle: MSA-12-0044: Capability check issue in forum subscriptions
CVE-2012-3393 Moodle: MSA-12-0045: Injection potential in admin for repositories
CVE-2012-3394 Moodle: MSA-12-0046: Insecure protocol redirection in LDAP authentication
CVE-2012-3395 Moodle: MSA-12-0047: SQL injection potential in Feedback module
CVE-2012-3396 Moodle: MSA-12-0048: Possible XSS in cohort administration
CVE-2012-3397 Moodle: MSA-12-0049: Group restricted activity displayed to all users
CVE-2012-3398 Moodle: MSA-12-0050: Potential DOS attack through database activity
python-django: multiple vulnerabilities
Package(s): python-django
CVE #(s): CVE-2012-3442 CVE-2012-3443 CVE-2012-3444
Created: August 8, 2012
Updated: December 20, 2012
Description: From the CVE entries:
The (1) django.http.HttpResponseRedirect and (2) django.http.HttpResponsePermanentRedirect classes in Django before 1.3.2 and 1.4.x before 1.4.1 do not validate the scheme of a redirect target, which might allow remote attackers to conduct cross-site scripting (XSS) attacks via a data: URL. (CVE-2012-3442)
The django.forms.ImageField class in the form system in Django before 1.3.2 and 1.4.x before 1.4.1 completely decompresses image data during image validation, which allows remote attackers to cause a denial of service (memory consumption) by uploading an image file. (CVE-2012-3443)
The get_image_dimensions function in the image-handling functionality in Django before 1.3.2 and 1.4.x before 1.4.1 uses a constant chunk size in all attempts to determine dimensions, which allows remote attackers to cause a denial of service (process or thread consumption) via a large TIFF image. (CVE-2012-3444)
sudo: symlink attack
Package(s): sudo
CVE #(s): CVE-2012-3440
Created: August 8, 2012
Updated: August 9, 2012
Description: From the Red Hat advisory: An insecure temporary file use flaw was found in the sudo package's post-uninstall script. A local attacker could possibly use this flaw to overwrite an arbitrary file via a symbolic link attack, or modify the contents of the "/etc/nsswitch.conf" file during the upgrade or removal of the sudo package.
xen: denial of service
Package(s): xen
CVE #(s): CVE-2012-3432
Created: August 6, 2012
Updated: September 14, 2012
Description: From the Red Hat bugzilla: Internal data of the emulator for MMIO operations may, under certain rare conditions, at the end of one emulation cycle be left in a state affecting a subsequent emulation such that this second emulation would fail, causing an exception to be reported to the guest kernel where none is expected. Guest mode unprivileged (user) code, which has been granted the privilege to access MMIO regions, may leverage that access to crash the whole guest. Only HVM guests exposing MMIO ranges to unprivileged (user) mode are vulnerable to this issue. PV guests are not.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.6-rc1, announced on August 2. "As usual, even the shortlog is too big to usefully post, but there's the usual breakdown: about two thirds of the changes are drivers (with the CSR driver from the staging tree being a big chunk of the noise - christ, that thing is big and wordy even after some of the crapectomy). [...] Of the non-driver portion, a bit over a third is arch (arm, x86, tile, mips, powerpc, m68k), and the rest is a fairly even split among fs, include file noise, networking, and just 'rest'." See the summary below for what was merged after last week's update.
Stable updates: The 3.2.25 and 3.2.26 kernels were released on August 3 and August 5 respectively. The 3.2.27, 3.4.8, 3.0.40, and 3.5.1 stable reviews are underway as of this writing; those kernels can be expected on or after August 9.
Quotes of the week
Sticking to Mach and being hostile to Linux wasn't very smart and a lot of developers have not forgiven the FSF for that, which is one reason they find the "GNU/Linux" label deeply insulting.
The other screw up was that they turned down the use of UZI, which would have given them a working if basic v7 Unix equivalent OS years before Linux was released. Had they done that Linux would never have happened and probably the great Windows battle would have been much more fascinating.
The conclusion of the 3.6 merge window
Linus closed the 3.6 merge window on August 2, a couple of days earlier than would have normally been expected. There were evidently two reasons for that: a desire to send a message to those who turn in their pull requests on the last day of the merge window, and his upcoming vacation. In the end, he only pulled a little over 300 changes since the previous merge window summary, with the result that 8,587 changes were pulled in the 3.6 merge window as a whole. Those 300+ changes included the following:
- The block I/O bandwidth controller has been reworked so that each
control group has its own request list, rather than working from a
single, global list. This increases the memory footprint of block I/O
control groups, but makes them function in a manner much closer to the
original intention when lots of requests are in flight.
- A set of restrictions on the creation of
hard and soft links has been added in an attempt to improve
security; they should eliminate a lot of temporary file
vulnerabilities.
- The device mapper dm-raid module now supports RAID10 (a combination of
striping and mirroring).
- The list of new hardware support in 3.6 now includes OMAP DMA engines.
- The filesystem freeze functionality has been reimplemented to be more robust; in-tree filesystems have been updated to use the new mechanism.
The process of stabilizing all of those changes now begins; if the usual patterns hold, the final 3.6 kernel can be expected sometime in the second half of September.
Kernel development news
Testing for kernel performance regressions
It is not uncommon for software projects — free or otherwise — to include a set of tests intended to detect regressions before they create problems for users. The kernel lacks such a set of tests. There are some good reasons for this; most kernel problems tend to be associated with a specific device or controller and nobody has anything close to a complete set of relevant hardware. So the kernel depends heavily on early testers to find problems. The development process is also, in the form of the stable trees, designed to collect fixes for problems found after a release and to get them to users quickly.
Still, there are places where more formalized regression testing could be helpful. Your editor has, over the years, heard a large number of presentations given by large "enterprise" users of Linux. Many of them expressed the same complaint: they upgrade to a new kernel (often skipping several intermediate versions) and find that the performance of their workloads drops considerably. Somewhere over the course of a year or so of kernel development, something got slower and nobody noticed. Finding performance regressions can be hard; they often only show up in workloads that do not exist except behind several layers of obsessive corporate firewalls. But the fact that there is relatively little testing for such regressions going on cannot help.
Recently, Mel Gorman ran an extensive set of benchmarks on a set of machines and posted the results. He found some interesting things that tell us about the types of performance problems that future kernel users may encounter.
His results include a set of scheduler tests, consisting of the "starve," "hackbench," "pipetest," and "lmbench" benchmarks. On an Intel Core i7-based system, the results were generally quite good; he noted a regression in 3.0 that was subsequently fixed, and a regression in 3.4 that still exists, but, for the most part, the kernel has held up well (and even improved) for this particular set of benchmarks. At least, until one looks at the results for other processors. On a Pentium 4 system, various regressions came in late in the 2.6.x days, and things got a bit worse again through 3.3. On an AMD Phenom II system, numerous regressions have shown up in various 3.x kernels, with the result that performance as a whole is worse than it was back in 2.6.32.
Mel has a hypothesis for why things may be happening this way: core kernel developers tend to have access to the newest, fanciest processors and are using those systems for their testing. So the code naturally ends up being optimized for those processors, at the expense of the older systems. Arguably that is exactly what should be happening; kernel developers are working on code to run on tomorrow's systems, so that's where their focus should be. But users may not get flashy new hardware quite so quickly; they would undoubtedly appreciate it if their existing systems did not get slower with newer kernels.
He ran the sysbench tool on three different filesystems: ext3, ext4, and xfs. All of them showed some regressions over time, with the 3.1 and 3.2 kernels showing especially bad swapping performance. Thereafter, things started to improve, with the developers' focus on fixing writeback problems almost certainly being a part of that solution. But ext3 is still showing a lot of regressions, while ext4 and xfs have gotten a lot better. The ext3 filesystem is supposed to be in maintenance mode, so it's not surprising that it isn't advancing much. But there are a lot of deployed ext3 systems out there; until their owners feel confident in switching to ext4, it would be good if ext3 performance did not get worse over time.
Another test is designed to determine how well the kernel does at satisfying high-order allocation requests (being requests for multiple, physically-contiguous pages). The result here is that the kernel did OK and was steadily getting better—until the 3.4 release. Mel says:
On the other hand, the test does well on idle systems, so the anti-fragmentation logic seems to be working as intended.
Quite a few other test results have been posted as well; many of them show regressions creeping into the kernel in the last two years or so of development. In a sense, that is a discouraging result; nobody wants to see the performance of the system getting worse over time. On the other hand, identifying a problem is the first step toward fixing it; with specific metrics showing the regressions and when they first showed up, developers should be able to jump in and start fixing things. Then, perhaps, by the time those large users move to newer kernels, these particular problems will have been dealt with.
That is an optimistic view, though, that is somewhat belied by the minimal response to most of Mel's results on the mailing lists. One gets the sense that most developers are not paying a lot of attention to these results, but perhaps that is a wrong impression. Possibly developers are far too busy tracking down the causes of the regressions to be chattering on the mailing lists. If so, the results should become apparent in future kernels.
Developers can also run these tests themselves; Mel has released the whole set under the name MMTests. If this test suite continues to advance, and if developers actually use it, the kernel should, with any luck at all, see fewer core performance regressions in the future. That should make users of all systems, large or small, happier.
A generic hash table
A data structure implementation that is more or less replicated in 50 or more places in the kernel seems like some nice low-hanging fruit to pick. That is just what Sasha Levin is trying to do with his generic hash table patch set. It implements a simple fixed-size hash table and starts the process of changing various existing hash table implementations to use this new infrastructure.
The interface to Levin's hash table is fairly straightforward. The API is defined in linux/hashtable.h and one declares a hash table as follows:
DEFINE_HASHTABLE(name, bits)
This creates a table with the given name and a power-of-2 size based on bits. The table is implemented using buckets containing a kernel struct hlist_head type. It implements a chaining hash, where hash collisions are simply added to the head of the hlist.
One then calls:
hash_init(name, bits);
to initialize the buckets.
Once that's done, a structure containing a struct hlist_node pointer can be constructed to hold the data to be inserted, which is done with:
hash_add(name, bits, node, key);
where node is a pointer to the hlist_node and key is the key that is hashed into the table. There are also two mechanisms to iterate over the table. The first iterates through the entire hash table, returning the entries in each bucket:
hash_for_each(name, bits, bkt, node, obj, member)
The second returns only the entries that correspond to the key's hash bucket:
hash_for_each_possible(name, obj, bits, node, member, key)
In each case, obj is the type of the underlying data, node is a struct hlist_node pointer to use as a loop cursor, and member is the name of the struct hlist_node member in the stored data type. In addition, hash_for_each() needs an integer loop cursor, bkt. Beyond that, one can remove an entry from the table with:
hash_del(node);
Levin has also converted six different hash table uses in the kernel as examples in the patch set. While the code savings aren't huge (a net loss of 16 lines), they could be reasonably significant after converting the 50+ different fixed-size hash tables that Levin found in the kernel. There is also the obvious advantage of restricting all of the hash table implementation bugs to one place.
There has been a fair amount of discussion of the patches over the three revisions that Levin has posted so far. Much of it concerned implementation details, but there was another more global concern as well. Eric W. Biederman was not convinced that replacing the existing simple hash tables was desirable:
But, Linus Torvalds disagreed. He mentioned that he had been "playing around" with a directory cache (dcache) patch that uses a fixed-size hash table as an L1 cache for directory entries, which provided a noticeable performance boost. If a lookup in that first hash table fails, the code then falls back to the existing dynamically sized hash table. The reason that the code hasn't been committed yet is that "filling of the small L1 hash is racy for me right now" and he has not yet found a lockless and race-free way to do so. So:
Torvalds posted his patch (dropped diff attachment) after a request from Josh Triplett. The race condition is "almost entirely theoretical", he said, so the patch could be used to generate some preliminary performance numbers. Beyond just using the small fixed-size table, Torvalds's patch also circumvents any chaining; if the hash bucket doesn't contain the entry, the second cache is consulted. By avoiding "pointer chasing", the L1 dcache "really improved performance".
Torvalds's dcache work is, of course, something of an aside in terms of Levin's patches, but several kernel developers seemed favorably inclined toward consolidating the various kernel hash table implementations. Biederman was unimpressed with the conversion of the UID cache in the user namespace code and Nacked it. On the other hand, Mathieu Desnoyers had only minor comments on the conversion of the tracepoint hash table and Eric Dumazet had mostly stylistic comments on the conversion of the 9p protocol error table. There are several other maintainers who have not yet weighed in, but so far most of the reaction has been positive. Levin is trying to attract more reviews by converting a few subsystems, as he notes in the patch.
It is still a fair amount of work to convert the other 40+ implementations, but the conversion seems fairly straightforward. Biederman's complaint about the conversion of the namespace code is something to note, though: "I don't have the time for a new improved better hash table that makes the code buggier." Levin will need to prove that his implementation works well, and that the conversions don't introduce regressions, before there is any chance that we will see it in the mainline. There is no reason that all hash tables need to be converted before that happens—though doing so might make it more likely to go in.
Ask a kernel developer
Here is another in our series of articles with questions posed to a kernel developer. If you have unanswered questions about technical or procedural things involving Linux kernel development, ask them in the comment section, or email them directly to the author. This time, we look at UEFI booting, real-time kernels, driver configuration, and building kernels.
I’d like to follow a mailing list on UEFI-booting-related topics, but I can’t seem to find any specific subsystem in the MAINTAINERS file. Would you please share some pointers?
Because of the wide range of topics involved in UEFI booting, there is no "one specific" mailing list where you can track just the UEFI issues. I recommend filtering the fast-moving linux-kernel mailing list, as most of the topics that kernel developers discuss cross that list. As the kernel isn't directly involved in UEFI, there is no one specific "maintainer" of this area at the moment. That being said, there are lots of different people working on this task right now.
From the kernel side itself, there has been some wonderful work from Matt Fleming and other Intel developers in making it so that the kernel can be built as an image that is bootable from EFI directly. There were some recent patches that went into the 3.6-rc1 kernel that have made it easier for bootloaders to load the kernel in EFI mode. See the patch for the details about how this is done, but note that some bootloader work is also needed to take advantage of this.
From the "secure boot" UEFI mode side, James Bottomley, chair of the Technical Advisory Board of the Linux Foundation (and kernel SCSI subsystem maintainer), has been working through a lot of the "how do you get a distribution to boot in secure mode" effort and documenting it all for all distributions to use. He's published his results, with code; I also recommend reading his previous blog posts about this topic for more information about the subject and how it pertains to Linux.
As for distribution-specific work, both Canonical and Red Hat have been working with the UEFI Forum to help make Linux work properly on UEFI-enabled machines. I recommend asking those companies about how they plan to handle this issue, on their respective mailing lists, if you are interested in finding out what they are planning to do. Other distributions are aware of the issue, but as of this point in time, I do not believe they are working with the UEFI Forum.
I am evaluating Linux for use as an operating system in a real-time embedded application; however, I find it hard to find recent data on the real-time performance of Linux. Do you have, or know of someone who has, information on the real-time performance of the Linux kernel, preferably under various load conditions?
I get this type of question a lot, in various forms. The very simple answer is: "No, there is no data, you should evaluate it yourself on your hardware platform, with your system loads, to determine if it meets your requirements." And in reality, that's what you should be doing in the first place even if there were "numbers" published anywhere. Don't trust a vendor, or a project, to know exactly how you are going to be using the operating system. Only you know best, so only you know how to determine if it solves your problem or not.
So, go forth, download the code, run it, and see if it works. It's really that simple.
Note, if it doesn't work for you, let the developers know about it. If they don't know about any problems, then they can't fix them.
What is the best way to get configuration data into a driver? (This is paraphrased from many different questions all asking almost the same thing.)
In the past (i.e. 10+ years ago), lots of developers used module parameters in order to pass configuration options into a driver to control a device. That started to break down very quickly when multiple devices of the same type were in the same system, as there isn't a simple way to use module parameters for this.
When the sysfs filesystem was created, lots of developers started using it to help configure devices, as the individual devices controlled by a single driver are much easier to see and write values to. This works today, for simple sets of configuration options (such as calibrating an input device). But, for more complex types of configurations, the best thing to use is configfs (kernel documentation, LWN article), which was written specifically for this task. It handles ways to tie configurations to sysfs devices easily, and handles notifying drivers when things have been changed by the user. At this point in time, I strongly recommend using that interface for any reasonably complex configuration task that a driver or subsystem might need.
What is a good, fast, and reliable way to compile a custom kernel for a system? In the past, people have used lspci, lsusb, and others, combined with the old autokernelconf tool, but that can be difficult. Is there a better way?
As Linus pointed out a few weeks ago, configuring a kernel is getting more and more complex, with different options being needed by different distributions. The simplest way I have found to get a custom kernel up and running on a machine is to take a distribution-built kernel that you know works, and then use the "make localmodconfig" build option.
To use this option, first boot the distribution kernel, and plug in any devices that you expect to use on the system, which will load the kernel drivers for them. Then go into your kernel source directory, and run "make localmodconfig". That option will dig through your system and find the kernel configuration for the running kernel (which is usually at /proc/config.gz, but can sometimes be located in the boot partition, depending on the distribution). Then, the script will remove all options for kernel modules that are not currently loaded, stripping down the number of drivers that will be built significantly. The resulting configuration file will be written to the .config file, and then you can build the kernel and install it as normal. The time to build this stripped-down kernel should be very short, compared to the full configuration that the distribution provides.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Miscellaneous
Page editor: Jake Edge
Distributions
The troubles with release names
While the value of distribution release names is sometimes questioned, the Fedora community has pretty clearly indicated its preference to continue having them. We looked at some of the issues surrounding Fedora release names back in March—precipitated by the choice of "Beefy Miracle" for Fedora 17. Since that time, Fedora has chosen another somewhat controversial name for Fedora 18 ("Spherical Cow"), but it is also trying to come up with a better naming scheme for the future.
In practice, Fedora's release names aren't regularly used. As several have pointed out, it is difficult to remember the names for releases from the past (e.g., Fedora 14 "Laughlin"). Other distributions' naming schemes are more commonly used; for example, Debian release numbers are often harder to remember than their names ("Squeeze", "Wheezy"), which are based on characters from Toy Story.
The Ubuntu community is also prone to using names. The alliterative "adjective animal" names, rather than the release numbers, are often seen—though the names often just get shortened to the adjective (e.g. "Precise", "Natty"). The alphabetical ordering of the names helps make them memorable, of course. In addition, names decided by fiat (either by the Debian release team or Mark Shuttleworth for Ubuntu) may lead to fewer disgruntled supporters of names that didn't make the cut. By putting the names up for a vote, Fedora may be setting itself up for some division within its community.
While there has been some grumbling occasionally over the names chosen for Fedora in the past, "Beefy Miracle" seems to be the straw that broke the camel's back. But the Fedora community voted 550 to 384 to keep release names in a non-binding vote. The vote was taken back in April, at the same time "Spherical Cow" was chosen for Fedora 18. While it may not exactly be a ringing endorsement (59% for keeping release names), it does indicate an interest in continuing the tradition. Now the question is: "how?"
Eric Christensen put out a request from the Fedora Board for suggestions on how to name releases. Earlier efforts had already led to a list of proposals on naming schemes. Máirín Duffy's idea to use a particular theme (e.g. types of coffee/tea, dinosaur breeds, herbs and spices), where all names would connect to that theme, seems to be fairly popular. One problem is choosing the theme, of course, but another is perhaps a bit more surprising: trademark woes.
Fedora release names have always undergone a review by the Red Hat legal department before they were cleared for a vote. Much of that review concerns trademarks; there are a surprising number of seemingly innocuous terms that can't pass that hurdle. Some of the popular ideas for themes are much more likely to run afoul of problems in that area. For example, using famous people's names has been suggested in different ways (composers, computer pioneers, and so on), but, as Red Hat legal team member Pam Chestek explained, it can be difficult to get them cleared:
While critiquing another proposal that suggested "materials" (e.g., wood, crystal, diamond, ...) as a theme, Lynn Dixon made an offhand comment ("Since Fedora has a very fast release cycle, once we ended up at something like platinum, where would we go next? Into the heavy elements?") that quickly became popular. It spawned suggestions of using the periodic table and perhaps synchronizing the release number with the atomic number of the element used for the name. That would eliminate the voting cycle, which is seen as a waste of time by some, but, alas, that idea may have run aground because of trademark issues as well.
First off, many element names are used in computer-related trademarks, which might make it difficult to clear the next element name for some upcoming Fedora release—breaking the synchronization. Opening up a vote on some suggested element names from the entire periodic table for each release might be an alternative. There were also thoughts of adding a second word to the name to try to avoid trademark conflicts—though Dixon's alliterative adjective suggestion (e.g., Perfect Potassium for Fedora 19) was not popular. But there was another surprise there, as Chestek pointed out:
It is a difficult problem. Fedora release names only last for around 18 months, and a new one needs to be chosen every six months. That leads to a fair amount of work in suggesting, clearing, then voting on a name twice per year. Given that few inside or outside of the Fedora community actually use the release name, it's not surprising that there have been calls to change the process—or eliminate it entirely.
So far, though, the board seems intent on continuing with release names—perhaps partly out of tradition, but also in keeping with the "will of the people". Over the next few months—as "Spherical Cow" gets released (currently scheduled for early November) and a name for Fedora 19 is needed—we will see what the board plans to do about release naming. While some find the names whimsical and fun, others are much less enamored of them. Whatever the board decides, it seems likely to be a lively topic of discussion for some time to come.
Brief items
Distribution quote of the week
Debian Installer 7.0 Beta1 release
The first beta release of the installer for Debian 7.0 "Wheezy" is available. That means Debian 7.0 Wheezy is a step closer to a final release. Click below for a list of changes in this release.
Distribution News
Debian GNU/Linux
bits from the DPL: July 2012
Debian Project Leader Stefano Zacchiroli presents his July activities. Highlights include discussions with the Free Software Foundation about the Free-ness of Debian and DebConf12. "Tip to feel good about the release #476: *before* reading this, grab and fix one of the [RC bugs affecting Wheezy]"
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 468 (August 6)
- Maemo Weekly News (August 6)
- Ubuntu Weekly Newsletter, Issue 277 (August 5)
Page editor: Rebecca Sobol
Development
The Linux digital audio workstation - Part 1
The contemporary DAW—digital audio workstation—is a system of hardware and software parts assembled to record, edit, and play digital audio and MIDI (Musical Instrument Digital Interface) data. Hardware components include the host computer, one or more interfaces for audio and MIDI I/O, input devices (e.g. a MIDI keyboard), and a speaker system for playback. The software component—also commonly referred to as the DAW—is typically a program dedicated to displaying, editing, and saving the captured data. In the world of recording, the DAW is the heart of the digital studio.
This article presents a personal sampling of the variety of DAWs available to Linux users. It doesn't explain how to set up Linux to meet the requirements of professional audio, nor does it provide details of installing and configuring the software I present here. Complete information is available at each program's Web site, and many mainstream Linux distributions have sub-projects dedicated to advanced audio configuration. Search Google to find pointers to distribution-specific documentation regarding system configuration for professional audio.
If you're new to the terminology of digital audio, you should consult the comprehensive—and comprehensible—glossaries at The Sonic Spot and Sound On Sound. Those lists should suffice to define any unknown or confusing terms you encounter in the article.
Common characteristics
A modern DAW should perform at least the following tasks:
- Record and edit synchronized multitrack/multichannel audio and MIDI at variable sampling rates and bit depths.
- Import and export a variety of audio and MIDI file types.
- Mix new tracks and previously recorded material with sample-accurate timing.
- Support the use of audio and MIDI processing plugins in various formats.
- Provide total recall of all session parameters.
Manufacturers add interesting features beyond this basic set, but these characteristics meet the minimum requirements for a usable modern DAW. A Linux DAW should offer the basic feature set listed above, with the important addition of JACK support. JACK is an audio server specifically designed with pro-audio capabilities, such as multitrack/multichannel recording with realtime effects processing. However, JACK requires an audio back-end, and fortunately the Linux kernel provides the ALSA system. Support for JACK should be considered mandatory for Linux DAWs.
From sequencer to HDR to DAW to sequencer
In the middle 1980s, the advent of MIDI fueled a phenomenal drive in the development of new hardware and software for digital music and sound production. The MIDI data recorder—a.k.a. the MIDI sequencer—arrived in hard and soft formats, and both did essentially the same things: record (sequence), edit, process, and play back MIDI data. Early drum machines and performance synthesizers included basic sequencers, but MIDI—especially computer-centric MIDI—gave new life to the design of the sequencer.
By the late 1980s, sophisticated MIDI sequencing programs were available for every popular desktop computer. Those platforms included machines from Apple, Atari, Commodore/Amiga, IBM, and a horde of PC-clone/compatible manufacturers. Some MIDI hardware and software was available for UNIX systems, but few (if any) of the popular commercial programs were ported.
By the early 1990s, MIDI software capabilities expanded as the capabilities of the host computers advanced. As the hardware grew more powerful, it became possible to create an affordable hard-disk recorder (HDR) designed to run on the new desktop machines. The classic HDR was a standalone digital recorder built to accommodate high-quality analog-to-digital (ADC) and digital-to-analog (DAC) converters for audio I/O. The converters may or may not have been built into the device, and the user would typically need to provide further external support such as mixers and signal processors. These hardware HDRs had been available for desktop recordists but the boxes were often expensive to purchase and maintain—parts were rarely off-the-shelf components—and each machine's internal software was strictly proprietary for the device's operating system and data formats.
Fortunately, the increased power and lower entry cost of the general-purpose desktop computer paved the way for the software HDR, which eventually opened the way for the melding of software MIDI sequencer and the software HDR—with mixer and processors—into a single program called a digital audio workstation, i.e. a DAW.
The term "DAW" could be applied equally well to some of the machines built by SGI in the 1990s. Multichannel output was built into the hardware, and software had been developed to take advantage of the sonic possibilities. Unfortunately for SGI, the i386 continued its march forward to desktop domination—along with other computing niceties such as greatly enhanced video and massive storage capabilities—until the power of an average desktop machine rivaled SGI's bigger iron, at a much lower cost.
These days a DAW is also simply called a sequencer, perhaps as an unconscious reminder of the word's original use. Of course the very definition of a digital audio workstation continues to evolve as programs such as Ableton Live and Renoise present characteristics not commonly associated with a conventional DAW.
The Linux DAW
The blessing—or curse—of choice is in full effect when it comes to the Linux DAW. The DAW selection in this article is not exhaustive, and my descriptions present only a few salient characteristics of each program. With that admission out of the way, we'll take an alphabetical tour of Linux DAWs.
Ardour
The Ardour user interface will be familiar to anyone who has worked with the famous Pro Tools DAW. The interface model is loosely based on the multitrack tape recording paradigm in which recorded tracks are arranged in vertical order, much like the individual bands of a multitrack tape. Of course the similarity is primarily visual—the technology of hard-disk recording differs profoundly from its tape-based ancestry—but the tape-based interface model is deeply embedded in the contemporary digital recording industry.
Ardour is currently available in two distinct versions. The 2.x series is the stable public release track, but it lacks some of the features considered essential in a modern DAW. The soon-to-be-public 3.x series includes just about everything you can find in a DAW, including extensive MIDI support, the feature most notably missing in the 2.x releases. Of course Ardour synchronizes with external hardware and software by various means, including MTC (MIDI Time Code), MIDI Clock, and JACK. Open Sound Control (OSC) messaging is also supported, giving Ardour the opportunity to control or be controlled by other OSC-savvy programs.
Plugin support is extensive, though both the 2.x and 3.x lack support for the DSSI plugin API. Native Linux VST plugins are welcome, and it is possible to compile a version of Ardour that will host native Windows VST plugins. This capability is not unique to Ardour and, like any other Linux DAW with such support, its performance will vary according to initial conditions. Those conditions include the version of Wine used during the build, the conformance of the plugins to Windows programming standards, and the availability of required DLLs. Copy-protection schemes, especially hardware-based keys, are almost certain to block the use of the protected plugins.
Unfortunately none of the DAWs reviewed here include integrated video capabilities, but Robin Gareus and Luis Garrido are working to fill that gap with their Xjadeo project. Xjadeo is essentially a video display (shown in the Ardour screen shot above) that slaves to JACK or MTC, and all SMPTE framerates supported by Ardour are likewise supported by Xjadeo. It is not an editor, but it is incredibly useful, and I suspect that at some point in Ardour's development Xjadeo will be fully integrated into the DAW.
Ecasound
Kai Vehmanen has been developing his great ecasound DAW since 1995, the same year I began using Linux. Ecasound is a complete DAW with no GUI at all, a remarkable achievement in today's visually-dominated world of sound and music software. I must emphasize the "complete" aspect of ecasound—as far as I can tell it has every feature common to all DAWs, including MIDI and synchronization capabilities, and its command-line interface guarantees a unique position among its more colorful brethren.
Given its text-based UI, ecasound has some very appealing aspects to the recordist. Above all, the program is fast, and its happy lack of a dependency-laden GUI gives it an edge in the stability department. Ecasound can also be extensively and elaborately scripted—in essence you can define the program's available capabilities on a per-project basis. For example, I use a simple ecasound script when I want to record something very quickly and with high quality. Typically I'll then import my ecasound-recorded tracks into Ardour for detailing, arrangement, and the final mix. In truth, I could script ecasound to do all that too, but I like to keep everyone busy in my studio.
By the way, if you must have a GUI for ecasound take a look at Joel Roth's Nama or Luis Gasparotto's TkEca. Both programs provide GUIs for nearly complete control over ecasound's powers. And if you need to be convinced that ecasound can be used to make real music, check out the music of Julien Claassen.
Though its native UI is humble and unassuming, ecasound is awesomely powerful. I've used it for so long and for so many purposes I simply can't imagine my Linux audio life without it. In my opinion, the compleat Linux studio requires ecasound.
EnergyXT2
EnergyXT2—eXT2 to its users—is an inexpensive cross-platform, commercially available DAW designed chiefly by Joergen Aase. It is a complete DAW with the expected audio and MIDI record/edit/playback functions, though the demo version (shown at left) comes with restricted recording and file-saving capabilities.
Configuration and installation is uncomplicated, and the demo version worked out of the box for me on my AV Linux 5.0.1 system. I loaded the demo songs and played them without xruns (JACK-speak for audio buffer over or under-runs) or other audio discontinuities being reported by JACK, but I expect that kind of stability from a mature application (I tested version 2.6).
eXT2's plugin support is limited to native Linux VSTs, of which fortunately we have quite a few these days. However, it partially atones for that limitation by including a built-in synthesizer/sampler and a very nice multi-effects processor. The full version of energyXT2 also bundles 400+ loops and 32 instruments from Loopmasters, so there are plenty of goodies to get you started.
EnergyXT2 is a popular program that's easy to learn and master. If you do get stuck there's plenty of help available within the program and on-line. See the unofficial eXT2 Wiki and the energyXT2 forum at KVR-audio for opinions, suggestions, and advice from eXT2 users world-wide.
LMMS
LMMS—a.k.a. the Linux MultiMedia Studio—has its roots in the design philosophy behind programs such as the original FruityLoops and GarageBand. Those programs were designed to get the user into making music as quickly and efficiently as possible. Like the programs it is modeled on, LMMS proves that efficiency does not necessarily arrive at the expense of power. (For examples, see the compositions of Louigi Verona.) Be assured, LMMS is a true DAW. It is a lot of fun to play with, but it is no mere toy.
LMMS is designed for loop-based composition. You can record and import audio and MIDI loops, or you can manually enter your own MIDI loops on a piano-roll display. Alas, there is no automated time/pitch stretching. Plugin support is limited to the LMMS internal plugins and plugins in LADSPA or Windows VST formats, but it must be noted that the LMMS internal plugins sound pretty good to my ears. I think they look pretty good too.
Control automation—graphic curve control of signal processing parameters—is a strength of the program. LMMS provides excellent graphic control curve editing, a necessary feature for accurately synchronizing sweeps and other effects to your material. Check out some of the demo songs to hear and see how easily LMMS handles the task.
In early versions, LMMS had problems with its JACK support, but recent releases have mitigated those problems. LMMS is perfectly comfortable in an ALSA-only environment, though; on my systems, I get better performance from LMMS with pure ALSA anyway. Your mileage may vary.
With its colorful and well-organized GUI, LMMS presents itself as an upbeat environment for making music. At development version 0.4.9, LMMS still shows some rough edges, but its usability rates high. It works out of the box, it's very easy to learn, and it's great fun. The LMMS interface is unlike any other presented in this article, but I find it attractive and conducive to productivity.
Mixbus
I thought about including Mixbus in my description of Ardour—Mixbus is based on the Ardour 2.x release series—but it is in a class of its own and deserves separate treatment.
Mixbus is a commercially-available cross-platform DAW created by Harrison Consoles, a company dedicated to the manufacture of some of the most prestigious audio mixing desks in the professional recording industry. Harrison's Web site lists the many famous musicians whose work has been mixed on Harrison boards, and it suffices to say that the list is very impressive. Obviously Harrison's technology is much-esteemed, but it's also costly. Harrison mixing desks are high-end professional products with fully professional price tags to match, so there was much anticipation about the company's release of a software DAW that took the editing and GUI capabilities from Ardour2 and blended those features with elements of a Harrison mixing console.
The result is the mixer par excellence for serious Linux audio production. The track editor is recognizably from Ardour, but the mixer section is all Harrison, with built-in EQ, compression/limiting, a K-meter, and a very cool saturation control. The sound quality is remarkable to my ears, and I've begun to use Mixbus as the master mixer in my workflow. I'm not exaggerating when I claim that everything I record elsewhere is significantly improved by remixing it in Mixbus. Other reviewers have fallen all over themselves with praise for the program, and I'll willingly join the crowd. For its relatively low price—miniscule when compared to Harrison's hardware—there is no better deal in the indispensable Linux audio arsenal. If you intend to do serious mixing then you need Mixbus.
The Mixbus development plans include the adoption of features from Ardour3, including a complete suite of MIDI functions. With those extensions Mixbus may well become one of the most powerful DAWs on any platform. These are exciting times for the serious Linux-based recordist.
Outro
There are more attractions on the tour, but we've run out of room for them this time. Join me next week for Part 2 of this article as I finish this short stroll through the land of the Linux DAW. I'll also introduce an upcoming program that may have a profound influence on Linux audio applications development. Or it may not. Tune in next week to catch the buzz.
Brief items
Quotes of the week
LibreOffice 3.6 released
The Document Foundation has announced the release of LibreOffice 3.6, with lots of new features. "Wherever you look you see improvements: a new CorelDRAW importer, integration with Alfresco via the CMIS protocol and limited SharePoint integration, color-scales and data-bars in spreadsheet cells, PDF export watermarking, improved auto-format function for tables in text documents, high quality image scaling, Microsoft SmartArt import for text documents, and improved CSV handling. In addition, there is a lot of contributions from the design team: a cleaner look, especially on Windows PCs, beautiful new presentation master pages, and a new splash screen." More information can be found in the new feature summary and release notes. In addition, Michael Meeks has put together a "behind-the-scenes" view of 3.6 development, including information on dead code removal, build system improvements, more unit tests, and so on.
PythonOnWheels announced
PythonOnWheels, a new generative Web framework built for Python, has been announced. The author admits "I know what you are thinking: 'What the world doesn't need are more lawyers and python web frameworks'", but evidently found existing frameworks either too big or too small. The new framework offers an intentionally Ruby-on-Rails-like MVC feature set.
MySQL Connector/Python 1.0.5 beta available
A new version of the Python database driver for MySQL has been released. Version 1.0.5 is a beta not yet ready for production environments, but it introduces several new features. Included among the changes are support for fractional seconds in time values, the ability to reconnect to a server with configurable retries and delays, and "descriptive error codes for both client and server errors in the module errorcode." Not all changes are backward-compatible, however.
Binutils/gas/ld port for ARM's new 64-bit architecture, AArch64
ARM has announced the release of ports of binutils, gas, and ld for its AArch64 64-bit architecture. Although the company cautions that the tools are not yet complete, it does state that "we believe that the code is now in a state where it is worth starting the process of a public review." We may have been a bit late in picking this up; hopefully it still registers as good news....
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (August 7)
- Haskell Weekly News (August 1)
- Mozilla Hacks Weekly (August 2)
- OpenStack Community Weekly Newsletter (August 3)
- Perl Weekly (August 6)
- PostgreSQL Weekly News (August 7)
- Ruby Weekly (August 2)
McCann: Cross Cut [the future of Nautilus]
GNOME developer William Jon McCann has posted a lengthy article on recent work done with the Nautilus file manager and where that utility is going. "Nautilus was a bit of black sheep among the GNOME 3 core applications. It had a design that grew organically over many years and didn’t really seem to fit in any more. In bringing it back up to par we now have things like a much improved and space efficient maximized window state, a more consistent menu layout and behavior, more consistent use of icons, and a more GNOME 3 style pathbar and toolbar."
Day: GNOME OS
On his blog, Allan Day has posted an overview of GNOME OS with a description of what it is (and isn't) based on discussions at the recently concluded GUADEC. "Many of the things that we want to do as a part of GNOME OS are old ideas that have been around in the GNOME project for a really long time. The aspirations that are driving this process include things like providing a better experience for application developers, automated testing, sandboxed applications and broad hardware compatibility. While each of these goals could be pursued independently, there are enough interconnections between them to make a holistic plan worthwhile. Yes we could call the initiative something else, but GNOME OS has stuck, and it kinda fits (as I hope to explain a bit better below)."
Page editor: Nathan Willis
Announcements
Brief items
DECLARATION of INTERNET FREEDOM
The DECLARATION of INTERNET FREEDOM is gathering signatures from organizations that support Internet Freedom. "We believe that a free and open Internet can bring about a better world. To keep the Internet free and open, we call on communities, industries and countries to recognize these principles. We believe that they will help to bring about more creativity, more innovation and more open societies." (Thanks to Paul Wise)
Articles of interest
With anti-shill order, Google/Oracle judge enters "uncharted territory" (ars technica)
Ars technica looks at an interesting order made by Oracle v. Google judge William Alsup (Groklaw also covers the order). In it, he asks both parties to produce, by August 17, a list of "print or internet authors, journalists, commentators or bloggers who have and/or may publish comments on the issues in this case" that "received money (other than normal subscription fees) from the party or its counsel". "'I wonder if it produces too much information,' [Public Citizen attorney Paul Alan] Levy said. If taken literally, Google and Oracle could produce an extraordinarily long list of names, most of whom have only tangential connections to the software giants. Levy notes that the firms are not required to give details on how and why the funds were provided—the kind of context that would be needed to figure out which relationships raised ethical questions." One suspects that one or both parties will appeal the order.
SCO files for chapter 7 (Groklaw)
Groklaw reports that the SCO group has filed for chapter 7 liquidation. "I will try my best to translate the legalese for you: the money is almost all gone, so it's not fun any more. SCO can't afford Chapter 11. We want to shut the costs down, because we'll never get paid. But it'd look stupid to admit the whole thing was ridiculous and SCO never had a chance to reorganize through its fantasy litigation hustle. Besides, Ralph Yarro and the other shareholders might sue. So they want the litigation to continue to swing in the breeze, just in case."
Calls for Presentations
Call For Proposals XDC2012
The 2012 X.Org Developers Conference takes place in Nürnberg, Germany, September 19-21. The call for proposals ends August 15. "While any serious proposal will be gratefully considered, topics of interest to X.org and FreeDesktop.org developers are encouraged."
3rd Call For Papers, Tcl'2012
The 19th Annual Tcl/Tk Conference will take place November 12-16, 2012 in Chicago, IL. The call for papers ends August 27. "The program committee is asking for papers and presentation proposals from anyone using or developing with Tcl/Tk (and extensions)."
Upcoming Events
LF Announces Automotive Linux Summit
The Linux Foundation has announced the keynote presentations for the Automotive Linux Summit, taking place September 19-20, 2012 in Gaydon/Warwickshire, UK. "Attendees will collaborate on how to use Linux and open source software in automotive applications ranging from in-vehicle, on-board systems to cloud solutions for vehicle-to-vehicle and vehicle-to-infrastructure communications."
First-Ever Korea Linux Forum
The Korea Linux Forum will take place October 11-12, 2012 in Seoul, South Korea. "The Korea Linux Forum will bring together a unique blend of top regional and international talent, including core kernel developers, to collaborate with software developers in Korea and increase participation. It is designed to facilitate in-person collaboration and to support future interaction between Korea and other Asia-Pacific countries and the rest of the global Linux community."
Events: August 9, 2012 to October 8, 2012
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| August 8–10 | 21st USENIX Security Symposium | Bellevue, WA, USA |
| August 18–19 | PyCon Australia 2012 | Hobart, Tasmania |
| August 20–22 | YAPC::Europe 2012 in Frankfurt am Main | Frankfurt/Main, Germany |
| August 20–21 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
| August 25 | Debian Day 2012 Costa Rica | San José, Costa Rica |
| August 27–28 | XenSummit North America 2012 | San Diego, CA, USA |
| August 27–28 | GStreamer conference | San Diego, CA, USA |
| August 27–29 | Kernel Summit | San Diego, CA, USA |
| August 28–30 | Ubuntu Developer Week | IRC |
| August 29–31 | 2012 Linux Plumbers Conference | San Diego, CA, USA |
| August 29–31 | LinuxCon North America | San Diego, CA, USA |
| August 30–31 | Linux Security Summit | San Diego, CA, USA |
| August 31–September 2 | Electromagnetic Field | Milton Keynes, UK |
| September 1–2 | Kiwi PyCon 2012 | Dunedin, New Zealand |
| September 1–2 | VideoLAN Dev Days 2012 | Paris, France |
| September 1 | Panel Discussion Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 3–8 | DjangoCon US | Washington, DC, USA |
| September 3–4 | Foundations of Open Media Standards and Software | Paris, France |
| September 4–5 | Magnolia Conference 2012 | Basel, Switzerland |
| September 8–9 | Hardening Server Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 10–13 | International Conference on Open Source Systems | Hammamet, Tunisia |
| September 14–16 | Debian Bug Squashing Party | Berlin, Germany |
| September 14–21 | Debian FTPMaster sprint | Fulda, Germany |
| September 14–16 | KPLI Meeting Indonesia Linux Conference 2012 | Malang, Indonesia |
| September 15–16 | Bitcoin Conference | London, UK |
| September 15–16 | PyTexas 2012 | College Station, TX, USA |
| September 17–19 | Postgres Open | Chicago, IL, USA |
| September 17–20 | SNIA Storage Developers' Conference | Santa Clara, CA, USA |
| September 18–21 | SUSECon | Orlando, FL, USA |
| September 19–20 | Automotive Linux Summit 2012 | Gaydon/Warwickshire, UK |
| September 19–21 | 2012 X.Org Developer Conference | Nürnberg, Germany |
| September 21 | Kernel Recipes | Paris, France |
| September 21–23 | openSUSE Summit | Orlando, FL, USA |
| September 24–25 | OpenCms Days | Cologne, Germany |
| September 24–27 | GNU Radio Conference | Atlanta, GA, USA |
| September 27–29 | YAPC::Asia | Tokyo, Japan |
| September 27–28 | PuppetConf | San Francisco, CA, USA |
| September 28–30 | Ohio LinuxFest 2012 | Columbus, OH, USA |
| September 28–30 | PyCon India 2012 | Bengaluru, India |
| September 28–October 1 | PyCon UK 2012 | Coventry, West Midlands, UK |
| September 28 | LPI Forum | Warsaw, Poland |
| October 2–4 | Velocity Europe | London, England |
| October 4–5 | PyCon South Africa 2012 | Cape Town, South Africa |
| October 5–6 | T3CON12 | Stuttgart, Germany |
| October 6–8 | GNOME Boston Summit 2012 | Cambridge, MA, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
