LWN.net Weekly Edition for May 3, 2012
An uphill battle for LibreOffice
LibreOffice (LO) keeps chugging along, with another bug fix release on May 2. Meanwhile, Apache OpenOffice (AOO) nears its first release and is starting to plan for what comes after AOO 3.4. But a recent blog post by developer Michael Meeks highlights a special challenge faced by LO: making a name for itself.
The OpenOffice "brand" has been a successful one. Many users have heard of it as an alternative to the proprietary office suites (notably Microsoft Office) and that is where they turn when they are looking. But, as Meeks points out, there has been no OpenOffice release in 16 months or so, and the features of the suite have been essentially frozen for an additional six months. Largely because of the brand, though, "users are still downloading this increasingly old and creaky release at top speed", Meeks said.
He also put together a feature comparison that, not surprisingly given the source, shows LO with a substantial feature lead. One would guess that AOO partisans might find things to quibble with in that chart, but it isn't grossly inaccurate by any means. Because LO didn't suffer from some of the impediments that have stood in the way of AOO progress—Oracle's indifference, followed by the move to Apache, which necessitated a lot of changes—it has surged ahead feature-wise, and quite possibly community-wise as well.
One place it hasn't made its mark, however, is in the name recognition arena. Linux users can be forgiven for wondering what the fuss is about given that most distributions switched to LO more or less immediately after it was first released. But, as we are reminded ad nauseam by the media, Linux desktop users make up a tiny fraction of the market. For good or ill, to be a successful player in the free office suite world, Windows (and, increasingly, Mac OS X) is where the battle will be won or lost.
That's not to say that LO needs to "overtake" OpenOffice in order to be successful, but its developers and backers want to see it have a significant presence. That's perfectly understandable, but it will be something of an uphill battle now that there soon will be a viable successor for the OpenOffice brand. In fact, as Meeks notes, it's already been an uphill battle even without a viable competitor.
Brand recognition is a tricky problem to overcome. As we have seen over the years, technical merits are only a limited factor in which brands come out on top and which fall by the wayside. While LO may currently have features that AOO lacks (and vice versa, but the problem is mitigated for LO to some extent by the permissive license on AOO code) that gap may shrink over time. In a year or two, it's possible that there may be two roughly equivalent free software office suites supporting the same data formats and incorporating most of the same features.
Beyond the existing feature sets, many of the differences between LO and AOO are largely invisible to users. Most users don't choose their software based on its license—perhaps unfortunately—and even if they did, it's not at all clear whether copyleft or permissive licensing would be more attractive. The code cleanups and other streamlining that LO has done make the code easier to work with, though some in the AOO camp dispute that; in any case, that kind of work doesn't directly show itself to users. That leaves brand identity as the main distinguishing element.
Now that the vote has passed, AOO 3.4 should be officially released any time now. In addition, AOO mentor Ross Gardler thinks the project is well on its way to graduating from the Apache Incubator to become a full-fledged Apache project. Once that initial, largely procedural hurdle has been cleared, it will be interesting to see where things go.
For one thing, regular AOO releases mean that security problems can be quickly addressed with actual binary packages, rather than by releasing patches that users are expected to build for themselves. The long-awaited code drop of IBM's Symphony fork appears to be imminent as well. That should bring a whole slew of features that will be of interest to users. While some have questioned whether AOO is really a project dominated by one large company, IBM, Gardler does not believe that is the case—which bodes well for the project as a whole.
The Symphony features, as well as the "line caps" and other drawing improvements that come with AOO 3.4, are likely to be incorporated into LO as well. The real question is how much, if any, of the improvements that LO makes can be incorporated into AOO. The Apache license will allow things to flow into LO, but even the dual-licensed (LGPL/MPL) portions of LO may not be acceptable for an Apache project. But, in terms of differentiating itself, LO would do well to come up with its own new features. One "killer feature" might well be enough to start the "brand recognition" ball rolling—adding a few more might go a long way toward erasing AOO's lead.
Beyond that, though, it would seem that LO and the Document Foundation have their work cut out for them just in terms of getting the message out about what Meeks calls "the new, exciting, much more featureful, and fun suite". His post was a clear call for LO fans to assist in the effort to raise the profile of the LO brand. That too will be interesting to watch.
The plumbing layer as the new kernel
Last week's Unix wars article discussed the long-feared possibility of Linux fragmenting into a number of incompatible—and weaker—systems. Before leaving that subject behind entirely, it is worthwhile to take a look at an effort that is intended to be a unifying force in the Linux community: the growth of a well-defined and actively developed "plumbing layer." By wrapping the kernel in layers of higher-level software, the plumbers hope to create a new core that will function similarly across all distributions. Thus far, we have not seen clear evidence that all distributions want to play along, though.
The kernel has long been the unifying force across Linux distributions. Especially in recent years, as distributors have worked harder to stay close to the mainline, the kernel is one thing that could be counted on to be nearly the same on any system. The layers on top of the kernel can vary widely between distributions, though; some, such as Android, have replaced almost everything above the kernel with their own code. That creates differences between distributions that can make life harder for users, developers, and for the maintainers of the distributions themselves.
There have been attempts in the past to create a common distribution core at a higher level than the kernel. The Linux Standard Base is one such endeavor, but, despite years of effort and lots of resources poured into it, the LSB has never been hugely helpful. Its lowest-common-denominator focus left much of the system uncovered, and its stated intent of making life easier for distributors of binary software has failed to create excitement in the community. Another effort was the ill-fated United Linux project; it may well have failed in any case, but it was certainly doomed by having the SCO Group as one of its founding members. Since then, there have been few overt efforts to create a common distribution core with the arguable exception of Debian, which has always, to some extent, seen itself as being that core.
What we have seen in recent years has been an effort to bring together and organize developers working in the "plumbing layer" that wraps the kernel. This layer is not precisely defined; it includes the bootstrap and initialization systems, udev, communications mechanisms like D-Bus, and related utilities. Interestingly, the C library—the layer that intermediates most access to the kernel—has typically not been a part of that group; perhaps that situation will change as a result of the more community-oriented focus in the glibc project.
The Linux Plumbers effort was never explicitly envisioned as the creation of a new common distribution core; instead, it was just a way to help developers work together and make the system better. Recently, though, things have started to change; in particular, much of the work on systemd has been justified with the claim that it would work to reduce fragmentation between distributions. Systemd is tightly tied to the Linux kernel—it will not run anywhere else—and is meant to integrate many of the kernel's features into a tighter, well-functioning, and quickly evolving system. That system then becomes the core of a modern, fast-moving distribution.
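As a small illustration of what that tighter integration looks like in practice, consider the declarative unit files that systemd uses in place of traditional init scripts. This is a hypothetical sketch (the service name and binary path are invented for the example, not taken from any real distribution):

```
# example.service -- a hypothetical systemd unit file.
# Instead of a shell script, the service is described declaratively;
# systemd itself handles daemonization, supervision, and ordering.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Uniform, kernel-aware machinery of this sort (socket activation, cgroup-based process supervision, and so on) is what the plumbing developers hope every distribution will eventually share.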
A number of people have commented on this trend; see this recent remark from Martin Langhoff or this Google+ discussion, for example. There are some clear benefits from moving in this direction, but some challenges as well.
One challenge is ABI stability. Kernel developers try hard to ensure that they do not break applications as the kernel evolves. That policy is unlikely to change anytime soon, but the truth is that there are some in our community who would like to move the stability commitment further out, allowing the ABI between the kernel and the plumbing layer to change in incompatible ways. Such changes should not bother users as long as the kernel and the plumbing layer change at the same time, but they would be bothersome indeed if only one of those components changed, or for anybody who is not using the "standard" plumbing code. The ABI stability requirements have proved frustrating to plumbing-layer developers in the past; one can only expect that to happen again in the future.
The other problem is that there will be disagreements on what the larger core should look like; one need only read the many systemd discussions to get a sense for how deep those disagreements are. Assuming that we have achieved all we can through the reproduction of classic Unix systems, there are no precedents for what the Linux system of the future should look like. There are no POSIX standards to guide development. So developers are breaking new ground, and that will always lead to disagreement and conflict.
Add to this disagreement a certain degree of discomfort among developers who feel that they are losing influence over the direction things are going. Debian developer Marco d'Itri expressed it clearly:
There is no special effort to exclude anybody from the development of the plumbing layer. But the usual rule of free software development holds: those who are doing the work have the biggest say in the directions it takes. For better or for worse, many of the developers doing work at the plumbing level have concentrated into a relatively small number of companies. That leaves a number of distributions out in the cold.
It also leaves them with a choice: sign on to this new, unofficial core distribution, or continue to go it alone. There are plenty of examples of distributions that have followed their own path for many years; Slackware is the classic example. Gentoo is another distribution that has built much of its own core infrastructure, including its own little-known init system. It can be done, but there is a real cost in the form of duplicated effort and an inability to benefit from the work done by others.
Predicting how this situation will turn out is not easy. This is a time of rapid change in both the community and the wider industry; it's not clear what is going to work in the long run. But the vision of a more tightly coupled system that can enable Linux distributions to evolve more quickly has some appeal. If these developers can pull it off, they may help to ensure that a unified Linux plays a major role for years to come.
A report from the Linux Audio Conference
[ Author's note: In contrast to my usual style, the following article is a largely non-technical account. Future articles will focus on the configuration and use of particular pieces from the Linux audio applications stack. Meanwhile I hope you enjoy this report, my first for LWN.net. ]
My jet lag is gone, I've finally come back to ground, and at last I can start to sort out my experiences at the 10th annual Linux Audio Conference, held this year at CCRMA, the Center for Computer Research in Music and Acoustics at Stanford University in Palo Alto, California. It was the first time the event had been held in the States, and the organizers obviously intended to make a good impression. I'll cut to the spoiler right now to let you know that they succeeded, with honors.
Day 1: Thursday, April 12
On the first day I suffered from the predictable problems with coordinating my train schedules, but I arrived in time to hear the latter half of Harry van Haaren's presentation of his ongoing development of Luppp, his very cool looping sequencer. One year ago I watched Harry demonstrate his prototype at 3 AM in a Maynooth hotel room, but what a difference a year has made. Luppp is rapidly becoming *the* loop sequencer to watch, and Harry has big plans for it. We may even see it evolve into a plugin (LV2 maybe?) or perhaps it will become a part of Rui Nuno Capela's QTractor. If Harry's current rate of progress continues, Luppp will be a much-enhanced program a year from now. But right now you can pick up the code, check it out, and let Harry know what you'd like to see in it. You need only ask nicely.
Alas, I missed IOhannes zmölnig's presentation on the IEM Demosuite [PDF] - a large-scale "jukebox" for a concert venue - and Flavio Schiavoni's paper on the Medusa [PDF] network music distribution system. That's pretty much the way of things at this conference - it's simply impossible to take in everything, even when time permits. Nevertheless, I was able to attend most of the presentations I especially wanted to see. And I did finally get to chat with IOhannes, a major developer in the world of Pure Data and its GEM graphics library.
My timing was more fortunate in the afternoon sessions. I caught the tail end of Joachim Heintz's presentation on Csound as a realtime application, and I was able to sit in on most of Steven Yi's report on developing Csound for the Android mobile device. Both presentations were particularly timely - both Csound and development for mobiles were well-represented throughout the conference, indicating the great importance of Csound in the Linux audio world and the rapidly increasing attention given to general audio development on the new devices. As Steven emphasized, the Android OS has some problematic features at this time - its inherent audio latency is perhaps best termed "abysmal" - but the market has spoken and it wants cool audio apps on its devices. Given enough pressure from users (hint, hint), the latency issue can be resolved. Meanwhile, interested programmers should not hesitate to begin their projects for such platforms.
In the first of the final afternoon sessions Robin Gareus presented an update of his work with integrating a video timeline into the Ardour3 digital audio workstation. I've followed Robin's work with his xjadeo video monitor, and I've built Ardour with his timeline patches. His software works, and it is an exciting experience to see a synchronized video timeline in Ardour. However, users should not expect an early delivery date for an official integration. Paul Davis, Ardour's chief designer, has stated that while he favors Robin's patches the stabilization of Ardour3 must come first. Meanwhile, we are free to apply Robin's patches to the Ardour codebase ourselves. Just remember to report your experience back to Robin.
I was eager to hear Yann Orlarey's presentation on INScore, "an environment for the design of live music scores". This project has great potential: it allows realtime manipulation of a running score, and it can use a variety of file types (e.g. sound files, images, text) as score elements. INScore is rather hard to describe until you've seen it in action, and I had hoped to watch Yann thrill me with one of his typically professional reports. Alas, his efforts were foiled by not one but two uncooperative machines. However, Yann is a seasoned presenter, and we still got some notion of INScore's possibilities from his verbal description alone.
The final session of the day was presented remotely by Ivica Ico Bukvic via a Skype audio/video connection. Ico reported on his work and experience with L2Ork, his laptop orchestra at Virginia Tech. I've seen and heard his groups in person, and I can testify to the enormous efforts that have gone into their development. The L2Ork group has been performing and touring extensively during the past year, giving Ico a rich source of experience and ideas for improving his own groups and the general model of the laptop orchestra.
Incidentally, I must mention that Ico's remote session was flawlessly transmitted and received, thanks to the terrific work by the conference's audio/video crew headed by Jörn Nettingsmeier and Robin Gareus. Jörn is a veteran of previous conferences, a consummate professional who insists on high standards in audio and video presentation. The entire conference was videotaped - the LAC2012 site has announced that recordings and pictures are on-line now - and all sessions were available to remote viewers via realtime video feeds and over IRC.
Day 2: Friday, April 13
Oh no, Friday the 13th! As far as I could tell, nothing out of the ordinary happened, but then the conference was populated by folks already a little out of the ordinary. The day's presentation schedule was neatly organized, starting with a series of reports on Linux in the deployment of multichannel/multispeaker systems, with an emphasis on the use and development of Ambisonics. Following those reports Lawrence Fyfe presented his team's work on JunctionBox [PDF], a very cool toolkit for designing control interfaces for Android devices. Next up, Edgar Berdahl and Julius Smith introduced their Synth-O-Modeler [PDF], a compiler "for open-source sound synthesis using physical models". Alas, I had to miss these last two presentations while I conducted a workshop on the use of Jean-Pierre Lemoine's AVSynthesis, an environment for combining sound and music composition with 3D graphics processing. My presentation focused on using the program's powerful audio capabilities provided by the Csound API. The final portion of the day included a series of reports on projects built around the FAUST DSP programming environment (more about FAUST later), two audio spatialization demonstrations in CCRMA's Listening Room, and Andrew Allen's workshop on his GRE [video] (Graduate Rhythmic Examination) - which can be described as both software and composition.
Day 3: Saturday, April 14
Saturday's schedule began with Jörn Nettingsmeier's excellent report on with-height surround sound production with the Ambisonics system. I expect such a presentation to be detailed with considerable mathematics that usually leave me mentally stunned, but Jörn is a most engaging speaker who illustrates theory with real-world practice. Such a presentation method gets more practical information across to the interested composer and musician - e.g. myself - who wants to advance to multichannel/multispeaker output and arrangement. You can check out Jörn's presentation yourself, and I think you'll agree when I say "Jörn, write a book!".
Next up, my keynote address. I had been asked to summarize my experience in Linux audio and to comment on events I considered to be of outstanding importance to that world. I've been using Linux since 1995, with particular emphasis on its use in music and sound composition, though at that time only a few applications were mature enough to compete with similar offerings on other platforms. Without going into the details here - you can view the keynote speech on-line - it's sufficient to say that a lot has changed, in both the quantity and quality of the base sound system and the Linux audio applications stack. However, I refused to make predictions - one simply never knows what might happen - and instead I focused on the understanding and generosity I experienced from so many amazing people as I made my way into the vast world of Linux/UNIX. I was truly ignorant of the simplest things, but I was willing to learn and to put time into basic research before asking basic questions. The willingness paid off, and eventually I was able to make some minor contributions of my own. That history culminated in a book (The Book Of Linux Music And Sound) and a career as a specialized journalist, which then launched my life on to a new path that has led me to that keynote address (and a new gig with LWN!). But as I said, you can follow the entire address on-line. Now, on to the next presentations.
Conference organizer Fernando Lopez-Lezcano reported on a unique project that uses JACK with UDP to send audio as packets over a standard packet-switching network. This project has significance for audio professionals, and a hardware solution was demonstrated that could effectively replace bulky "snakes", i.e. bundles of audio lines connecting stage-located devices to an off-stage mixing desk. Fernando's presentation was followed by a remote transmission from Fons Adriaensen on a method of "controlling adaptive resampling", but alas, another commitment took me away from Fons's talk only a few minutes after he started.
I was able to attend all three of the final presentations for Day 3. These reports included an update on DSP libraries for FAUST, a demonstration of the use of the Pure-Data-based PCSlib in a touch-sensitive UI for music, and an analysis of methods used in one of the compositions performed in the evening concert. All the presentations were interesting to me, but Julius Smith's work with FAUST was truly inspiring. FAUST has great potential for anyone who would like to learn about DSP programming, with the added benefit of multiple output targets such as the LADSPA (Linux Audio Developers Simple Plugin API), LV2, and VST plugin formats. I don't claim that anyone can become a DSP wizard just by using it - you'll get more out of it if you know how to put more into it - but FAUST does provide a new-user-friendly entry into a programming domain often equated with a requirement for deep math skills. The third and final presentation of the day was from composer Krzysztof Gawlas, and although it was equally inspiring I'm going to reserve further comment on Krzysztof's work until my report on the conference concert series (see below).
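To give a flavor of why FAUST lowers that barrier, here is a minimal, hypothetical FAUST program (my own sketch, not taken from any of the talks) implementing a mono gain stage; the slider declaration automatically becomes a control in whatever UI the chosen target provides:

```
// gain.dsp -- a minimal FAUST sketch: a mono gain stage.
// hslider(label, default, min, max, step) becomes a GUI control
// in the code generated for the chosen target.
gain = hslider("gain", 0.5, 0, 1, 0.01);

// `_` is the identity signal (the input); multiply it by the gain.
process = _ * gain;
```

Helper scripts such as faust2jack can turn a description like this into a standalone JACK application, and others target the plugin formats mentioned above.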
Day 4: Sunday, April 15
Sunday's schedule included presentations on the use of C++ in the development of multichannel audio applications and on the use of BeagleBoard hardware as an audio processor. The Minivosc "virtual oscillator driver for ALSA" was also introduced, while Joachim Heintz conducted a workshop on using Csound in live performance. I missed all of those events, but I had a good excuse. I had received a message from Bill Schottstaedt letting me know that he'd be at the Sunday coffee starter. I had hoped to meet Bill in person - at one time I worked a lot on the GUI code for his SND audio editor/processor, and Bill's assistance was indispensable. My LISP skills - well, Guile skills, to be accurate - were all but non-existent, but Bill must have figured that if I was willing to learn what to do then he'd be willing to help me learn it. I did some neat things with SND, and I acquired a deep respect for LISP and its progeny. I simply wouldn't have got far at all without Bill's help.
So, the opportunity to meet him got me from Oakland to Palo Alto in time for the morning meet-up, where at last I was introduced to the man himself. For the benefit of readers who may not know about Bill Schottstaedt, a brief summary of his contributions to the development of music made with computers would include numerous compositions written for the combination of the KL10 computer and the Samson Box synthesizer; a collection of open-source music and sound software that includes CLM (Common LISP Music), CMN (Common Music Notation), and the SND environment for audio processing and composition, all currently maintained and in daily use around the world; and a variety of seminal articles published in MIT's Computer Music Journal (among others). Bill has a long and productive association with CCRMA, and I was interested in his accounts of his experiences there. As our time passed, we settled in a lounge at CCRMA where we were joined by more conference members, including Dr. John Chowning, the chief designer of FM synthesis (who also happens to be the founder of CCRMA). The conversation was much enriched with anecdotes and stories of the Center's history and the various amazing personalities that have populated - and continue to populate - its hallways and classrooms. Alas, the afternoon passed too quickly, but when the group finally dispersed I think we were all a bit "intoxicated by reason of fascinating discussion".
As the group disbanded I had the further pleasure of a conversation with Oscar Pablo di Liscia and Juan Reyes. I was familiar with Oscar's excellent book Generacion y procesamiento de sonido i musica a traves del programa Csound, but I was nicely surprised when he presented me with a copy of his latest publication Musica y Espacio: Ciencia, tecnologia, y estetica, a collection of articles and essays on the musical aspects of space and the spatial aspects of music. The gift was most timely after a discussion with Aaron Heller regarding an Ambisonics installation for my studio. Composer/researcher Juan Reyes is another one of those remarkable persons CCRMA seems to attract. I knew him only by name until this conference - now I've had the pleasure of his conversation and his music, good ways to get to know good people.
Something must have been in the water bottle in that lounge area, because later another random group gathered there. This group included conference Tonmeister Jörn Nettingsmeier and CCRMA DSP wizard Julius O. Smith, but the mood now had definitely turned towards the musical. Julius found his well-worn Ramirez classical guitar, Jörn pounded out rhythms on an empty water jug, I sang, and everyone else grabbed what they could find to beat, pluck, or breathe into. Given that this lounge is in a building dedicated to research into music and acoustics, it didn't take long for everyone to be playing on some kind of instrument or something that resembled some kind of instrument. It suffices to say that hilarity ensued until we all left for the after-conference dinner/celebration for even more talk and a last good time to bring this wonderful event to its conclusion.
LAC2012: The Music
If I counted correctly, four distinct music venues were organized for the conference. The venues included a concert series spanning three evenings, a continuous cycle of various pieces played in the Listening Room, two audio/video installations, and the hallowed Linux Sound Night. I was able to attend all the concerts, I got to hear some of the pieces played in the Listening Room, and I had to miss the Sound Night. I can attest to the musical value of the concert series - the level of professionalism was high, and it was certainly obvious that Linux can be used to make music these days. More proof could be found in the pieces played in the Listening Room, and if you still need convincing, check out the Linux Sound Night videos recorded by Rui Capela. While it's true that we don't have Lady Gaga in our camp, we do have Deb & Duff, aka Juliana Snapper and Miller Puckette. Yes, the same Miller Puckette of Max/MSP and Pure Data fame. Frankly, all props to Lady G, but I think we got the better bargain.
I mentioned earlier that I had some comments to make regarding Krzysztof Gawlas's report on the making of his piece Rite Of The Earth [PDF]. His presentation focused on the various methods used to create his sonic resources, which was indeed all very interesting, but it did not prepare us for the beauty of his composition. Rite Of The Earth is not solely a Linux-based production, but free and open-source software figured significantly in the making of this music, and I don't hesitate to recommend the piece to my readers.
Incidentally, I should point out that there was wide variance in the represented musical styles. Everything from severely atonal composition to basic rock and dub - and even blues and country music - was represented, and I felt strongly that Linux audio software has definitely come of age. Again I say that the music was wholly engaging - which strikes me as rather the point of the whole thing - and I can only wait to hear what wonders will be produced by our talented Linux-based musicians over the next year.
Closing Remarks
So give three cheers and one cheer more for conference organizers Fernando Lopez-Lezcano and Bruno Ruviaro. The entire event was one smooth-running machine from start to finish (though the ever-mobile organizers might not have seen it that way), and I think all the attendees were happy and content by the time it closed. That smooth surface hid the details of what must have been an enormous effort, and I can only say "Thanks again!" to Nando and Bruno for their dedication to making LAC2012 a valuable and memorable experience.
The conference had a few topical biases, especially with regard to Csound, mobile devices, FAUST, Pure Data (Pd), and Ambisonics. That is not a complaint, merely an observation that such topics are timely and important to the advance of Linux audio development. Would I have liked to see coverage of other major topics? Of course, but there's only so much time, and the organizers must have had some rare fun juggling so many schedules and appointments.
As always, I'm revived and revivified after such an incredible meeting. LAC2012 was further proof of the viability of Linux audio development - the presence of many younger developers was most heartening, especially since they will define the future of the domain. I saw and heard much interest and open-mindedness towards all aspects of audio development, and if I may allow myself a single prediction I'll claim that Linux sound and music software will continue to thrive and will grow in its appeal to new and not-so-new users. We have commercial interest from prestigious developers such as Loomer Productions, Harrison Consoles, Pianoteq, and the Guitar Pro group, and I expect we'll see a few more significant commercial entries arrive later this year. Of course free and open-source development will continue to drive this trend and others. For my part, I am most excited to see what's coming down the road. Whatever it is, it's sounding pretty good from here.
The 11th annual Linux audio conference will be hosted by IEM in Graz, Austria. We hope to see you there.
Security
Cybersecurity and CISPA
Depending on whom you listen to, "cybersecurity" is either an enormous national security concern or a largely overblown issue promulgated by those with something to gain. There is little question that there are security threats to computers that emanate from "cyberspace"—though that term might best be relegated to the science fiction where it originated—and that some of those threats could cause serious harm to the infrastructure of the internet and to systems connected to it. But, like most internet "protection" laws, the proposed US "Cyber Intelligence Sharing and Protection Act" (CISPA) does little to actually solve the problem it is meant to address and is, instead, an enormous overreach into the private communications of internet users.
The ostensible purpose of CISPA is to facilitate the sharing of network traffic information between US government agencies and various US companies to assist in investigating and thwarting internet attacks. While that may sound relatively harmless—possibly even beneficial—the devil, as always, is in the details. In this case, the details aren't very clear; as the bill is written it could allow for nearly limitless internet data collection, with provisions to share that information with the US government, all with little or no oversight. It is, in short, an enormous circumvention of the usual protections against warrantless wiretapping (not that we haven't seen those protections ignored before, of course).
Part of the problem stems from overly vague language in CISPA. The bill only requires that cybersecurity or national security be "one significant purpose" of the government's use of the data being shared. That leaves a lot of wiggle room, not only because the two terms are not well-defined, but also because it allows the use of the data for non-security purposes if some kind of security tie can be made. Earlier versions of the bill specifically mentioned things like copyright enforcement as one of the things that the data could be used for.
CISPA would also shield companies (like ISPs or web sites) from civil and criminal liability for any "good faith" sharing of data. That would severely limit the legal recourse for users harmed by inappropriate data collection or sharing. The government is also shielded from legal recourse unless there is intentional or willful mishandling of the data—notably, negligent handling of the data is protected.
As we have seen time and time again (e.g. the PATRIOT Act, Digital Millennium Copyright Act (DMCA), the Computer Fraud and Abuse Act (CFAA), etc.) the vagueness of computer-related statutes makes them likely to be abused, either by prosecutors, government agents, companies, or private parties, to further aims that are arguably unrelated to the intent of the law—or at least its stated intent.
There have been claims that entering incorrect information in the registration for a web site can be construed as "unauthorized access" under the CFAA for example. Unauthorized access is one of the threats specifically mentioned by CISPA. That could potentially turn anyone who registered a false name or birth date with a social network (or violated the terms of service of some web site) into a cybersecurity threat under the law, which would allow the collection and sharing of their internet traffic. Proponents claim it would never be used that way, of course, but those same claims were made for the CFAA and others.
In an effort to clarify what else the government could use any of the collected data for, the US House approved an amendment to CISPA before passing the measure. Instead of being able to use the data for "any lawful purpose" (assuming it was collected and shared due to some tie to cyber or national security), the amendment narrowed it to five separate uses: "cybersecurity, cyber crime, protecting people from harm, protecting children from exploitation, and national security". While that's better, certainly, it enshrines an expansion of CISPA from strictly being about computer security to cover additional illegal activities. That expansion is part of what worried civil liberties organizations (the Electronic Frontier Foundation (EFF), TechFreedom, American Civil Liberties Union (ACLU), Reporters Without Borders, and on and on). CISPA is sold as protecting computers and networks, but stretches further to protecting exploited children and dealing with "cyber crime".
That's not to say that there isn't good reason to fight those kinds of problems, but there are already tools at hand to do so. Part of the selling point of CISPA is that cybersecurity threats are so fast moving that stopping to get a judge to issue a warrant could cause irreparable harm. That may be true, but it may also be less true for some of the other threats now listed in the House version of CISPA. The "extra" threats probably seem like an obvious addition, but they may really just end up allowing carte blanche fishing expeditions in the internet traffic of those suspected of being some kind of security threat.
Normally, it is the role of judges to impartially look at the reasons that law enforcement has for its suspicions before they grant search warrants. That is meant to provide some "checks and balances" in the system. Circumventing that requirement should not be taken lightly as it is only a question of when, not if, these kinds of provisions will be abused. There may be situations where it does make sense to short-circuit the search warrant process (at least for a short period of time), but it's not at all clear that the bill's proponents have clearly thought that out. Instead, it seems like the "threat du jour"; one that Congress must take action on.
The US Senate will also be considering CISPA sometime soon, though the Obama administration has threatened a presidential veto over privacy concerns. That threat isn't being taken very seriously by some, but passage by the Senate is far from assured anyway. That said, it is a worrisome bill and the EFF and others are gearing up to oppose it in the Senate.
If there truly is a need for some kind of sweeping cybersecurity legislation because existing laws cannot handle some violations—something that hasn't been well articulated by proponents—there are a number of steps that could be taken to make CISPA more palatable to civil liberties and privacy advocates. Adding a mandatory judicial review, reducing the scope to the actual problem being addressed, and not giving blanket protection against "good faith" misuse of the data to the government and internet carriers and providers would all be steps in the right direction. Unfortunately, while there have been amendments made, the core problems with CISPA remain.
While it may be tempting to write this off as a "US problem", passage of CISPA is likely to affect internet users worldwide. Large chunks of internet traffic pass through the US, which would make it vulnerable to collection. In addition, many internet services are based in the US, and those US companies might well be asked to hand over data on those in other countries perceived to be security threats. In fact, the supposed intent of CISPA is to protect against threats from "overseas".
In the end, CISPA is a poorly thought out, knee-jerk reaction to a real problem. The scope and severity of that problem is not well understood, however, and there is a burgeoning cybersecurity industry that is, at a minimum, cheerleading for tougher measures like this one. That's not a recipe for good legislation. CISPA is just another in a long line of proposed and enacted legislation with a stated intent that is far different from the language in the bill itself. But it is certainly something to keep an eye on.
Brief items
Security quotes of the week
Auriemma claims that nothing really happens for the first five seconds, but then he lost control of the TV, both manually on the control panel and with the remote. Then after another five seconds, he claims, the TV [automatically] restarts. Then the process repeats itself forever, even after unplugging the TV. Eventually, Auriemma managed to reset the TV in service mode. He writes that users can avoid the situation altogether by hitting ‘exit’ when prompted to ‘allow’ or ‘deny’ the new remote device.
Misconfigured hardware or software causing a denial of service problem? Cyberattack declared!
Anything that seems at all out of the ordinary and you want to pass the buck as quickly as possible? Cyberattack declared!
Fuzzing for Security (The Chromium Blog)
A posting on the Chromium blog describes the project's efforts to do fuzz testing of the browser. "Chrome’s fuzzing infrastructure (affectionately named "ClusterFuzz") is built on top of a cluster of several hundred virtual machines running approximately six-thousand simultaneous Chrome instances. ClusterFuzz automatically grabs the most current Chrome LKGR (Last Known Good Revision), and hammers away at it to the tune of around fifty-million test cases a day. That capacity has roughly quadrupled since the system’s inception, and we plan to quadruple it again over the next few weeks. [...] To appreciate just what that means, consider that ClusterFuzz has detected 95 unique vulnerabilities since we brought it fully online at the end of last year. In that time, 44 of those vulnerabilities were identified and fixed before they ever had a chance to make it out to a stable release." There is mention of pushing the fixes upstream to WebKit and FFmpeg, but there is no mention of whether the ClusterFuzz code will be made available, unfortunately.
The Tor Project's New Tool Aims To Map Out Internet Censorship (Forbes)
The OONI-probe (Open Observatory of Network Interference) is an early attempt to "collect data about local meddling with the computer’s network connections, whether it be censorship, surveillance or selective bandwidth slowdowns." Forbes takes a look at this new effort by Tor developers Arturo Filasto and Jacob Appelbaum. "Tor’s OONI project, funded in part with a grant from Radio Free Asia, isn’t the first to monitor and measure Internet censorship around the world–other projects like the Open Net Initiative, the Berkman Center’s HerdictWeb and Google’s Transparency Report all aim to spot censorship and Internet slowdowns. But unlike those projects, OONI uses only open-source software and plans to make the raw data gathered by its tools public and accessible to any researcher. “This came from a bit of disappointment over the fact that all the existing tools out there for monitoring censorship were either not using open methodologies or not making their data available,” says Filasto, a 21-year old computer science student at Rome’s Sapienza university. “Our goal with OONI is to build that open framework, so that researchers can independently prove that the methodology is valid and repeat the tests.”" (Thanks to Paul Wise)
New vulnerabilities
bugzilla: security bypass/cross-site scripting
Package(s): bugzilla
CVE #(s): CVE-2012-0466 CVE-2012-0465
Created: May 1, 2012    Updated: May 2, 2012
Description: From the CVE entries:
template/en/default/list/list.js.tmpl in Bugzilla 2.x and 3.x before 3.6.9, 3.7.x and 4.0.x before 4.0.6, and 4.1.x and 4.2.x before 4.2.1 does not properly handle multiple logins, which allows remote attackers to conduct cross-site scripting (XSS) attacks and obtain sensitive bug information via a crafted web page. (CVE-2012-0466) Bugzilla 3.5.x and 3.6.x before 3.6.9, 3.7.x and 4.0.x before 4.0.6, and 4.1.x and 4.2.x before 4.2.1, when the inbound_proxies option is enabled, does not properly validate the X-Forwarded-For HTTP header, which allows remote attackers to bypass the lockout policy via a series of authentication requests with (1) different IP address strings in this header or (2) a long string in this header. (CVE-2012-0465)
cifs-utils: file existence disclosure flaw
Package(s): cifs-utils
CVE #(s): CVE-2012-1586
Created: May 1, 2012    Updated: July 16, 2012
Description: From the Red Hat bugzilla:
A file existence disclosure flaw was found in the way mount.cifs tool of the Samba SMB/CIFS tools suite performed mount of a Linux CIFS (Common Internet File System) filesystem. A local user, able to mount a remote CIFS share / target to a local directory could use this flaw to confirm (non) existence of a file system object (file, directory or process descriptor) via error messages generated during the mount.cifs tool run.
gridengine: code injection
Package(s): gridengine
CVE #(s): (none)
Created: April 27, 2012    Updated: May 2, 2012
Description: From the Fedora advisory:
Security update to prevent environment code injection and two other security issues.
imagemagick: code execution
Package(s): imagemagick
CVE #(s): CVE-2012-0259 CVE-2012-0260 CVE-2012-1185 CVE-2012-1186 CVE-2012-1610 CVE-2012-1798
Created: April 30, 2012    Updated: May 19, 2014
Description: From the Debian advisory:
Several integer overflows and missing input validations were discovered in the ImageMagick image manipulation suite, resulting in the execution of arbitrary code or denial of service.
Messaging: unauthorized cluster access
Package(s): Messaging
CVE #(s): CVE-2011-3620
Created: May 1, 2012    Updated: May 2, 2012
Description: From the Red Hat advisory:
It was found that Qpid accepted any password or SASL mechanism, provided the remote user knew a valid cluster username. This could give a remote attacker unauthorized access to the cluster, exposing cluster messages and internal Qpid/MRG configurations.
mozilla: multiple vulnerabilities
Package(s): firefox, thunderbird, seamonkey, xulrunner
CVE #(s): CVE-2011-1187 CVE-2011-2986 CVE-2012-0475
Created: April 27, 2012    Updated: July 23, 2012
Description: From the CVE entries:
Google Chrome before 10.0.648.127 allows remote attackers to bypass the Same Origin Policy via unspecified vectors, related to an "error message leak." (CVE-2011-1187) Mozilla Firefox 4.x through 5, Thunderbird before 6, SeaMonkey 2.x before 2.3, and possibly other products, when the Direct2D (aka D2D) API is used on Windows, allows remote attackers to bypass the Same Origin Policy, and obtain sensitive image data from a different domain, by inserting this data into a canvas. (CVE-2011-2986) Mozilla Firefox 4.x through 11.0, Thunderbird 5.0 through 11.0, and SeaMonkey before 2.9 do not properly construct the Origin and Sec-WebSocket-Origin HTTP headers, which might allow remote attackers to bypass an IPv6 literal ACL via a cross-site (1) XMLHttpRequest or (2) WebSocket operation involving a nonstandard port number and an IPv6 address that contains certain zero fields. (CVE-2012-0475)
nginx: code execution
Package(s): nginx
CVE #(s): CVE-2012-2089
Created: May 1, 2012    Updated: June 21, 2012
Description: From the CVE entry:
Buffer overflow in ngx_http_mp4_module.c in the ngx_http_mp4_module module in nginx 1.0.7 through 1.0.14 and 1.1.3 through 1.1.18, when the mp4 directive is used, allows remote attackers to cause a denial of service (memory overwrite) or possibly execute arbitrary code via a crafted MP4 file.
openstack-nova: denial of service
Package(s): openstack-nova
CVE #(s): CVE-2012-2101
Created: May 1, 2012    Updated: May 4, 2012
Description: From the Red Hat bugzilla:
Dan Prince reported a vulnerability in Nova. He discovered that there was no limit on the number of security group rules a user can create. By creating a very large set of rules, an unreasonable number of iptables rules will be created on compute nodes, resulting in a denial of service.
rubygems: require valid certificates
Package(s): rubygems
CVE #(s): CVE-2012-2125 CVE-2012-2126
Created: May 1, 2012    Updated: September 5, 2013
Description: From the Rubygems history:
This release increases the security used when RubyGems is talking to an https server. If you use a custom RubyGems server over SSL, this release will cause RubyGems to no longer connect unless your SSL cert is globally valid.
samba: privilege escalation
Package(s): samba
CVE #(s): CVE-2012-2111
Created: May 1, 2012    Updated: May 7, 2012
Description: From the CVE entry:
The (1) CreateAccount, (2) OpenAccount, (3) AddAccountRights, and (4) RemoveAccountRights LSA RPC procedures in smbd in Samba 3.4.x before 3.4.17, 3.5.x before 3.5.15, and 3.6.x before 3.6.5 do not properly restrict modifications to the privileges database, which allows remote authenticated users to obtain the "take ownership" privilege via an LSA connection.
spip: multiple vulnerabilities
Package(s): spip
CVE #(s): (none)
Created: April 27, 2012    Updated: May 2, 2012
Description: From the Debian advisory:
Several vulnerabilities have been found in SPIP, a website engine for publishing, resulting in cross-site scripting, script code injection and bypass of restrictions.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.4-rc5, released on April 29. "And like -rc4, quite a bit of the changes came in on Friday (with some more coming in yesterday). And we haven't been calming down, quite the reverse. -rc5 has almost 50% more commits than -rc4 had. Not good." That said, what's going in is mostly fixes; see the announcement for the short-form changelog.
Stable updates: the 3.0.30 and 3.3.4 updates were released on April 27 with the usual set of important fixes.
Quotes of the week
I used to be _special_ dammit. Snif.
Some useful perf documentation
For those who would like more information on how to use the Linux perf subsystem, there is an extensive tutorial posted by Google, written by Stephane Eranian. It probably merits a bookmark for anybody wanting to learn how to do interesting things with perf.
Kernel development news
Better active/inactive list balancing
Memory management is a notoriously tricky task, though the underlying objective is quite clear: look into the future and ensure that the pages that will be needed by applications are in memory. Unfortunately, existing crystal ball peripherals tend not to work very well; they also usually require proprietary drivers. So the kernel is stuck with a set of heuristics that try to guess future needs based on recent behavior. Adjusting those heuristics is always a bit of a challenge; it is easy to put in changes that will break obscure workloads years in the future. But that doesn't stop developers from trying.

A core part of the kernel's memory management subsystem is a pair of lists called the "active" and "inactive" lists. The active list contains anonymous and file-backed pages that are thought (by the kernel) to be in active use by some process on the system. The inactive list, instead, contains pages that the kernel thinks might not be in use. When active pages are considered for eviction, they are first moved to the inactive list and unmapped from the address space of the process(es) using them. Thus, once a page moves to the inactive list, any attempt to reference it will generate a page fault; this "soft fault" will cause the page to be moved back to the active list. Pages that sit in the inactive list for long enough are eventually removed from the list and evicted from memory entirely.
One could think of the inactive list as a sort of probational status for pages that the kernel isn't sure are worth keeping. Pages can get there from the active list as described above, but there's another way to reach inactive status as well: file-backed pages, when they are faulted in, are placed on the inactive list. It is quite common that a process will only access a file's contents once; requiring a second access before moving file-backed pages to the active list lets the kernel get rid of single-use data relatively quickly.
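The list transitions described above can be modeled with a toy user-space sketch. This is ordinary C, not kernel code; all of the names here are illustrative only:

```c
#include <assert.h>

enum list { NOWHERE, INACTIVE, ACTIVE };

struct page {
    enum list where;
};

/* A freshly faulted-in file-backed page starts on the inactive list. */
static void fault_in(struct page *p)
{
    p->where = INACTIVE;
}

/* A reference to an inactive page (a "soft fault" once it has been
 * unmapped) promotes it to the active list; active pages stay put. */
static void reference(struct page *p)
{
    if (p->where == INACTIVE)
        p->where = ACTIVE;
}

/* Reclaim demotes active pages to inactive first; pages that sit on
 * the inactive list without being referenced are evicted entirely. */
static void shrink(struct page *p)
{
    if (p->where == ACTIVE)
        p->where = INACTIVE;
    else if (p->where == INACTIVE)
        p->where = NOWHERE;
}
```

A single-use page is thus reclaimed after one pass of shrink(), while a page that is referenced a second time survives an extra round.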
Splitting memory into two pools in this manner leads to an immediate policy decision: how big should each list be? A very large inactive list gives pages a long time to be referenced before being evicted; that can reduce the number of pages kicked out of memory only to be read back in shortly thereafter. But a large inactive list comes at the cost of a smaller active list; that can slow down the system as a whole by causing lots of soft page faults for data that's already in memory. So, as is the case with many memory management decisions, regulating the relative sizes of the two lists is a balancing act.
The way that balancing is done in current kernels is relatively straightforward: the active list is not allowed to grow larger than the inactive list. Johannes Weiner has concluded that this heuristic is too simple and insufficiently adaptive, so he has come up with a proposal for a replacement. In short, Johannes wants to make the system more flexible by tracking how long evicted pages stay out of memory before being faulted back in.
Doing so requires some significant changes to the kernel's page-tracking infrastructure. Currently, when a page is removed from the inactive list and evicted from memory, the kernel simply forgets about it; that clearly will not do if the kernel is to try to track how long the page remains out of memory. The page cache is tracked via a radix tree; the kernel's radix tree implementation already has a concept of "exceptional entries" that is used to track tmpfs pages while they are swapped out. Johannes's patch extends this mechanism to store "shadow" entries for evicted pages, providing the needed long-term record-keeping for those pages.
What goes into those shadow entries is a representation of the time the page was swapped out. That time can be thought of as a counter of removals from the inactive list; it is represented as an atomic_t variable called workingset_time. Every time a page is removed from the inactive list, either to evict it or to activate it, workingset_time is incremented by one. When a page is evicted, the current value of workingset_time is stored in its associated shadow entry. This time, thus, can be thought of as a sort of sequence counter for memory management events.
If and when that page is faulted back in, the difference between the current workingset_time and the value in the shadow entry gives a count of how many pages were removed from the inactive list while that page was out of memory. In the language of Johannes's patch, this difference is called the "refault distance." The observation at the core of this patch set is that, if a page returns to memory with a refault distance of R, its eviction and refaulting would have been avoided had the inactive list been R pages longer. R is thus a sort of metric describing how much longer the inactive list should be made to avoid a particular page fault.
Given that number, one has to decide how it should be used. The algorithm used in Johannes's patch is simple: if R is less than the length of the active list, one page will be moved from the active to the inactive list. That shortens the active list by one entry and places the formerly-active page on the inactive list immediately next to the page that was just refaulted in (which, as described above, goes onto the inactive list until a second access occurs). If the formerly-active page is still needed, it will be reactivated in short order. If, instead, the working set is shifting toward a new set of pages, the refaulted page may be activated instead, taking the other page's place. Either way, it is hoped, the kernel will do a better job of keeping the right pages active. Meanwhile, the inactive list gets slightly longer in the hope of avoiding refaults in the near future.
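The bookkeeping behind the refault distance can be illustrated with a minimal model (again plain user-space C; the names are borrowed from the patch, but the kernel's real implementation differs in detail):

```c
#include <assert.h>

/* Incremented on every removal from the inactive list, whether the
 * page is being activated or evicted; it acts as a sequence counter
 * for memory management events. */
static unsigned long workingset_time;

/* On eviction, the current time is stored in the page's shadow entry. */
static unsigned long evict(void)
{
    return workingset_time++;
}

/* Activating a page also removes it from the inactive list and so
 * advances the clock. */
static void activate(void)
{
    workingset_time++;
}

/* On refault, the distance counts how many inactive-list removals
 * happened while this page was out of memory: had the inactive list
 * been that many pages longer, the refault would have been avoided. */
static unsigned long refault_distance(unsigned long shadow)
{
    return workingset_time - shadow;
}
```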
How well all of this works is not yet clear: Johannes has not posted any benchmark results for any sort of workload. This is early-stage work at this point, a long way from acceptance into a mainline kernel release. So it could evolve significantly or fade away entirely. But more sophisticated balancing between the active and inactive lists seems like an idea whose time may be coming.
TCP connection repair
Migrating a running container from one physical host to another is a tricky job on a number of levels. Things get even harder if, as is likely, the container has active network connections to processes outside of that container. It is natural to want those connections to follow the container to its new host, preferably without the remote end even noticing that something has changed, but the Linux networking stack was not written with this kind of move in mind. Even so, it appears that transparent relocation of network connections, in the form of Pavel Emelyanov's TCP connection repair patches, will be supported in the 3.5 kernel.

The first step in moving a TCP connection is to gather all of the information possible about its current state. Much of that information is available from user space now; by digging around in /proc and /sys, one can determine the address and port of the remote end, the sizes of the send and receive queues, TCP sequence numbers, and a number of parameters negotiated between the two end points. There are still a few things that user space will need to obtain, though, before it can finish the job; that requires some additional support from the kernel.
With Pavel's patch, that support is available to suitably privileged processes. To dig into the internals of an active network connection, user space must put the associated socket into a new "repair mode." That is done with the setsockopt() system call, using the new TCP_REPAIR option. Changing a process's repair mode status requires the CAP_NET_ADMIN capability; the socket must also either be closed or in the "established" state. Once the socket is in repair mode, it can be manipulated in a number of ways.
One of those is to read the contents of the send and receive queues. The send queue contains data that has not yet been successfully transmitted to the remote end; that data needs to move with the connection so it can be transmitted from the new location. The receive queue, instead, contains data received from the remote end that has not yet been consumed by the application being moved; that data, too, should move so it will be waiting on the new host when the application gets around to reading it. Obtaining the contents of these queues is done with a two-step sequence: (1) call setsockopt(TCP_REPAIR_QUEUE) with either TCP_RECV_QUEUE or TCP_SEND_QUEUE, then (2) call recvmsg() to read the contents of the selected queue.
It turns out there is only one other important piece of information that cannot already be obtained from user space: the maximum value of the MSS (maximum segment size) negotiated between the two endpoints at connection setup time. To make this value available, Pavel's patch changes the semantics of the TCP_MAXSEG socket option (for getsockopt()) when the connection is in repair mode: it returns the maximal "clamp" MSS value rather than the currently active value.
Finally, if a connection is closed while it is in the repair mode, it is simply deleted with no notification to the remote end. No FIN or RST packets will be sent, so the remote side will have no idea that things have changed.
Then there is the matter of establishing the connection on the new host. That is done by creating a new socket and putting it immediately into the repair mode. The socket can then be bound to the proper port number; a number of the usual checks for port numbers are suspended when the socket is in repair mode. The TCP_REPAIR_QUEUE setsockopt() call comes into play again, but this time sendmsg() is used to restore the contents of the send and receive queues.
Another important task is to restore the send and receive sequence numbers. These numbers are normally generated randomly when the connection is established, but that cannot be done when a connection is being moved. These numbers can be set with yet another call to setsockopt(), this time with the TCP_QUEUE_SEQ option. This operation applies to whichever queue was previously selected with TCP_REPAIR_QUEUE, so the refilling of a queue's content and the setting of its sequence number are best done at the same time.
A few negotiated parameters also need to be restored so that the two ends will remain in agreement with each other; these include the MSS clamp described above, along with the active maximum segment size, the window size, and whether the selective acknowledgment and timestamp features can be used. One last setsockopt() option, TCP_REPAIR_OPTIONS, has been added to make it possible to set these parameters from user space.
Once the socket has been restored to a state approximating that which existed on the old host, it's time to put it into operation. When connect() is called on a socket in repair mode, much of the current setup and negotiation code is short-circuited; instead, the connection goes directly to the "established" state without any communication from the remote end. As a final step, when the socket is taken out of the repair mode, a window probe is sent to restart traffic between the two ends; at that point, the socket can resume normal operation on the new host.
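Put together, the restore side might look roughly like the sketch below. This is not a complete tool: actually running it requires CAP_NET_ADMIN, a kernel carrying the repair patches, and real checkpointed state in place of the placeholder arguments. The option values are those from the patch; newer headers define them directly:

```c
#include <assert.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Fall back to the patch's socket-option values only if the system
 * headers do not already provide them. */
#ifndef TCP_REPAIR
#define TCP_REPAIR         19
#define TCP_REPAIR_QUEUE   20
#define TCP_QUEUE_SEQ      21
#define TCP_REPAIR_OPTIONS 22
#endif
#ifndef TCP_RECV_QUEUE
#define TCP_RECV_QUEUE     1
#define TCP_SEND_QUEUE     2
#endif

/* Switch repair mode on or off; the caller needs CAP_NET_ADMIN and
 * the socket must be closed or in the established state. */
int set_repair(int fd, int on)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on));
}

/* Restore one queue on the new host: select the queue, set its
 * sequence number, then write the saved bytes back; in repair mode
 * the write refills the selected queue rather than transmitting. */
int restore_queue(int fd, int which, unsigned int seq,
                  const void *data, size_t len)
{
    if (setsockopt(fd, IPPROTO_TCP, TCP_REPAIR_QUEUE, &which, sizeof(which)))
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_QUEUE_SEQ, &seq, sizeof(seq)))
        return -1;
    if (len && send(fd, data, len, 0) < 0)
        return -1;
    return 0;
}
```

A full restore would then bind() the socket to the original port, call connect() (which, in repair mode, jumps straight to the established state), set the negotiated parameters via TCP_REPAIR_OPTIONS, and finally turn repair mode back off.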
These patches have been through a few revisions over a number of months; with version 4, networking maintainer David Miller accepted them into net-next. From there, those changes will almost certainly hit the mainline during the 3.5 merge window. The TCP connection repair patches do not represent a complete solution to the problem of checkpointing and restoring containers, but they are an important step in that direction.
Fixing the unfixable autofs ABI
One of the few hard rules of kernel development is that breaking the user-space binary interface is not acceptable. If there is user-space code that depends on specific behavior, that behavior must be maintained regardless of how inconvenient that may be. But what is to be done if two different programs depend on mutually-incompatible behaviors, so that it is seemingly impossible to keep them both working? The answer may be to violate another rule by putting an ugly hack into the kernel—or to do something rather more tricky.The "autofs" protocol is used to communicate between the kernel and an automounter daemon. It allows the automounter to set up special virtual filesystems that, when referenced by user space, can be replaced by a remote-mounted real filesystem. Much of this protocol is implemented with ioctl() calls on a special autofs device, but it also makes use of pipes between the kernel and user space when specific filesystems are mounted.
This protocol is certainly part of the kernel ABI, so its components have been defined with some care. One of the key elements of the autofs protocol is the autofs_v5_packet structure, which is sent from the kernel to user space via a pipe; it is used, among other things, to report that a filesystem has been idle for some time and should be unmounted. This structure looks like:
struct autofs_v5_packet {
struct autofs_packet_hdr hdr;
autofs_wqt_t wait_queue_token;
__u32 dev;
__u64 ino;
__u32 uid;
__u32 gid;
__u32 pid;
__u32 tgid;
__u32 len;
char name[NAME_MAX+1];
};
The size of every field is precisely defined, so this structure should look the same on both 32- and 64-bit systems. And it does, except for one tiny little problem. The size of the structure as defined is 300 bytes, which is not divisible by eight. So if two of these structures were to be placed contiguously in memory, the 64-bit ino field would have to be misaligned in one of them. To avoid this problem, the compiler will, on 64-bit systems, round the size of the structure up to a multiple of eight, adding four bytes of padding at the end. So sizeof() on struct autofs_v5_packet will return 300 on a 32-bit system, and 304 on a 64-bit system.
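The trailing-padding behavior is easy to reproduce with a reduced example; struct demo below is purely illustrative, not the real autofs structure:

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

/* The content of this structure is 8 + 4 = 12 bytes.  On ABIs that
 * align 64-bit integers to 8 bytes (typical 64-bit systems), the
 * compiler pads the struct to 16 so that 'ino' stays aligned when
 * structures are placed back to back in an array.  Classic i386
 * aligns uint64_t to only 4 bytes, so there sizeof() is just 12. */
struct demo {
    uint64_t ino;
    uint32_t len;
};
```

The same mechanism is what rounds the 300-byte autofs packet up to 304 bytes on 64-bit systems.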
That disparity is not a problem most of the time, but there is an exception. Automounting is one of the many tasks being assimilated by the systemd daemon. When systemd reads one of the above structures from the kernel, it checks the size of what it read against its idea of the size of the structure to ensure that everything is operating as it should be. That check works just fine, as long as systemd and the kernel agree on that size. And normally they do, but there is an exception: if systemd is running as a 32-bit process on a 64-bit kernel, it will get a 304-byte structure when it is expecting 300 bytes. At that point, systemd concludes that something has gone wrong and gives up.
In February, Ian Kent merged a patch to deal with this problem. One could be forgiven for calling the solution hacky: on 64-bit systems, the kernel's automount code will subtract four from the size of that structure if (and only if) it is talking with a user-space client running in 32-bit mode. This patch makes systemd work in this situation; it was merged for 3.3-rc5 and fast-tracked into the various stable kernel releases. Everybody then lived happily ever after.
...except they didn't. It seems that the automount program from the autofs-tools package, which is still in use on a great many systems, had run into this problem a number of years ago. At that time, the autofs-tools developers decided to work around the problem in user space. So, if automount determines that it is running in 32-bit mode on a 64-bit kernel (Linus has little respect for how that determination is done, incidentally), it will correct its idea of what the structure size should be. If the kernel messes with that size, the automount "fix" no longer works, so Ian's patch fixes systemd at the cost of breaking automount.
So we are now in a situation where two deployed programs have different ideas of how the autofs protocol should work. On pure 32- or 64-bit systems, both programs work just fine, but, depending on which kernel is being run, one or the other of the two will break in the 32-on-64 configuration. If Ian's patch remains, some users will be most unhappy, but reverting it will upset other users. It is, in other words, a somewhat unfortunate situation.
Unfortunate, but not necessarily unrecoverable. One possible way to fix things can be seen in this patch from Michael Tokarev. In short, this patch looks at the name of the current command (current->comm) and compares it against "automount". If the currently-running program is called "automount," the structure-size tweak is not applied and things work again. For any other program (including systemd), the previous fix remains. So things are fixed at the expense of having the kernel ABI depend on the name of the running program. At best, this solution can be described as "inelegant." At worst, there may be some other, unknown program with a different name that breaks in the same way automount does; any such program will remain broken with this fix in place.
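The shape of that logic can be sketched in ordinary C. This is an illustrative rendering only — the real patch lives in the kernel's autofs code and reads the task name from current->comm; the function name and parameters here are invented for the example:

```c
#include <string.h>

/* Illustrative sketch of the idea behind the patch -- NOT the actual
 * kernel code. A 64-bit kernel reports its native packet size (304),
 * shrinking it to the 32-bit size (300) only for compat-mode clients
 * that are not named "automount", which carries its own workaround. */
static int autofs_packet_size(const char *comm, int is_compat_task)
{
    int size = 304;                     /* native 64-bit sizeof */

    if (is_compat_task && strcmp(comm, "automount") != 0)
        size -= 4;                      /* drop the tail padding */
    return size;
}
```

A 32-bit systemd would thus see the 300 bytes it expects, while a 32-bit automount keeps getting the 304 bytes its own workaround assumes.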
Still, Linus has conceded that "it's probably what we have to go with". But he preferred to look for a less kludgy and more robust solution. One possibility was for the kernel to look at the size of the read() operation that would obtain the autofs_v5_packet structure prior to writing that structure; if that size is either 300 or 304, the kernel could give the calling program the size it is expecting. The problem here is that the read() operation is hidden behind the pipe, so the autofs code does not actually have access to the size of the buffer provided by user space.
So Linus came up with a different solution, the concept of "packetized pipes". A packetized pipe resembles the normal kind with a couple of exceptions: each write() is kept in a separate buffer, and a read() consumes an entire buffer, even if the size of the read is smaller than the amount of data in the buffer. With a packetized pipe, the kernel can always just write the larger (304-byte) structure; if user space is only trying to read 300 bytes, then it will get what it expects and be happy. So there is no need for special hacks in the kernel, just a slightly different type of pipe dynamics. Following a suggestion from Alan Cox, Linus made opening a pipe with O_DIRECT turn on the packetized behavior, so user space can create such pipes if need be.
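The resulting semantics can be seen from user space on a 3.4 or later kernel; this sketch (the function name is ours) creates a packetized pipe with pipe2() and O_DIRECT, writes one oversized packet, and reads it with a smaller buffer:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Returns the number of bytes a 300-byte read() gets from a single
 * 304-byte packet, or -1 if packetized pipes are unavailable
 * (kernels older than 3.4). The four excess bytes in the packet are
 * silently discarded along with the rest of it, which is exactly the
 * behavior that keeps a 32-bit systemd happy. */
ssize_t packetized_read_demo(void)
{
    int fds[2];
    char packet[304] = {0};
    char buf[300];
    ssize_t n;

    if (pipe2(fds, O_DIRECT) < 0)
        return -1;
    (void)write(fds[1], packet, sizeof packet); /* one packet per write() */
    n = read(fds[0], buf, sizeof buf);          /* consumes the whole packet */
    close(fds[0]);
    close(fds[1]);
    return n;
}
```

The read() returns 300 bytes and the packet's remaining four bytes vanish, rather than lingering in the pipe to confuse the next read.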
After a couple of false starts, Linus got this patch working and merged it just prior to the 3.4-rc5 release. So the 3.4 kernel should work fine for either automount or systemd.
The kernel community got a bit lucky here; it was possible for a suitably clever and motivated developer to figure out a way to give both programs what they expect and make the system work for everybody. The next time this kind of problem arises, the solution may not be so simple. Maintaining ABI stability is not always easy or fun, but it is necessary to keep the system viable in the long term.
Patches and updates
Page editor: Jonathan Corbet
Distributions
Kubuntu gets a new sponsor
Users of Kubuntu, the Ubuntu-based KDE distribution, endured an anxious few months in early 2012 after Canonical announced its decision to pull paid employees off the project and reclassify it as a community-managed variant. Any concern over potential problems for the project subsided in late April, when not one but two developers announced that they had found full-time employment to continue working on the distribution. Exactly who they will be working for remains a bit more mysterious, since the company involved gives out little information about its makeup or its plans.
Canonical and Kubuntu
Kubuntu is one of the oldest variants of Ubuntu; it debuted with the second-ever Ubuntu release, Hoary Hedgehog, in 2005. It differed from the purely community-built derivatives in two respects, however: first, Canonical offered commercial support services for it (thus making it an "official" Canonical product), and second, a Canonical staffer was paid to work on it (as one might expect for a commercial product). That employee was Jonathan Riddell.
Kubuntu was not Riddell's only responsibility while at Canonical, though, and in February 2012 the company decided to stop offering Kubuntu support services and move Riddell to other projects. The Kubuntu community heard the news through a message to the kubuntu-devel list by Riddell. According to that message, the now-released 12.04 would be the last Kubuntu version to receive support from Canonical. Riddell said that he would still be able to participate in Kubuntu-related projects on work time, such as the Qt framework, but said that the community would need to pick up the slack in several areas, including the "long, slow, thankless task" of ISO testing. He also encouraged community members to apply for support to attend the Ubuntu Developer Summits and continue to participate.
Despite the cutback, the announcement did not signal the end of all investment in Kubuntu by Canonical. It moved the distribution to the ranks of "recognized Ubuntu flavors," a list of derivatives that also includes Edubuntu, Xubuntu, Lubuntu, Ubuntu Studio, Mythbuntu, and several localized-language flavors. These projects all use Ubuntu's official infrastructure, including the package repositories, build system, ISO distribution, security updates, and various community tools. Furthermore, in spite of the source of Riddell's paychecks, Kubuntu had always been managed as a community project, with an annually-elected council leading the decision-making process.
Still, the announcement struck many in the Kubuntu community hard, to the point where some worried that it meant the end of the project. Harald Sitter (among others) posted a message in support of Kubuntu, noting that the other recognized flavors were doing just fine, and had done so for years without any paid developers.
The Return of the developer(s)
Had the story ended there, the distribution perhaps would have continued on its own as a purely community-developed offering. But on April 2, Riddell joined the Ubuntu Technical Board for one of its scheduled meetings, and inquired whether the board would object to another company financially supporting Kubuntu. The board ruled that it had no objection, and on April 10, Riddell announced that he had accepted a job offer to work full time on Kubuntu. The next day, Kubuntu contributor (and Canonical employee) Aurélien Gâteau announced that he, too, had been hired away for Kubuntu work.
The company that hired both Riddell and Gâteau was Blue Systems, and the news was well received among Kubuntu fans, not just for the continuity of Riddell's participation, but for the doubling of the number of full-time developers.
But one piece of the puzzle was frustratingly absent: exactly who Blue Systems was, and what business it was in. The Blue Systems web site is spartan, containing only a list of other projects supported financially by the company, all of which are either KDE- or Qt-related. The H Online was one of the first to observe the mysterious lack of information when it reported Blue Systems' support of Linux Mint back in January 2012. The H article pointed to a Linux Mint blog post that said the company was based in Germany, but that was about it.
Kubuntu forum users dug around to try to find more information, tracking the domain name registration to a privately-owned German IT services company, but achieving little else. For his part, Riddell said via email that Blue Systems was "best thought of as a trust fund rather than a commercial company" that simply has an interest in KDE's continued success. He also told Muktware that Blue Systems' involvement would cause "no changes" in the way the Kubuntu project functions — in particular, it will remain part of Ubuntu, rather than venturing off on its own.
Blue Systems
David Wonderly from the Kubuntu Community Council also noticed the concern of Kubuntu users about the lack of information surrounding Blue Systems, and told the kubuntu-users mailing list that he would be meeting with Blue Systems near the end of April. On May 1, he posted a brief note to his blog providing some more information about the company. Somewhat disappointingly from a news standpoint, there is nothing exotic about Blue Systems (e.g., a front for organized crime, Dan Brown-style secret society, etc.). Instead, Blue Systems is simply the company name chosen by Netrunner founder Clemens Toennies.
Netrunner is based on Kubuntu, albeit with an added emphasis on out-of-the-box GNOME and WINE functionality, so the Netrunner team has a deep stake in the continued health of Kubuntu as a whole. Toennies also reiterated to Wonderly that he had no intention of changing the way the Kubuntu project functions. Regarding the perhaps-unintentional air of mystery about the company, Riddell said that he had met with Blue Systems at CeBIT, and that although the founder was "a pretty reserved chap," he also met the "Kubuntu criteria" of being friendly and wanting to improve the world.
Understanding who Blue Systems is answers some other lingering questions about the present state and future of Kubuntu. For example, there was speculation in April that Canonical's trademark policy would result in difficulty for the new source of funding. The issue is that Canonical holds the trademark on the name "Kubuntu" (as it also does for Edubuntu and Xubuntu, but not for all of the official Ubuntu flavors). Muktware speculated in the previously linked article that the distribution might have to change its name now that a different company was financing development. But that reading of the policy does not square with Blue Systems' involvement; specifically, the policy states that commercial use of the name requires getting a trademark license from Canonical. As the comments by Riddell and Toennies indicate, Blue Systems is only funding developer time, not basing a product or service around the Kubuntu name.
But it's still an open question whether any other third party will offer its own commercial support for Kubuntu, since Canonical's departure leaves a gap. After all, there are businesses that purchased support contracts from Canonical while Kubuntu was a product; presumably those contracts have a fixed end date. Even though non-commercial Kubuntu installations will continue to receive package updates (via the official Ubuntu repositories), a real support contract entails more: deployment assistance, incident response, legal aid, and so on. Whether Canonical decided that the support business was losing money or simply decided to focus on other areas is unknown. The Kubuntu project may not need such commercial support contracts to fund developer time, but there seems to be at least some demand for them. Blue Systems appears not to be chasing it — so perhaps someone else will seize the opportunity.
Brief items
Distribution quotes of the week
OpenBSD 5.1 released
OpenBSD 5.1 has been released. There are plenty of improvements and new features in this release. The announcement (click below) has some details. The song for this release is Bug Busters!
Tails 0.11: The Amnesic Incognito Live System
The Tails Project has announced the release of Tails 0.11, The Amnesic Incognito Live System. Tails is a live system (DVD or USB) aimed at preserving privacy and anonymity. "'The new 0.11 release is an important milestone in the Tails history, and a big step towards Tails 1.0, that is scheduled for release later this year,' said one Tails developer. 'No one should have to become computer experts to protect their privacy and online activities. Our recent focus on ‘persistent’ files and settings finally enables human rights workers and freedom activists, among others, to focus on important work instead of technical details.'"
Tizen 1.0 Larkspur SDK and Source Code Release
Tizen was formed from the MeeGo and LiMo projects to create mobile devices such as smartphones and tablets. The project has announced the 1.0 release of its Software Development Kit (SDK) and the platform source code. The release notes for the SDK and the source code contain the details.
Ubuntu 12.04 LTS "Precise Pangolin" released
Ubuntu has announced the release of Ubuntu 12.04 LTS (Long-Term Support), which is code named "Precise Pangolin". For desktop users, 12.04 introduces the "heads-up display" (HUD) in Unity, a switch to Rhythmbox as the default music player, a 3.2.14 kernel, LibreOffice 3.5.2, and lots more. The server release has the latest OpenStack release, updates to Java, an officially supported Xen, and more. The release notes page has links to information for other editions as well. "To be a bit more precise about what we're releasing today... There are 54 product images and 2 cloud images being shipped with this 12.04 LTS release, with translations available in 41 languages. The Ubuntu project's 12.04 archive currently has 39,226 binary packages in it, built from 19,179 source packages, so lots of good starting points for your imagination!"
Yocto Project 1.2 released
Version 1.2 of the Yocto Project embedded distribution builder is available. New features include a new version of the HOB tool (used to customize and build embedded Linux images), a 3.2.11 kernel, better license compliance tools, and this interesting addition: "Implementation of the MSG (Magic Smoke Generator) which enables the on board generation of 'magic smoke' to enhance the longevity of embedded device components."
Distribution News
Fedora
Fedora 18 release name
The votes are in. The Fedora 18 release name is Spherical Cow.
Fedora Elections: General information, and questionnaire opening
Fedora elections are coming up soon. People are currently invited to submit questions for the candidates. Questions must be in by May 8. Nominations begin May 9. There are three seats open on the advisory board, five seats on the Fedora Engineering Steering Committee (FESCo), and all seven seats open on the Fedora Ambassadors Steering Committee (FAmSCo). FAmSCo has announced new election guidelines, which are the reason all of its seats are currently open.
Ubuntu family
Quantal open for development
Ubuntu's Quantal Quetzal (12.10) is open for development. "The development version starts with updated versions of GCC and OpenJDK, some soname changes (boost, hdf5), and some changes with setting the build flags for package builds. We are finally targeting Python3 as the only Python version on the ISO/installation images."
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (April 30)
- DistroWatch Weekly, Issue 454 (April 30)
- Fedora Weekly News Issue 294 (end of April)
- Maemo Weekly News (April 30)
- Ubuntu Weekly Newsletter, Issue 263 (April 29)
Two Years Fly By: Ubuntu Precise Pangolin Pads Into Production (Linux.com)
Plenty of reviews have followed last week's release of Ubuntu 12.04 LTS. In this one at Linux.com, Carla Schroder cuts through the debates to talk about what you actually get in this release. "So I am going to ignore Unity, and I am not going to tell you how to download, install, or upgrade Precise Pangolin. I'm not going to take a passionate stand on the default color scheme or string together random screenshots and call it a day. I'm not going to breathlessly adore/loathe Mark Shuttleworth. Instead, just to break tradition and be weird for the fun of it, let's talk about the myriad other aspects of Ubuntu 12.04 LTS Precise Pangolin, the many features that distinguish Ubuntu from the rest of the great thundering Linux distro herd."
Poettering: The Most Awesome, Least-Advertised Fedora 17 Feature
Lennart Poettering writes about the Fedora 17 multiseat feature. "With this code in Fedora 17 we cover the big use cases of multi-seat already: internet cafes, class rooms and similar installations can provide PC workplaces cheaply and easily without any manual configuration. Later on we want to build on this and make this useful for different uses too: for example, the ability to get a login screen as easily as plugging in a USB connector makes this not useful only for saving money in setups for many people, but also in embedded environments (consider monitoring/debugging screens made available via this hotplug logic) or servers (get trivially quick local access to your otherwise head-less server)."
The Dawn of Haiku OS (Spectrum)
IEEE Spectrum has a lengthy overview of the Haiku OS project which is working to create an open-source reimplementation of BeOS. "One of the first things people notice about it is that it doesn’t feel anything like Windows or OS X or Linux. It’s unique. Linux, for instance, is based around a core—called a kernel—that was originally designed for use in servers and only later modified for desktop systems. As a consequence, the kernel sometimes gives short shrift to the user interface, which Linux users experience as annoying delays when their computers are doing especially taxing things, like burning a DVD or compiling code. Haiku’s kernel has always been for a desktop system, and so it always gives priority to whatever is happening by way of its graphical user interface."
Page editor: Rebecca Sobol
Development
X11R7.7 will bring multi-touch, friction, and better synchronization
The Wayland display server may grab more headlines these days, but that does not mean that X is sitting still. In fact, the X.org Foundation is due to release X11R7.7 shortly, with a host of new input and display features.
The last major release of the X Window System was X11R7.6, from December 2010. The modular design of X releases (adopted with X11R7.0) means that different parts of the source tree evolve at different paces; consequently it can be a little work to sort out precisely what improvements land with each full release. Release manager Alan Coopersmith recently posted a draft of the upcoming release notes, though, which gives a concise summary of the changes slated to arrive.
The XInput2 extension (XI2) is a highly-visible example of important, user-facing changes that land via extensions rather than in the core. XI2 2.2 added multi-touch input earlier this year, a long-awaited feature implemented by Peter Hutterer that enables the X server to recognize multiple input points from a single hardware device, thus allowing applications to interpret simultaneous input events, such as gestures. It also defines touch event sequences and cleanly differentiates between touch-sensitive input devices like trackpads and touch-sensitive screens. Both are enhancements that will prove useful as application toolkits like GTK+ and Qt move into the consumer tablet space.
But XI2 2.1 was also released between X11R7.6 and now, bringing with it a large patch set from Daniel Stone that improves the smooth-scrolling behavior. Other changes added a way for applications to track raw device events (largely of interest to game controllers), and added several missing defines to the XI2 API. The latter issue was that XI2 provided some-but-not-all defines in its own XI* namespace, and programmers were forced to fall back on core defines to keep track of other constants, such as AnyPropertyType or GrabSuccess. Falling back on such core defines is bad style, particularly because it was easy to accidentally do so when a relevant XI2 define did exist (such as using AnyModifier instead of XIAnyModifier), which would cause the program to break.
New features for client applications
The X Fixes extension adds the notion of "pointer barriers" in this release. The feature is derived from applications' desire to set easy-to-reach targets as active regions on the screen — such as the upper-left corner of the display, which is used by several desktop environments to activate a "dashboard" style overlay. While monitoring the upper-left corner is child's play in a single-monitor setup, the task becomes trickier with multiple displays. Some developers might want the upper-left corner of the right-hand display — which is physically in the middle of the combined screen — to retain its active target status and still be easily reachable. Pointer barriers allow them to do so, by giving the desired target region some "friction" (for lack of a better term) that constrains pointer motion; the result is that the cursor slows down when it hits the target area, making it more difficult to overshoot.
The X Synchronization extension (XSync) now allows client applications to create "fence" objects that they can use to synchronize different rendering back-ends in use at the same time. This is important because not all renderers function at the same speed — such as when OpenGL is used to render only one portion of the client window. X is normally oblivious to this reality, but, as Aaron Plattner explained it when the work began, the application can create a fence (initially in an un-triggered state), and tell the OpenGL renderer to wait until the fence is triggered. When the rest of the X screen is finished rendering, the fence is triggered, and OpenGL updates its portion of the screen immediately.
The font system receives several improvements in this release. The first is that the location of installed X11 fonts can be changed at configuration time; the example given in the draft release notes moves them to /usr/share/fonts/X11/, which would put them closer to other system-installed fonts than the old location, /usr/X11R6/lib/X11/fonts/. The second change is the deprecation of some old code. Previous X releases included a bundled PostScript Type1 font backend; in X11R7.7 this backend is removed and FreeType is used instead. Support for the now-obsolete CID-keyed font format has been removed entirely, as its functionality has been taken over by the OpenType format.
Under the hood
Primarily for debugging purposes, the X Resource extension now allows client applications to request more information about other clients, including their process IDs and the size of other clients' allocated resources. There are also several new debugging functions in the X server that can be bound to key sequences — although to prevent tragic mishaps, they are not bound by default. XF86LogGrabInfo will write the current set of input grabs to the X log file, and XF86LogWindowTree will write the current window tree. XF86Ungrab will release all active input grabs, while XF86ClearGrab will kill the connections of all active input grabs. These two potentially-dangerous functions were the source of CVE-2012-0064, which allowed attackers to bypass password-protected screensavers. Leaving the functions unbound offers partial protection only, so the new release also requires users to explicitly turn on support for the functions in their X Keyboard extension (XKB) configuration file.
Also of potential interest to developers is progress being made on the X protocol C Bindings (XCB). XCB is intended to one day completely replace the aging Xlib, which is already accessed primarily through compatibility layers provided by higher-level toolkits. Although it remains a work in progress, XCB developers have begun to add support for the OpenGL X extension (GLX) and XKB.
Several components are now marked as deprecated, specifically the nested and virtual X servers Xvfb, Xnest, Xephyr, and Xfake. All are designed to either display their contents inside another X server or to render content into a virtual framebuffer. The plan is for both types of functionality to eventually be replaced by substitute video drivers, xf86-video-nested and xf86-video-dummy. The functionality will remain unchanged, but with less maintenance demanded of the project.
Last but by no means least, the project has migrated its documentation to DocBook. That documentation includes the X protocol as well as the APIs and ABIs. Previously the documentation existed in a variety of different formats; the standardization on DocBook not only unifies it, but thanks to DocBook's XML underpinnings, allows it to be cross-linked for reference.
Of course, there are always long lists of changes and bugfixes made to the video drivers, input drivers, and modules that do not bring new functionality, but still provide a better experience. The project has a massive combined change-log document online that provides an entry point into every fix released during this development cycle. Between that and the new DocBook documentation, X11 fans have lots of reading to look forward to.
Brief items
Quote of the week
Ceres solver released
Google has open-sourced its Ceres solver nonlinear least squares library. "Ceres Solver is used at Google to estimate the pose of Street View cars, aircraft, and satellites; to build 3D models for PhotoTours; to estimate satellite image sensor characteristics, and more."
ODB 2.0.0
ODB is an object-relational mapping system for C++; the 2.0.0 release is now available. New features include C++11 integration, support for polymorphism, composite object ID support, and more; see this article for more information.
Red Hat launches OpenShift Origin
Red Hat has announced the release of its OpenShift "platform as a service" system as the open-source OpenShift Origin project. "The cloud in general, and Infrastructure-as-a-Service (IaaS) and PaaS implementations specifically, should not be vehicles that promote vendor lock-in, nor should they be under the control or 'guidance' of vendors. For the cloud to remain open and vibrant, implementations should be truly open, not only in license, but in governance. The OpenShift Origin project sets a high bar for PaaS offerings, developed and governed by developers, for developers."
Xfce 4.10 released
Version 4.10 of the Xfce desktop environment is out. New features include a rewritten application finder, tiling support, and more; see the Xfce 4.10 tour page for an overview.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (May 1)
- What's cooking in git.git (April 26)
- Perl Weekly (April 30)
- PostgreSQL Weekly News (April 29)
- Ruby Weekly (April 26)
- Tahoe-LAFS Weekly News (April 28)
Buculei: A history of Mozilla browsers design
On his blog, Nicu Buculei has an interesting retrospective of browser design, complete with screen shots. Readers may or may not agree with his opinions, but the walk down memory lane is fun. "[It's] debatable if their move to rewrite the entire suite from scratch was a good thing or not (on one hand it was a chance to get rid of the old cruft, on the other it was a huge delay), but here I talk only about the design, so the rewrite brought also a new interface and new widgets. At the time there were "milestone release" from the development tree, working to various degrees. It also has a new interface with a "futuristic", non-native, look, to show the power of XUL and allowed for a new feature: theming - at the time making the interface themeable was the rage of the industry, I don't know who invented it, but Winamp (by then also an AOL property, as Netscape) made it really popular."
Meeks: A LibreOffice/Apache OpenOffice Comparison
Michael Meeks has posted a comparison of new features added to LibreOffice and the upcoming OpenOffice 3.4 release. As a LibreOffice developer, Michael is clearly trying to prove a point. "On the other hand, thus far, there are rather few really new features in the [Apache OpenOffice] release that did not come from Oracle's existing work; that is outside of some pleasant drawing improvements, which we hope to merge into LibreOffice for our next major release."
PostgreSQL Magazine #01 released
The first issue of PostgreSQL magazine is out. The "read it online" page appears to require flash, but the PDF edition works fine and looks quite slick. Articles in the inaugural edition cover PostgreSQL 9.1, NoSQL, an interview with Stefan Kaltenbrunner, a look forward to the 9.2 release, and more.
Page editor: Jonathan Corbet
Announcements
Brief items
ColorHug drops remote disable
The ColorHug open-source colorimeter comes with a remote disable feature; see this article from January for details. As of the next software release, though, that feature will no longer be present. "Of the 350 packages I've sent so far, 3 packages have been lost, and none of them have triggered the blacklist feature. I think open-source people are more honest than my bank manager thought they would be. Hindsight is a wonderful thing, and all that."
Linux Audio Conference 2012 at CCRMA
The conference proceedings and videos from the Linux Audio Conference, which was recently held at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in California, are available.
Articles of interest
Fair use or "first excuse"? Oracle v. Google goes to the jury (ars technica)
The first phase of Oracle's lawsuit over the use of Java in Android has gone to the jury, ars technica reports. The question in the first phase is whether 37 Java APIs were illegally copied by Google into Android, though there are some other issues as well. "Oracle is "not even in the ballpark" when it comes to proving similarities between the 37 Java APIs it claims ownership of, and Android's own APIs. And, he [defense lawyer Robert Van Nest] emphasized, Oracle isn't accusing Google of copying code—because it can't. After designing a computer program to analyze Android's millions of lines of code, Oracle found only nine lines of copied code in a function called rangeCheck(). That code, accidentally inserted by a Google engineer who testified last week, has been removed from all current versions of Android. "Other than the nine lines of code in rangecheck, everything in Android is original," said Van Nest—created entirely by Google engineers, or with Apache open source code." The verdict is expected later this week, but the judge has reserved the right to determine that the APIs aren't copyrightable, which could potentially overturn the jury's decision.
Is open hardware creating a more open world? (isgtw)
International science grid this week covers some open hardware projects. "The Village Telco project has created a cheap and partially open-source device, with a range of between 300 and 400 meters, that enables anyone to communicate freely over a Wi-Fi network using a phone. The device is called a Mesh Potato. “We weren't thinking that much about open hardware when we started. Our priority was simply creating affordable access in Africa,” said Stephen Song, who founded the project." (Thanks to Paul Wise)
New Books
Developing Web Applications with Haskell and Yesod--New from O'Reilly Media
O'Reilly Media has released "Developing Web Applications with Haskell and Yesod" by Michael Snoyman.
The dRuby Book--New from Pragmatic Bookshelf
Pragmatic Bookshelf has released "The dRuby Book" by Masatoshi Seki.
Fitness for Geeks--New from O'Reilly
O'Reilly Media has released "Fitness for Geeks" by Bruce W. Perry.
Node: Up and Running--New from O'Reilly
O'Reilly Media has released "Node: Up and Running" by Tom Hughes-Croucher and Mike Wilson.
Upcoming Events
Presentation List for Akademy 2012 Tallinn
The program for Akademy 2012 has been announced. The 2012 KDE conference, Akademy, will take place in Tallinn, Estonia, June 30-July 6.
PyCon Australia 2012 Early Bird registration
Early bird conference registration for PyCon Australia is open. PyCon Australia will be held August 18-19, 2012 in Hobart, Tasmania. "Early bird registration will be extended to the first 60 confirmed conference registrations, or until Friday 1 June, whichever comes first."
C Conference reverse call for proposals
C Conference takes place August 28, 2012 in San Diego, California, co-located with LinuxCon. The reverse CFP is open for ideas and proposals for talks.
Events: May 3, 2012 to July 2, 2012
The following event listing is taken from the LWN.net Calendar.
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
