LWN.net Weekly Edition for May 26, 2011
Updates from Linaro
Over the last month or so, I have sat in on a few different talks about Linaro and the work that it is doing to improve the ARM Linux landscape. While Linaro is a consortium of six major ARM companies, its work is meant to be available to all ARM companies, developers, or users. The focus may be on the needs of its member companies, but Linaro's influence and efforts are likely to spread well beyond just those needs. It is an organization that wants to try to keep the best interests of the entire ARM ecosystem in mind—perhaps that's not so surprising with both ARM, Ltd. and major ARM silicon fabricator IBM on board.
In the year since Linaro began to take shape, and roughly eleven months since it was announced to the world, the organization has expanded its scope, changed to a monthly release cycle, and stepped in to try to help head off a crisis in the ARM tree. It has also made progress in many of the problem areas (kernel, toolchain, graphics, ...) that it set out for itself. But there is, of course, lots more to do.
Linaro Developer Summit
![[George Grey]](https://static.lwn.net/images/2011/lds-grey-sm.jpg)
Linaro CEO George Grey spoke briefly at the opening of the Linaro Developer Summit (LDS), which was co-located with the Ubuntu Developer Summit in Budapest, and described some industry trends and things he sees coming for Linaro. Grey noted that the last twelve months have shown some "extraordinary changes" in the use of open source software in real-world products. "Android in particular has startled a lot of people", he said. The Android share of the smartphone market has risen from 5% to 25% in that time span, he said, which is something that has never happened before.
Device manufacturers are no longer happy to get a board support package (BSP) that is out of date and requires a BSP-specific toolchain. Instead, they are looking for product-quality open source platforms, which is something that Linaro is delivering. The Linaro automated validation architecture (LAVA)—a test and validation platform—will be very important to that effort as it will help increase the quality of the software that Linaro delivers.
Getting development platforms into the hands of open source developers is another area where Linaro can make a difference, Grey said. In the past, it has been difficult for smaller players to get their hands on these development platforms because it is hard to get the attention of the system-on-chip (SoC) vendors. But, these days there are development platforms at reasonable prices for ARM SoCs from Texas Instruments, Freescale, and ST-Ericsson (all Linaro members), with more coming.
That solves the hardware problem, and Linaro will be providing software to run on those boards. That means that companies or developers with limited budgets can get complete hardware with multiple I/O devices and a product quality software stack. Grey is excited to see what the community can do with all of that.
Grey also announced that there would be no Linaro 11.11 (it had been making releases based on Ubuntu's schedule, though delayed by one month) in favor of monthly (or more frequent) code drops. The various Linaro working groups will be continuously pushing their code upstream, while there will be daily releases of some code, mostly for internal use. There will also be monthly supported releases of the kernel and toolchain for external use.
Looking to the future, Grey noted that work was progressing on support for the Cortex-A15 processor. The intent is to have the processor supported by Linux when it is released, rather than a year or two later as has been the case in the past. He also said that there "clearly is going to be a market for ARM-based servers", and Linaro is doing some early work on that market segment as well.
![[Christian Reis]](https://static.lwn.net/images/2011/lds-reis-sm.jpg)
Linaro VP of Engineering Christian "Kiko" Reis spoke after Grey, mostly about the nuts and bolts of how LDS would operate, and how attendees could get the most out of it. He also looked back over Linaro's first year, noting that a lot of progress had been made, particularly in the areas of communication and collaboration between various ARM vendors. Starting Linaro wasn't easy, he said, but there are harder things still to be done, including bringing the whole ARM community together.
Specifically, Reis mentioned the ARM kernel consolidation work that still needs to be done. Linaro "spent a year not really doing this", and now is the time to get it done. Determining the right course for memory management for embedded graphics devices is also high on the list. Delivering that will take more than just a year, but planning for it is one of the priorities for the week.
The organization of Linaro
David Rusling, Linaro CTO, gave a talk at the Embedded Linux Conference back in April to give an overview of the organization, highlight some of the accomplishments in the first ten months, and look ahead to some future plans. Rusling started off by clarifying that Linaro is meant to be an engineering organization and not a "tea and cakes in Honolulu standards thing". He also pointed out that "Linaro" was the "least hated name" among the candidates, which mirrored Dirk Hohndel's explanation earlier in the day of the choice of the name "Yocto" for that project.
Rusling said that he recognized that the fates of ARM and Linux were intertwined in 2009 or so, when he was at ARM, Ltd., but there were lots of companies "running around" and not collaborating. There was a clear need for more collaboration on the things that were common between various SoCs, but there were not enough engineers working on those problems. Linaro was born out of that need.
Linaro started out with around 20 engineers and was envisioned as an "upstream engineering" organization. The "genius move" was to do all of that in the open, he said. Everything is on the wiki, though some things may be hard to find, and "upstream is our mantra".
Much of what Linaro does is "social engineering", Rusling said. There are a number of misconceptions about Linux and open source that need to be dispelled, including the idea that open source is difficult to deal with. There are gatekeepers in Linux and other projects who have strong views, but interested organizations and vendors simply need to "engage with them". The "really bad false statement" is that open source is cheaper. That's not really true, as working in open source communities requires a deeper involvement because it's all about influencing development direction; there is no control, he said.
The six member companies want to drive the technical agenda for Linaro, which does its work through the working groups that have been established. Those working groups are "very independent" and the member companies aren't trying to run the projects directly, but are instead allowing the groups to work upstream on solving problems in their areas.
There is also a platform group that builds, tests, and benchmarks the work that is done by the working groups (and upstream projects). The idea is to prove that new kernel features work or that tool changes make things go faster by creating and testing evaluation builds. "Any time you do any changes you have to measure" the impact of those changes, he said. There are also landing teams for each of the member SoC vendors (Samsung, Texas Instruments, ST-Ericsson, and Freescale) that take the output from the platform team and turn it into usable builds for their customers. The landing teams are the only teams in Linaro that are closed to community participation.
It is not just kernel work that lands upstream, as the toolchain working group is doing a lot of work on GCC and other tools. Support for ARMv7-A, Thumb-2, Neon, and SMP is being added to GCC 4.7, which won't be released until April 2012, and won't get into distributions until October 2012 or so, sometime after the 4.7.1 release. In the interim, the toolchain group will be making "consolidation builds" that can be used by ARM developers prior to the GCC release. In addition to work on GCC, the group is also adding functionality to tools like gdb, QEMU, and valgrind, he said.
After the first release in November, two new working groups were added to address graphics and multimedia issues. In addition, the other working groups started looking at long-term problems, Rusling said. The kernel group started adding device tree support for all of the Linaro members' hardware. Work on vectorizing support for GCC was one focus of the toolchain group, and the power management group started tackling segmented memory so that portions of the memory can be powered down. All of those things are "tricky areas that require a lot of coordination within the ARM space, but also upstream", he said.
For multimedia, much of the work involves testing, benchmarking, and tuning various codecs for ARM. Standardizing on the OpenMax media libraries and the GStreamer framework is the direction that Linaro is going. Android has gone its own way in terms of multimedia support, he said.
Rusling, like Grey, also pointed to the work being done on LAVA as something that is very important to Linaro, but also to the community. It is a "completely open" test and validation system that could be used by others in the ARM community or beyond.
There were some hard lessons learned in the first year or so of Linaro's existence, Rusling said. It is difficult to build a new engineering organization from scratch with engineers donated from multiple companies. On the other hand, people thought that the ARM community couldn't collaborate, but Linaro has shown that not to be the case. Everything takes longer than he would like, and there is still a lot to learn, he said. "Open source is wonderful", but there are challenges to using it.
It is clear that ARM has become a very important architecture for Linux, and is completely dominating the low-power mobile device market. That doesn't look likely to change anytime soon, and it may be that ARM's efforts to move into the server space will bear fruit in the next few years. Certainly power consumption (and the associated heat produced) are very important not just in pockets, but in data centers as well. Linaro has, so far, been making many of the right moves to ensure that ARM is well-supported—and maintainable—in Linux. It will be interesting to see what the next year (and beyond) hold.
Examining MeeGo's openness and transparency
The MeeGo project is barely a year old, and in addition to the usual bootstrapping issues, it has had to deal with the headaches of merging two pre-existing, corporate-backed Linux distributions (Intel's Moblin and Nokia's Maemo), and spawning several independent target platforms: tablets, handhelds, netbooks, vehicles, and set-top boxes. At the MeeGo Conference in San Francisco, several speakers tackled the openness and transparency of the project head-on.
The elephant in the room at the event is Nokia's February announcement that it was backing off its own MeeGo Handset UX plans in favor of a deal with Microsoft for Windows Phone 7 devices. That event spawned a great public perception problem for MeeGo, in which many consumers — for whom MeeGo was synonymous with "Linux phone OS" — assumed the project was finished. Both the Maemo and Moblin sides of the MeeGo community are still extremely active, however. MeeGo's importance as an OEM platform seems solid (particularly in tablet space and in non-"consumer device" verticals such as set-top boxes and in-vehicle infotainment). The community seems resigned to the fact that until there is a phone on the market, many outsiders won't pay attention to the platform, but it appears to be undeterred.
Messaging
![[Carsten Munk]](https://static.lwn.net/images/2011/meego-munk-sm.jpg)
Carsten Munk is a veteran of the Maemo project and maintainer of the MeeGo port for Nokia's N900 phone. He also started the short-lived Mer project which sought to re-build an entirely community-maintained version of Maemo for officially unsupported devices. Thus, in spite of holding an official position in the MeeGo project, he has well-earned clout when it comes to the subject of openness. His talk on Monday afternoon was entitled "Transparency, Inclusion, and Meritocracy in MeeGo: Theory and Practice," and was a hard (but not disparaging) examination of how well the project is living up to its advertised principles.
The three factors Munk explored (transparency, inclusion, and meritocracy) derive indirectly from the project's branding. The meego.com web site and MeeGo marketing materials mention meritocracy repeatedly as a core value, Munk said, but never define what the project thinks it means or how it works in practice. Digging further into the site, Munk turned up only one vague description of the project's governance as "based on meritocracy and the best practices and values of the Open Source culture" and scattered references to vendor-neutrality and community encouragement.
Searching with Google reveals the broader public consensus on open source's "best practices and values," he said, including transparent processes, peer review, and the absence of membership fees or contracts before getting involved. Thus, real transparency and real inclusiveness are prerequisites to a meritocratic culture, he argued: after all, how can one be meritocratic if one cannot see what is happening and actually participate?
The project is failing to define and argue for these core values to outsiders, he said; it needs to state them clearly and prominently both on the site and in its public relations campaigns. Without a clear message on these core values, he said, developers will eventually drift away to other projects: the meritocratic culture is what keeps volunteer developers coming back to donate their time.
Practice
Munk then turned to an examination of the MeeGo project's real-life processes and structures, measuring each on transparency, inclusion, and meritocracy. He came away with three recurring patterns of behavior he sees in MeeGo, and provided feedback on how to improve problem areas in each measured category.
The first pattern Munk explained was the "gatekeeper" pattern, in which a distinct team follows a transparent, well-defined process, but where all of the decisions are made by the team alone. Typically, the team uses transparent processes to interact with the larger project and community, but its internal discussions and decision-making are a black box. That black box makes it difficult for non-team-members to get involved (a low inclusiveness quotient) and, as a result, gatekeeper teams fall short on meritocracy, even if, in theory, the team is receptive to ideas from outsiders.
Munk's primary example of the gatekeeper pattern is MeeGo's release engineering team, which he describes as perpetually operating out of sight. Most developers within the MeeGo project never see the release engineering team; they simply get an email from it when a release is ready. Other examples from around the project include the IT infrastructure team, legal services, the architecture team, the program- and product-management groups, and the Technical Steering Group (TSG). Fortunately, Munk said, improving on the gatekeeper pattern is straightforward: simply have the team conduct its meetings and discussions in the open. There are always going to be areas where legal or security requirements dictate closing the meeting-room door, he said, but openness should be the default.
The second pattern is the "open office" pattern, typified by broad openness in discussions, communication tools, and meetings, placing all participants (core and casual) into one shared virtual space. Although this pattern scores extremely high on transparency, Munk said, it can actually cause unintentional harm on inclusiveness and meritocracy simply because the volume of information can create overload. Suggestions can be lost in the din, and individual contributions can be accidentally overlooked.
Munk's example open office team is the MeeGo Community Office, which incorporates forum, mailing list, wiki, and IRC communication channels, as well as community bug tracking and feedback. The MeeGo developer and user community can be large enough for individual contributions or comments to get lost in the crowd, and often includes almost disjoint sets of contributors (Dave Neary and Dawn Foster discussed that last issue in the "MeeGo Community Dashboard" talk: forum users and mailing list users represent largely non-overlapping communities, although both regularly contribute to IRC discussions). Some other MeeGo teams that operate using the open office pattern include the ARM/n900 team, the SDK team, QA and bug triage, the events team, and the internationalization team.
Munk's suggestions for improving the open office pattern's weak points include creating formal processes to separate out and recognize contributions, and subdividing teams that become too large. Recognizing individuals' contributions, of course, is necessary for inclusiveness, but in order to do it, the team needs to have a regular process in place to report and analyze community involvement. The MeeGo Community Office does have this, as Foster and Neary explained in their talk, although it still needs streamlining.
The third (and by far the most problem-ridden) pattern Munk identified is the "invisible effort," in which not only are a team's communication and processes closed to outside eyes, but even its membership and makeup is a mystery. In the worst case scenario, there is no public mailing list, wiki presence, bug tracker activity, source code repository, roadmap, or even documentation of the team's existence. Along with the complete lack of transparency, this pattern makes it impossible for the team to be inclusive (since it is impossible to contact a team whose identity is hidden), and impossible for it to be meritocratic.
MeeGo does have several invisible effort teams, Munk said, most notably the UI design team. No one on the public side of the project is sure who is on the design team, and their work remains undocumented and secret up until the moment it is revealed to the general public. Other examples include the MeeGo Bugzilla support team (who Munk said is invisible until a change to the project Bugzilla is rolled out), and teams responsible for the occasional "big reveal" announcements (which are typically product-related or reveal the involvement of new partners). There are also a few smaller MeeGo teams that are not formally defined, so they function similarly to invisible efforts, albeit unintentionally.
On the plus side, Munk's "how to improve" list for this pattern is quite simple: "Anything!" Starting from a complete lack of transparency, inclusiveness, and meritocracy, any step forward is welcome, although, he said, improving transparency is the obvious first step. Many of the invisible effort patterns arise because the team in question is not distributed, but is instead an internal office inside one vendor that has never before had to interact with an open source project. It may be a radical change in mindset, but fixing the invisible effort pattern is not optional just because the team has always done things its own way.
Traps and pitfalls
In the final section of his talk, Munk outlined common traps and pitfalls that any team or individual might slip into, and which have a detrimental effect on transparency, inclusiveness, and meritocracy. The first is the CC List, in which an individual starts an important discussion outside of the public mailing list by including a long string of CC'ed addresses. This bypasses the transparency expected of the project, and is often a relic of corporate culture. The "big reveals" mentioned earlier fall into this category as well, as does the "BS bug", in which someone files a bug report or feature request that states unequivocally "MeeGo Needs X", without supporting evidence or discussion, then uses the bug report to urge action.
"Corporate nominations" are another trap, in which the TSG appoints someone from outside the MeeGo project to fill a new or important role, without discussion or an open nomination period. Often, the person so nominated does not have a publicly-accessible resume or known reputation to justify their new leadership role. Munk allowed that a certain amount of this was necessary in the early days of MeeGo, when there were not enough active contributors to fill every need, but said now that the project is no longer in "bootstrap
" mode, the practice needed to end. He did not give examples of these nominations, but the audience seemed to concur that they have, in fact, happened.
In Monday's keynote, Imad Sousou touched on this same topic in response to an audience question about the TSG's makeup. Sousou said that the TSG's membership was open to the community, and anyone was welcome to make a nomination (including nominating themselves). I asked several community members about that comment after the keynote. All regarded it with disbelief, with several pointing out that there was no documented nomination or election process, nor a charter that spelled out the TSG's makeup at all.
The first few pitfalls were corporate-behavior-related, but Munk mentioned several common to the community side as well. The "loudest argument wins" trap is a common one, where one voice simply wears out all of the others (by volume or belligerence), by flooding lists or forums. This sidetracks meritocracy, because the discussion does not stick to debatable facts. A second pitfall is the "more talk / less doing" trap, which many in open source would recognize: someone who refuses to participate in the solution, but prolongs debate endlessly or repeatedly makes complaints. The third is the "information vampire" pitfall, which is often unintentional. In that trap, someone slows down a team or a process by making too many personal requests for information. The sheer "paperwork" of keeping a vampire satisfied detracts from a team's energy to push the project forward. Lest anyone think he was only out to point fingers, Munk identified himself as often being guilty of information vampirism.
In conclusion, Munk reiterated that the core values of transparency, inclusiveness, and meritocracy are not merely nice ideals, but are a practical "win" for everyone involved in MeeGo (or any other large open source effort). The three are interrelated, but all begin with transparency, he said. 100% transparency is not practical, but he encouraged the community to hold the MeeGo project to its advertised principles, for the benefit of all sides.
On the day after his talk, I asked Munk whether he had received any feedback since the talk, and he said that he obviously hadn't offended enough people, because so far no one had complained to him about what he said. Obviously he could have gone further had he wanted to, such as pointing out specific instances of the pitfalls and shortcomings he outlined in the session, but that would not have really changed the point. My impression of the MeeGo community is that everyone is well aware of what parts of the project are working smoothly and where there are hiccups, strained communications, and difficult processes. Fifteen months is not a lot of time to roll out a large, multi-faceted distributed project like MeeGo, especially one that targets multiple audiences (such as hardware makers, developers, and consumers).
Perhaps the best thing about Munk's talk was his ability to systematically break down the issues that various pieces of the MeeGo machine are struggling with. By shining some light directly on the problems, there is less chance of argument and more room for repairing the cracks — which is clearly what everyone wants. Other large projects could probably take a cue from Munk's taxonomy, and take a look at their own teams, subprojects, and processes: regardless of the target audience of the project, most in the free software world share the same set of goals.
OpenOffice.org and contributor agreements
As part of an interview in LWN, Mark Shuttleworth is quoted as wanting the community to view the use of contribution licensing agreements (CLAs) as a necessary prerequisite for open source growth and the refusal by others to donate their work under them as a barrier to commercial success. He implies that the use of a CLA by Sun for OpenOffice.org was a good thing. Mark is also quoted accusing The Document Foundation (TDF) of somehow destroying OpenOffice.org (OO.o) because of its decision to fork rather than contribute changes upstream under the CLA. But I'd suggest a different view of both matters.
[ Disclosures: Having been directly involved while at Sun in some of the events I discuss, I should say that everything recounted here is a matter of public record. While I'd love to explain some of the events at greater length, I'm not free to do so - such is the penalty of employment contracts. Suffice to say that just because a company takes certain actions, it doesn't mean everyone agrees with them, even senior people. I should also disclose that I was honored to be granted membership in TDF recently, although I'm not speaking on their official behalf here. ]
My experience of CLAs has led me to conclude that they prevent competitors from collaborating over a code base. The result is that any project which can only be monetized in its own right rather than as a part of a greater collective work will have a single commercial contributor if a CLA is in place. Mark would like us all to accept contributor agreements so that a legacy business model based on artificial scarcity can be used to make a profit that would otherwise be unavailable without the unrewarded donation of the work of others - who are not permitted the same opportunity. I've covered this in greater detail in my article "Transparency and Privacy".
While pragmatically there may be isolated circumstances under which CLAs are the lesser of evils, their role in OO.o has contributed more to its demise than offered it hope. By having a CLA for OpenOffice.org, Sun guaranteed that core contributions by others would be marginal and that Sun would have to invest the vast majority of the money in its development. That was a knowing, deliberate choice made in support of the StarOffice business models.
Despite that, others did work on the code - a testimony to the importance of OO.o. The most significant community work was localization, and the importance of the work done by the many teams globally making OpenOffice.org work for them can't be overstated. In the core code, Novell worked inside the community, accepting the premise Mark is asserting and contributing significant value in small quantities over an extended period. Smaller contributions came from Red Hat and others. IBM worked outside the community, developing the early version of Symphony, contributing nothing back to the OO.o community, and thus preventing any network effect from being accelerated by their work.
Sun used the aggregated copyright in 2005 to modernize the licensing arrangements for OO.o, closing the dual-license loophole IBM had been using to avoid contribution. Some (me included) hoped this would bring IBM into the community, but instead Sun also used the aggregated copyright to allow IBM to continue Symphony development outside the community rather than under the CLA. Stung by this rejection of transparent community development - and in distaste at proprietary deals rather than public licensing driving contribution - Novell started to bulk up the Go-OO fork, by ceasing to give their contributions back. This all happened long before the Oracle acquisition.
The act of creating The Document Foundation and its LibreOffice project did no demonstrable harm to Oracle's business. There is no new commercial competition to Oracle Open Office (their commercial edition of OO.o) arising from LibreOffice. No contributions that Oracle valued were ended by its creation. Oracle's ability to continue development of the code was in no way impaired. Oracle's decision appears to be simply that, after a year of evaluation, the profit to be made from developing Oracle Open Office and Oracle Cloud Office did not justify the salaries of over 100 senior developers working on them both. Suggesting that TDF was in some way to blame for a hard-headed business decision that seemed inevitable from the day Oracle's acquisition of Sun was announced is at best disingenuous.
So what's left of the OpenOffice.org community now? Almost all of the people who worked on it outside of Sun/Oracle have joined together to create the Document Foundation. As far as I can tell all that's left of OO.o is the project shell, with no development work in progress because Oracle has now stood down all the staff it had working on the project, plus one or two voices trying desperately to make us believe they still have some authority to comment because they used to be involved with it.
The CLA bears a great deal of responsibility for this situation, having kept the cost of development high for Sun/Oracle and prevented otherwise willing contributors from fully engaging. There is one hope left from it though. Because of the CLA, Oracle is free to choose to donate the whole of OO.o to The Document Foundation where it can be managed by the diverse and capable community that's gathered there. If they don't do that they probably need to change the license to a weak copyleft license - the new Mozilla Public License v2 would be perfect - to allow IBM to join the main community. It's my sincere hope that they will finally put the CLA to a good use and allow it to reunite the community it has divided for so long.
Next week's weekly edition comes out June 3
Due to the US Memorial Day holiday on May 30, next week's weekly edition will be delayed by one day. That means that it will be published on Friday, June 3. Several US holidays are arranged to land on Mondays and we have finally decided to go ahead and take some of those days off. July 4 (Independence Day) and September 5 (Labor Day) are also holidays that we plan to take off this year. Hope this doesn't inconvenience anyone too much, and have a nice Monday whether you are celebrating the holiday or not.
Security
WebGL vulnerabilities
A recent report that highlighted some potential and actual security vulnerabilities in WebGL has been widely circulated. It should probably not come as a surprise that a new whiz-bang feature that is meant to allow web content to interact with complex 3D graphics hardware might lead to problems. Since it is all-but-certain that browser makers will be enabling—in many cases already have enabled—this feature, it will be interesting to see how the security holes will be filled as they make their way from theoretical to actual vulnerabilities.
WebGL is a low-level 3D graphics API that is based on the OpenGL ES 2.0 standard implemented by libraries for most fairly recent 3D graphics cards. For browsers with WebGL support, the HTML canvas element can be used to display accelerated graphics in the browser that can be controlled via JavaScript. For gaming, exploring 3D landscapes, undoubtedly annoying advertisements, and plenty of other uses, WebGL will be a welcome addition to web browsers. But allowing arbitrary content flowing across the internet to interact with complex hardware certainly seems like it might lead to security problems.
Graphics hardware typically consists of one or more graphics processing units (GPUs) that are accessed through a driver. The driver provides some standardized interface to higher-level libraries that implement a graphics standard like OpenGL. But, in order to provide the most flexibility for graphics programmers, much of what gets handed off to the libraries are special-purpose programs called shaders. Shaders are written to handle the complexities of the graphics to be rendered, and the libraries and drivers turn those programs into the proper form for the GPU(s) in the hardware.
Essentially it means that malicious web sites can craft semi-arbitrary programs to run on the hardware of the user. That alone should be enough to give one pause from a security perspective. One obvious outcome is that malicious shaders could be written to essentially monopolize the graphics hardware, to the detriment of anything else that's trying to write to the display (e.g. other windows). In the worst case, it could lead to the user having to reinitialize the graphics hardware—possibly requiring a reboot.
That kind of denial of service could be extremely annoying to users, but doesn't really directly impact the security of the desktop. It would not leak user data to the malicious site, though it could potentially result in data loss depending on what else the user was doing at the time. It is, in some ways, similar to the problems of malicious, infinitely looping JavaScript, which can lock up a browser (but not generally the whole desktop). Running browser tabs as separate processes, as Chromium does and Firefox is moving to, also mitigates the JavaScript problem to a large extent.
But that's not the only problem that the report from Context, a UK-based security consulting company, outlined. Another potential attack is cross-domain image theft. When canvas elements include cross-domain content, say an image from another site, an "origin-clean" flag gets cleared in the browser, which disables some of the JavaScript functions that might be used to extract the potentially sensitive image data from the other domain. However, a malicious canvas element could create shaders that will effectively leak the image contents.
The attack relies on a technique long-used to extract cryptographic keys by doing a timing analysis. If shaders were written to take longer based on the "brightness" of a pixel, JavaScript could be used to regenerate the image based on how long each pixel takes to render. It is a complicated attack to do, and finding real-world exploits using it may be somewhat convoluted, but it is a cross-domain vulnerability. An example that Context gives is a victim site that puts up a specific profile image based on the session information stored in a browser cookie for the site, which gets sent to the site as part of the request for the image. The malicious site that included the victim image couldn't get at the cookie, but could infer the logged-in user by comparing the displayed image to a list of known "interesting" profile images.
Mozilla hacker JP Rosevear responded to Context's report noting that the cross-domain image theft problem is real, even though it may be difficult to exploit in practice: "While it is not immediately obvious that it can be exploited in a practical attack right now, experience in security shows that this is a matter of when, not if." His suggested fix is the cross-origin resource sharing (CORS) proposal that would allow sites to explicitly list which other sites can include their content.
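CORS works through ordinary HTTP response headers; as a rough illustration (the origin shown here is hypothetical), a server willing to let a particular site use one of its images would answer requests for that image with a header along these lines:

    Access-Control-Allow-Origin: https://gallery.example.com

A CORS-aware browser could then treat the image as readable by scripts from that origin rather than clearing the origin-clean flag for canvases that use it.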
The denial of service problem is harder, though. The only real defense against maliciously written shaders is to validate that code in ways that, hopefully, eliminate bad shaders. That, of course, is something of an arms race, so Rosevear also suggests that some kind of user confirmation before displaying WebGL content may be required.
There are also some efforts afoot to try to handle denial of service issues in the hardware itself. GL_ARB_robustness (and GL_ARB_robustness_2) are mechanisms that the hardware makers can use to detect these kinds of problems and reset the hardware when they occur. As Context's FAQ indicates, though, that may not be a real solution in the long term.
From a security standpoint, allowing any random web site to send code that is more-or-less directly executed on system hardware is always going to be somewhat problematic. Rosevear points out that there is separation between the components of WebGL that should provide some isolation: "Nevertheless, claims of kernel level hardware access via WebGL are speculative at best since WebGL shaders run on the GPU and shader compilers run in user mode." That assumes that the libraries and drivers don't have exploitable bugs of their own, of course.
As Rosevear notes, "significant attacks against [WebGL] may be possible". This is clearly an area that bears watching.
Brief items
Security quotes of the week
So I'm sorry for throwing cold water on you guys, but the whole "let's come up with a new security gadget" thing just makes me go "oh no, not again".
Expression is not like that. The notion that expression is like that is entirely a consequence of taking a system of expression and transporting it around, which was necessary before there was the Internet, which has the capacity to do this infinitely at almost no cost.
Successful timing attacks on elliptic curve cryptography (The H)
The H reports on a successful timing attack against the Elliptic Curve digital signature algorithm in OpenSSL:
No working countermeasures have so far been found; the US-CERT advises that ECDSA should no longer be used for digital signatures. To prevent this type of attack, the researchers recommend implementing time-independent functions for operations on elliptic curves.
OneSwarm: Privacy preserving peer-to-peer data sharing
A BitTorrent-compatible peer-to-peer application with privacy preservation features, called OneSwarm, has released version 0.7.5. The code is available on GitHub and uses source address rewriting and SSL encryption to protect the privacy of its users. "OneSwarm is a new peer-to-peer tool that provides users with explicit control over their privacy by letting them determine how data is shared. Instead of sharing data indiscriminately, data shared with OneSwarm can be made public, it can be shared with friends, shared with some friends but not others, and so forth. We call this friend-to-friend (F2F) data sharing."
New vulnerabilities
apr: denial of service
Package(s): apr
CVE #(s): CVE-2011-1928
Created: May 20, 2011    Updated: August 2, 2011
Description: From the Mandriva advisory: It was discovered that the fix for CVE-2011-0419 under certain conditions could cause a denial-of-service (DoS) attack in APR.
cyrus-imapd: man-in-the-middle attack
Package(s): cyrus-imapd
CVE #(s): CVE-2011-1926
Created: May 24, 2011    Updated: August 15, 2011
Description: From the Mandriva advisory: The STARTTLS implementation in Cyrus IMAP Server before 2.4.7 does not properly restrict I/O buffering, which allows man-in-the-middle attackers to insert commands into encrypted sessions by sending a cleartext command that is processed after TLS is in place, related to a plaintext command injection attack, a similar issue to CVE-2011-0411.
feh: remote code execution
Package(s): feh
CVE #(s): CVE-2010-2246
Created: May 25, 2011    Updated: October 14, 2011
Description: An attacker can cause the feh image viewer to execute arbitrary code if the user can be made to open a specially-crafted URL.
gnome-screensaver: lock bypass
Package(s): gnome-screensaver
CVE #(s): CVE-2010-0285
Created: May 19, 2011    Updated: May 25, 2011
Description: From the Mandriva advisory: gnome-screensaver 2.14.3, 2.22.2, 2.27.x, 2.28.0, and 2.28.3, when the X configuration enables the extend screen option, allows physically proximate attackers to bypass screen locking, access an unattended workstation, and view half of the GNOME desktop by attaching an external monitor (CVE-2010-0285).
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-0999 CVE-2011-1023
Created: May 19, 2011    Updated: July 14, 2011
Description: From the Red Hat advisory:
* A flaw was found in the Linux kernel's Transparent Huge Pages (THP) implementation. A local, unprivileged user could abuse this flaw to allow the user stack (when it is using huge pages) to grow and cause a denial of service. (CVE-2011-0999, Moderate)
* A flaw was found in the transmit methods (xmit) for the loopback and InfiniBand transports in the Linux kernel's Reliable Datagram Sockets (RDS) implementation. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2011-1023, Moderate)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-1173 CVE-2011-1585 CVE-2011-1593 CVE-2011-1598 CVE-2011-1748 CVE-2011-1759 CVE-2011-1767 CVE-2011-1770 CVE-2011-1776 CVE-2011-2022
Created: May 25, 2011    Updated: November 21, 2011
Description: This set of kernel vulnerabilities includes information disclosure from the Acorn Econet protocol implementation (CVE-2011-1173), CIFS authentication bypass (CVE-2011-1585), denial of service (CVE-2011-1593, CVE-2011-1767), null pointer dereference (CVE-2011-1598, CVE-2011-1748), privilege escalation (CVE-2011-1759), remote denial of service and information disclosure (CVE-2011-1770), information disclosure via crafted storage device (CVE-2011-1776) and privilege escalation (CVE-2011-2022).
kvm: code execution
Package(s): kvm
CVE #(s): CVE-2011-1751
Created: May 19, 2011    Updated: July 7, 2011
Description: From the openSUSE advisory: By causing a hot-unplug of the pci-isa bridge from within guests the qemu process could access already freed memory. A privileged user inside the guest could exploit that to crash the guest instance or potentially execute arbitrary code on the host (CVE-2011-1751).
mediawiki: multiple vulnerabilities
Package(s): mediawiki
CVE #(s): CVE-2011-1765 CVE-2011-1766
Created: May 23, 2011    Updated: May 25, 2011
Description: From the Fedora advisory: Mediawiki 1.16.5 was released to correct two security flaws: The first issue is yet another recurrence of the Internet Explorer 6 XSS vulnerability that caused the release of 1.16.4. It was pointed out that there are dangerous extensions with more than four characters, so the regular expressions we introduced had to be updated to match longer extensions. (CVE-2011-1765) The second issue allows unauthenticated users to gain additional rights, on wikis where $wgBlockDisablesLogin is enabled. By default, it is disabled. The issue occurs when a malicious user sends cookies which contain the user name and user ID of a "victim" account. In certain circumstances, the rights of the victim are loaded and persist throughout the malicious request, allowing the malicious user to perform actions with the victim's rights. (CVE-2011-1766)
opera: memory corruption
Package(s): opera
CVE #(s): (none assigned)
Created: May 20, 2011    Updated: June 24, 2011
Description: From the Opera advisory: Framesets allow web pages to hold other pages inside them. Certain frameset constructs are not handled correctly when the page is unloaded, causing a memory corruption. To inject code, additional techniques will have to be employed.
pure-ftpd: denial of service
Package(s): pure-ftpd
CVE #(s): CVE-2011-0418
Created: May 19, 2011    Updated: June 21, 2011
Description: From the Mandriva advisory: A denial-of-service (DoS) attack related to glob brace expansion was discovered and fixed in pure-ftpd (CVE-2011-0418).
ruby: arbitrary code execution
Package(s): ruby
CVE #(s): CVE-2011-0188
Created: May 23, 2011    Updated: August 15, 2011
Description: From the Mandriva advisory: The VpMemAlloc function in bigdecimal.c in the BigDecimal class in Ruby does not properly allocate memory, which allows context-dependent attackers to execute arbitrary code or cause a denial of service (application crash) via vectors involving creation of a large BigDecimal value within a 64-bit process, related to an integer truncation issue.
syslog-ng: denial of service
Package(s): syslog-ng
CVE #(s): (none assigned)
Created: May 25, 2011    Updated: May 25, 2011
Description: syslog-ng suffers from a minimally-described "PCRE input validation error" that can enable a denial of service attack.
thunar: denial of service
Package(s): thunar
CVE #(s): CVE-2011-1588
Created: May 20, 2011    Updated: May 31, 2011
Description: From the openSUSE advisory: Due to a format string error, thunar could crash when copying and pasting a file name containing format characters.
tigervnc: password disclosure
Package(s): tigervnc
CVE #(s): CVE-2011-1775
Created: May 25, 2011    Updated: June 15, 2011
Description: The vncviewer program can be made to send a password to a malicious server without first verifying its X.509 certificate.
viewvc: resource-consumption attack
Package(s): viewvc
CVE #(s): CVE-2009-5024
Created: May 24, 2011    Updated: May 31, 2011
Description: From the CVE entry: ViewVC before 1.1.11 allows remote attackers to bypass the cvsdb row_limit configuration setting, and consequently conduct resource-consumption attacks, via the limit parameter, as demonstrated by a "query revision history" request.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The 2.6.39 kernel is out; it was, as predicted, released by Linus immediately after last week's Edition was published. He indicated some uncertainty about whether another -rc release would have been appropriate.
Prominent features in this release include IPset, the media controller subsystem, a couple of new network flow schedulers, the block plugging rework, the long-awaited removal of the big kernel lock, and more. See the KernelNewbies 2.6.39 page, the LWN merge window summaries (part 1, part 2, and part 3) and Thorsten Leemhuis's summary on The H for more information about this release.
Stable updates: 2.6.38.7 was released on May 21, followed by 2.6.32.41 and 2.6.33.14 on May 23. Each contains the usual list of important fixes.
Quotes of the week
- Being a great hacker does not imbue moral or ethical characteristics.
- Being a great coder doesn't mean you're not a crackpot.
- Working on a great project doesn't mean you share my motivations about it.
This wasn't obvious to me, and it seems it's not obvious to others.
Linux wireless support education videos
Wireless networking hacker Luis Rodriguez has put together a set of videos on how the Linux 802.11 layer works aimed at developers writing and supporting wireless drivers. "If you have engineers who need to support the 802.11 Linux subsystem you at times see yourself needing to educate each group through some sessions. In hopes of reusing educational sessions I've decided to record my own series and post it on YouTube." Topics covered include overviews of the 802.11 subsystem, how the development process works, driver debugging, and more.
2.8.0?
Linus's message warning that this merge window may be a little shorter than usual ends with an interesting postscript: "The voices in my head also tell me that the numbers are getting too big. I may just call the thing 2.8.0. And I almost guarantee that this PS is going to result in more discussion than the rest, but when the voices tell me to do things, I listen."
Seccomp filters: permission denied
Last week's article on the idea of expanding the "secure computing" facility by integrating it with the perf/ftrace mechanism mentioned the unsurprising fact that the developers of the existing security module mechanism were not entirely enthusiastic about the creation of a new and completely different security framework. Since then, discussion of the patch has continued, and opposition has come from an entirely different direction: the tracing and instrumentation developers.
Peter Zijlstra started off the new discussion with a brief note reading: "I strongly oppose to the perf core being mixed with any sekurity voodoo (or any other active role for that matter)." Thomas Gleixner jumped in with a more detailed description of his objections. In his view, adding security features to tracepoints will add overhead to the tracing system, make it harder to change things in the future, and generally mix tasks which should not be mixed. It would be better, he said, to keep seccomp as a separate facility which can share the filtering mechanism once a suitable set of internal APIs has been worked out.
Ingo Molnar, a big supporter of this patch, is undeterred; his belief is that more strongly integrated mechanisms will create a more powerful and useful tool. Since the security decisions need to be made anyway, he would like to see them made using the existing instrumentation to the highest level possible. That argument does not appear to be carrying the day, though; Peter, for his part, remained opposed.
As of this writing, that's where things stand. Meanwhile, the expanded secure computing mechanism - which didn't use perf in its original form - will miss this merge window and has no clear path into the mainline. Given that Linus doesn't like the original idea either, it's not at all clear that this functionality has a real future.
Kernel development news
What's coming in $NEXT_KERNEL_VERSION, part 1
As of this writing, some 5400 non-merge changesets have been pulled into the mainline kernel for the next release. The initial indications are that this development cycle will not have a huge number of exciting new features, but there are still some interesting additions. Among the user-visible changes are the following:
- There are two new POSIX clock types: CLOCK_REALTIME_ALARM and
CLOCK_BOOTTIME_ALARM; they can be used to set timers that
will wake the system from a suspended state. See this article for more information on
these new clocks (a brief usage sketch appears after this list).
- The Quick Fair Queue
packet scheduler has been added to the network stack.
- The just-in-time compiler for BPF packet
filters has been merged; only x86-64 is supported for now.
- There is a new networking system call:
int sendmmsg(int fd, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags);
It is the counterpart to recvmmsg(), allowing a process to send multiple messages with a single system call (a usage sketch appears after this list).
- The ICMP sockets feature has been
merged; its main purpose is to allow unprivileged programs to send
echo-request datagrams.
- Two new sysctl knobs allow the capabilities given to user-mode helpers
invoked by the kernel to be restricted; see the
commit for details.
- The tmpfs filesystem has gained support for extended attributes.
- The Xen block backend driver (allowing guests to export block devices
to other guests) has been merged.
- New hardware support includes:
- Systems and processors:
Netlogic XLR/XLS MIPS CPUs,
Lantiq MIPS-based SOCs,
PowerPC A2 and "wire speed processor" CPUs, and
Armadeus APF9328 development boards.
- Audio/video: Philips TEA5757 radio tuners,
Digigram Lola boards,
Apple iSight microphones,
Maxim max98095 codecs,
Wolfson Micro WM8915 codecs,
Asahi Kasei AK4641 codecs,
HP iPAQ hx4700 audio interfaces,
NXP TDA18212 silicon tuners,
Micron MT9V032 sensors,
Sony CXD2820R DVB-T/T2/C demodulators,
RedRat3 IR transceivers,
Samsung S5P and EXYNOS4 MIPI CSI receivers, and
Micronas DRXD tuners.
- Input:
PenMount dual touch panels,
Maxim max11801 touchscreen controllers,
Analog Devices ADP5589 I2C QWERTY keypad and I/O expanders, and
Freescale MPR121 Touchkey controllers.
- Network:
Marvell "WiFi-Ex" wireless adapters (SD8787 initially) and
Marvell 8787 Bluetooth interfaces.
- USB:
Renesas USBHS controllers,
Samsung S5P EHCI controllers,
Freescale USB OTG transceivers, and
Samsung S3C24XX USB high-speed controllers.
- Miscellaneous: CARMA DATA-FPGA programmers, Broadcom's "advanced microcontroller bus architecture," Freescale SEC4/CAAM security engines, Samsung S5PV210 crypto accelerators, Maxim MAX16065, MAX16066, MAX16067, MAX16068, MAX16070, and MAX16071 system managers, Maxim MAX6642 temperature sensors, TI UCD90XXX system health controllers, TI UCD9200 system controllers, Analog Devices ADM1275 hot-swap controllers, Analog Devices AD5504, AD5501, AD5760, and AD5780 DACs, Analog Devices AD7780 and AD7781 analog to digital convertors, Analog Devices ADXRS450 Digital Output Gyroscopes, Xilinx PS UARTs, TAOS TSL2580, TSL2581, and TSL2583 light-to-digital converters, Intel "management engine" interfaces, nVidia Tegra embedded controllers, and IEEE 1588 (precision time protocol) clocks.
Also added to the staging tree is the user-space support code for the USB/IP subsystem which allows a system to "export" its USB devices over the net.
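To make the alarm-clock item above a bit more concrete, here is a minimal user-space sketch (not code from any existing project; error handling is omitted, CAP_WAKE_ALARM is required, and older C libraries may not define CLOCK_REALTIME_ALARM, in which case the constant must come from the kernel headers):

    #include <signal.h>
    #include <time.h>

    /* Arm a ten-minute timer on CLOCK_REALTIME_ALARM; if the system is
     * suspended when the timer expires, it is woken back up. */
    static timer_t make_wakeup_timer(void)
    {
        struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                                .sigev_signo  = SIGALRM };
        struct itimerspec its = { .it_value = { .tv_sec = 600 } };
        timer_t timer;

        timer_create(CLOCK_REALTIME_ALARM, &sev, &timer);
        timer_settime(timer, 0, &its, NULL);
        return timer;
    }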
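Similarly, here is a hedged sketch of sendmmsg() batching two datagrams over an already-connected UDP socket; it assumes a C library wrapper for the new system call (otherwise syscall(__NR_sendmmsg, ...) would be needed):

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <string.h>

    /* Queue two messages with a single system call; the return value is
     * the number of messages actually sent, or -1 on error. */
    static int send_two(int fd)
    {
        struct iovec iov[2] = {
            { .iov_base = "first",  .iov_len = 5 },
            { .iov_base = "second", .iov_len = 6 },
        };
        struct mmsghdr msgs[2];

        memset(msgs, 0, sizeof(msgs));
        msgs[0].msg_hdr.msg_iov = &iov[0];
        msgs[0].msg_hdr.msg_iovlen = 1;
        msgs[1].msg_hdr.msg_iov = &iov[1];
        msgs[1].msg_hdr.msg_iovlen = 1;

        return sendmmsg(fd, msgs, 2, 0);
    }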
Changes visible to kernel developers include:
- Prefetching is no longer used in linked list and hlist traversal;
this may be the
beginning of a much more extensive program to remove explicit prefetch
operations. See this article for more
information on the prefetch removal.
- There is a new strtobool() function for turning user-supplied
strings into boolean values:
int strtobool(const char *s, bool *res);
Anything starting with one of [yY1] is considered to be true, while strings starting with one of [nN0] are false; anything else gets an -EINVAL error (a usage sketch appears after this list).
- There is a whole series of new functions for converting user-space
strings to kernel-space integer values; all follow this pattern:
int kstrtol_from_user(const char __user *s, size_t count, unsigned int base, long *res);
These functions take care of safely copying the string from user space and performing the integer conversion (an example appears after this list).
- The kernel has a new generic binary search function:
void *bsearch(const void *key, const void *base, size_t num, size_t size, int (*cmp)(const void *key, const void *elt));
This function will search for key in an array starting at base containing num elements of the given size (an example appears after this list).
- The use of threads for the handling of interrupts on specific lines
can be controlled with irq_set_thread() and
irq_set_nothread().
- The static_branch() interface for
the jump label mechanism has been merged.
- The function tracer can now support multiple users with each tracing a
different set of functions.
- The alarm timer mechanism - which can set timers that fire even if the
system is suspended - has been merged.
- An object passed to kfree_rcu() will be handed to
kfree() after the next read-copy-update grace period. There
are a lot of RCU callbacks which only call kfree(); it should be possible to replace those with kfree_rcu() calls (a before-and-after sketch appears after this list).
- The -Os (optimize for size) option is no longer the default for kernel
compiles; the associated costs in code quality were deemed to be too
high. Linus said: "I still happen to believe that I$ miss costs are a major thing, but sadly, -Os doesn't seem to be the solution. With or without it, gcc will miss some obvious code size improvements, and with it enabled gcc will sometimes make choices that aren't good even with high I$ miss ratios."
- The first rounds of ARM architecture cleanup patches have gone in. A number of duplicated functionalities have been consolidated, and support for a number of (probably) never-used platform and board configurations has been removed.
- The W= parameter to kernel builds now takes values from 1 to 3. At the first level, only warnings deemed to have a high chance of being relevant are emitted; a full kernel build generates "only" 4800 of them. At W=3, developers get a full 86,000 warnings to look at. Note that if you want all of the warnings, you need to say W=123.
The merge window for this development cycle is likely to end on May 29, just before Linus boards a plane for Japan. At that time, presumably, we will learn what the next release will be called; Linus has made it clear that he thinks the 2.6.x numbers are getting too high and that he thinks it's time for a change. Tune in next week for the conclusion of this merge window and the end of kernel version number suspense.
Kernel address randomization
Last week's Kernel Page included a brief item about the hiding of kernel addresses from user space. This hiding has come under fire from a number of developers who say that it breaks things (perf, for example) and that it does not provide any real additional security. That said, there does seem to be a consensus around the idea that it's better if attackers don't know where the kernel keeps its data structures. As it turns out, there might be a better way to do that than simply hiding pointer values.
There is no doubt that having access to the layout of the kernel in memory is useful to attackers. As Dan Rosenberg put it:
The hiding of kernel addresses is meant to deprive attackers of that extra information, making their task harder. One big problem with that approach is that most systems out there are running stock distribution kernels. Getting the needed address information from the distributor's kernel package is not a particularly challenging task. So, on these systems, there is no real mystery about the layout of the kernel, regardless of whether pointer values are allowed to leak to user space or not.
While all of this was being discussed, another idea came out: why not randomize the location of the kernel in memory at boot time? Address space layout randomization has been used to resist canned attacks for a long time, but the kernel takes no such measure for itself. Given that the kernel image is relocatable, there is no real reason why it always needs to be loaded at the same address. If the kernel calculated a different offset for itself at every boot, it could subtract that offset from pointer values before passing them to user space. Those pointers could then be used by tools like perf, but they would no longer be helpful for anybody seeking to overwrite kernel data structures.
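The idea can be sketched in a few lines; nothing below comes from an actual patch, and the names are invented for illustration:
/* hypothetical per-boot random offset chosen early in boot */
extern unsigned long kaslr_offset;

/* pointers shown to user space (perf, /proc, oops output) would be
   "unslid" by the offset, so they remain useful for profiling while
   revealing nothing about the kernel's real load address */
static inline unsigned long unslide_kernel_addr(unsigned long addr)
{
        return addr - kaslr_offset;
}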
Dan has been looking into kernel-space randomization with some success; it turns out that simply relocating the kernel is not that hard. That said, he has run up against a few potential problems. The first of those is that there is very little entropy available at the beginning of the boot process, so the generation of a sufficiently random base address for the kernel is not entirely straightforward. It seems that enough bits of entropy can be derived from the real-time clock and time stamp counter to make it hard for an attacker to derive the base address later on, but a real random number would be better.
Next, as Linus pointed out, the kernel is not infinitely relocatable. There are a number of alignment requirements which constrain the kernel's placement, so, according to Linus, there is a maximum of 8-12 bits of randomization available. That means that an exploit attempt could find the right offset after a maximum of a few thousand tries. Given that computers can try things very quickly, that does not give a site administrator much time to respond.
As others pointed out, though, that amount of randomness is probably enough. Failed exploit attempts have a high probability of generating a kernel oops; even if an administrator does not notice the oops immediately, it should come to their attention at some point. So the ability to stealthily take over a system is gone. Beyond that, failed exploits may well take down the system entirely (especially if, as is the case with many RHEL systems, the "panic on oops" flag is set) or leave it in a state where further exploit attempts cannot work. There is, it seems, a real advantage to forcing an attacker to guess.
That advantage evaporates, though, if an attacker can somehow figure out what offset a given system used at boot time. Dan noticed one way that could happen: the unprivileged SIDT instruction can be used to locate the system's interrupt descriptor table. That location could, in turn, be used to calculate the kernel's base offset. Dynamic allocation of the table can solve that problem at the cost of messing with some tricky very-early-boot code. There could be other advantages to dynamically allocating the table, though; if the table were put into the per-CPU area, it might make the system a little more scalable.
So this problem can be solved, but there will, beyond doubt, be other places where it will be possible for an attacker to obtain a real kernel-space address. There are simply too many ways in which that information might leak into user space. Plugging all of those leaks looks like one of those long-term tasks that is never really done. It may, however, be possible to get close enough to done that attackers will not be able to count on knowing the true location of the kernel in a running system. That may be a bit of security through obscurity that is worth having.
The problem with prefetch
Over time, software developers tend to learn that micro-optimization efforts are generally not worthwhile, especially in the absence of hard data pointing out a specific problem. Performance problems are often not where we think they are, so undirected attempts to tweak things to make them go faster can be entirely ineffective. Or, indeed, they can make things worse. That is a lesson that the kernel developers have just relearned.
At the kernel level, performance often comes down to cache behavior. Memory references which must actually be satisfied by memory are extremely slow; good performance requires that needed data be in a CPU cache much of the time. The kernel goes out of its way to use cache-hot memory when possible; there has also been some significant work put into tasks like reordering structures so that fields that are commonly accessed together are found in the same cache line. As a general rule, these optimizations have helped performance in measurable ways.
Cache misses are often unavoidable, but it is sometimes possible to attempt to reduce their cost. If the kernel knows that it will be accessing memory at a particular location in the near future, it can use a CPU-specific prefetch instruction to begin the process of bringing the data into cache. This instruction is made available to kernel code via the generic prefetch() function; developers have made heavy use of it. Consider, for example, this commonly-used macro from <linux/list.h>:
#define list_for_each(pos, head) \
        for (pos = (head)->next; prefetch(pos->next), pos != (head); \
                pos = pos->next)
This macro (in a number of variants) is used to traverse a linked list. The idea behind the prefetch() call here is to begin the process of fetching the next entry in the list while the current entry is being processed. Hopefully by the time the next loop iteration starts, the data will have arrived - or, at least, it will be in transit. Linked lists are known to be cache-unfriendly data structures, so it makes sense that this type of optimization can help to speed things up.
Except that it doesn't - at least, not on x86 processors.
Andi Kleen may have been the first to question this optimization when he tried to remove the prefetches from list operations last September. His patch generated little discussion, though, and apparently fell through the cracks. Recently, Linus did some profiling on one of his favorite workloads (kernel builds) and found that the prefetch instructions were at the top of the ranking. Performing the prefetching cost time, and that time was not being repaid through better cache behavior; simply removing the prefetch() calls made the build go faster.
Ingo Molnar, being Ingo, jumped in and did a week's worth of research in an hour or so. Using perf and a slightly tweaked kernel, he was able to verify that using the prefetch instructions caused a performance loss of about 0.5%. That is not a headline-inspiring performance regression, certainly, but this is an optimization which was supposed to make things go faster. Clearly something is not working the way that people thought it was.
Linus pointed out one problem at the outset: his test involved a lot of traversals of singly-linked hlist hash table lists. Those lists tend to be short, so there is not much scope for prefetching; in fact, much of the time, the only prefetch attempted used the null pointer that indicates the end of the list. Prefetching with a null pointer seems silly, but it's also costly: evidently every such prefetch on x86 machines (and, seemingly, ARM as well) causes a translation lookaside buffer miss and a pipeline stall. Ingo measured this effect and came to the conclusion that each null prefetch cost about 20 processor cycles.
Clearly, null prefetches are a bad idea. It would be nice if the CPU would simply ignore attempts to prefetch using a null pointer, but that's not how things are, so, as is often the case, one ends up trying to solve the problem in software instead. Ingo did some testing with a version of prefetch() which would only issue prefetch instructions for non-null pointers; that version did, indeed, perform better. But it still performed measurably worse than simply skipping the prefetching altogether.
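That test version presumably looked something like the sketch below (not the actual patch): only issue the prefetch instruction when the pointer is non-null.
static inline void prefetch_nonnull(const void *p)
{
        /* avoid the costly null prefetch; still measurably slower than
           doing no software prefetch at all */
        if (p)
                prefetch(p);
}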
CPU designers are well aware of the cost of waiting for memory; they have put a great deal of effort into minimizing that cost whenever possible. Among other things, contemporary CPUs have their own memory prefetch units which attempt to predict which memory will be wanted next and start the process of retrieving it early. One thing Ingo noticed in his tests is that, even without any software prefetch operations, the number of prefetch operations run by the CPU was about the same. So the hardware prefetcher was busy during this time - and it was doing a better job than the software at deciding what to fetch. Throwing explicit prefetch operations into the mix, it seems, just had the effect of interfering with what the hardware was trying to do.
Ingo summarized his results this way:
One immediate outcome from this work is that, for 2.6.40 (or whatever it ends up being called), the prefetch() calls have been removed from linked list, hlist, and sk_buff list traversal operations - just like Andi Kleen tried to do in September. Chances are good that other prefetch operations will be removed as well. There will still be a place for prefetch() in the kernel, but only in specific situations where it can be clearly shown to help performance. As with other low-level optimizations (likely() comes to mind), tossing in a prefetch because it seems like it might help is often not the right thing to do.
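After this change, the list_for_each() macro shown earlier is reduced to a plain pointer-chasing loop, roughly:
#define list_for_each(pos, head) \
        for (pos = (head)->next; pos != (head); pos = pos->next)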
One other lesson to be found in this experience is that numbers matter. Andi was right when he wanted to remove these operations, but he did not succeed in getting his patch merged. One could come up with a number of reasons why things went differently this time, starting with the fact that Linus took an interest in the problem. But it's also true that performance-oriented patches really need to come with numbers to show that they are achieving the desired effect; had Andi taken the time to quantify the impact of his change, he would have had a stronger case for merging it.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
Linux Mint goes to 11
Fedora and Ubuntu are making sweeping changes with their most recent releases, but Linux Mint is taking a more conservative approach. Though it's not quite finished yet, the release candidate of Linux Mint 11 (codenamed "Katya") offers the same GNOME 2 desktop that users have come to know and (in some cases) love.
Linux Mint should require no introduction. The distribution got its start in 2006 as an Ubuntu derivative that offered different default applications, a modified theme, and pre-installed multimedia codecs. The main edition of Linux Mint continues to be derived from Ubuntu's main release, though the project has branched out a bit over the years with additional Ubuntu-derived releases focused on KDE, LXDE, and Xfce. More recently, the project also started offering a rolling release based on Debian Testing with Mint themes and management tools, and has rebased the Xfce Mint flavor on Debian as well.
The main Ubuntu-flavored release continues to be the most popular. According to the most recent statistics released in April 2011 (scroll to the bottom of the page for the statistics), the Mint 10 release has more than 52% of users, the Mint 9 release (which is an LTS release) has about 22%, and the Debian-based LMDE has almost 9% of users. Older Mint releases account for the rest — the desktop used is not indicated in the statistics.
What's changed in 11
![[Desktop]](https://static.lwn.net/images/2011/mint-desktop-sm.png)
The release candidate was announced on May 9, and includes a few significant changes since the last release, both to Mint-specific applications and in the default application selection. Mint has followed Ubuntu in replacing Rhythmbox with Banshee as the default music player, and in switching to LibreOffice as the default office suite. Rather than adopting Shotwell, which replaced F-Spot in Ubuntu 11.04, the Katya release uses gThumb. The social media craze seems to have died out at Mint headquarters: Gwibber is no longer installed by default, and the microblogging client is not replaced by any of the alternatives.
Perhaps a harbinger of things to come, the Desktop Settings tool in Mint is being made "desktop agnostic" in 11, and the release notes indicate that it will be extended to offer settings for KDE, LXDE, Fluxbox, and Xfce users in future editions of Mint. The Settings tool allows users to change a few things about the desktop's behavior that aren't exposed through the usual GNOME tools — for instance, turning on the infamous "wireframe dragging" that GNOME developers insisted on hiding in Gconf. A small, but welcome, change in this release is the ability to turn off the fortune quotes, on by default, that users see at console login or when opening a terminal. It has always been possible to kill these by editing /etc/bash.bashrc, but that's not particularly obvious to much of Mint's user base.
![[Software Manager]](https://static.lwn.net/images/2011/mint-swmanager-sm.png)
The Software Manager that comes with Mint has also had a minor face lift and a few features have been added that provide more detail about what will be installed when users choose to install a package. The Mint Software Manager is fairly slick, and holds its own next to Ubuntu's — it might be a better choice for multi-distribution projects seeking a unified front-end for software installation, given that it's GPLv2 and Mint doesn't have an onerous contributor's agreement required to work on it.
![[Giver]](https://static.lwn.net/images/2011/mint-giver-sm.png)
One application that makes an appearance for the first time in 11 is Giver, a file-sharing application that uses Avahi (a libre implementation of Zeroconf) to discover other Giver clients on the same network. You simply start the application, and anyone on your local network also running Giver can share files with, or receive files from, your machine just by clicking the target user and selecting the files and folders to share. It's a lightweight file-sharing client that makes a lot of sense for users who are in a meeting or at an event, rather than using a more cumbersome centralized file-sharing service.
Overall, the Katya release is not that different from Linux Mint 10 — but quite different from its Ubuntu 11.04 upstream, at least where the desktop is concerned. It has all the obligatory package updates (Firefox, GIMP, etc.) but doesn't look or feel much different than Linux Mint 10 at all.
Decision time for Mint
The main Mint release has long been based on Ubuntu with some fairly minimal changes — the addition of codecs that aren't shipped with Ubuntu, Mint-specific management tools and themes, and a slightly different selection of default applications. The Katya release is the first to feature what amounts to a completely different desktop environment than Ubuntu.
The GNOME packages used for the default desktop in Mint 11 are still part of Ubuntu 11.04. However, with 11.10 Ubuntu will be removing the "classic" GNOME desktop fallback. This leaves the Mint team with a handful of options — maintain GNOME 2.32 for another release, embrace Unity or GNOME Shell, or switch to another desktop like Xfce. Mint has an Xfce-based release as well, but it is a rolling release based on Debian, not on Ubuntu.
So what is Mint going to do? I asked Linux Mint founder Clement Lefebvre by email, and he says that it's up in the air:
For 11, Lefebvre says sticking with GNOME 2 was the right choice. He said that the main challenge was "the gnome settings daemon not working as well as before and a multitude of regressions occurring in Compiz." Though GNOME 2 won't be supported in the future, he says it's still a modern desktop and (as many users have pointed out compared to GNOME 3) "extremely mature".
For Mint, maturity counts more than whiz-bang new features. According to Lefebvre, the main goal is to "provide the desktop people have come to enjoy with Linux Mint", and that could mean moving to GNOME 3, sticking with GNOME 2, or having a fork of either of those desktops. "We know precisely what we want, we're keeping a close eye on the new desktop alternatives (Gnome 3, Unity) and as usual we'll choose what works best for us."
Linux Mint has a very small group of contributors. Lefebvre is the only person working on Mint full time, though he says the project was close to getting a second contributor who had to bow out "due to personal circumstances". Lefebvre does say that there is a lot he'd like to do:
There's a lot of R&D and development planned for the future, we want to test our own base, we're looking at snapshot/restoration scenarios and local network communications in particular, and we've got some really ambitious projects in mind, but we won't be able to tackle these with limited resources. There's a strong community behind us, the support we're getting is amazing and we've got the power to push further. I personally look forward to extending the team and getting more developers working full time for Linux Mint in the near future.
Extending the team might be difficult without a few changes. The planning and release process for Linux Mint is a bit opaque. Mint does not have a mailing list for developer discussions, so any and all of the development discussions (such as they are) take place on the Linux Mint forum. And very little is discussed on the forum. Those who would care to contribute to Linux Mint are directed to the Get Involved page, which emphasizes monetary donations, spreading the word about Mint, bug reporting, and providing forum support to other Mint users. Interested contributors can follow along on GitHub, but recruiting significant contributors of code does not seem to be a priority for the project.
Lefebvre acknowledges that "it can be confusing for people when it comes to getting involved." He points to the forums and other community features on the site for contributing ideas, but says "things happen when people come and talk to us. Whether it's on IRC, or directly by email, the best way to get something done is by direct communication." He continues:
If the idea is good we can discuss it and implement it. If it's bad, we can acknowledge it and move on to something else. Either way things progress fast once we start talking about it, and it's also in these circumstances that we get people involved and welcome them in the team. If somebody comes to us not only with an idea but also with the skills to implement it, in most cases we'll talk to that person and try to make him/her implement his/her own idea for inclusion in the upcoming release. This collaboration with the person who took the initiative and the work we do together is often the start of a great relationship and often leads to having this person join the development team.
Compared to projects like Ubuntu and Fedora, this requires a lot more legwork for users to make the leap to contributors. Still, Mint might just find a few contributors ready to jump in with the 11 release. While some users have been happy with Unity and GNOME Shell so far, quite a few would prefer to stick with the tried-and-true GNOME experience. Mint 11 might be a good refuge for those users, and this would be a good opportunity for the Mint project to pick up more hands that can help.
Compared to Ubuntu 11.04 or the Fedora 15 release scheduled for this week, Mint is a fairly modest upgrade. For many Linux users, that's all that's really wanted.
Brief items
Distribution quotes of the week
In my ever so humble, non-lawyer opinion, I think a Fedora ambassador has a much better chance of getting hit by a comet while on a date with Angelina Jolie.
Fedora 15 released
The Fedora 15 release is out. The announcement contains a long list of new features, including GNOME 3, LibreOffice, better btrfs support, KDE 4.6, GCC 4.6, systemd, and more. "Fedora 15 now includes the Robotics Suite, a collection of packages that provides a usable out-of-the-box robotics development and simulation environment. This ever-growing suite features up-to-date robotics frameworks, simulation environments, utility libraries, and device support, and consolidates them into an easy-to-install package group."
MeeGo 1.2 released
The MeeGo 1.2 release has been announced. "This release provides a solid baseline for device vendors and developers to start creating software for various device categories on Intel Atom and ARMv7 architectures." Enhancements include better telephony support, updated user experience modules, and a tablet developer preview.
Red Hat releases RHEL 6.1
Red Hat has announced the release of Red Hat Enterprise Linux 6.1. "Red Hat Enterprise Linux 6.1 enhancements provide customers with improvements in system reliability, scalability and performance, coupled with support for upcoming system hardware. Red Hat Enterprise Linux 6.1 also delivers patches and security updates, while maintaining application compatibility and OEM/ISV certifications." In addition to the press release, the What's New page has some more details.
Distribution News
Fedora
Cooperative Bug Isolation for Fedora 15
The Cooperative Bug Isolation Project (CBI) is now available for Fedora 15. "CBI (http://research.cs.wisc.edu/cbi/) is an ongoing research effort to find and fix bugs in the real world. We distribute specially modified versions of popular open source software packages. These special versions monitor their own behavior while they run, and report back how they work (or how they fail to work) in the hands of real users like you. Even if you've never written a line of code in your life, you can help make things better for everyone simply by using our special bug-hunting packages."
Fedora town hall meetings
The Fedora Project will be holding IRC town hall meetings with the board election candidates on May 30 and the FESCo election candidates on May 31. The candidates' responses to the questionnaire are available.
Gentoo Linux
Timetable For 2011 Gentoo Council Election
The schedule for the Gentoo Council election is out. Nominations open June 3 and close on June 17. Voting opens June 20 and closes July 4.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 406 (May 23)
- Fedora Weekly News Issue 275 (May 18)
- openSUSE Weekly News, Issue 176 (May 21)
Singh: RHEL 6.1 and CentOS 6.x
Karanbir Singh looks at plans for CentOS 6.x. "Most people will want to know how this [RHEL 6.1 release] impacts CentOS and the CentOS-6 plans. We are, at this time, on course to deliver CentOS-6 within the next couple of weeks. We will carry on with those plans as is, and deliver a 6.0 release and then goto work on 6.1. I am fairly confident that we can get to a 6.1 release within a few weeks of the 6.0 set being finalised. Partially due to the automation and the testing process's being put into place to handle the entire CentOS-6 branch."
Page editor: Rebecca Sobol
Development
Scale Fail (part 2)
In Part One of Scale Fail, I discussed some of the major issues which prevent web sites and applications from scaling. As was said there, most scalability issues are really management issues. The first article covered a few of the chronic bad decisions — or "anti-patterns" — which companies suffer from, including compulsive trendiness, lack of metrics, "barn door troubleshooting", and single-process programming. In Part Two, we'll explore some more general failures of technology management which lead to downtime.
No Caching
"We have memcached installed somewhere."
"How is it configured? What data are you caching? How are you invalidating data?"
"I'm ... not sure. We kind of leave it up to Django."
I'm often astonished at how much money web companies are willing to spend on faster hardware, and how little effort on simple things which would make their applications much faster in a relatively painless way. For example, if you're trying to scale a website, the first thing you should be asking yourself is: "where can I add more useful caching?"
While I mention memcached above, I'm not just talking about simple data caching. In any really scalable website you can add multiple levels of caching, each of them useful in their own way:
- database connection, parse and plan caching
- complex data caching and materialized views
- simple data caching
- object caching
- server web page caching
Detailing the different types of caching and how to employ them would be an article series on its own. However, every form of caching shares the ability to bring data closer to the user and make it more stateless, reducing response times. More importantly, by reducing the amount of resources required by repetitive application requests, you improve the efficiency of your platform and thus make it more scalable.
Seems obvious, doesn't it? Yet I can count on one hand the clients who were employing an effective caching strategy before they hired us.
A common mistake we see clients making with data caching is to leave caching entirely up to the object-relational mapper (ORM). The problem with this is that out-of-the-box, the ORM is going to be very conservative about how it uses the cache, only retrieving cached data for a user request which is absolutely identical, and thus lowering the number of cache hits to nearly zero. For example, I have yet to see an ORM which dealt well with caching the data for a paginated application view on its own.
The worst case I've seen of this was an online auction application where every single thing the user did ... every click, every pagination, every mouse-over ... resulted in a query to the back-end PostgreSQL database. Plus the auction widget polled the database for auction updates 30 times a second per user. This meant that each active application user resulted in over 100 queries per second to the core transactional database.
As much as a lack of caching is a very common bad decision, it's really symptomatic of a more general anti-pattern I like to call:
Scaling the Impossible Things
"Seems awfully complicated. Have you considered just caching the most common searches instead?"
"That wouldn't work. Our data is too dynamic."
"Are you sure? I did some query analysis, and 90% of your current database hits fall in one of these four patterns ..."
"I told you, it wouldn't work. Who's the CTO here, huh?"
Some things which you need for your application are very hard to scale, consuming large amounts of system resources, administration time, and staff creativity to get them to scale up or scale out. These include transactional databases, queues, shared filesystems, complex web frameworks (e.g. Django or Rails), and object-relational mappers (ORMs).
Other parts of your infrastructure are very easy to scale to many user requests, such as web servers, static content delivery, caches, local storage, and client-side software (e.g. javascript).
Basically, the more stateful, complex, and featureful a piece of infrastructure is, the more resources it's going to use per application user and the more prone it's going to be to locking — and thus the harder it's going to be to scale out. Given this, you would think that companies who are struggling with rapidly growing scalability problems would focus first on scaling out the easy things, and put off scaling the hard things for as long as possible.
You would be wrong.
Instead, directors of development seem to be in love with trying to scale the most difficult item in their infrastructure first. Sharding plans, load-balanced master-slave replication, forwarded transactional queues, 200-node clustered filesystems — these get IT staff excited and get development money flowing. Even when their scalability problems could be easily and cheaply overcome by adding a Varnish cache or fixing some unnecessarily resource-hungry application code.
For example, one of our clients had issues with their Django servers constantly becoming overloaded and falling over. They'd gone up from four to eight application servers, and were still having to restart them on a regular basis, and wanted to discuss doubling the number of application servers again, which also would require scaling up the database server. Instead, we did some traffic analysis and discovered that most of the resource usage on the Django servers was from serving static images. We moved all the static images to a content delivery network, and they were able to reduce their server count.
After a month of telling us why we "didn't understand the application", of course.
SPoF
"Through a Zeus load-balancing cluster."
"From the web servers to the middleware servers?"
"The same Zeus cluster."
"Web servers to network file storage? VPN between data centers? SSH access?"
"Zeus."
"Does everything on this network go through Zeus?"
"Pretty much, yes."
"Uh-huh. Well, what could possibly go wrong?"
SPoF, of course, stands for Single Point of Failure. Specifically, it refers to a single component which will take down your entire infrastructure if it fails, no matter how much redundancy you have in other places. It's dismaying how many companies fail to remove SPoFs despite having lavished hardware and engineering time on making several levels of their stack highly available. Your availability is only as good as your least available component.
The company in the dialog above went down for most of a day only a few weeks after that conversation. A sysadmin had loaded a buggy configuration into Zeus, and instantly the whole network ceased to exist. The database servers, the web servers, the other servers were all still running, but not even the sysadmins could reach them.
Sometimes your SPoF is a person. For example, you might have a server or even a data center which needs to be failed over manually, and only one staff member has the knowledge or login to do so. More sinister SPoFs often lurk in your development or recovery processes. I once witnessed a company try to deploy a hot code fix in response to a DDOS attack, only to have their code repository — their only code repository — go down and refuse to come back up.
A "Cascading SPoF" is a SPoF which looks like it's redundant. Here's a simple math exercise: You have three application servers. Each of these servers is operating at 80% of their capacity. What happens when one of them fails and its traffic gets load balanced onto the other two?
The answer: each of the two survivors is suddenly asked to carry 120% of its capacity, so it fails too. A component doesn't have to be the only one of its kind to be a SPoF; it just has to be the case that its failure will take the application down. If all of the components at any level of your stack are operating at near-capacity, you have a problem, because a minority server failure or even a modest increase in traffic can result in cascading failure.
Cloud Addiction
"We can't discuss a move from until the next fiscal year."
"So, you'll be wanting the $40K contract then?"
Since I put together the Ignite talk early this year, I've increasingly seen a new anti-pattern we call "Cloud addiction". Several of our clients are refusing to move off of cloud hosting even when it is demonstrably killing their businesses. This problem is at its worst on Amazon Web Services (AWS), because there is no way to move off Amazon's cloud without leaving Amazon entirely, but I've seen it with other public clouds as well.
The advantage of cloud hosting is that it allows startups to get a new application running and serving real users without ever making an up-front investment in infrastructure. As a way to lower the barriers to innovation, cloud hosting is a tremendous asset.
The problem comes when the application has outgrown the resource limitations of cloud servers and has to move to a different platform. Usually a company discovers these limits in the form of timeouts, outages, and rapidly escalating numbers of server instances which fail to improve application performance. By limitations, I'm referring to the restrictions on memory, processing power, storage throughput and network configuration inherent on a large scale public cloud, as well as the high cost of round-the-clock busy cloud instances. These are "good enough" for getting a project off the ground, but start failing when you need to make serious performance demands on each node.
That's when you've reached scale fail on the cloud. At that point, the company has no experience managing infrastructure, no systems staff, and no migration budget. More critically, management doesn't have any process for making decisions about infrastructure. Advice that a change of hosting is required is met with blank stares or even panic. "Next fiscal year", in a startup, is a euphemism for "never".
Conclusion
Of course, these are not all the scalability anti-patterns out there. Personnel mismanagement, failure to anticipate demand spikes, lack of deployment process, dependencies on unreliable third parties, or other issues can be just as damaging as the eight issues I've outlined above. There are probably as many ways to not scale as there are web companies. I can't cover everything.
Hopefully this article will help you recognize some of these "scale fail" patterns when they occur at your own company or at your clients. Every one of the issues I've outlined here comes down to poor decision-making rather than any technical limits in scalability. In my experience, technical issues rarely hold back the growth of a web business, while management mistakes frequently destroy it. If you recognize the anti-patterns, you may be able to make one less mistake.
[ Note about the author: to support his habit of hacking on the PostgreSQL database, Josh Berkus is CEO of PostgreSQL Experts Inc., a database and applications consulting company which helps clients make their PostgreSQL applications more scalable, reliable, and secure. ]
Brief items
Quotes of the week
0install 1.0 released
The 0install project describes itself as "a decentralised cross-distribution software installation system available under the LGPL. It allows software developers to publish programs directly from their own web-sites, while supporting features familiar from centralised distribution repositories such as shared libraries, automatic updates and digital signatures." The 1.0 release is out; see 0install.net for more information.
The first Calligra snapshot release
The Calligra fork of KOffice has announced the availability of its first snapshot release in the hope of getting useful feedback from users. "Our goal is to provide the best application suite on all platforms based on open standards. That is no small goal. We feel now that we have improved the foundation enough to start working seriously on improving the user experience. For this, we need the help of our users!" There are two new applications (a diagram and flowchart editor and a note-taking tool), performance improvements, better text layout, and more.
SSL FalseStart Performance Results (The Chromium Blog)
Over at the Chromium blog, Google is reporting on the performance of SSL FalseStart, which is implemented in the Chromium browser. SSL FalseStart takes a seemingly legal (in a protocol sense) shortcut in the SSL handshake, which leads to 30% less latency in SSL startup time. In order to roll it out, Google also needed to determine which sites didn't support the feature: "To investigate the failing sites, we implemented a more robust check to understand how the failures occurred. We disregarded those sites that failed due to certificate failures or problems unrelated to FalseStart. Finally, we discovered that the sites which didn't support FalseStart were using only a handful of SSL vendors. We reported the problem to the vendors, and most have fixed it already, while the others have fixes in progress. The result is that today, we have a manageable, small list of domains where SSL FalseStart doesn't work, and we've added them to a list within Chrome where we simply wont use FalseStart. This list is public and posted in the chromium source code. We are actively working to shrink the list and ultimately remove it." It seems likely that other browsers can take advantage of this work.
Libcloud becomes a top-level Apache project
The Apache Software Foundation has announced that Libcloud has graduated into a top-level project. "Apache Libcloud is an Open Source Python library that provides a vendor-neutral interface to cloud provider APIs. The current version of Apache Libcloud includes backend drivers for more than twenty leading providers including Amazon EC2, Rackspace Cloud, GoGrid and Linode."
Jato v0.2 released
Jato is a JIT-only virtual machine for Java; the 0.2 release is now out. New features include Jython and JRuby support, annotation support, improved JNI support, and a lot of fixes.
Pinpoint 0.1.0 released
Pinpoint is "a simple presentation tool that hopes to avoid audience death by bullet point and instead encourage presentations containing beautiful images and small amounts of concise text in slides." The first release (0.1.0) has been made with a basic set of presentation features.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (May 24)
- LibreOffice development summary (May 25)
- PostgreSQL Weekly News (May 22)
Modders Make Android Work the Way You Want (Wired)
Wired profiles the CyanogenMod team. CyanogenMod is, of course, the alternate firmware for Android devices built from the code that Google releases. "CyanogenMod expanded into a team of 35 different "device maintainers," who manage the code for the 32 different devices that the project supports. Like Google, the team publishes its code to an online repository and accepts online submissions for changes to the code from other developers. Seven core members decide which of the submitted changes make it into the next release of CyanogenMod, and which don't."
What Every C Programmer Should Know About Undefined Behavior #3/3
The final segment of the LLVM blog's series on undefined behavior is up. "In this article, we look at the challenges that compilers face in providing warnings about these gotchas, and talk about some of the features and tools that LLVM and Clang provide to help get the performance wins while taking away some of the surprise."
Rapid-release idea spreads to Firefox 5 beta (CNet)
Stephen Shankland takes a look at the beta for Firefox 5. "Firefox 5 beta's big new feature is support for CSS animations, which let Web developers add some pizzazz to actions such as making dialog boxes pop up or switching among photos. Also in the new version is improved canvas, JavaScript, memory, and networking, according to the release notes and bug-fix list."
Page editor: Jonathan Corbet
Announcements
Brief items
Open Virtualization Alliance
The recently announced Open Virtualization Alliance is "a consortium committed to fostering the adoption of open virtualization technologies including Kernel-based Virtual Machine (KVM)." The founding companies include BMC Software, Eucalyptus Systems, HP, IBM, Intel, Red Hat, and SUSE. "The Open Virtualization Alliance will provide education, best practices and technical advice to help businesses understand and evaluate their virtualization options. The consortium complements the existing open source communities managing the development of the KVM hypervisor and associated management capabilities, which are rapidly driving technology innovations for customers virtualizing both Linux and Windows applications." (Thanks to Romain Francoise)
TDF announces the Engineering Steering Committee
The Document Foundation has announced the members of its Engineering Steering Committee. "The 10 members of the ESC are Andras Timar (localization), Michael Meeks and Petr Mladek of Novell, Caolan McNamara and David Tardon of RedHat, Bjoern Michaelsen of Canonical, Michael Natterer of Lanedo, Rene Engelhard of Debian, and the independent contributors Norbert Thiebaud and Rainer Bielefeld (QA). The ESC convenes once a week by telephone to discuss the progress of the time-based release schedule and coordinate development activities. Their meetings routinely include other active, interested developers and topic experts."
The Humble Homebrew Collection
The latest in the Humble games series is the Humble Homebrew Collection. All games in this collection are released under the MIT license and will run on a jailbroken PS3, Linux, Windows, Mac or Android. "The main purpose of this website it to serve as a petition against Sony's unjust behavior towards their customers. You are encouraged to sign the petition, donate to the developers or you can simply download the games. The choice is yours. If you wish to do so, you may also donate to the developers of this homebrew application, as well as to the EFF, which helps defend our rights in this digital age."
Articles of interest
Mozilla rejects WebP image format, Google adds it to Picasa (ars technica)
Ars technica looks at WebP adoption. "Building mainstream support for a new media format is challenging, especially when the advantages are ambiguous. WebM was attractive to some browser vendors because its royalty-free license arguably solved a real-world problem. According to critics, the advantages of WebP are illusory and don't offer sufficient advantages over JPEG to justify adoption of the new format. Aside from Google—which introduced official support for WebP in Chrome 12—Opera is the only other browser that natively supports the format. Despite the ease with which it can be supported, other browser vendors are reluctant to endorse the format and make it a permanent part of the Web landscape."
Aslett: Opening up the Open Source Initiative
The 451 Group's Matthew Aslett has posted a writeup of the plans to reform the Open Source Initiative as discussed at the Open Source Business Conference. "Arguably, a fate equal to the subversion of the OSI would be irrelevance. Rather than assuming that organisations will seek to over-run the OSI, I believe more attention should be being placed on ensuring that organisations will seek to join. The OSI remains well-respected, but I believe that for many of the different constituencies in the open source community it is not entirely clear what it is that the OSI contributes beyond its traditional role of protecting the Open Source Definition and approving associated licenses."
A Liberating Betrayal? (ComputerWorld)
Simon Phipps discusses the end of Skype support for Asterisk in this ComputerWorld UK column. "The proprietary interests hold all the cards here. The community can't just 'rehost and carry on' because the crucial add-on is proprietary. Even if wasn't, the protocol it's implementing is proprietary and subject to arbitrary change - very likely to happen if anyone attempts to reverse-engineer the interface and protocol. Asterisk may be open source, but if you're dependent on this interface to connect with your customers on Skype you've no freedoms - that's the way 'open core' works."
FSFE: Fellowship interview with Florian Effenberger
The Free Software Foundation Europe has an interview with Florian Effenberger. "Florian Effenberger has been a Free Software evangelist for many years. Pro bono, he is founding member and part of the Steering Committee at The Document Foundation. He has previously been active in the OpenOffice.org project for seven years, most recently as Marketing Project Lead. Florian has ten years' experience in designing enterprise and educational computer networks, including software deployment based on Free Software. He is also a frequent contributor to a variety of professional magazines worldwide on topics such as Free Software, Open Standards and legal matters."
Kuhn: Clarification on Android, its (Lack of) Copyleft-ness, and GPL Enforcement
Bradley M. Kuhn shares some thoughts about claims that Android code violates the GPL or LGPL. "Of course, as a software freedom advocate, I'm deeply dismayed that Google, Motorola and others haven't seen fit to share a lot of the Android code in a meaningful way with the community; failure to share software is an affront to what the software freedom movement seeks to accomplish. However, every reliable report that I've seen indicates that there are no GPL nor LGPL violations present." (Thanks to Paul Wise)
New Books
New Zenoss Core Book Available
Packt Publishing has released "Zenoss Core 3.x Network and System Monitoring", by Michael Badger.
Calls for Presentations
PyCon DE 2011 - Call for Papers
PyCon DE will be held in Leipzig, Germany October 4-9, 2011. The conference language is German, so the call for papers (click below) is also in German.
Upcoming Events
Desktop Summit 2011 conference program announced
The program for the 2011 Desktop Summit has been announced. The conference, which combines GNOME's GUADEC and KDE's Akademy, will be held in Berlin, Germany, August 6-12. "The Desktop Summit team welcomes keynote speakers from outside the GNOME and KDE world: confirmed speakers are Claire Rowland (Fjord, UX Design) and Thomas Thwaites (Technologist & Designer), with more to come! There will also be keynote addresses from representatives of both the GNOME and KDE communities. [...] The focus of the conference is collaboration between the Free Desktop projects. This year's conference tracks reflect the shared interests of its community. Just as the Desktop Summit unites the annual conferences of GNOME and KDE, each track similarly includes speakers from both desktop camps."
Postgres Open Conference
The Postgres Open Conference will be held September 14-16, 2011 in Chicago, Illinois. "The theme of the inaugural conference is "disruption of the database industry". Topics will include new features in the latest version of PostgreSQL, use cases, product offerings and important announcements. Invited talks and presentations will cover many of the innovations in version 9.1, such as nearest-neighbor indexing, serializable snapshot isolation, and transaction-controlled synchronous replication. Vendors will also be announcing and demonstrating new products and services to enhance and extend PostgreSQL."
ScilabTEC 2011
The program is available and registration is open for ScilabTEC 2011, the third Scilab Users Day, which takes place June 29 in Palaiseau, France.
Events: June 2, 2011 to August 1, 2011
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| June 1 - June 3 | Workshop Python for High Performance and Scientific Computing | Tsukuba, Japan |
| June 1 - June 3 | LinuxCon Japan 2011 | Yokohama, Japan |
| June 3 - June 5 | Open Help Conference | Cincinnati, OH, USA |
| June 6 - June 10 | DjangoCon Europe | Amsterdam, Netherlands |
| June 10 - June 12 | Southeast LinuxFest | Spartanburg, SC, USA |
| June 13 - June 15 | Linux Symposium'2011 | Ottawa, Canada |
| June 15 - June 17 | 2011 USENIX Annual Technical Conference | Portland, OR, USA |
| June 20 - June 26 | EuroPython 2011 | Florence, Italy |
| June 21 - June 24 | Open Source Bridge | Portland, OR, USA |
| June 27 - June 29 | YAPC::NA | Asheville, NC, USA |
| June 29 - July 2 | 12º Fórum Internacional Software Livre | Porto Alegre, Brazil |
| June 29 | Scilab conference 2011 | Palaiseau, France |
| July 9 - July 14 | Libre Software Meeting / Rencontres mondiales du logiciel libre | Strasbourg, France |
| July 11 - July 16 | SciPy 2011 | Austin, TX, USA |
| July 11 - July 12 | PostgreSQL Clustering, High Availability and Replication | Cambridge, UK |
| July 11 - July 15 | Ubuntu Developer Week | online event |
| July 15 - July 17 | State of the Map Europe 2011 | Wien, Austria |
| July 17 - July 23 | DebCamp | Banja Luka, Bosnia |
| July 19 | Getting Started with C++ Unit Testing in Linux | |
| July 24 - July 30 | DebConf11 | Banja Luka, Bosnia |
| July 25 - July 29 | OSCON 2011 | Portland, OR, USA |
| July 30 - July 31 | PyOhio 2011 | Columbus, OH, USA |
| July 30 - August 6 | Linux Beer Hike (LinuxBierWanderung) | Lanersbach, Tux, Austria |
If your event does not appear here, please tell us about it.
Audio and Video programs
Videos from the Android Builders Summit
Videos of the talks from the Android Builders Summit, held in April, have now been posted. There are talks on Android internals, Linux audio for smartphones, device provisioning, Gerrit, GPL compliance, and more.
FOSDEM and Embedded Linux Conference videos posted
The folks at Free Electrons have posted videos from the FOSDEM 2011 embedded track and from the 2011 Embedded Linux Conference. All videos are encoded in the WebM format.
Page editor: Rebecca Sobol