LWN.net Weekly Edition for August 30, 2012
Rethinking linux.conf.au
Linux.conf.au has long been one of your editor's favorite events anywhere in the world. It typically features one of the most diverse and interesting programs and is hosted in a different city every year. And the whole thing is fueled by that classic Australian energy and humor — even when it is held in New Zealand. With well over a decade of history, LCA seems like a solid and well established event. So it was a surprise to run across a discussion suggesting that there might be no LCA in 2014. LCA, it seems, has found itself needing to rethink how the conference is organized and run.

Linux Australia council member James Polley started the discussion with a post on the current status of LCA 2014:
This task turns out to be trivially simple, because to date we have not received any bids. Several teams and individuals have expressed an interest, but the number of bids received is zero.
James noted that LCA was once the only major community conference in Australia; now there are several. Perhaps, he surmised, there is no longer a need for LCA? Or, perhaps, it is time to move from volunteer organizers to a professionally-managed event? Or, perhaps, it's time to take a break and see if any interest develops for 2015?
Many participants in the discussion worried that the conference has simply gotten too big and complex. Potential organizers, they say, are being put off by the sheer time commitment required. Some past organizers (such as Russell Stuart, Brisbane 2011) disagreed, saying that the actual time required is not as much as it seems. But there is no denying the fact that LCA organizers tend to look awfully tired and haggard even at the beginning of the event and thoroughly fried by the end. Putting together a conference like LCA is a lot of work.
So it is natural to think about ways to reduce that work. Perhaps LCA should go back to being a smaller event? There were proposals to reduce the number of talk tracks, eliminate various social events, and even to drop the 1-2 days of miniconfs that precede the conference itself. LCA did not originally include miniconfs; they were first added by the Brisbane team in 2002. But the miniconfs have since become an integral part of the conference. Their contents are not under the control of the program committee, with the result that the areas covered — and the quality — vary widely. But the best miniconf talks tend to be quite good indeed, and the miniconfs serve as an important entry path for speakers trying to get into LCA proper. It would be a shame to lose them.
Another idea that came up was to settle down and have the conference in the same city every year. That, in your editor's opinion, would risk repeating the story of the Ottawa Linux Symposium. There is a long list of reasons for that once-dominant conference's decline, but one of them was certainly the organizers' unwillingness to move the event to new locations. Even a city as nice as Ottawa gets a little tiresome after several years in a row. A new location every year helps to keep LCA fresh and interesting.
The volunteer organizer model also helps in this regard. LCA has managed to evolve a mechanism where each year's team is given a great deal of freedom in how it runs the conference. Behind the scenes, though, a "ghosts" committee (made up of prior organizers) oversees the effort, provides advice, and sounds the alarm when it sees something in danger of going wrong. The end result has been a conference that is, in some ways, new every year, but which still runs like a smoothly oiled machine.
A shift to a professionally-organized event might take some strain off the volunteer organizers, but it would have to be done carefully if it were not to kill the magic that has made LCA such a good event for so many years. Such a transition is not impossible; the Linux Plumbers Conference has thrived with a great deal of organizational help from the Linux Foundation. But that setup requires professionals who are willing to defer to the "amateurs" for most of the important decisions; it can be done, but it's not something that happens by itself.
Donna Benjamin (Melbourne 2008) thinks that workload issues could be addressed, perhaps with the help of professional organizers and a team that is distributed across the country. But, she says, there is another, more difficult problem: the fact that the organizing team must sign up for a lot of criticism from the community.
This sentiment was echoed by a number of other participants in the discussion. In our community, it seems, no good deed goes unpunished; even an event as well run as LCA is going to draw its share of complainers. When a difficult job starts to appear thankless as well, the number of volunteers is certain to decrease. But potential organizers should also heed the words of Andrew Ruthven (Wellington 2010):
Finally, one could also argue that most conferences have a limited lifetime. Linux Expo and LinuxWorld are long gone. Even the much-respected Linux-Kongress, arguably the first Linux conference, was last held in 2010. LCA, having started as the Conference of Australian Linux Users in 1999, has certainly had a long run. Perhaps LCA, too, is reaching the end of its life span?
Your editor does not believe that to be the case. We are not witnessing a conference heading into senescence; at worst, this is a middle-age crisis. There is too much in LCA that is valuable and unique worldwide, and the people who attend the conference every year clearly appreciate it. LCA can be seen as a sort of free software project that, after years of success, needs to reevaluate its processes and governance. Once that task is done, LCA is likely to be stronger and more vital than ever.
For 2014, the deadline for bids has been extended for a few weeks, so there is still time for interested groups to submit a bid to host the event. There is talk of putting together a distributed team that, most likely, would propose to return LCA to Sydney. One expects that somebody will step up to the plate and make the event happen; who knows, perhaps 2014 will be the year that LCA is finally held in Broome.
PowerTOP tops up to 2.1
You may run free software, but electricity still isn't free. Until it is, monitoring and minimizing power consumption is a necessary evil. Intel released version 2.1 of its power-consumption utility PowerTOP on August 15. PowerTOP has been around since 2007, but the 2.0 release from May 2012 heralded a major rewrite of the code and a shift for the project itself. 2.1 builds on the new design, adding a handful of new features, but if it has been a while since you examined PowerTOP, you may be surprised at how much it has grown.
The new and improved
As the name suggests, PowerTOP monitors system power consumption and reports its statistics to you in a top-like summary. It also makes suggestions for system tweaks that could improve the power usage profile of your system, including everything from peripheral device options down to processor performance states and scheduler settings. In PowerTOP's early days, most of these recommendations were tweaks that the user or system administrator would need to manually apply. Over time, however, the various Linux distributions began to use PowerTOP as a profiling tool to automatically track down optimal settings, which would become the out-of-the-box defaults. This shift in usage patterns was one of the reasons maintainer Arjan van de Ven cited for the rewrite that became 2.0.
Two other factors Van de Ven cited were the unstructured way in which more and more features had been bolted onto the code base, and the increasing use of PowerTOP as a diagnostic tool to track down specific power-consumption problems — such as an errant application or driver — rather than to profile an entire system. As a result, the 2.0 rewrite refactors the code with an eye toward making subsequent extensions easier to add, and it provides a report-generation facility that can output CSV or HTML diagnostic reports suitable for later study.
The 2.x series also utilizes the kernel's perf events subsystem to gather processor-related data, which leads to more accurate numbers. The project has been steadily adding support for additional power-management features in hardware; the supported list includes CPUs, GPUs, storage devices, and an extensive collection of peripherals (network controllers, audio chips, USB adapters, and so on). The 2.1 release itself adds support for monitoring processor cores without performance states (a.k.a. P-states, which correspond to different clock frequencies). This is primarily useful for GPUs, which do have power states (C-states, which put the processor to sleep in stages and allow inactive cores to be powered down), but which lack the frequency scaling features now common in CPUs.
In addition to the technical changes, the 2.x series saw the project move away from its original home at lesswatts.org and into Intel's new open source project warehouse at 01.org. The new site hosts news updates, downloads, and a new mailing list. Unfortunately, this change meant leaving behind several years' worth of mailing list archives and existing online documentation at the old site — although for the time being lesswatts.org remains online. In other changes, the source code is now hosted at GitHub, and the project is using Transifex for UI string translation. The 2.1 announcement advertises nine languages at present, which is not many in the grand scheme of things, but it is an improvement. Compiling PowerTOP is a simple affair; it uses autotools and has few dependencies. The main issue to consider is the kernel version of the machine you wish to profile: 2.6.36 or later is required for perf support, and newer kernels add additional measurement tools.
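For those who want to try it, the build is the usual autotools routine. The following is a sketch only: the repository path and the autogen.sh step are assumptions based on the project's GitHub hosting and a conventional autotools layout, and the dependency list is approximate.

    # Fetch the source (repository path is an assumption based on the
    # project's GitHub hosting)
    git clone https://github.com/fenrus75/powertop.git
    cd powertop

    # Conventional autotools build; the ncurses and libnl development
    # packages need to be installed first (approximate dependency list)
    ./autogen.sh
    ./configure
    make
    sudo make install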
Usage
![PowerTOP live monitor](https://static.lwn.net/images/2012/powertop-ncurses-sm.png)
PowerTOP has two major operating modes: interactive monitoring and report generation. Running powertop (as root) launches the interactive mode, which provides a five-tab ncurses monitor. The overview tab sports a summary line displaying the current number of wakeups-per-second, GPU operations-per-second, virtual filesystem (VFS) operations-per-second, and CPU activity. Beneath this line is a list of power-consumption events of various types (for example, network activity, active processes, and interrupts). CPU information is split into two tabs: Idle Stats and Frequency Stats. The former shows C-state information as an activity percentage for the processor package and for each core; the latter reports the same information labeled with the actual clock frequencies rather than the C-state level.
The fourth tab, Device Stats, contains power usage information from the other hardware devices on the system: the battery (where applicable), screen, GPU(s), networking hardware, and everything else. For laptops, PowerTOP reports the battery's discharge rate and estimates the power consumption of the other components. For an accurate report, however, you should first run the calibration routine with powertop --calibrate. This will cycle through the hardware options (e.g., the supported backlight levels) and log the power consumption characteristics. PowerTOP supports even more accurate measurement with a USB-attached power analyzer tool from Extech, although, considering the four-digit suggested retail price of said instruments, you would need to tweak quite a few machines to recoup the investment in electricity bills.
The fifth tab, Tunables, lists system settings that affect power consumption. Highlighting an individual entry allows you to toggle the setting on or off with the Return key, so that you can quickly assess its impact. Most of the controls are sysfs parameters (such as SATA link power management or USB device auto-suspend), but others involve separate interfaces (such as wake-on-LAN status for Ethernet adapters). The tab sorts the list into "Good" and "Bad" states, with the Bad listed first, so you can quickly work your way through the list and see if you notice a difference.
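The same toggles can also be flipped outside of the interactive interface by writing to the corresponding sysfs files. Here is a small sketch using the USB auto-suspend knob mentioned above; the device address is purely illustrative and will differ from machine to machine.

    # Inspect the current auto-suspend policy for one USB device
    # ("3-2" is an example address; look in /sys/bus/usb/devices for yours)
    cat /sys/bus/usb/devices/3-2/power/control

    # "on" means the device is never suspended; "auto" lets the kernel
    # suspend it when idle
    echo 'auto' > /sys/bus/usb/devices/3-2/power/control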
![PowerTOP HTML report](https://static.lwn.net/images/2012/powertop-html-sm.png)
PowerTOP's other mode is report generation, which you can invoke by running either sudo powertop --html or sudo powertop --csv. The CSV output is essentially the same information as that presented in the first four tabs of the interactive mode, with header information set off by asterisks. The "Tunables" information is listed as well, but the commands required to alter each setting are not supplied. The HTML output presents the same information in a nicely CSS-styled HTML file, complete with element classes on each table cell that might allow further customization or processing.
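For example, to collect data over a fixed measurement window and name the output file (the --time option and the optional filename arguments are taken from the 2.x command-line help; treat the exact spellings as assumptions if your version differs):

    # Gather data for 60 seconds, then write an HTML report
    sudo powertop --html=powertop-report.html --time=60

    # The same measurement in CSV form, for scripted post-processing
    sudo powertop --csv=powertop-report.csv --time=60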
A bit more interesting, however, is the HTML output's Tuning tab, which includes the command necessary to change each "Bad" setting — for example:
echo 'min_power' > '/sys/class/scsi_host/host0/link_power_management_policy'
This is particularly helpful for bus devices whose specific address would be hard to guess at otherwise, such as /sys/bus/pci/devices/0000:00:1f.2/power/control or /sys/bus/usb/devices/3-2/power/control. One of the lingering gripes many end users have about the program is that the changes they make to device settings do not persist after reboot. PowerTOP's stance is that it does not want to be in the business of writing permanent or semi-permanent changes to the system configuration — for a number of reasons, including the fact that such changes can introduce performance or even stability problems. But having the correct commands at the ready allows the user to assemble a start-up script with little hassle, whether the machine in question is an old netbook with battery trouble or an expensive server.
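As an illustration, such a start-up script might look like the following sketch. The paths are the examples quoted above, copied from one machine's report; they are machine-specific and would need to be replaced with values from your own report.

    #!/bin/sh
    # Apply PowerTOP-suggested settings at boot. The sysfs paths below are
    # machine-specific examples; adjust them to match your own hardware.

    # SATA link power management
    echo 'min_power' > /sys/class/scsi_host/host0/link_power_management_policy

    # Runtime power management for a PCI device and a USB device
    echo 'auto' > /sys/bus/pci/devices/0000:00:1f.2/power/control
    echo 'auto' > /sys/bus/usb/devices/3-2/power/control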
In addition to the broader changes discussed above, each release of PowerTOP tracks new features available in updated processor and device architectures. The latest releases, for example, have improved support for ARM power management features and for Intel graphics adapters. But even if your processor is already well supported, the project has made impressive improvements in data collection and in presenting its findings in human-consumable terms. I last looked at PowerTOP in late 2008, so by comparison the advancement of the feature set seems dramatic. But the more important factor is that it has proven itself to be a useful diagnostic tool, and it allows even novice users to instantly apply and test power management options — which could at least demystify the world of power saving, one system at a time.
Look and feel lawsuits, the second time around
The thicket of lawsuits surrounding the mobile industry has grown to the point that it is hard for any individual action to stand out. If any case has managed to make itself visible in that crowd anyway, it is the battle between Apple and Samsung currently being fought in the US. The first stage of that battle has just been resolved, heavily in Apple's favor. It will be some time before this story truly reaches its end, but some of the more interesting implications for the industry, and for free software, can already be seen.
Let's take a moment to look briefly at the history of the industry, because this is not Apple's first attempt to eliminate competitors with litigation. Back in 1988, Apple sued Microsoft for the crime of offering a system that placed icons and overlapping windows on the screen. Apple didn't invent the graphical display, of course, but it still asserted the right to be the only company offering such displays in the market. At that time, the Free Software Foundation announced a boycott; none of its software would be ported to A/UX and purchase of Apple products would be discouraged. Chances are that an FSF boycott in 1989 failed to make Apple's executives reconsider their business practices in any serious way, but it did convey a loud and clear point. The boycott was maintained until after Apple finally lost the suit and gave up.
Apple is currently engaged in a second round of look-and-feel lawsuits; the big difference is that, this time, they appear to be winning and there is little response from the community. Indeed, we enthusiastically buy their hardware and port our systems to it. Perhaps, soon, we'll have rather fewer alternatives.
This time around, Apple accused Samsung of violating three utility patents: 7,469,381 (bouncy scrolling), 7,844,915 (pinch-to-zoom), and 7,864,163 (tap-to-zoom on a web page). Samsung was also accused of infringing four design patents: D504,889 (rectangular electronic device), D593,087 (ditto), D618,677 (ditto again), and D604,305 (iconic application directory with a dock at the bottom). The jury concluded that Samsung had indeed violated all of those patents with the exception of 504,889, the most tablet-like of the design patents. Apple has been awarded a bit over $1 billion, and there will soon be discussions regarding blocking various Samsung products from the US market. Samsung's countersuit, which involved some patent infringement claims of its own, lost out entirely.
Amusingly, some commentators have begun to say that this outcome is, in fact, a significant win for Samsung. For a mere $1 billion, the company was able to break into the smartphone market in a big way; that looks cheap when compared to how much some other companies have spent. Meanwhile, Apple can be said to have proved, in a court of law, that Samsung's products are just as good as its own; maybe that will translate into more Apple customers being willing to check out iStuff alternatives. These ideas seem a little far-fetched, but one never knows.
Victory or not, this ruling will certainly be appealed. There are various allegations that the jury, in a rush to protect a US company from a foreign competitor, disregarded the instructions it had been given and did not even consider many of Samsung's claims. But, even setting aside the possibility of overturning the jury's ruling, an appeal would make sense: it will keep the matter open long enough for most of the products involved to reach the end of their normal commercial lives, and large monetary awards are often reduced on appeal. So expect this story to play out for a while yet.
It's worth pointing out another reason for this story to be a long one: this is not a USA-only fight. Apple and Samsung are fighting the same battle in several countries around the world; one amusing result is that products from both companies have been banned in South Korea. Samsung has been struggling in Germany as well; other countries could well join the list. Software patents may be mostly a problem in the US, but design patents are much more widely recognized.
One good thing about design patents, though, is that they are usually relatively easy to work around. Indeed, Samsung is already doing so; for details, see The Samsung Galaxy S III: The First Smartphone Designed Entirely By Lawyers on the Android Police site. The device in question (the Galaxy S III) is not quite rectangular, uses a different rounding radius on the top and bottom corners, is not black, etc. Only one of the workarounds requires software changes: the dock disappears when the application directory is brought up. This is mostly trivial stuff; one may argue that giving Apple a monopoly on black rectangles with rounded corners is unfair, but it does not make things that much harder for competitors.
The utility patents are another story, of course. Arguably the most significant problem in this particular set of patents is the concept of zooming the display with a two-finger gesture. That gesture has become sufficiently universal that a device lacking it will feel decidedly inferior. As numerous commentators have noted, it is somewhat like giving one automobile manufacturer exclusive rights to a circular steering wheel.
But the real problem is that things won't stop there. Every company involved in this market will continue to bulk up on these patents, and they will continue to assert them against each other. Many of those patents will cover functionality ever closer to what we do in the free software community. It will become increasingly hard for anybody to field a mobile product until somebody, somehow, cuts through the thicket. What is really needed is some sort of reform of the patent regime. Perhaps this very case, should it make it to the US Supreme Court, could play a role in that reform. Failing that, we're left depending on politicians to fix the problem; that, unfortunately, seems like a long shot indeed.
Security
The perils of big data
Data about us—our habits, associates, purchases, and so on—is collected all the time. That's been true at smaller scales for hundreds or even thousands of years, but today's technology makes it much easier to gather, store, and analyze that data. While some of the results of that analysis may make (some) people's lives better—think tailored search results or Amazon's recommendations—there is a strong temptation to secretly, or at least quietly, use the collected data in other, less benign, ways.
Because the data collection and analysis is typically done without any fanfare, it often flies under the radar. So it makes sense to stop and think about what it all means from a privacy perspective. A recent essay by Alistair Croll does exactly that. He notes that we have reached a time where the constraint of "big, fast, and varied—pick any two" for databases is no longer valid. Because of that, it is common for data to be collected without any particular plan for how it will be used, under the assumption that some use will eventually be found. It doesn't cost that much to do, which leads to the rise of "big data".
There are some eye-opening things that can be done using big data. It is not difficult to determine someone's race, gender, and sexual orientation using just the words in their Twitter or Facebook feeds, for example. Much of that information is completely public, and could be mined fairly easily by banks, insurance companies, prospective employers, and so on. Those attributes that can be derived could then be used to set rates, deny coverage, choose to interview or not, and more.
It is easy to forget that the data collection is even happening. "Loyalty" cards that provide a discount at grocery and other stores gather an enormous amount of information about our habits, for example. Deriving race, gender, family size, and other characteristics from that data should not be very difficult. If that information is used to give discounts on other products one might be likely to buy, it may seem relatively harmless. But if it is being sold to others to help determine voting patterns, foreclosure likelihood, or credit-worthiness, things are definitely amiss. But, as Croll points out, that is exactly what is happening with that data at times.
Croll notes several different examples in his essay, but examples are not hard to come by. Almost every day, it seems, there are new abuses, or worries about abuses of big data. People in Texas are concerned about the kinds of data that would be collected by "smart" electricity meters—to the point of running off the smart meter installers. Mitt Romney's campaign for the US Presidency is using a secretive organization to analyze data to find potential donors—President Obama's campaign is certainly doing much the same.
Another example is the "anonymized" data sets that have been released for various purposes over the past few years. They show that it is quite difficult to truly anonymize data. When trying to derive a signal from the data (movie recommendations for Netflix, for example), surprising correlations can be made. This shows the power of big data even when someone is trying not to reveal our secrets in a data set. A new technique may help by providing a way to release data without compromising privacy.
The real problems may come when these disparate data sets are combined. Truly personally identifiable information correlated from multiple sources is likely to give a distressingly accurate picture of an individual. It could be used by companies and other organizations for a wide range of purposes. Those could be relatively harmless, even helpful, or downright malicious depending on one's perspective and privacy consciousness. One organization that is likely quite interested in this kind of data is the same one that some would like to turn to for protection from abuses of big data: government.
There are clearly good uses that such data can be put to. Croll identifies things like detecting and tracking disease outbreaks, improving learning, reducing commute times, etc. But the "Big Brother" overtones are worrisome as well. It's not at all clear how regulations would impact the collection and analysis of big data, and governments' interest in using it (for good or "bad" purposes) makes for an interesting conundrum. Until and unless a solid chunk of people are concerned about the problem—and express that concern to their governments and to other organizations in some visible way—things will continue much as they are. In that, the problem is little different than many other privacy issues; those who truly care are going to have to jealously guard their privacy themselves, as best they can.
Brief items
Security quotes of the week
[...] V. Don’t abdicate your teaching responsibility. Students do not magically gain the ability at the end of the school day or after graduation to navigate complex, challenging, unfiltered digital information spaces. If you don’t teach them how to navigate the unfiltered Internet appropriately and safely while you have them, who’s going to?
New vulnerabilities
amsn: denial of service
Package(s): amsn
CVE #(s): CVE-2006-0138
Created: August 27, 2012
Updated: August 29, 2012
Description: From the CVE entry:
aMSN (aka Alvaro's Messenger) allows remote attackers to cause a denial of service (client hang and termination of client's instant-messaging session) by repeatedly sending crafted data to the default file-transfer port (TCP 6891).
drupal6-ctools: multiple vulnerabilities
Package(s): drupal6-ctools
CVE #(s): (none)
Created: August 29, 2012
Updated: August 29, 2012
Description: ctools 6.x-1.9 fixes multiple vulnerabilities. See the ctools advisory for details.
flash-plugin: multiple vulnerabilities
Package(s): flash-plugin
CVE #(s): CVE-2012-4163 CVE-2012-4164 CVE-2012-4165 CVE-2012-4166 CVE-2012-4167 CVE-2012-4168
Created: August 23, 2012
Updated: August 29, 2012
Description: From the Red Hat advisory:

This update fixes several vulnerabilities in Adobe Flash Player. These vulnerabilities are detailed on the Adobe security pages APSB12-18 and APSB12-19, listed in the References section.

Specially-crafted SWF content could cause flash-plugin to crash or, potentially, execute arbitrary code when a victim loads a page containing the malicious SWF content. (CVE-2012-1535, CVE-2012-4163, CVE-2012-4164, CVE-2012-4165, CVE-2012-4166, CVE-2012-4167)

A flaw in flash-plugin could allow an attacker to obtain sensitive information if a victim were tricked into visiting a specially-crafted web page. (CVE-2012-4168)
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2012-3520
Created: August 23, 2012
Updated: February 10, 2013
Description: From the Red Hat bugzilla entry:

A flaw was found in the way Netlink messages without explicitly set SCM_CREDENTIALS were delivered. The kernel passes all-zero SCM_CREDENTIALS ancillary data to the receiver if the sender did not provide such data, instead of including the correct data from the peer (as it is the case with AF_UNIX). Programs that set SO_PASSCRED option on the Netlink socket and rely on SCM_CREDENTIALS for authentication might accept spoofed messages and perform privileged actions on behalf of the unprivileged attacker.
mozilla: multiple vulnerabilities
Package(s): mozilla, firefox, thunderbird, seamonkey, xulrunner
CVE #(s): CVE-2012-1970 CVE-2012-1972 CVE-2012-1973 CVE-2012-1974 CVE-2012-1975 CVE-2012-1976 CVE-2012-3956 CVE-2012-3957 CVE-2012-3958 CVE-2012-3959 CVE-2012-3960 CVE-2012-3961 CVE-2012-3962 CVE-2012-3963 CVE-2012-3964 CVE-2012-3966 CVE-2012-3967 CVE-2012-3968 CVE-2012-3969 CVE-2012-3970 CVE-2012-3972 CVE-2012-3976 CVE-2012-3978 CVE-2012-3980
Created: August 29, 2012
Updated: March 10, 2016
Description: From the Red Hat advisory:

A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-1970, CVE-2012-1972, CVE-2012-1973, CVE-2012-1974, CVE-2012-1975, CVE-2012-1976, CVE-2012-3956, CVE-2012-3957, CVE-2012-3958, CVE-2012-3959, CVE-2012-3960, CVE-2012-3961, CVE-2012-3962, CVE-2012-3963, CVE-2012-3964)

A web page containing a malicious Scalable Vector Graphics (SVG) image file could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-3969, CVE-2012-3970)

Two flaws were found in the way Firefox rendered certain images using WebGL. A web page containing malicious content could cause Firefox to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-3967, CVE-2012-3968)

A flaw was found in the way Firefox decoded embedded bitmap images in Icon Format (ICO) files. A web page containing a malicious ICO file could cause Firefox to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-3966)

A flaw was found in the way the "eval" command was handled by the Firefox Web Console. Running "eval" in the Web Console while viewing a web page containing malicious content could possibly cause Firefox to execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-3980)

An out-of-bounds memory read flaw was found in the way Firefox used the format-number feature of XSLT (Extensible Stylesheet Language Transformations). A web page containing malicious content could possibly cause an information leak, or cause Firefox to crash. (CVE-2012-3972)

It was found that the SSL certificate information for a previously visited site could be displayed in the address bar while the main window displayed a new page. This could lead to phishing attacks as attackers could use this flaw to trick users into believing they are viewing a trusted site. (CVE-2012-3976)

A flaw was found in the location object implementation in Firefox. Malicious content could use this flaw to possibly allow restricted content to be loaded. (CVE-2012-3978)

For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.7 ESR. You can find a link to the Mozilla advisories in the References section of this erratum.

Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Gary Kwong, Christian Holler, Jesse Ruderman, John Schoenick, Vladimir Vukicevic, Daniel Holbert, Abhishek Arya, Frédéric Hoguin, miaubiz, Arthur Gerkis, Nicolas Grégoire, Mark Poticha, moz_bug_r_a4, and Colby Russell as the original reporters of these issues.
mozilla: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2012-1971 CVE-2012-1956 CVE-2012-3965 CVE-2012-3971 CVE-2012-3973 CVE-2012-3974 CVE-2012-3975
Created: August 29, 2012
Updated: October 11, 2012
Description: From the Mandriva advisory:

Mozilla developers identified and fixed several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code (CVE-2012-1971).

Security researcher Mariusz Mlynski reported that it is possible to shadow the location object using Object.defineProperty. This could be used to confuse the current location to plugins, allowing for possible cross-site scripting (XSS) attacks (CVE-2012-1956).

Security researcher Mariusz Mlynski reported that when a page opens a new tab, a subsequent window can then be opened that can be navigated to about:newtab, a chrome privileged page. Once about:newtab is loaded, the special context can potentially be used to escalate privilege, allowing for arbitrary code execution on the local system in a maliciously crafted attack (CVE-2012-3965).

Using the Address Sanitizer tool, Mozilla security researcher Christoph Diehl discovered two memory corruption issues involving the Graphite 2 library used in Mozilla products. Both of these issues can cause a potentially exploitable crash. These problems were fixed in the Graphite 2 library, which has been updated for Mozilla products (CVE-2012-3971).

Mozilla security researcher Mark Goodwin discovered an issue with the Firefox developer tools' debugger. If remote debugging is disabled, but the experimental HTTPMonitor extension has been installed and enabled, a remote user can connect to and use the remote debugging service through the port used by HTTPMonitor. A remote-enabled flag has been added to resolve this problem and close the port unless debugging is explicitly enabled (CVE-2012-3973).

Security researcher Masato Kinugawa reported that if a crafted executable is placed in the root partition on a Windows file system, the Firefox and Thunderbird installer will launch this program after a standard installation instead of Firefox or Thunderbird, running this program with the user's privileges (CVE-2012-3974).

Security researcher vsemozhetbyt reported that when the DOMParser is used to parse text/html data in a Firefox extension, linked resources within this HTML data will be loaded. If the data being parsed in the extension is untrusted, it could lead to information leakage and can potentially be combined with other attacks to become exploitable (CVE-2012-3975).
phpmyadmin: information leak
Package(s): phpMyAdmin
CVE #(s): CVE-2012-4219
Created: August 29, 2012
Updated: August 29, 2012
Description: From the CVE entry:
show_config_errors.php in phpMyAdmin 3.5.x before 3.5.2.1 allows remote attackers to obtain sensitive information via a direct request, which reveals the installation path in an error message, related to lack of inclusion of the common.inc.php library file.
phpmyadmin: cross-site scripting
Package(s): phpmyadmin
CVE #(s): (none)
Created: August 27, 2012
Updated: August 29, 2012
Description: From the phpMyAdmin advisory:

Using a crafted table name, it was possible to produce an XSS:
1) On the Database Structure page, creating a new table with a crafted name
2) On the Database Structure page, using the Empty and Drop links of the crafted table name
3) On the Table Operations page of a crafted table, using the 'Empty the table (TRUNCATE)' and 'Delete the table (DROP)' links
4) On the Triggers page of a database containing tables with a crafted name, when opening the 'Add Trigger' popup
5) When creating a trigger for a table with a crafted name, with an invalid definition.

Having crafted data in a database table, it was possible to produce an XSS:
6) When visualizing GIS data, having a crafted label name.
roundcubemail: cross-site scripting
Package(s): roundcubemail
CVE #(s): CVE-2012-3507 CVE-2012-3508
Created: August 29, 2012
Updated: October 11, 2012
Description: From the CVE entries:

Cross-site scripting (XSS) vulnerability in program/steps/mail/func.inc in RoundCube Webmail before 0.8.0, when using the Larry skin, allows remote attackers to inject arbitrary web script or HTML via the email message subject. (CVE-2012-3507)

Cross-site scripting (XSS) vulnerability in program/lib/washtml.php in Roundcube Webmail 0.8.0 allows remote attackers to inject arbitrary web script or HTML by using "javascript:" in an href attribute in the body of an HTML-formatted email. (CVE-2012-3508)
rubygem-actionpack: three cross-site scripting vulnerabilities
Package(s): rubygem-actionpack
CVE #(s): CVE-2012-3463 CVE-2012-3464 CVE-2012-3465
Created: August 23, 2012
Updated: March 29, 2013
Description: From the Red Hat bugzilla entries [1, 2, 3]:

CVE-2012-3463: When a "prompt" value is supplied to the `select_tag` helper, the "prompt" value is not escaped. If untrusted data is not escaped, and is supplied as the prompt value, there is a potential for XSS attacks.

CVE-2012-3464: The HTML escaping code in Ruby on Rails does not escape all potentially dangerous characters. In particular, the code does not escape the single quote character. The helpers used in Rails itself never use single quotes, so most applications are unlikely to be vulnerable; however, all users running an affected release should still upgrade.

CVE-2012-3465: There is an XSS vulnerability in the strip_tags helper in Ruby on Rails; the helper doesn't correctly handle malformed HTML. As a result, an attacker can execute arbitrary JavaScript through the use of specially crafted malformed HTML. All users who rely on strip_tags for XSS protection should upgrade or use the workaround immediately.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel remains 3.6-rc3; no new -rc releases have been made in the past week.

Stable updates: the 3.0.42, 3.4.10, and 3.5.3 stable updates were released on August 26 with the usual pile of important fixes. There are some reports of Intel graphics problems with 3.0.42 and 3.4.10, so users may want to proceed carefully with those.
Quotes of the week
First we had two earthquakes - fine, this week God not only hates republicans, but apparently us kernel developers too. But we kernel developers laugh in the face of danger, and a 5.5 earthquake just makes us whimper and hide in the closet for a while.
But after we've stopped cowering in the closet, there's a knock on the door, and the conference organizers are handing out skate boards, with the innocent explanations of "We're in San Diego, after all".
If that's not a sign of somebody trying to kill us, I don't know what is.
Add a feature removal of removing the feature-removal-schedule.txt.
Long-term support for the 3.2 kernel
Ben Hutchings announces: "I intend to maintain Linux 3.2.y for at least as long as Debian 7.0 is supported. The end-of-life for each Debian release is 1 year after the following release, so this would probably be around the end of 2015."
Mourning Doug Niehaus
Thomas Gleixner has sent us an extended obituary for Doug Niehaus, who passed away on August 21. "Doug has to be considered one of the pioneers of real-time Linux. His efforts of making Linux a venerable choice for real-time systems reach back into the mid 1990s. While his KURT (Kansas University Real-Time) project did not get the attention that Victor Yodaikens RT-Linux development received for various reasons, his influence on the Linux kernel and on today's most popular Linux Real-Time project reaches much farther than most people are aware of."
Kernel development news
The 2012 Kernel Summit
The 2012 Kernel Summit was held in San Diego, CA, USA, over three days, 27-29 August. As with the 2011 Kernel Summit in Prague (and following on from discussions at the 2010 Kernel Summit), the 2012 summit followed a different format from the ten previous summits. For 2012, the event took the form of an invitation-only plenary-session day followed by two days of minisummits and additional technical sessions shared with the co-located 2012 Linux Plumbers Conference that kicked off on 29 August; the agenda for days 1 and 3 can be found here. (The ARM minisummit was something of an exception to this format: it ran for two days, starting on the same day as the plenary sessions.)
Main summit, day 1
The first day of the Kernel Summit, on 27 August, consisted of plenary sessions attended by around 80 invitees. Among the topics were the following:
- The future of kernel regression tracking; the kernel development community is in strong agreement on the value of regression tracking, and is currently looking for some person(s) to take up this high-profile work.
- Supporting old/oddball architectures, tool chains, and devices: how long must we support ancient hardware and software, and how do we leave it behind?
- Regression testing; how can we do a better job of finding bugs before they bite users?
- Distributions and upstream; what can kernel developers do to make life easier for their main customers — the distributors?
- Lightning talks: quick sessions on RCU callbacks and Smatch.
- Kernel build and boot testing; a new framework for quickly finding regressions.
- Android upstreaming: the ongoing process of getting the Android kernel code into the mainline.
- Improving the maintainer model; do our subsystem maintainers scale?
- Stable kernel management; how is the stable process working?
- Tracing and debugging, and how to get better oops output in particular.
- Linux-next and related improvements to the development process.
Main summit, day 2
- The memcg/mm minisummit covering a wide range of topics related to memory management.
Main summit, day 3
- Module signing; toward a way to finally get this feature into the kernel.
- Kernel summit feedback; how did the event work out this year, and what changes should be made for future years?
ARM minisummit, day 1
The first day of this year's Kernel Summit coincided with day one of the ARM minisummit. Given that the "minisummit" spanned two days, there was talk of false advertising, but there was lots to cover.
- Secure monitor API: how best to support the secure monitor mode across a wide variety of processors.
- Stale platform deprecation: some ARM platform support has clearly not been used for years; how do we clean out the cruft?
- Virtualization is coming to ARM, but brings some issues of its own.
- DMA mapping has seen a lot of work in the last year, but there is still a fair amount to be done.
ARM minisummit, day 2
- Process review for the arm-soc tree: how well is this tree working toward the goal of cleaning up the ARM architecture code?
- Toward a single kernel image: what needs to be done to get a single kernel that boots on multiple ARM processor families?
- AArch64: the current status of 64-bit support for the ARM architecture.
- A big.LITTLE update; how can the kernel support this novel architecture?
- DMA issues and how to best support generic DMA engines in particular.
Linux Security Summit
- Secure Boot: keynote from Matthew Garrett.
- Secure Linux containers: using SELinux to create sandboxed containers.
- Integrity for directories and special files: extending the Integrity Measurement Architecture (IMA) to handle directories and other special files.
- DNSSEC: a look at the "cryptographically secured globally distributed database" for domain names and more.
- Security modules and RPM: expanding the hooks in RPM to support Smack and other security technologies.
- Kernel security subsystem reports: reports from subsystem maintainers.
Notes from others
- PCI minisummit, notes posted by Bjorn Helgaas.
- ARM minisummit, posted by Will Deacon.
- Media workshop notes, part 1 by Mauro Carvalho Chehab.
- The realtime microconference from LPC, courtesy of Darren Hart.
Acknowledgments
Michael would like to thank the Linux Foundation for supporting his travel to San Diego for this event; Jake would like to thank LWN subscribers for the same.
KS2012: The future of kernel regression tracking
For several years, Rafael Wysocki tracked regressions in the kernel, producing lists and statistical analysis of regressions in each kernel release. This task, which provided an (imperfect) measure of the increase or decrease in the quality of successive kernel releases, was considered so valuable by other kernel developers that Rafael was regularly invited to present his observations at the Kernel Summit (2008, 2009, 2010, and 2011). However, his presentation on this topic on the first day of the 2012 Kernel Summit had a rather different flavor, asking his peers what might be the future of regression tracking.
Over time, Rafael has steadily moved to working on other tasks in the kernel, and has had less time for regression tracking. Fortunately, for a time, a couple of other people stepped in to assist with the task of creating and maintaining the kernel Bugzilla reports that were used to track regressions. However, this work did not run all that smoothly. Rafael had already noted on previous occasions that the kernel Bugzilla was not well suited to the task of generating lists of kernel regressions. In addition, as Rafael stepped still further back from regression tracking, there seemed to be some differences of understanding between his successors and various kernel developers about how the Bugzilla should be used to track regressions. (Of note is the fact that Rafael was using Bugzilla bugs merely as a tool to track and measure regressions; whether kernel maintainers made use of those bugs as part of their work in fixing those regressions was a matter left to the maintainers.) These differences in understanding appear to be one of the reasons that Rafael's successors also stepped back from the task of regression tracking.
Which brings us to where we are today: for nearly half a year, there has been no tracking of kernel regressions. Furthermore, Rafael noted that his other commitments meant that he would not have time to return to this task in the future. This led him to ask a simple question: do we want to track kernel regressions?
At this point, many kernel developers spoke up to emphasize how valuable they had found Rafael's work. H. Peter Anvin, for example, noted that he is not a fan of Bugzilla: "But, I found the lists of regressions useful. It made me do things I didn't want to do." Linus Torvalds also noted that he loved the kind of overview that Rafael's work provided to him.
The session digressed into a variety of other topics. Rafael wondered whether the Bugzilla is even very useful as a tool for tracking and resolving kernel bugs. Responses from various developers showed that use of Bugzilla varies greatly across subsystems, with some subsystems relying on it heavily, while others avoid it in favor of mechanisms such as email. James Bottomley made the point that Bugzilla allows people unfamiliar with mailing lists to fire a bug onto Bugzilla, which then (automatically) appears on a mailing list. Bugzilla thus provides those users with a means to interact with the kernel developer workflow. Later in the session, the topic of Bugzilla versus mailing lists led Rafael to raise another concern: when some subsystems use Bugzilla while others use mailing lists or other mechanisms, what should the kernel developer community tell bug reporters about how to report bugs? That question often forms a difficult first hurdle for bug reporters. Unfortunately, there was little time to delve into that topic.
There was some general discussion about how Bugzilla should be used to track regressions, and whether there might be better tools than Bugzilla for this task, but no concrete alternatives were proposed. In the end, it was agreed that the question of tooling is secondary, and the tool choice might best be left to whoever takes on the task of regression tracking. The main point was that there was widespread consensus in the room that developers would like to see the regression tracking list return, and that the top priority was to find a person (or, possibly, several, so as to avoid overloading one individual and to ensure continuity when people are absent for vacations and so on) who would be willing to take on this task.
At this point then, there's a vacancy for one or more kernel regression trackers. Although the work is unpaid, regression tracking is clearly a task that is highly valued by many kernel developers and, as Rafael's experience shows, when the work is done in a way that matches the development community's needs, the role has a high profile. (Interested volunteers should contact Rafael.)
KS2012: Supporting old/oddball architectures, tool chains, and devices
H. Peter Anvin began his presentation on the first day of the 2012 Kernel Summit by noting that Linux supports an astonishing variety of hardware. In particular, it offers long-term support for older hardware, and is popular with some users for precisely that reason. Most of the time, supporting older and unusual hardware and tool chains is a worthwhile activity. Nevertheless, Peter's question was: what limits does the kernel development community want to set on the amount of effort invested in supporting old or unusual hardware architectures, build tool chains, and devices?
Peter's point was that supporting old and oddball systems inevitably brings effort and problems with it. When it comes to supporting these systems, there is a balancing act: those systems may matter a lot to a small set of users, but supporting them imposes a development burden across a wide spectrum of the kernel development community.
Peter gave a few examples. The kernel still includes code to support old Intel 386 CPUs (i.e., the predecessor to the 486 chips that appeared in 1989). One might ask whether anyone out there is still trying to boot Linux on such old hardware. However, Peter noted that, as an x86 maintainer, he had recently received a bug report from someone trying to boot Linux on such a system. But the genesis of that bug report is interesting. In essence, it came from someone who asked themselves: "Would I be able to boot a modern Linux kernel on this ancient system that I happen to have lying around?" In response, Peter's question was: should the kernel developer community continue to invest effort to support such users?
There are various difficulties associated with supporting old and obscure systems. For example, maintainers may want to make kernel changes that could impact code that supports legacy systems, but it may be difficult to assess what that impact may be. Furthermore, it's unlikely that a maintainer will have the hardware needed to test the impact of changes that affect legacy code. Legacy code thus becomes rarely executed code, and rarely executed code is code that hides breakages and security holes. (Peter noted that architecture-specific "hooks" are the "common" solution for dealing with oddball architectures. Such hooks are a form of "come from" code about which it is very difficult to reason. The only way of dealing with that difficulty is careful documentation of the hook preconditions, postconditions, and invariants. It may come as no surprise that the likelihood that such documentation exists falls "somewhere between rarely and never".)
There are a few questions to consider when deciding whether to continue to support a legacy system. For example, do users of that system exist, how numerous are they, and how important is it to continue supporting them with modern kernels? The problem is that it is difficult to obtain data that answers these questions for obscure systems. In particular, Peter questioned the notion that bug reports equate to active users.
Legacy build tool chains present a similar problem. Peter noted that "People complain when we introduce something that triggers a bug in a 7-year-old tool chain. At some point we need to be able to say: if you want to use a new kernel, you've got to be prepared to install an appropriate tool chain."
Peter extended his discussion of legacy support to include user-space interfaces. One example he gave was compatibility code that has been unused for a long time. As a specific case, a.out support on x86-64 has been the source of several serious security holes; however, the a.out executable format was already largely obsolete by the time the x86-64 architecture arrived, and likely has never seen serious use. Is there any good reason to support such obsolete interfaces, and is there a sane path to deprecating and removing them?
At this point, Linus spoke up to detail his stance on changes to user-space interfaces: "I never said don't change the ABI. We have done it to fix bad bugs. Just don't break user space. If we can change the ABI, but no one complains, then we are good to go." As a step in the removal process, Rusty Russell proposed the addition of a CONFIG_MODERN option that would be on by default, and could be disabled if access is needed to legacy features. Len Brown responded that this would likely suffer a similar fate to CONFIG_EXPERIMENTAL, which was always turned on (i.e., CONFIG_MODERN would likely always be turned off by distributors). Thomas Gleixner instead suggested placing legacy features under configuration options marked as "bool n", preventing the options from being selected, and then removing the legacy features unless someone complains within a reasonable time.
The session ended without any particular conclusions, but a few people expressed sentiments in favor of removing support for legacy systems, and no one raised serious objections. Thus, it may be that in the future we see some serious effort devoted to cleaning out kernel code for systems that clearly see no serious contemporary use. (It is noteworthy that a discussion on removing stale platforms was also taking place in the ARM minisummit.)
KS2012: ARM: Secure monitor API
Will Deacon led the first KS2012 ARM minisummit session on creating a common API for the secure monitor mode of ARM processors. This security feature—part of the ARM TrustZone extensions—uses a secure monitor call (SMC) instruction to switch between secure and non-secure modes. It can be used to implement digital rights management (DRM), secure payment, and more. The secure monitor also provides services for booting and idling the processor. Currently, Linux has no common API to access secure mode, so various ARM platforms implement things differently. The current situation is painful, Deacon said.
What's needed, Deacon suggested, is a common API that could be used by any ARM system-on-a-chip (SoC) that needed it. Samsung has proposed an API that handles the boot and idle SMCs, which could be used as a starting point. But, handling SMCs is still likely to require some amount of board-specific code.
For some ARM boards, there are SMCs that need to be done in the early boot assembly code. There are various SoC errata that need to be worked around via SMCs early in the boot process. It would be preferable if calling the SMCs could be pushed into device drivers, but that may not be possible for all processors.
There are also some services provided via SMC, like video and audio format handling, which need to be accounted for. Those kinds of services could be described in the device tree for the processor if there is some agreement on how to create those descriptions. That would allow access to those services via drivers using the common API.
Deacon suggested that there was no need to solve all of the problems "in one go", and that focusing on the things that could be done from drivers would be the place to start. There will still need to be platform-specific code to handle setup that needs to be done before the MMU is enabled, as well as to handle quirks of some of the platforms.
Another problem is that the actual calls into secure mode are not standardized, and there are already multiple existing implementations. The differences can be in the mapping from a number to the actual SMC it corresponds with (somewhat akin to system call numbers). The parameters can also be different. Those differences could be described in the device tree for the platform, and used by the common SMC framework. Actually invoking an SMC would stay in the platform-specific code.
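That platform-specific piece might look something like the following sketch; the function numbering and register conventions shown here are invented for illustration, and this is not the proposed common API:

```
#include <linux/types.h>
#include <linux/compiler.h>

/* A minimal sketch of a board-specific secure monitor call.  The
 * function ID in r0 (akin to a system call number) and the argument
 * in r1 follow a convention that varies from platform to platform. */
static noinline u32 platform_smc(u32 function_id, u32 arg0)
{
	register u32 r0 asm("r0") = function_id;
	register u32 r1 asm("r1") = arg0;

	asm volatile(
		".arch_extension sec\n"	/* let the assembler accept SMC */
		"	smc	#0\n"	/* trap into the secure monitor */
		: "+r" (r0)
		: "r" (r1)
		: "r2", "r3", "memory");

	return r0;	/* the result comes back in r0 */
}
```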
There is a recent recommendation from ARM to standardize the SMCs, but it is just that: a recommendation. That document also describes how to use SMCs for doing thermal and power management, so the common API could eventually incorporate some or all of those kinds of calls, but they are just recommendations that may or may not catch on.
By starting with what can be done in the C code from drivers, at least a partial solution to the "complete mess" that exists today can be achieved. Starting with the cpu_boot() and do_idle() hooks that the Samsung API provides, then adding additional SMCs as needed, will start cleaning up that mess.
KS2012: ARM: Stale platform deprecation
There are a handful of "minor" ARM platforms in the mainline that haven't been touched for as many as five years, Olof Johansson said in kicking off a discussion of perhaps deprecating some of those platforms. The code for those platforms gets updated whenever there are sweeping changes, but they may not have been tested in years. They build just fine, but no active developers have the hardware to actually test them. He wondered when or if those platforms can ever be removed from the tree.
He suggested that once device tree has been proven as a solution for reining in the explosion of board files in the ARM tree, perhaps a one or two-year deadline could be set. Those platforms that don't update to use device tree could then be dropped.
There are still lots of older ARM platforms that are supported; OMAP 1 was cited as an example. Even though some in the discussion were a bit skeptical about older chips running mainline kernels, that does happen. There are hobbyists or others who keep the older chips working. Thus, those are not targets for deprecation.
But, Tony Lindgren noted that OMAP 2 has some 30 different board files and that there is "no way" those can all be converted to device tree. Arnd Bergmann suggested that in cases like that, the drivers should be converted to work with device tree and the board files should just be removed from the tree. Users of those platforms can either continue using older kernels or create device tree descriptions using the updated drivers.
That may not be possible in all cases. Lindgren mentioned that ARM maintainer Russell King has an automated board test setup that could be affected if those board files are removed. On some platforms, it may require major work to get power management and other features working with device trees. For legacy boards, it is unlikely that anyone will actually do that work.
There is a question of how to decide which board files should be deprecated, Lindgren said. A list of proposed deprecations should be created. Some kind of tool that uses Git to find which platforms have not been updated recently would be useful here.
In the discussion that followed, several different boards were mentioned as candidates for deprecation. Some participants spoke up for specific boards or noted that King used them in his testing. That led to a joking suggestion that someone find a way to surreptitiously relieve King of the burden of some of those boards.
The bcmring platform is particularly problematic because it was completely broken for roughly two years, but has recently been picked up by some hobbyists (who happen to work for Broadcom, creator of the platform). It has a "horrible" OS abstraction layer, so it doesn't really belong in the mainline in its present form. Will Deacon suggested that perhaps the platform could be moved to the staging tree; a checklist could be provided for what is needed before it would be acceptable in the mainline.
For defconfig files, Johansson proposed that "superset configurations" be created, which turn on every driver and feature that could be present on a platform. Those can then be "whittled down" by board or SoC vendors as needed. That will help increase the amount of build testing. Bergmann agreed, saying that having 30 defconfigs was not really a problem, even if only five are being used in practice. That would be a big improvement over the 130 or more defconfigs that are currently in the tree, Johansson said.
Specific platforms that were mentioned as targets for deprecation included ks8695, h720x, l7200, netx, and w90x900. In addition, ixp4xx was targeted for deprecation, but that may still be a ways off.
KS2012: ARM: Virtualization
The next KS2012 ARM minisummit session discussed the virtualization work that has been going on for ARM. Both KVM and Xen are under development for ARM, but neither has gotten to the point of being merged. Marc Zyngier gave an overview of the KVM status, while Stefano Stabellini reported on Xen.
Zyngier began by noting that virtualization extensions were added to the most recent revisions of the ARMv7 architecture. There is now a hypervisor mode in the processor, which runs at a higher privilege level than the OS.
For KVM, physical interrupts are handled by the host, with guests only seeing virtual interrupts. That stands in contrast to Xen where certain physical interrupts are delivered to the guests, as Stabellini reported. According to Olof Johansson, the virtualization model provided by ARM fits the Xen hypervisor-based virtualization better than KVM's kernel-based model.
Paul Walmsley asked about vendors who were using the hypervisor mode for doing cluster switch operations, and wondered how well that would work with KVM. Zyngier said that it would work "badly", because KVM and the cluster code would "fight" over hypervisor mode; whoever got there first would win. Will Deacon noted that those who wanted to run KVM on their systems would need to move the cluster code to a higher level.
In answer to a question from Magnus Damm, Zyngier said that KVM on ARM would not support virtual machine nesting. It also would not support the emulation of other CPUs, so the guest CPU must match that of the underlying hardware. The QEMU developers have decided that the work necessary to do that emulation was not worth the trouble, as one of the participants reported.
The KVM guests run at privilege level 1 (PL1), which is the level used for normal kernels, but the host kernel runs at PL2. That means that switching between guests requires lots of transitions, from PL1 to PL2, then back to PL1 for the switched-to guest (and possibly to a lower privilege level depending on what the guest is running).
Guests get preempted whenever a physical interrupt occurs, but the guests never see those, Zyngier said. A stage 2 page table is populated by the host for each of the guests, and the host has a stage 1 page table. There are no shadow page tables. Guests can also be preempted when pages need to be faulted in via the stage 2 page tables.
Devices are mapped into the guests. The virtual CPU interface—part of the ARM generic interrupt controller (GIC)—is mapped in as well. It is believed that all devices can be mapped into the guests, but that has yet to be tried. Because of that, the same kernel can be used for both host and guests. Stabellini noted that the same is true for Xen, which is unlike the x86 situation.
Caches and TLB entries are tagged with an 8-bit virtual machine ID (VMID). Guests are not aware that there are no physical devices; they just poke what they think are hardware registers, a stage 2 translation is done, and the data is forwarded on to the hardware. These memory-mapped I/O devices are emulated by QEMU.
Interrupts are injected into the guest by manipulating bits on the guest stack to indicate an interrupt. Xen, on the other hand, uses a "spare" interrupt to signal events to the guest. There is some concern that there is no real guarantee that there is always a free interrupt number to be used. Right now, Xen uses a fixed interrupt number, but that will likely change.
In order to boot a KVM host, the kernel must be started in hypervisor mode. That requires a KVM-compliant bootloader. When booting, a very small hypervisor is loaded, whose "only purpose in life is to be replaced". It has a simple API with just two calls, one to return a pointer to the stub itself, and one to query whether hypervisor mode is available. Zyngier said that he believes Xen could also use that hypervisor stub if desired. One possible problem area is that some "other payloads" (alternate operating systems) may not be able to handle being started with hypervisor mode on, so there may need to be a way to turn it off in the bootloader, Johansson said.
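The flavor of that two-call stub API can be sketched as follows; the call numbers and the HVC-based calling convention here are purely illustrative, not the actual implementation:

```
/* Illustrative call numbers for the stub's two services; the real
 * numbering is defined by the stub itself. */
#define HYP_STUB_GET_SELF	0	/* return a pointer to the stub */
#define HYP_STUB_HYP_AVAILABLE	1	/* is hypervisor mode usable? */

static unsigned long hyp_stub_call(unsigned long function)
{
	register unsigned long r0 asm("r0") = function;

	asm volatile(
		".arch_extension virt\n"
		"	hvc	#0\n"	/* call up into the stub */
		: "+r" (r0) : : "memory");

	return r0;
}
```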
In contrast to KVM, Xen is a hypervisor that sits directly on the hardware, Stabellini said. Everything else is a guest, including Linux. All of the guests are fully aware that they are running on a hypervisor. Xen for ARM assumes that the full virtualization extensions are present and that nested page tables are available. Zyngier noted that KVM makes the same assumptions.
The Xen ARM guest is based on the Versatile Express board, but with far fewer devices defined in the device tree. The Xenbus virtualized bus is used to add paravirtualized devices into the guest. QEMU is not used, so there is no emulated hardware.
Xen ARM is "completely reliant" on device tree, Stabellini said. His biggest worry is that device tree might go away for ARM as he has heard that ACPI may be coming to ARM. The problem there is that the ACPI parser is too large to go into the Xen hypervisor (it roughly doubles the code size). Parsing device trees is much easier, and requires much less code, so trying to do the same things with ACPI "would be a nightmare".
Johansson pointed out that the decision about ACPI would not be made by Linux developers or ARM; there is a large company in Washington that will determine that. For power management on some devices, ACPI handling may be required. But, as Zyngier said, adding ACPI to ARM does not mean the death of device tree.
The governance of ACPI is closed now, and that needs to change so that the ARM community can participate, one participant said. According to Arnd Bergmann, embedded systems will not be moving to ACPI any time soon, but there is a real danger that it will be present on server systems. ARM devices that are targeted at booting other OSes will be using UEFI, which can pass the device tree to the kernel in the right format, he said.
The ARM Xen hypervisor is almost fully upstream in the Xen tree at this point. The Linux kernel side has been posted, and is not very intrusive, Stabellini said. The patches to the kernel are mostly self-contained, with only small changes to the core.
Another concern was the stabilization of the device tree format. If that changes between kernel releases, there can be a mismatch between the device tree and the kernel. Bergmann said that kernel developers are being asked to ensure that anything they add to the device tree formats continues to work in the future, while firmware developers are being warned not to assume a given device tree works with any earlier kernels. Once all main platforms have been described with device trees, there will be an effort to ensure that those don't break in the future, he said.
KS2012: ARM: DMA mapping
In the last discussion on day one of the 2012 ARM minisummit, Marek Szyprowski gave a status update on changes in the ARM DMA subsystem over the last year. There has been a lot of work in that time, with most of it having been merged in 3.5. The most important change is the conversion to dma_map_ops, which provides a common DMA framework that can be implemented as needed for each architecture. It allows for both coherent and non-coherent devices, and supports bounce buffers and IOMMUs.
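For a rough idea of the shape of that framework, here is a hedged sketch of a platform supplying its own operations; the my_*() functions are hypothetical, and only two of the hooks are shown:

```
#include <linux/dma-mapping.h>

/* Hypothetical per-platform implementations of two of the hooks,
 * following the 3.5-era dma_map_ops signatures. */
static void *my_alloc(struct device *dev, size_t size,
		      dma_addr_t *handle, gfp_t gfp,
		      struct dma_attrs *attrs);
static void my_free(struct device *dev, size_t size, void *cpu_addr,
		    dma_addr_t handle, struct dma_attrs *attrs);

static struct dma_map_ops my_platform_dma_ops = {
	.alloc	= my_alloc,	/* coherent (possibly CMA-backed) allocation */
	.free	= my_free,
	/* .map_page, .map_sg, .sync_*, ... for streaming mappings */
};
```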
The second most important change was the addition of the Contiguous Memory Allocator (CMA). It is in 3.5, but is still marked as experimental. It has been tested on some systems, and Szyprowski hopes that it will be stabilizing over the next kernel cycle or so.
Lastly, a bunch of new attributes for DMA operations have been added. These are mostly for improving performance and to "avoid some hacks", Szyprowski said. For upcoming releases, he would like to work on better support for declaring coherent areas.
For 3.5, there was work to remove some of the limits on DMA, in particular, the 2MB limit on mappings. The fixed-sized coherent area has been replaced with memory from vmalloc(). That can't be done in atomic context, however, so there is a small pre-allocation for use in that context. For some devices that buffer was too small, so the size has been made platform dependent. The IOMMU implementation had no support for an atomic buffer at all, but patches have been posted recently, which he hopes to get into 3.6.
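For example, a driver that needs coherent memory in a context where sleeping is forbidden uses the standard API with GFP_ATOMIC, and it is exactly such requests that the pre-allocated pool exists to serve; the wrapper below is just a hedged illustration:

```
#include <linux/dma-mapping.h>

/* Allocate a coherent DMA buffer from atomic context.  Since the
 * vmalloc()-backed path can sleep, a GFP_ATOMIC request must be
 * satisfied from the small pre-allocated atomic pool. */
static void *grab_atomic_dma_buffer(struct device *dev, size_t size,
				    dma_addr_t *handle)
{
	return dma_alloc_coherent(dev, size, handle, GFP_ATOMIC);
}
```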
The IOMMU code is not particularly ARM-specific, Szyprowski said; it could be used for other architectures. There is a bit more work to isolate the common code and make it generic, but he would need to coordinate that work with the other architectures. Arnd Bergmann suggested just moving the code to a generic place, but leaving it turned off for other architectures. That would allow others interested to turn it on and try it out.
Bergmann noted that when CMA was proposed a year and a half ago, it was envisioned that it would be unconditionally built for all v6 and v7 platforms. But that would make all recent ARM architectures depend on an experimental feature, so he suggested that it might be time to turn off the experimental designation.
There are still some issues that need to be resolved before that can happen, Szyprowski said. There are cases where the allocation can fail because of different accounting between movable and non-movable regions. But Mel Gorman strongly recommended building CMA by default, since the problems just result in an allocation failure and do not cause a full system failure. He suggested making CMA the default with a fall-back to the old code if it fails. That way people will start using the feature, potentially see fall-back warnings, and help fix the problems. If it stays as an experimental feature, he fears that no one will actually use and test CMA.
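A sketch of the fallback Gorman described might look like this; dma_alloc_from_contiguous() is the CMA allocator entry point as of 3.5, while the warning and fallback logic here are illustrative:

```
#include <linux/dma-contiguous.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/bug.h>

/* Try CMA first; warn and fall back to the old allocator on failure,
 * so users see (and hopefully report) the problem instead of hitting
 * a hard failure. */
static struct page *alloc_pages_cma_fallback(struct device *dev,
					     int count)
{
	struct page *page = dma_alloc_from_contiguous(dev, count, 0);

	if (!page) {
		WARN_ONCE(1, "CMA allocation failed, using fallback\n");
		page = alloc_pages(GFP_KERNEL,
				   get_order(count * PAGE_SIZE));
	}
	return page;
}
```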
Bergmann thought that any platform using a boot time reservation of memory (i.e. a "carve out") should be forced into using CMA. One of the problems with that idea is that some of the carve-outs are not upstream because they are for out-of-tree graphics hardware. In addition, the vendors are moving on and are no longer interested in adding features or updating their drivers to use a new feature like CMA.
Noting that there are multiple ways to do carve-outs, Gorman also suggested creating a core carve-out API for code consolidation. It could provide memory that is isolated or DMA-able, for example, so that all of the carve-outs in the kernel could use it. CMA could underlie that API, and it could implement the fall-back until CMA shakes out.
Fragmentation within CMA regions was mentioned as a concern. While Gorman didn't think it all that likely to happen in practice, some noted that there were already problems when using memory regions for OpenGL. User space actions can cause significant fragmentation in that case. Szyprowski suggested using separate CMA regions as a way to reduce the problem.
CMA still needs work to support highmem; there is no reason that it needs to be restricted to lowmem. Szyprowski hopes to get some time to work on that in the future. Wiring up CMA to x86 DMA is another thing that he plans to work on.
KS2012: ARM: Process review for the arm-soc tree
Arnd Bergmann and Olof Johansson started day two of the ARM minisummit with a look at the arm-soc tree that they have been managing. They wanted to go over what has happened with the tree during the last year to see what was working and what could be improved. We are "trying to make you all happy", Bergmann said, while also trying to keep Linus Torvalds happy, which are conflicting goals at times.
The work split between the two has worked well, Bergmann said. When one of them has no time, the other has been able to pick up the slack. From a personal perspective, Bergmann said he is most unhappy when he has to reject a patch set. Actually it is worse when he has to make a decision about patches; some are easy to reject out of hand, but others are more difficult. If a huge patch set comes in, perhaps late in terms of getting it ready for the next merge window, or with lots of good patches but some that do "really nasty" things, he has to decide whether to reject it or not.
One thing to note, Bergmann said, is that Torvalds said at the Kernel Summit that he is "not totally hating our guts anymore". That's progress. Paul Walmsley asked what things Torvalds is most sensitive to in terms of the ARM tree these days. Bergmann said that he was not sure what the problems are now but, in the past, the totally uncoordinated nature of ARM development was the main problem.
It used to be that Torvalds would get 15 pull requests for various sub-architectures. That could lead to lots of merge conflicts and dependencies between trees, which annoyed him. The last merge window didn't have many of those problems. The number of patches was down slightly, but not hugely, and not enough to explain that reduction, Bergmann said.
Walmsley followed up by asking what the arm-soc maintainers would like to see from the sub-architecture maintainers. Johansson said that using signed Git tags would be very nice. That helps because the tag's message ends up in the merge commit. It also confirms that the patches came from whom they purport to, but the most important thing that signed tags bring is that message in the merge commit. Bergmann added that he tries to come up with something for the merge commit if there is no signed tag, but he would much rather get something from the maintainer directly.
One of the goals of the arm-soc tree is to facilitate (and force) the ARM cleanup process. The hope was that it would help pressure maintainers' managers to free up more time for that work. Bergmann asked if that process was working. Linus Walleij noted that the best pressure on management comes from customers, which, for him, are the handset and equipment manufacturers. Those manufacturers or Google make for an effective lever to change things. He is not sure how it came about (and Bergmann expressed surprise as well) but some customers are now asking for device tree support, which makes it easier to convince his management to spend time on that work.
Pushback from distributions is missing currently, Tony Lindgren said. Right now, ARM is distribution-unfriendly; device makers and SoC vendors are not getting the feedback to fix that. Walmsley wondered if the distribution and customer requirements would be in conflict, which could lead to problems.
Lindgren said that he sees tablets running different distributions in the future, but the device makers may not know what the distributions need. But Johansson cautioned that device makers aren't very interested in hearing from those who aren't shipping significant volumes of their product. Volumes in the five- and even six-digit range just aren't of much interest to the device makers. For the most part those manufacturers are just following Android, Stephen Warren added.
Ben Dooks was concerned that ARM driver maintenance would suffer as those drivers move out of the arch/arm tree. Bergmann disagreed with that assessment because he thinks the overall work will become easier. The drivers will be centralized and use the same frameworks, so the maintenance burden will actually decrease.
Overall, there weren't many complaints about how things are going. For the most part, participants seemed pleased with how the arm-soc tree, and the overall ARM development process, was working. There's still plenty to do, but the process piece seems largely nailed down.
KS2012: ARM: Toward a single kernel image
Over the last two or three years, the ARM Linux development community has been working toward the goal of having a single kernel image that can boot on multiple ARM platforms. One of the preconditions for creating such an image is the elimination of duplicate header files in the tree, which has mostly been completed, Arnd Bergmann said. The biggest problem now is that platform-specific header files are included into the drivers. When building a multi-platform kernel, which of the platform's headers does the driver get?
Drivers really shouldn't be including the platform-specific headers (from the mach-* directories), but many do. There are 300-350 header files under mach-* that are currently used by drivers. There are a number of reasons why this happened: various frameworks were missing for things like power domains, it was easier to add a header file into a directory that is owned by the platform rather than arguing about getting it into a more generic place, and so on. mach-* became a dumping ground, Bergmann said.
He has a patch that would rename all of those include files so that the platform name becomes the prefix of the header filename. It also changes the references in the driver source files to include the proper file. That doesn't solve the dumping ground problem, it simply works around it so that multi-platform kernels can be built.
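The effect on a driver would be a mechanical change along these lines (the file names here are hypothetical):

```
/* Before: ambiguous in a multi-platform build, since every platform
 * may provide its own <mach/hardware.h>: */
#include <mach/hardware.h>

/* After the (hypothetical) rename, the platform name is part of the
 * file name, so the right header is unambiguous: */
#include <mach/omap2-hardware.h>
```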
Bergmann said that ARM maintainer Russell King was not in favor of that approach. Instead, King would like to use the single zImage as something of a carrot to get the sub-architecture maintainers to clean things up. King thinks that the platform-specific directories should not be in the include path for building the drivers, which would force the issue.
One participant suggested that there aren't that many things to fix per platform, but Bergmann disagreed. There is a lot of work to do for some platforms, including some of those that are the "most interesting", such as Samsung and OMAP.
Magnus Damm suggested that checkpatch be extended to complain about drivers that include files from the platform-specific directories. That would help to ensure new drivers were not including improper headers. But, Bergmann said that he didn't use checkpatch before accepting patches, though he admitted that maybe he should do so. Paul Walmsley said that OMAP requires patches to be checkpatch clean (other than 80-column warnings) before they are accepted.
Rob Herring has an alternative approach that is likely to be more acceptable to King. He has reworked the header files without renaming them, which reduces the code churn. There are still three problematic header files, though: uncompress.h, gpio.h, and timex.h. But Herring can build a number of platforms into a single zImage without using Bergmann's renaming trick.
Bergmann wanted to see if the assembled ARM developers could come to a conclusion on the right approach. Basically, either of the two header file rearrangement solutions could solve the technical problems in building multi-platform kernels, but they wouldn't force the cleanups that King would like to see happen. In general, most in the room seemed in favor of getting things cleaned up so that there is a clean separation between drivers and platforms—as King has advocated.
It is a perfect task for Linaro, as Walmsley pointed out. It was noted that the worst offenders are all Linaro members, which makes it align well with the organization's mission. Bergmann said that Linaro has some people working on multi-platform kernels who could potentially work on the project.
The conversation turned toward how to get there. Tony Lindgren said that he could do an initial pass on OMAP in the next week or so to start to figure out how to fix up the drivers. There are certain frameworks (common clock, sparse IRQ) that drivers and platforms will be required to use in order to be included in single zImage effort. In addition, SMP-capable platforms will need to use Marc Zyngier's smp_ops framework, which Bergmann will be reworking and posting in the near future.
Using Herring's header file changes, but not renaming all the mach-* include files, is the basic approach chosen. That will still use some parts of Bergmann's changes. In the end, it will still be a fair amount of code churn, so there was discussion of how to manage those changes in the arm-soc tree. The intent is to try to make it work for the 3.7 cycle, with a fallback to making those changes the base patch for the arm-soc tree for 3.8 if it gets too messy.
Bergmann also demonstrated the Kconfig changes that he has made so that kernel developers can enable multiple platforms in their kernel builds. Once multi-platform support is selected, then one or more of the ARM architecture versions (v4-7) can be chosen. For each architecture, possible SoCs are listed and, if none is chosen, a default is picked. In addition, SoC maintainers can decide whether to expose individual boards for selection. Those Kconfig changes could be used as the basis for building multi-platform kernels once the rest of the work is done.
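Such a configuration might look roughly like the following Kconfig sketch; the option names are illustrative and may differ from what is eventually merged:

```
config ARCH_MULTIPLATFORM
	bool "Allow multiple platforms to be selected"
	depends on MMU

config ARCH_MULTI_V7
	bool "ARMv7-based platforms"
	depends on ARCH_MULTIPLATFORM
	select CPU_V7

config ARCH_VEXPRESS
	bool "ARM Versatile Express family"
	depends on ARCH_MULTI_V7
```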
The header file renaming script will still be useful, Bergmann said, to help figure out the include file dependencies, which drivers require which platforms, and so on. Using shell tools and grep on a renamed tree can give some insights into how things are currently organized. That will help as these driver problems are unwound on the way to a multi-platform ARM kernel image.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Whither Mandriva? Part 2
When LWN last looked in April 2012, the Mandriva community was in an uncertain state. Information was sparse, and those involved were being cautious about what they said in public. Four months later the caution remains, but the general outline of events has become clearer, even if the details remain vague. It now appears that Mandriva S.A., the company behind the commercial distribution, is turning over development of the code to a community foundation, in which it and ROSA are the main supporters, at least partly in the hopes of winning back community support.
Four months ago, Mandriva S.A. was known to be in financial trouble and considering its options. On the mailing list for Cooker, Mandriva's development repository, rumors were flying about a dispute between Mandriva and ROSA (see below), and an accusation was made that ROSA was poaching Mandriva employees. A major concern was that this dispute would prevent the cooperation necessary to create a foundation for the Mandriva code, which most observers seemed to think was overdue.
Today, that dispute has been settled, at least publicly. However, its causes remain uncertain. Konstantin Kochereshkin, PR Manager for ROSA Lab, would say only that, "Those disagreements have now been solved."
Jean-Manuel Croset, the new CEO of Mandriva S.A., was not much more forthcoming, saying:
This statement could suggest any number of origins and outcomes for the dispute while supporting none for certain.
Whatever the nature of the disagreement, both Mandriva and ROSA remain reluctant to give details, and currently both are emphasizing that they are cooperating with each other. According to Kochereshkin, the attacks on ROSA in the Cooker thread were made by a Russian competitor, not anyone with any connection with Mandriva S.A., and "his posts are just not true."
Similarly, Croset stresses that Mandriva was "[not] the ones who made these accusations, nor do we feel raided in any way by ROSA." He went on to state that today, "cooperation is good" between Mandriva and ROSA.
Restructuring in the corporation and the community
The last four months have also seen major changes in the organization of the Mandriva community. To start with, on April 30, Mandriva S.A. shareholders approved a recapitalization strategy. Details of this strategy have not been made public — according to Croset because of non-disclosure agreements and a wish not to dwell on the past.
The most that Croset would say about Mandriva in the past was that "the main problem was that too many resources were invested in the desktop and too little in our other product lines. The structure was also a little bit too loose." However, Croset did indicate that, in the future, Mandriva will move away from efforts to sell the distribution — an effort that most commercial distributions abandoned over a decade ago — and adopt a mixed services and server-based business model similar to that of Canonical or Red Hat. Croset describes this reorganization as being about 75% complete.
In pursuit of that reorganization, Mandriva has hired five new employees in the last few months. The most prominent of the new hires is Charles Schulz, a former employee of MandrakeSoft (Mandriva's former name). Schulz is best known as a co-founder and member of the board of directors of The Document Foundation, the organization behind LibreOffice. Hired as Director of Community, Schulz's first task has been the creation of the Mandriva Foundation to oversee the community development of the code. Schulz met with stakeholders in Paris on June 19 to begin organization of the foundation.
Schulz also posted a poll to rename the foundation's distribution, explaining that "Mandriva" would remain the name for the distribution used by Mandriva S.A. in its products. Originally, the poll was supposed to close in one week, but, six weeks later, it remains open, with "Mandala Linux" and "Open Mandriva" being the top choices. The final decision on the name is due in September.
According to Croset, the structure of the Mandriva Foundation (or whatever its final name becomes), and the extent to which it will be controlled by Mandriva S.A. are still being determined. However, Schulz's diagram of the foundation's proposed structure suggests that ROSA and Mandriva S.A. are being considered as the major corporate members and distributions involved, with derivative distributions like Unity Linux involved, but with less input or control.
What the diagram does not make clear is the role envisioned in the foundation for the Mandriva community. Considering some of the dissatisfaction in the community over Mandriva's past stewardship of the code, the community's role may be the main reason for the time needed to organize the foundation, which currently lacks a web site as well as a final name. With all the distrust that has developed in the last couple of years, community members might be determined to limit the role of commercial interests before they agree to participate.
Certainly a notable absence in the diagram is Mageia, the distribution founded by former Mandriva employees and community members in September 2010 with the declaration that, "We do not trust the plans of Mandriva S.A. any more and we don't think the company (or any company) is a safe host for such a project."
These sentiments do not appear to have changed in the last two years; when asked if Mageia had any interest in the foundation, Patricia Fraser, Mageia's marketing communications team leader, replied:
Since Mageia.org is already registered as a non-profit, merging with the Mandriva Foundation would naturally be difficult. Fraser's reply suggests that the major fork in the Mandriva code base is likely to continue. Croset had announced in a podcast that Mandriva S.A. would use Mageia for its server products, but Fraser's comment would seem to preclude any cooperative development of Mageia within the Mandriva Foundation.
Whether other community members have the same reluctance is uncertain since the initial organization work is happening in private. However, if they do, then the time needed to organize the Mandriva Foundation would be even more understandable.
Under the ROSA
In the middle of these reorganizations, ROSA has appeared as something of a wild card, a relative newcomer about which little is known.
According to co-founder Dmitry Komissarov, ROSA is another mixed community-corporate venture — a combination that he believes necessary to make free and open source software a going concern in Russia. It includes ROSA Lab, a subsidiary that has been the most public aspect of the company, which is "focused on research and new technologies development [and] should become a base for future ROSA product advantages."
Komissarov has co-founded five technology companies in the last decade — many of which appear to be quite small since, although he lists them as still active, they lack any web site. He was also on the board of Mandriva between August 2010 and October 2011.
ROSA itself has some 100 employees, having grown fourfold since it was founded. Those employees include Deputy CEO Vladimir Rubanov, a director at the Linux Verification Center, which has done some work with the Linux Standard Base, and R&D Director Eugene Sokolov, who started the now-defunct Linux XP distribution. The company has also employed Jeff Johnson, the maintainer of RPM5, apparently on a contractual basis.
ROSA's first and so far only released product is the freely-licensed ROSA Marathon, a long-term-support distribution based on Mandriva that has attracted attention for its innovations on the KDE desktop. These innovations include a fixed bottom panel whose task bar displays open windows as icons, a full-screen menu window with a timeline for locating recently used applications, and customizable previews of documents and downloads. In addition, ROSA Marathon features some minor tweaking of the Dolphin file manager, the ROSA Media Player, which supports both audio and video, and ROSA Sync, which configures a cloud client.
A commercial distribution, ROSA Desktop 2012, is scheduled for release by the end of 2012. ROSA is also developing products for mobile devices and a cloud storage service.
Whatever the past causes for friction with Mandriva S.A., at this point ROSA is acting as a major player in the unfolding events. Kochereshkin talked of how ROSA will "work jointly" with Mandriva S.A. in the foundation and how "ROSA and Mandriva met as peer members" during discussions.
Some community members may still be unsure what to make of ROSA, but these references suggest that either ROSA wants to become a major player or else has already become one. A logical inference would be that part of the delay in finalizing the reorganization is the negotiation of exactly what ROSA's position will be.
Still unanswered
This summary raises almost as many questions as it answers. Mandriva S.A. has yet to disclose its investors or anything beyond its general directions for the future. Nor is it clear yet how the foundation will be organized and governed, and whether community interests will have an equal voice with commercial ones.
Looking into the future, you might also ask: can Mandriva S.A. turn its troubled fortunes around? Moreover, can any structure for the foundation unite all those working with the Mandriva code base, or are the divisions too deep to overcome?
At this point, the apparent slowness of the reorganization could indicate that key aspects are proving difficult and likely to be only partially successful. Anybody who has been involved in corporate decision-making can appreciate the reluctance to complicate ongoing negotiations with premature publicity. Combined with the slowness of results, though, at some point that reticence may create the impression that difficulties are being hidden precisely because those trying to solve them are floundering.
You have to wonder, too, whether greater transparency would help to improve relationships with the community. However, obligations to shareholders may make transparency impractical. Very likely, too, community distrust runs too deeply in some places to be overcome quickly, no matter what the policy.
From the available glimpses, corporate and community Mandriva are struggling with some determination to reinvent themselves. However, the tactics of that struggle are still uncertain to any outside observer — let alone what chances they might offer for success.
Brief items
Distribution quote of the week
FreedomBox 0.1 released
The very first release of the FreedomBox software has been announced. "This 0.1 version is primarily a developer release, which means that it focuses on architecture and infrastructure rather than finish work. The exception to this is privoxy-freedombox, the web proxy discussed in previous updates, which people can begin using right now to make their web browsing more secure and private and which will very soon be available on non-FreedomBox systems."
OpenIndiana lead Alasdair Lumsden resigns
OpenIndiana is a project aiming to create a Linux-like system built on the Illumos (OpenSolaris) kernel. That project's lead has just resigned in a most bitter manner. "All of you, Joyent, Nexenta, Delphix, are complicit in the increasing irrelevance of Illumos. OI, even in it's current state, is by far the most widely used Illumos distro, so by not supporting it beyond contributing to the Illumos core, you've all shot yourselves in the foot." (Thanks to Paul Wise).
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 471 (August 27)
- Maemo Weekly News (August 27)
- Ubuntu Weekly Newsletter, Issue 280 (August 26)
Page editor: Rebecca Sobol
Development
GStreamer Conf: The approach of GStreamer 1.0
The GStreamer framework handles media capture and delivery for many of desktop Linux's most popular applications, but more than ten years out, it still has not made a 1.0 release. To be sure, the actual version number does not matter nearly as much as the stability of the software and the reliability of the API, but GStreamer's 1.0 release has seemed tantalizingly close for the past several months. The third annual GStreamer Conference was held in San Diego the last week of August, and, while 1.0 did not make its debut at the event, the core developers did give a solid deadline, which should be welcome news to application authors and distributions alike.
Ironically, one of the key impediments to a 1.0 release has been the success of the current (0.10) stable release. With the framework working well, a 1.0 release might seem like a simple project governance decision. But because an N.0 release carries with it the implicit promise of API and ABI stability, the project decided it had to implement several key changes before it could put the final stamp on the release. GStreamer maintainer Wim Taymans opened the conference with a keynote talk covering the 0.11 development cycle, including what needed to change from the 0.10 series.
The two biggest challenges, he said, were the advent of embedded Linux (often system-on-chip (SoC) boards) systems using GStreamer and the move toward GPU-accelerated rendering on the desktop. SoC platforms posed a challenge because they include substantially different hardware, such as different memory architectures and hardware rendering pipelines (which could include discrete components such as a video scaler unit). GPU acceleration on the desktop can result in significant performance gains, but it comes at the cost of supporting multiple, incompatible GPU libraries from the various hardware vendors. Those library incompatibilities are not limited to the lowest levels, either. Some GPU chipsets support higher-level features like subtitle compositing, while others do not.
Along with the changing face of hardware, the project had several wishlist items it wanted to implement for a 1.0 release. One was an infrastructure that allowed a GStreamer application to attach useful non-media metadata to a buffer. Such data might include the coordinates of a "region of interest," for example. If the GStreamer pipeline processing a video knows that it will need to crop the video in order to display it, the optimal performance is usually obtained by marking the region and letting the video hardware do the cropping at render time. There are other uses for attaching metadata, of course, such as tagging parts of an image with facial recognition information. Along similar lines, GStreamer 1.0 will support passthrough of compressed audio to the playback hardware. This allows headsets and USB sound cards (an increasing number of which support MP3 or AAC decoding in hardware) to handle the audio stream, again saving CPU cycles.
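As a hedged example of how that metadata mechanism looks in the 1.0 API, an application can attach a crop rectangle to a buffer and leave the actual cropping to a downstream element; the coordinates below are arbitrary:

```
#include <gst/video/gstvideometa.h>

/* Attach a crop rectangle to a buffer.  A downstream element (or the
 * video hardware behind it) can honor the meta at render time instead
 * of the pipeline copying pixels up front. */
static void mark_crop_region(GstBuffer *buffer)
{
	GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta(buffer);

	crop->x = 100;	/* arbitrary region of interest */
	crop->y = 50;
	crop->width = 640;
	crop->height = 360;
}
```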
Last but not least, the GStreamer team wanted to fix the outstanding issue of dynamic pipelines. GStreamer allows application authors to construct pipelines that connect media "source" elements, apply transformations and filters, then deliver the stream to a "sink" element. But up through the 0.10 series, changing the pipeline after its initial setup was painful. Pulling out and replacing an element caused downstream elements to lose context information like timing and formatting details, forcing the application to re-examine the pipeline. It was not only esoteric use cases that hit this limitation, either: plugging in a headset while audio was playing forced a pipeline change (by using a different sink element), as did applying a video filter in an application like Cheese (by inserting a filter element).
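The 1.0 way of making such a change safely is to block the data flow with a pad probe, swap elements in the callback, and then remove the probe; the skeleton below only sketches the idea, with the actual relinking elided:

```
#include <gst/gst.h>

/* Called once the pad is blocked: it is now safe to unlink the old
 * element and link in the replacement (details elided here). */
static GstPadProbeReturn
swap_element_cb(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
	/* ... gst_element_unlink()/gst_element_link() the elements ... */
	return GST_PAD_PROBE_REMOVE;	/* drop the probe; data flows again */
}

static void request_swap(GstPad *src_pad)
{
	/* Block downstream data flow; the callback runs once it is
	 * safe to modify the pipeline. */
	gst_pad_add_probe(src_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
			  swap_element_cb, NULL, NULL);
}
```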
On top of the wishlist features were the usual improvements to memory management and cleanups that accumulate over the lifetime of a complex framework. Version 1.0 allows applications to pool buffers to reduce copying, query the supported memory types (such as video acceleration API (VA-API) surfaces), and split buffers up into separate memory blocks (which could allow faster handling of streaming media like VoIP packets). Also, 1.0 strives to simplify application authors' work by providing a number of new audio and video "base classes" that represent the most commonly-needed media formats — in contrast to 0.10, which required the author to fully describe each media format.
Taymans reported that the 0.11 code had replaced GStreamer's master branch in March 2012, and was currently at version 0.11.93. The team is "kind of out of ideas of things to add", he said, and most of the common media players using GStreamer have already been ported over to the 0.11/1.0 API. Nevertheless, there are a few pieces that require more testing, including the dynamic pipeline support and the new base classes.
However, the Linux distributions may have other challenges that hold up deployment. For example, he said, the Fluendo MP3 codec and DVD playback element shipped by Ubuntu are built to work on 0.10, and may not be updated for 1.0.
GStreamer 0.10 and 1.0 will be installable in parallel, which means Ubuntu could include the Fluendo elements in a 0.10 pipeline and still use 1.0 elsewhere. But this is not a good long-term solution, Taymans added. For one thing, 0.10 is deprecated and will eventually stop receiving updates. But the more pressing issue from GStreamer's perspective is that GNOME has decided to use 1.0 for its upcoming GNOME 3.6 release. GNOME cannot reasonably be expected to base its entire desktop environment on a deprecated version of GStreamer, but it cannot ship with a dependency on an unreleased branch either. As a result, GStreamer will formally release its version 1.0 before the release of GNOME 3.6. That does not give a precise date (although Taymans said the team hoped to settle on one over the course of the conference), but it does provide a more concrete anchor to the when-will-1.0-arrive question than GStreamer users have had up until now. GNOME 3.6 is scheduled for release on September 26, and while it might slip a few days, that still places GStreamer 1.0 within the month.
GStreamer also saw the first release of its software development kit (SDK) this summer, as Andoni Morales explained in another session. The GStreamer SDK is a bundle designed to help GStreamer application authors by providing stable, installable builds for Linux, Windows, Mac OS X, and soon for Android as well. The issue that led to the SDK's creation is that despite GStreamer's staunch commitment to being a cross-platform media framework, it is still predominantly a Linux-only tool. But a big part of the reason behind that situation is the lack of installers for Mac and Windows developers. The SDK provides binary installers, plus tutorial-based documentation. In addition, it splits up the GStreamer package into more modular components — the "good," "bad," and "ugly" plugin packages found in Linux repositories are not particularly descriptive, and they are quite large.
The first version of the SDK is based on 0.10, and provides a tested set of playback codecs, plus Python bindings and integration with Apple's Xcode IDE and Windows' Visual Studio. Several of the other talks over the course of the event also dealt with GStreamer development tools, including Rene Stadler's debugging tool (which turns GStreamer's famously verbose log files into easier to parse visualizations) and Edward Hervey's survey of CPU and memory optimization utilities. Consequently, whenever GStreamer 1.0 does arrive, it looks as if the project will be prepared to make a bigger push to engage application developers.
Brief items
Quotes of the week
Linus, despite being a low-level kernel guy, set the tone for our community years ago when he dismissed binary compatibility for device drivers. The kernel people might have some valid reasons for it, and might have forced the industry to play by their rules, but the Desktop people did not have the power that the kernel people did. But we did keep the attitude.
Diaspora to become a "community project"
The Diaspora project, aiming to create a more freedom-oriented social networking system, has announced that it is becoming a more community-oriented project. "This will not be an immediate shift over. Many details still need to be stepped through. It is going to be a gradual process to open up more and more to community governance over time. The goal is to make this an entirely community-driven and community-run project. Sean Tilley, our Open Source Community Manager will spearhead community efforts to see that this happens."
Sony opens up the Dynamic Android Sensor HAL (DASH)
Sony has announced that the Dynamic Android Sensor Hardware Abstraction Layer (HAL) is now open for collaboration on GitHub. Since DASH was opened up in February, there have already been contributions from the CyanogenMod team, and this move is meant to foster more of that. "As a next step, we are now making DASH available as an open source project on GitHub. Here, custom ROM developers can find the source code files and “make” files for the sensors in Xperia™ smartphones, which is used to enable and disable multiple sets of sensors for each device, including the accelerometer, proximity sensor, ambient light sensor, magnetometer, gyroscope, and pressure sensor. We plan to keep adding more sensor code as we release new phones. Anyone in the Android open community is free to contribute their own sensor implementations and other improvements to the DASH code."
"We the people" source released
The White House (the office of the US president) has made its "We the people" platform, the Drupal-based system that handles its petitions site, available on Github under the GPLv2+ license. "Releasing the source code for this application is meant to empower other governments and organizations to use this platform to engage their own citizens and constituencies. In addition, public review and contribution to the application’s code base will help strengthen and improve the platform."
Systemd v189 available
Lennart Poettering has announced release 189 of systemd, sporting a number of changes, including deprecation of reading kernel messages from /proc/kmsg in favor of /dev/kmsg, and cryptographic "sealing" of log files to prevent attackers from modifying logs undetected.
MongoDB 2.2 Released
Version 2.2 of MongoDB is available. This release incorporates a number of new features, most notably a new aggregation framework and "data center awareness" to better manage geographically separate clusters.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (August 28)
- What's cooking in git.git (August 27)
- Haskell Weekly News (August 22)
- Mozilla Hacks Weekly (August 23)
- OpenStack Community Weekly Newsletter (August 24)
- Perl Weekly (August 27)
- PostgreSQL Weekly News (August 26)
- Ruby Weekly (August 23)
Oracle Broadens Support for Open-source R Analytics (PCWorld)
At PCWorld, Chris Kanaracus is reporting that Oracle has announced several new initiatives supporting the open source statistical analysis tool R. Ports to Solaris and AIX are among the new offerings, as is compatibility with some additional tools. "Oracle will deliver an open-source distribution of R that can work with the Sunperf, MKL and ACML math libraries, which are provided by Oracle, Intel and AMD, respectively. This will allow R 'to run extremely fast on all compatible hardware,' Oracle said in a statement".
Has cash corrupted open source? (The Register)
At The Register, Matt Asay discusses the effects that the increasing number of corporate-sponsored open source projects have on development. "There was a brief honeymoon when Google, Facebook and other tech giants were able to release open-source code without commercial involvement, but this didn't last long, with startups setting out to monetise projects such as Hadoop, Cassandra, and Storm."
Page editor: Nathan Willis
Announcements
Brief items
The EFF's 2012 Pioneer Award winners
The Electronic Frontier Foundation has announced the recipients of its 2012 Pioneer Awards: Andrew "bunnie" Huang, Jérémie Zimmermann, and the Tor Project. "Andrew (bunnie) Huang is an activist who takes a push-and-pull approach to open hardware: he contributes original open designs and also liberates closed designs. Huang's book on reverse engineering, 'Hacking the Xbox,' is a widely respected tool for hardware hackers."
Helios - Ken Starks - the immediate need is over
Last week we reported on a fundraising campaign for Ken Starks. The goals have been exceeded. "Asking for anything more would be taking advantage of a loving and generous community. While it is far from adequate, the only thing I can offer you is my eternal thanks. It just seems so....small of a thing to give in return."
Open Invention Network expands coverage
The Open Invention Network has announced the expansion of its "Linux System Definition" to include 18 new packages of interest in the mobile arena. They include the core Android code and the Dalvik virtual machine — code that seems more than usually likely to draw patent suits.
Articles of interest
Stop the inclusion of proprietary licenses in Creative Commons 4.0 (freeculture.org)
Creative Commons (CC) is in the process of drafting version 4.0 of its license set. Freeculture.org is urging CC to remove the NonCommercial (NC) and NoDerivatives (ND) clauses from the new version. "Neither of them provide better protection against misappropriation than free culture licenses. The ND clause survives on the idea that rightsholders would not otherwise be able [to] protect their reputation or preserve the integrity of their work, but all these fears about allowing derivatives are either permitted by fair use anyway or already protected by free licenses. The NC clause is vague and survives entirely on two even more misinformed ideas. First is rightsholders’ fear of giving up their copy monopolies on commercial use, but what would be considered commercial use is necessarily ambiguous. Is distributing the file on a website which profits from ads a commercial use? Where is the line drawn between commercial and non-commercial use? In the end, it really isn’t. It does not increase the potential profit from work and it does not provide any better protection than Copyleft does (using the ShareAlike clause on its own, which is a free culture license)."
There's a Verdict in Apple v. Samsung (Groklaw)
Groklaw has the details and a lot of pointers to further coverage on Apple's victory in its suit against Samsung. "This is definitely not the end of this story. It can't be. It's preposterous."
Kügler: Best practises for writing defensive publications
On his blog, Sebastian Kügler gathers some tips on writing defensive publications. These publications are a weapon that can be used to ensure that techniques used by free software don't get patented by others. "A defensive publication is a technical document that describes ideas, methods or inventions and is a form of explicit prior art. Defensive publications are published by Open Invention Network in a database that is searched by patent offices during a patent exam. A good defensive publication will prevent software patents from being granted on ideas that are not new and inventive. These will help protect your freedom to operate."
Twitter Joining the Linux Foundation (TechCrunch)
Scott Merrill at TechCrunch reports that Twitter is set to join the Linux Foundation. The formal announcement will be at LinuxCon North America next week, where Twitter's Manager of Open Source, Chris Aniszczyk, will be presenting a keynote address. Also joining are Inktank (makers of the Ceph distributed filesystem) and hardware vendor Servergy.
Does anyone want Linux.conf.au 2014? (TechRepublic)
TechRepublic reports that there have been no bids for LCA 2014. "“We should now be in the final stages of meeting with bid teams and visiting the proposed venues, ready to make a decision in the next few weeks. This task turns out to be trivially simple, because to date, we have not received any bids,” James Polley, Linux Australia executive council member, wrote on the Linux Australia mailing list."
Calls for Presentations
PyCon UA 2012 - Call for Speakers
PyCon Ukraine will take place October 20-21 in Kyiv, Ukraine. The proposal deadline is September 30.
HelloGcc 2012 Workshop
The HelloGcc 2012 Workshop will take place in Beijing, China on November 10. The call for topic speakers is open. "Every year, we hold a technical workshop in order to improve communication among open source developers and fans. The activity will be held on Nov. 10th this year. We're calling for topic speakers now. As soon as you prefer to give a technical report, welcome to contact us."
Strata Conference CfP
The O'Reilly Strata Conference will take place February 26-28, 2013 in Santa Clara, California. The call for proposals closes September 20.
Upcoming Events
LPI Hosts Exam Labs at SUSECon 2012
The Linux Professional Institute (LPI) will offer all of its LPIC certification exams at SUSECon, September 18-21, 2012 in Orlando, Florida.
Kernel Recipes, a kernel-focused event
Kernel Recipes Paris 2012 will take place in Paris, France on September 21. "The aim of the day is to provide information, exchange, demonstrations in a privileged setting. The topics will be related to the coming news, the process of release of the kernel and its issues, problems and hardware support relationships with manufacturers, security."
LCA2013 Community Miniconfs Announced
linux.conf.au has announced a second set of mini-conferences at the 2013 LCA (January 28-February 2, 2013 in Canberra, Australia). "Community is one of the most important and noticeable aspects of open source development. Without the open source community, there is no open source. At linux.conf.au, the community miniconfs are aimed at bringing like-minded people together to discuss the things they're passionate about."
Events: August 30, 2012 to October 29, 2012
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
August 28-August 30 | Ubuntu Developer Week | IRC |
August 29-August 31 | 2012 Linux Plumbers Conference | San Diego, CA, USA |
August 29-August 31 | LinuxCon North America | San Diego, CA, USA |
August 30-August 31 | Linux Security Summit | San Diego, CA, USA |
August 31-September 2 | Electromagnetic Field | Milton Keynes, UK |
September 1-September 2 | Kiwi PyCon 2012 | Dunedin, New Zealand |
September 1-September 2 | VideoLAN Dev Days 2012 | Paris, France |
September 1 | Panel Discussion Indonesia Linux Conference 2012 | Malang, Indonesia |
September 3-September 8 | DjangoCon US | Washington, DC, USA |
September 3-September 4 | Foundations of Open Media Standards and Software | Paris, France |
September 4-September 5 | Magnolia Conference 2012 | Basel, Switzerland |
September 8-September 9 | Hardening Server Indonesia Linux Conference 2012 | Malang, Indonesia |
September 10-September 13 | International Conference on Open Source Systems | Hammamet, Tunisia |
September 14-September 16 | Debian Bug Squashing Party | Berlin, Germany |
September 14-September 21 | Debian FTPMaster sprint | Fulda, Germany |
September 14-September 16 | KPLI Meeting Indonesia Linux Conference 2012 | Malang, Indonesia |
September 15-September 16 | Bitcoin Conference | London, UK |
September 15-September 16 | PyTexas 2012 | College Station, TX, USA |
September 17-September 19 | Postgres Open | Chicago, IL, USA |
September 17-September 20 | SNIA Storage Developers' Conference | Santa Clara, CA, USA |
September 18-September 21 | SUSECon | Orlando, FL, USA |
September 19-September 20 | Automotive Linux Summit 2012 | Gaydon/Warwickshire, UK |
September 19-September 21 | 2012 X.Org Developer Conference | Nürnberg, Germany |
September 21 | Kernel Recipes | Paris, France |
September 21-September 23 | openSUSE Summit | Orlando, FL, USA |
September 24-September 25 | OpenCms Days | Cologne, Germany |
September 24-September 27 | GNU Radio Conference | Atlanta, GA, USA |
September 27-September 29 | YAPC::Asia | Tokyo, Japan |
September 27-September 28 | PuppetConf | San Francisco, CA, USA |
September 28-September 30 | Ohio LinuxFest 2012 | Columbus, OH, USA |
September 28-September 30 | PyCon India 2012 | Bengaluru, India |
September 28-October 1 | PyCon UK 2012 | Coventry, West Midlands, UK |
September 28 | LPI Forum | Warsaw, Poland |
October 2-October 4 | Velocity Europe | London, UK |
October 4-October 5 | PyCon South Africa 2012 | Cape Town, South Africa |
October 5-October 6 | T3CON12 | Stuttgart, Germany |
October 6-October 8 | GNOME Boston Summit 2012 | Cambridge, MA, USA |
October 11-October 12 | Korea Linux Forum 2012 | Seoul, South Korea |
October 12-October 13 | Open Source Developer's Conference / France | Paris, France |
October 13-October 14 | Debian BSP in Alcester (Warwickshire, UK) | Alcester, Warwickshire, UK |
October 13-October 14 | PyCon Ireland 2012 | Dublin, Ireland |
October 13-October 15 | FUDCon:Paris 2012 | Paris, France |
October 13 | 2012 Columbus Code Camp | Columbus, OH, USA |
October 13-October 14 | Debian Bug Squashing Party in Utrecht | Utrecht, Netherlands |
October 15-October 18 | OpenStack Summit | San Diego, CA, USA |
October 15-October 18 | Linux Driver Verification Workshop | Amirandes, Heraklion, Crete |
October 17-October 19 | LibreOffice Conference | Berlin, Germany |
October 17-October 19 | MonkeySpace | Boston, MA, USA |
October 18-October 20 | 14th Real Time Linux Workshop | Chapel Hill, NC, USA |
October 20-October 21 | PyCon Ukraine 2012 | Kyiv, Ukraine |
October 20-October 21 | Gentoo miniconf | Prague, Czech Republic |
October 20-October 21 | PyCarolinas 2012 | Chapel Hill, NC, USA |
October 20-October 23 | openSUSE Conference 2012 | Prague, Czech Republic |
October 20-October 21 | LinuxDays | Prague, Czech Republic |
October 22-October 23 | PyCon Finland 2012 | Espoo, Finland |
October 23-October 25 | Hack.lu | Dommeldange, Luxembourg |
October 23-October 26 | PostgreSQL Conference Europe | Prague, Czech Republic |
October 25-October 26 | Droidcon London | London, UK |
October 26-October 27 | Firebird Conference 2012 | Luxembourg, Luxembourg |
October 26-October 28 | PyData NYC 2012 | New York City, NY, USA |
October 27 | Central PA Open Source Conference | Harrisburg, PA, USA |
October 27-October 28 | Technical Dutch Open Source Event | Eindhoven, Netherlands |
October 27 | pyArkansas 2012 | Conway, AR, USA |
October 27 | Linux Day 2012 | Hundreds of cities, Italy |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol