
LWN.net Weekly Edition for August 23, 2012

GUADEC: GNOME OS conversations

By Nathan Willis
August 22, 2012

At GUADEC, the "GNOME OS" concept was discussed off and on several times during the course of the week. The first mention of the subject came in a talk from Igalia's Juan José Sanchez Penas and Xan Lopez on the first day of the event. Their talk, "A bright future for GNOME," dealt largely with what the GNOME project needed to do to address the mobile and embedded space. In that context, GNOME's current build and release system — which is focused solely on the desktop computing experience — offers nothing for mobile device makers to build on.

[Tower of Hercules]

But, they said, if GNOME were to start producing a bootable OS image as one of its "deliverables," device makers would have a starting point that they could adapt to their own hardware. Although they did not provide specifics, they said that Igalia has spoken to mobile device makers who are not satisfied with the current market offering of Apple's iOS and Google's Android. GNOME has already done a lot of design and technical work to make GNOME Shell and other components touch-screen capable, they observed, but it remains bound to traditional PC hardware. A mobile-friendly GNOME would have a leg up on competing open source projects like Tizen, webOS, and Firefox OS, which have all had to "start from scratch." Their definition of "scratch" is not entirely clear, but it is certainly common for new Linux-based mobile platforms to write their own applications and supporting frameworks.

Although Sanchez and Lopez spoke of the benefits of having an installable GNOME for use as a base platform for mobile device makers, that was not the only reason the GNOME OS buzzword came up over the course of the week. The other — and perhaps more frequently raised — issue is that GNOME has essentially never been presented as an end-user-ready product. The cause is clear enough; as Colin Walters discussed in his talk, most Linux users get their software through a distribution's package manager. The trouble from GNOME's perspective is that distribution packages are typically delivered six months after GNOME drops its stable release, so when bug reports arrive they are almost a full development cycle behind. In addition, every distribution makes enough changes that whatever bug reports users do send in are difficult to triage and diagnose.

Making a bootable GNOME image one of the pieces in each GNOME release would allow users to try the unaltered packages sooner, and provide faster and better feedback to the project. It would also allow GNOME to develop an SDK for application developers who are interested in writing distribution-neutral GNOME code. Sanchez and Lopez proposed setting an "ambitious plan for 3.8 through 3.12" that would culminate in a mobile-capable release for GNOME 4.0. That time frame equates to two years using GNOME's current release schedule — not immediate, but not too far off to plan. Post-4.0, they proposed planning a GNOME SDK and working on application distribution channels and other components that a mobile GNOME ecosystem might require.

The meaning of OS

Allan Day addressed both the improved-testing-and-feedback rationale and the improve-GNOME-for-application-developers goal on his blog shortly after GUADEC. Nevertheless, there are still those who conflate the plan with a desire to transform GNOME into a full-fledged Linux distribution, a confusion that was evident in audience members' questions at GUADEC, too. It ought to be clear that GNOME would need to add a significant number of developers (not to mention packagers and infrastructure) to support a complete distribution, but perhaps "GNOME OS" is simply a poor choice of terminology. Sanchez and Lopez did refer to GNOME OS as a "distribution" in their talk, but when an audience member asked about it, they clarified that the use of the term was a slip of the tongue and not meant to be taken literally.

Admittedly, there are those in and around the GNOME project who have more ambitious goals (Lennart Poettering had a session I was unable to attend that dealt with integrating GNOME components more directly with the kernel), but they are the exception. At its core, the idea is really about bridging the existing gap between the project and its users — as well as the gap between the project and application developers — in order to collaborate better with them. Given the number of times in recent years that the project has run into end-user backlash over design changes (in particular those instances that seem to revolve around a perceived lack-of-responsiveness to feedback), that would seem to be an admirable goal.

Vision quest

But the GNOME OS discussion has a subtext, which is the perception that GNOME as a project no longer has a long-term goal. On the one hand, that means that the original goal of producing a quality free software desktop environment has largely (or perhaps even completely) succeeded. But it also means that GNOME as a project is searching for a new target. There are plenty of people who feel that mobile devices are the answer; others (like Lionel Dricot) contend that online services are the new frontier, while still others (like Eitan Isaacson) believe that targeting high-end workstation users is best.

[Beach in A Coruña]

The vision question also arose at the GNOME Foundation general meeting, which kicked off with the Release Team asking attendees what they wanted the Release Team's role to be. Specifically, the team asked whether or not the project ought to have a Technical Board to set the long-term vision and to make technical decisions. The team reported that it felt like some members of the project expected it to fill such a role, but that driving development was not its mission.

The resulting discussion was an interesting one; GNOME's culture has been "individual maintainers rule their modules" for a long time, but that concept does not extend well into a long-term roadmap. Bastien Nocera pointed out that in years past, it was often a single individual — such as Jeff Waugh — who either set or articulated the vision for GNOME. Since Waugh's departure, no one has replaced him in that function, although Nocera pointed out that Jon McCann's UI demos have served as a de facto substitute in recent years.

Others pointed out that while vision was an important topic on its own, practical matters still dominate, such as making the final call on which version of Python to support. A Technical Board should make such a decision (which affects many modules), but it is hardly a matter of "vision." Clearly individual GNOME developers are producing high-quality work and driving the project forward. But focusing that energy, whether into GNOME OS or toward another goal, is a task that the project is still working out.

[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC. The event was deftly organized and smoothly run from start to finish, sported a universally high-caliber program, and drew an enthusiastic crowd at every turn. Plus, as the photographs in the story above hint, A Coruña was an inspiringly scenic location in which to spend a week discussing and learning about open source. Thanks to everyone who put in time and energy making the conference a success.]

Comments (31 posted)

On the need for a new removable device filesystem

By Jonathan Corbet
August 22, 2012
Removable storage devices, such as the USB "thumb drive," can be a pain. They are slow and often prone to errors, but, perhaps worst of all, they all seem to be designed for the VFAT filesystem. VFAT gets the job done much of the time, but it is showing its age; this filesystem was never meant for the size of contemporary devices or files. There is also the little nagging issue of the patents on the filesystem format and the associated Linux-hostile company that is actively asserting those patents. Despite all of this, removable devices are often the easiest way to ship files between machines. Given that, do we need to come up with a new filesystem to ease the pain of using these devices?

Dan Luedtke's answer is "yes"; he has implemented a new filesystem called "Lanyard" (or "LanyFS") intended for use on removable devices. He claims better performance and scalability than VFAT along with a native Linux implementation. The code shows its early-stage nature — there are a lot of things that would need to be fixed before it could be considered for inclusion into the mainline kernel — but the mainline is clearly where Dan would like it to go. The rest of the development community is not entirely convinced that we need a new filesystem for this use case, though.

The first question is: why not stick with VFAT? For all of its troubles, it has worked well enough for a long time. The biggest motivator for a change, arguably, is the 4GB limit on file size. One can deal with poor performance, especially when the real bottleneck is likely to be the device itself. But if one wants to store a sufficiently large file on the device, VFAT will simply fail. Such files are increasingly common, so users are running into this problem. The exFAT filesystem format is held out as an alternative, but it is far more proprietary than VFAT. Given that VFAT has already been the subject of lawsuits, vendors will think carefully before switching to exFAT; Sharp has licensed the filesystem for Android devices, but there do not appear to be a whole lot of other takers at this time.

Given increasing networking speeds, one could certainly consider just using the network to move a file that is too large for VFAT. On a local network this approach might well be faster than using a removable drive. Setting up network transfers is not always easy, though; most computers are, by default, configured in ways that do not allow random strangers to dump large files on their drives. Getting around that obstacle is likely to be too much even for moderately skillful users. Use of a third-party site to transfer files is workable when the files are small; even if it is possible for very large files, it's not something that will be tolerably fast on most networks.

Removable drives, instead, are easy, so the "sneakernet" approach to file transfer is likely to stay with us for some time. Does that mean that we need a new filesystem format to better support this use? Filesystem developer Ted Ts'o thinks not:

I used to think that we would need an IP unencumbered file system, given issues around TomTom and Microsoft, but these days, given how quickly Linux has taken over the embedded and mobile landscape for all but the most tiniest of devices, I don't think that's as important of an issue, since we can just simply use a native linux file system. In the time that it would take to get some other new file system adopted across the industry, it's likely Linux will have enough market share to perhaps compel the other OS vendors to provide interoperability solutions.

That is an interesting thought: Linux is now strong and prevalent enough that we can simply expect the industry to pick up our way of doing things. That approach has not always worked out in the past, but things might truly be different this time around. Increasingly, devices like music players, handsets, and digital cameras run Linux internally; these gadgets already are, to a first approximation, removable storage devices with a bit of extra hardware. Other devices, such as televisions, also tend to run Linux internally. Supporting a native Linux filesystem on these devices should be a relatively easy thing to do. It would be faster (assuming the underlying storage isn't severely optimized for VFAT only), more feature-rich, and lacking in patent aggressors. There is very little, in other words, not to like.

Well, there would be a few small problems. There are still some pesky users out there with non-Linux systems that might want to access the filesystems on their devices. In many cases, the increasing use of the MTP protocol could sidestep that question altogether; indeed, recent MTP-using Android devices are likely using it to export an ext4 filesystem. There would still be cases where users on these other platforms would want to mount filesystems directly, though, especially on pure storage devices; bringing proper implementations of Linux filesystems to those platforms is, evidently, not as easy as one might think.

Filesystems like ext4 also were not designed with removable devices in mind. They tend not to be all that robust against unexpected removal of the device unless fairly aggressive flushing of data is used (in fairness, VFAT filesystems are also easily corrupted that way). The file ownership model used by Linux filesystems tends not to translate well to removable devices, since one system's user IDs typically have no meaning elsewhere. So something like the user and group mount options patch may be required to make things work well. Most Linux filesystems have not been designed around the very large pages and erase blocks used on flash devices and, thus, do not perform as well as they could; see this article for lots of details. These are issues that can be worked out, certainly, but they remain in need of working out at this time.
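VFAT sidesteps the ownership problem because it has no on-disk ownership at all; the owner is simply assigned at mount time. A sketch of how that looks today, and of what the patch mentioned above might allow for a native filesystem (the ext4 option names are an assumption, since the patch was not merged at the time of writing):

    # VFAT: ownership is assigned when the filesystem is mounted
    mount -t vfat -o uid=$(id -u),gid=$(id -g) /dev/sdb1 /mnt/usb

    # with a user/group mount options patch, something similar might
    # work for a native filesystem (option names assumed):
    mount -t ext4 -o uid=$(id -u),gid=$(id -g) /dev/sdb1 /mnt/usb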

There is one other complication: according to Arnd Bergmann there is another filesystem waiting in the wings:

There will be patches very soon for a new file system from a major flash vendor that I'm cooperating with. I haven't seen the patches myself yet, but the design is similar to a prototype that was done as a thesis I supervised. I hope that the new implementation is similarly simple to this design, and also able to provide optimum performance on most flash media.

Needless to say, such an entry has the potential to stir things up a bit. A filesystem designed with input from both "a major flash vendor" and a developer like Arnd should work well indeed on small removable devices and should be well integrated into Linux. This manufacturer could also employ the "include a Windows driver in a small partition on the device" trick, making interoperability with most Windows systems Just Work. Putting the filesystem code into the Linux kernel would make support readily available on mobile devices. This scheme might just succeed.

So what we may see is not Linux pushing one of its native filesystem formats onto the world. Instead, the world might just adopt a new format that just happened to be well supported in Linux first. That could be the best of all worlds: we would have a way to interoperate on removable drives that is free, scalable, and widely supported. Getting there may well be worth the trouble of adding yet another filesystem type.

Comments (26 posted)

Mobile patent wars: Google goes on the attack

By Jonathan Corbet
August 22, 2012
Whenever one looks at the mobile patent wars, it is natural to conclude that everybody is suing everybody else. Thus far, though, that has not actually been true. Google has been on the receiving end of a number of lawsuits, either directly or indirectly via attacks on manufacturers shipping Android devices, but Google has not, itself, launched patent attacks against others. That situation has just changed, though, with the report that Google has filed a case against Apple with the US International Trade Commission.

In short, Google is trying to use seven of its patents (just acquired from Motorola Mobility) to block the import of Apple's products into the US. Those of us who fear the effect of software patents on free software might be forgiven for feeling that it is only just for Apple to be on the receiving end of the sort of attacks it has launched against Android. But Google's transformation into a patent aggressor may not bode well in the long term, regardless of how the current cases end up.

So what is Google claiming? The seven patents asserted against Apple are:

(Credit is due to Florian Mueller, who found and posted the specific patents at issue).

As is so often the case, there is not much in these patents that appears to be particularly novel or worthy of protection. Once one concludes that a particular problem (moving video playback from the handset to the television, say) is in need of solution, the form of the solution becomes fairly obvious. The patents asserted by Apple against Android seem trivial, but it is hard to come up with a way to say that Google's patents are less so.

If one is concerned about attacks against Android and other platforms based on free software, one might be tempted to hope that Google will find some success against Apple and, in so doing, deter further attacks on the platform. The mobile patent wars could be declared to be a draw, and the companies involved could get back to their real business: running on the consumer electronics product treadmill and trying to create better products to sell to their customers. Barring real reform of the patent laws in the US, that might well be a best-case outcome.

What seems more likely, though, is that the companies involved, having shown that they can make each other hurt, will come to some sort of understanding involving the sharing of patents and, perhaps, the passing of undisclosed amounts of cash between some of the parties. Such an agreement would presumably make the world safer for Android and for at least some of the manufacturers who use Android in their products. But it's not at all clear that the situation would improve for free software as a whole, or for anybody who is outside of this agreement and who wants to break into the mobile market.

A worst-case scenario could involve Google asserting these patents (and others from the massive pile it acquired from Motorola) against devices based on Tizen, Nemo, Firefox OS, or other free platforms. Unlike some companies, Google has not pledged not to attack free software projects with its patents. Such an attack would certainly be widely considered to be "evil," but the sad fact is that, in an extended fight, one tends to become more like one's enemy. Having found that it can further its goals with patent attacks (assuming that is, indeed, the outcome), Google may find it hard to resist making more of them in the future.

In the end, that may be the environment we are stuck with until the software patent situation can be addressed. Until then, it will be impossible to achieve a certain level of success in the software area and not be subject to patent attacks, either from trolls or from competitors. Given the nature of the game, it is hard to fault Google for playing hardball. Hopefully, the company's recent suggestions that software patents should be eliminated entirely are sincere and we are not witnessing the birth of another patent problem.

Comments (69 posted)

Page editor: Jonathan Corbet

Security

Forward secure sealing

By Jake Edge
August 22, 2012

When a system is compromised, the attackers may try to cover their tracks so that the administrator is not alerted to the attack. One way for an attacker to hide is by removing log file entries that might lead an administrator (or a log file analyzer) to notice. A new feature in the systemd journal, "forward secure sealing" (FSS), is meant to detect log file tampering.

Traditionally, administrators have written log files to external systems across the network or to a local printer—though paper is notoriously hard to grep—to defeat log file tampering. As long as the other system is not compromised, and log file lines are written immediately, an attacker can't help but leave their "fingerprints" behind. But FSS provides a way to at least detect tampering using only a single system, though it won't provide all of the assurances that external logging can.

Systemd developer Lennart Poettering announced FSS on August 20. The basic idea is that the binary logs handled by the systemd journal can be "sealed" at regular time intervals. That seal is a cryptographic operation on the log data such that any tampering prior to the seal can be detected. So long as a sealing operation happens before the attacker gets a chance to tamper with the logs, their fingerprints will be sealed with the rest of the log data. They can still delete the log files entirely, but that is likely to be noticed as well.

The algorithm for FSS is based on "Forward Secure Pseudo Random Generators" (FSPRG), which comes from some post-doctoral research by Poettering's brother Bertram. The paper on FSPRG has not been published but will be soon, according to (Lennart) Poettering.

The announcement on Google+ and its long comment thread do give some details, however. FSS is based on two keys that are generated using:

    journalctl --setup-keys
One key is the "sealing key," which is kept on the system, and the other is the "verification key," which should be securely stored elsewhere. Using the FSPRG mechanism, a new sealing key is generated periodically using a non-reversible process. The old key is then securely deleted from the system after the change.

The verification key can be used to calculate the sealing key for any given time range. That means that the attacker can only access the current sealing key (which will presumably be used for the next sealing operation), while the administrator can reliably generate any sealing key to verify previous log file seals. Changing log file entries prior to the last seal will result in a verification failure.
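Concretely, both operations are journalctl invocations; here is a sketch, assuming the option spellings from the announcement (the key value itself is elided):

    # generate the key pair, optionally overriding the default interval
    journalctl --setup-keys --interval=10s

    # later, verify all seals using the separately stored verification key
    journalctl --verify --verify-key=<verification-key>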

As a bell—or perhaps a whistle—the key generator can create a QR code of the verification key, which can be scanned so that the key doesn't have to be typed in.

Anything that happens after the system is compromised is under control of the attacker, as was pointed out multiple times in the comments. That means that local logs cannot be relied on after that point, but it also applies to remotely stored—or even printed—log files. The latter two methods do protect against an attacker simply deleting the local log files, though.

By default, FSS will seal the logs every 15 minutes, but that can be changed at key generation time with a flag: "--interval=10s" for example. The system clock time is used in the generation of each new sealing key, which is why the interval must be specified when the keys are generated. The default value surprisingly leaves a rather large window for an attacker who immediately turns to altering the log file, though. One also wonders if subtle (or not so subtle) manipulations of the system clock might be a way to subvert or otherwise interfere with the key generation.

Securely deleting the old sealing key is handled by setting the FS_SECRM_FL and FS_NOCOW_FL file attributes, which may or may not be implemented by the underlying filesystem. That could potentially lead to leaks of previous sealing keys, which would allow an attacker to make changes to earlier entries. Obviously, losing control of the verification key means that all bets are off as well.
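For reference, those two flags correspond to the 's' (secure deletion) and 'C' (no copy-on-write) attributes known from chattr; the journal presumably sets them via the FS_IOC_SETFLAGS ioctl, but the effect is the same as the following (the key file path shown is hypothetical):

    # request secure deletion and disable copy-on-write for the key file
    chattr +sC /var/log/journal/<machine-id>/fss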

The code is available already in the systemd Git repository. Poettering notes that it will also be available in Fedora 18.

FSS is an interesting feature that will likely prove useful for some administrators. It certainly doesn't solve all of the problems with detecting attackers or compromised systems, but it could definitely help by raising red flags. There is more to do, of course, starting with a security audit of the code—more eyes can only be helpful in ferreting out any holes in the algorithm or implementation. Once that's done, administrators can feel more confident that their log files aren't undetectably changing out from under them—at least if they are using the systemd journal.

Comments (67 posted)

Brief items

Security quotes of the week

For example, if the year is 2013 but the current month is less than the target month (say February), then the condition would return a result as if the current date lies before the August 2012 checkpoint value. In fact, this logic is simply flawed and incorrect. This error indirectly confirms our initial conclusion that the Shamoon malware is not the Wiper malware that attacked Iranian systems. Wiper is presumed to be a cyber-weapon and, if so, it should have been developed by a team of professionals. But experienced programmers would hardly be expected to mess up a date comparison routine.
-- Dmitry Tarakanov of Kaspersky Lab analyzes the Shamoon malware

Windows 8, set for release on 26 October, automatically deletes entries in the HOSTS file for specific domains. Try, for example, to prevent attempts to access Facebook.com, Twitter.com or ad servers such as ad.doubleclick.net by rerouting them to 127.0.0.1 by adding entries to the HOSTS file and the relevant entries will soon disappear from the HOSTS file as if by magic, leaving nothing but an empty line.
-- The H

Most importantly, a series of leaks over the past few years containing more than 100 million real-world passwords have provided crackers with important new insights about how people in different walks of life choose passwords on different sites or in different settings. The ever-growing list of leaked passwords allows programmers to write rules that make cracking algorithms faster and more accurate; password attacks have become cut-and-paste exercises that even script kiddies can perform with ease.
-- Dan Goodin in ars technica

As a Data Privacy Engineer at Google you will help ensure that our products are designed to the highest standards and are operated in a manner that protects the privacy of our users. Specifically, you will work as member of our Privacy Red Team to independently identify, research, and help resolve potential privacy risks across all of our products, services, and business processes in place today.
-- Google is looking for privacy engineers

Comments (4 posted)

Google warns of using Adobe Reader - particularly on Linux (The H)

The H reports on some 40-60 Adobe PDF reader holes found by Google employees—not all of which were fixed in the August 14 update. In fact, none of them were fixed on Linux, since no update was released for that platform. "Google employees Mateusz Jurczyk and Gynvael Coldwind initially examined the PDF engine of the Chrome browser and discovered numerous holes. They then tested Adobe Reader and found about 60 issues that triggered crashes, 40 of which are potential attack vectors. When the two researchers reported their discoveries to Adobe, the company promised to provide fixes – but also indicated that not all the holes would be closed on Patch Day in August."

Comments (55 posted)

New vulnerabilities

emacs: code execution

Package(s):emacs CVE #(s):CVE-2012-3479
Created:August 16, 2012 Updated:January 10, 2013
Description:

From the Slackware advisory:

Patched to fix a security flaw in the file-local variables code. When the Emacs user option `enable-local-variables' is set to `:safe' (the default value is t), Emacs should automatically refuse to evaluate `eval' forms in file-local variable sections. Due to the bug, Emacs instead automatically evaluates such `eval' forms. Thus, if the user changes the value of `enable-local-variables' to `:safe', visiting a malicious file can cause automatic execution of arbitrary Emacs Lisp code with the permissions of the user. Bug discovered by Paul Ling.

Alerts:
Gentoo 201403-05 emacs 2014-03-20
Mandriva MDVSA-2013:076 emacs 2013-04-08
Debian DSA-2603-1 emacs23 2013-01-09
openSUSE openSUSE-SU-2012:1348-1 emacs 2012-10-15
Ubuntu USN-1586-1 emacs23 2012-09-27
Mageia MGASA-2012-0261 emacs 2012-09-09
Fedora FEDORA-2012-11876 emacs 2012-08-22
Fedora FEDORA-2012-11872 emacs 2012-08-22
Slackware SSA:2012-228-02 emacs 2012-08-15

Comments (none posted)

flash-plugin: code execution

Package(s):flash-plugin CVE #(s):CVE-2012-1535
Created:August 16, 2012 Updated:August 23, 2012
Description:

From the Red Hat advisory:

Specially-crafted SWF content could cause flash-plugin to crash or, potentially, execute arbitrary code when a victim loads a page containing the malicious SWF content. (CVE-2012-1535)

Alerts:
Gentoo 201209-01 adobe-flash 2012-09-04
Red Hat RHSA-2012:1203-01 flash-plugin 2012-08-23
SUSE SUSE-SU-2012:1001-1 flash-player 2012-08-17
Mageia MGASA-2012-0229 flash-player-plugin 2012-08-21
SUSE SUSE-SU-2012:1001-2 flash-player 2012-08-17
openSUSE openSUSE-SU-2012:0996-1 flash-player 2012-08-16
Red Hat RHSA-2012:1173-01 flash-plugin 2012-08-15

Comments (none posted)

gdb: code execution

Package(s):gdb CVE #(s):CVE-2011-4355
Created:August 17, 2012 Updated:March 11, 2013
Description:

From the Red Hat advisory:

It was discovered that the GNU Debugger (gdb) would load untrusted files from the current working directory when .debug_gdb_scripts was defined. While this was a design decision, it is an insecure one and users who do not pre-inspect untrusted files may execute arbitrary code with their privileges.

Alerts:
CentOS CESA-2013:0522 gdb 2013-03-09
Scientific Linux SL-gdb-20130228 gdb 2013-02-28
Oracle ELSA-2013-0522 gdb 2013-02-25
Red Hat RHSA-2013:0522-02 gdb 2013-02-21
Fedora FEDORA-2012-6614 gdb 2012-08-17

Comments (none posted)

gimp: code execution

Package(s):gimp CVE #(s):CVE-2012-3403 CVE-2012-3481
Created:August 20, 2012 Updated:September 4, 2012
Description: From the Red Hat advisory:

A heap-based buffer overflow flaw was found in the GIMP's KiSS CEL file format plug-in. An attacker could create a specially-crafted KiSS palette file that, when opened, could cause the CEL plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2012-3403)

An integer overflow flaw, leading to a heap-based buffer overflow, was found in the GIMP's GIF image format plug-in. An attacker could create a specially-crafted GIF image file that, when opened, could cause the GIF plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2012-3481)

Alerts:
Debian DSA-2813-1 gimp 2013-12-09
Gentoo 201311-05 gimp 2013-11-10
Mandriva MDVSA-2013:082 gimp 2013-04-09
Ubuntu USN-1559-1 gimp 2012-09-10
openSUSE openSUSE-SU-2012:1131-1 gimp 2012-09-07
openSUSE openSUSE-SU-2012:1080-1 gimp 2012-09-03
Fedora FEDORA-2012-12364 gimp 2012-09-02
Fedora FEDORA-2012-12383 gimp 2012-08-28
SUSE SUSE-SU-2012:1038-1 gimp 2012-08-24
SUSE SUSE-SU-2012:1029-1 gimp 2012-08-23
SUSE SUSE-SU-2012:1027-1 gimp 2012-08-23
Mageia MGASA-2012-0236 gimp 2012-08-23
Red Hat RHSA-2012:1180-01 gimp 2012-08-20
Oracle ELSA-2012-1181 gimp 2012-08-20
Oracle ELSA-2012-1180 gimp 2012-08-20
Mandriva MDVSA-2012:142 gimp 2012-08-21
CentOS CESA-2012:1180 gimp 2012-08-20
Scientific Linux SL-gimp-20120820 gimp 2012-08-20
Scientific Linux SL-gimp-20120820 gimp 2012-08-20
CentOS CESA-2012:1181 gimp 2012-08-20
Red Hat RHSA-2012:1181-01 gimp 2012-08-20

Comments (none posted)

gimp: code execution

Package(s):gimp CVE #(s):CVE-2012-3402 CVE-2009-3909
Created:August 20, 2012 Updated:September 28, 2012
Description: From the Red Hat advisory:

Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in the GIMP's Adobe Photoshop (PSD) image file plug-in. An attacker could create a specially-crafted PSD image file that, when opened, could cause the PSD plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2009-3909, CVE-2012-3402)

Alerts:
Gentoo 201209-23 gimp 2012-09-28
SUSE SUSE-SU-2012:1027-1 gimp 2012-08-23
Scientific Linux SL-gimp-20120820 gimp 2012-08-20
Red Hat RHSA-2012:1181-01 gimp 2012-08-20
Oracle ELSA-2012-1181 gimp 2012-08-20
CentOS CESA-2012:1181 gimp 2012-08-20

Comments (none posted)

glibc: code execution

Package(s):glibc CVE #(s):CVE-2012-3480
Created:August 20, 2012 Updated:August 28, 2012
Description: From the Red Hat bugzilla:

Multiple integer overflows, leading to stack-based buffer overflows, were found in various stdlib functions of GNU libc (strtod, strtof, strtold, strtod_l and related routines). If an application, using the affected stdlib functions, did not perform user-level sanitization of provided inputs, a local attacker could use this flaw to cause such an application to crash or, potentially, execute arbitrary code with the privileges of the user running the application.

Alerts:
Debian-LTS DLA-165-1 eglibc 2015-03-06
Gentoo 201503-04 glibc 2015-03-08
Mandriva MDVSA-2013:162 glibc 2013-05-07
Ubuntu USN-1589-2 glibc 2012-12-17
Ubuntu USN-1589-1 eglibc, glibc 2012-10-01
Scientific Linux SL-glib-20120827 glibc 2012-08-27
Scientific Linux SL-glib-20120827 glibc 2012-08-27
Oracle ELSA-2012-1207 glibc 2012-08-27
Oracle ELSA-2012-1208 glibc 2012-08-27
Fedora FEDORA-2012-11928 glibc 2012-08-27
CentOS CESA-2012:1208 glibc 2012-08-27
CentOS CESA-2012:1207 glibc 2012-08-27
Red Hat RHSA-2012:1207-01 glibc 2012-08-27
Red Hat RHSA-2012:1208-01 glibc 2012-08-27
Fedora FEDORA-2012-11927 glibc 2012-08-18

Comments (none posted)

glpi: multiple vulnerabilities

Package(s):glpi CVE #(s):CVE-2012-4002 CVE-2012-4003
Created:August 16, 2012 Updated:August 30, 2012
Description:

From the Mandriva advisory:

Multiple cross-site request forgery (CSRF) and cross-site scripting (XSS) flaws have been found and corrected in GLPI (CVE-2012-4002, CVE-2012-4003).

Alerts:
Mageia MGASA-2012-0250 glpi 2012-08-30
Mandriva MDVSA-2012:132 glpi 2012-08-15

Comments (none posted)

imagemagick: code execution

Package(s):imagemagick CVE #(s):CVE-2012-3437
Created:August 22, 2012 Updated:April 10, 2013
Description: From the Ubuntu advisory:

Tom Lane discovered that ImageMagick would not always properly allocate memory. If a user or automated system using ImageMagick were tricked into opening a specially crafted PNG image, an attacker could exploit this to cause a denial of service or possibly execute code with the privileges of the user invoking the program.

Alerts:
Debian-LTS DLA-242-1 imagemagick 2015-06-11
Mandriva MDVSA-2013:092 imagemagick 2013-04-09
openSUSE openSUSE-SU-2013:0535-1 ImageMagick 2013-03-26
Mandriva MDVSA-2012:160 imagemagick 2012-10-05
Fedora FEDORA-2012-11746 ImageMagick 2012-08-27
Fedora FEDORA-2012-11762 ImageMagick 2012-08-27
Mageia MGASA-2012-0243 imagemagick 2012-08-27
Ubuntu USN-1544-1 imagemagick 2012-08-22

Comments (none posted)

libapache2-mod-rpaf: denial of service

Package(s):libapache2-mod-rpaf CVE #(s):
Created:August 22, 2012 Updated:August 22, 2012
Description: From the Debian advisory:

Sébastien Bocahu discovered that the reverse proxy add forward module for the Apache webserver is vulnerable to a denial of service attack through a single crafted request with many headers.

Alerts:
Debian DSA-2532-1 libapache2-mod-rpaf 2012-08-22

Comments (none posted)

openstack-nova: symlink attack

Package(s):openstack-nova CVE #(s):CVE-2012-3447
Created:August 21, 2012 Updated:August 22, 2012
Description: From the CVE entry:

virt/disk/api.py in OpenStack Compute (Nova) 2012.1.x before 2012.1.2 and Folsom before Folsom-3 allows remote authenticated users to overwrite arbitrary files via a symlink attack on a file in an image that uses a symlink that is only readable by root. NOTE: this vulnerability exists because of an incomplete fix for CVE-2012-3361.

Alerts:
Fedora FEDORA-2012-11756 openstack-nova 2012-08-21

Comments (none posted)

pcp: multiple vulnerabilities

Package(s):pcp CVE #(s):CVE-2012-3418 CVE-2012-3419 CVE-2012-3420 CVE-2012-3421
Created:August 20, 2012 Updated:September 4, 2012
Description: From the Red Hat bugzilla [1], [2], [3], [4]:

[1] Florian Weimer of the Red Hat Product Security Team discovered multiple integer and heap-based buffer overflow flaws in PCP (Performance Co-Pilot) libpcp protocol decoding functions. These flaws could lead to daemon crashes or the execution of arbitrary code with root privileges. Many of these flaws can be exploited without requiring the attacker to be authenticated. (CVE-2012-3418)

[2] Florian Weimer of the Red Hat Product Security Team discovered that pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) exports part of the /proc file system, including privileged information that could be used to aid in bypassing ASLR, as well as full commandline information on running programs. (CVE-2012-3419)

[3] Florian Weimer of the Red Hat Product Security Team discovered two memory leaks in libpcp that can be abused by an unauthenticated remote attacker to crash pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) or to consume enough memory to trigger the OOM killer, which may have impact on other processes. (CVE-2012-3420)

[4] Florian Weimer of the Red Hat Product Security Team discovered a denial of service flaw in pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) due to incorrect event-driven programming. Because the pduread() function in libpcp performs a select locally, waiting for more client data, an unauthenticated remote attacker could send individual bytes one by one, avoiding the timeout, and blocking pmcd in order to prevent it from responding to other legitimate requests. (CVE-2012-3421)

Alerts:
SUSE SUSE-SU-2013:0190-1 pcp 2013-01-23
openSUSE openSUSE-SU-2012:1081-1 pcp 2012-09-03
openSUSE openSUSE-SU-2012:1079-1 pcp 2012-09-03
openSUSE openSUSE-SU-2012:1036-1 pcp 2012-08-24
Debian DSA-2533-1 pcp 2012-08-23
Fedora FEDORA-2012-12076 pcp 2012-08-20
Fedora FEDORA-2012-12024 pcp 2012-08-20

Comments (none posted)

phpmyadmin: cross-site scripting

Package(s):phpmyadmin CVE #(s):CVE-2012-4345
Created:August 17, 2012 Updated:August 29, 2012
Description:

From the phpMyAdmin advisory:

Using a crafted table name, it was possible to produce an XSS:

1) On the Database Structure page, creating a new table with a crafted name
2) On the Database Structure page, using the Empty and Drop links of the crafted table name
3) On the Table Operations page of a crafted table, using the 'Empty the table (TRUNCATE)' and 'Delete the table (DROP)' links
4) On the Triggers page of a database containing tables with a crafted name, when opening the 'Add Trigger' popup
5) When creating a trigger for a table with a crafted name, with an invalid definition

Having crafted data in a database table, it was possible to produce an XSS:

6) When visualizing GIS data, having a crafted label name

Alerts:
openSUSE openSUSE-SU-2012:1062-1 phpMyAdmin 2012-08-30
Fedora FEDORA-2012-12060 phpMyAdmin 2012-08-28
Fedora FEDORA-2012-12031 phpMyAdmin 2012-08-28
Mandriva MDVSA-2012:136 phpmyadmin 2012-08-17

Comments (none posted)

postgresql: file disclosure

Package(s):postgresql CVE #(s):CVE-2012-3488 CVE-2012-3489
Created:August 20, 2012 Updated:September 28, 2012
Description: From the postgresql advisory:

This security release fixes a vulnerability in the built-in XML functionality, and a vulnerability in the XSLT functionality supplied by the optional XML2 extension. Both vulnerabilities allow reading of arbitrary files by any authenticated database user, and the XSLT vulnerability allows writing files as well. The fixes cause limited backwards compatibility issues.

Alerts:
openSUSE openSUSE-SU-2012:1299-1 postgresql 2012-10-06
openSUSE openSUSE-SU-2012:1288-1 postgresql, postgresql-libs 2012-10-04
Gentoo 201209-24 postgresql-server 2012-09-28
openSUSE openSUSE-SU-2012:1251-1 postgresql 2012-09-26
Scientific Linux SL-post-20120914 postgresql84 2012-09-14
Scientific Linux SL-post-20120914 postgresql 2012-09-14
Oracle ELSA-2012-1263 postgresql, postgresql84 2012-09-14
Oracle ELSA-2012-1263 postgresql, postgresql84 2012-09-14
Oracle ELSA-2012-1264 postgresql 2012-09-14
CentOS CESA-2012:1263 postgresql84 2012-09-13
CentOS CESA-2012:1263 postgresql 2012-09-13
CentOS CESA-2012:1264 postgresql 2012-09-13
Red Hat RHSA-2012:1263-01 postgresql, postgresql84 2012-09-13
Red Hat RHSA-2012:1264-01 postgresql 2012-09-13
Mageia MGASA-2012-0242 postgresql 2012-08-26
Fedora FEDORA-2012-12156 postgresql 2012-08-26
Fedora FEDORA-2012-12165 postgresql 2012-08-26
Debian DSA-2534-1 postgresql-8.4 2012-08-25
Ubuntu USN-1542-1 postgresql-8.3, postgresql-8.4, postgresql-9.1 2012-08-20
Mandriva MDVSA-2012:139 postgresql 2012-08-19

Comments (none posted)

redeclipse: file disclosure

Package(s):redeclipse CVE #(s):
Created:August 20, 2012 Updated:August 22, 2012
Description: From the Fedora advisory:

A flaw was found in the way Red Eclipse handled config files. In cube2-engine games, game maps can be transmitted either from the server to a client, or from client to client. These maps include a config file (mapname.cfg) in "cubescript" format, which allows for an attacker to send a malicious script via a new map. This map must either be chosen by an administrator on the server, or created in co-operative editing mode. A malicious script could then be used to read or write to any files that the user running the client has access to when the victim loads a map with the malicious configuration file.

Alerts:
Fedora FEDORA-2012-11582 redeclipse 2012-08-19

Comments (none posted)

rssh: shell command injection

Package(s):rssh CVE #(s):CVE-2012-3478
Created:August 16, 2012 Updated:September 11, 2012
Description:

From the Debian advisory:

Henrik Erkkonen discovered that rssh, a restricted shell for SSH, does not properly restrict shell access.

Alerts:
Gentoo 201311-19 rssh 2013-11-28
Fedora FEDORA-2012-20109 rssh 2012-12-19
Debian DSA-2530-1 rssh 2012-08-15

Comments (1 posted)

wireshark: multiple vulnerabilities

Package(s):wireshark CVE #(s):CVE-2012-4285 CVE-2012-4287 CVE-2012-4288 CVE-2012-4289 CVE-2012-4296 CVE-2012-4297 CVE-2012-4291 CVE-2012-4292 CVE-2012-4293 CVE-2012-4290
Created:August 16, 2012 Updated:December 26, 2012
Description:

From the Mandriva advisory:

Multiple vulnerabilities were found and corrected in Wireshark:

The DCP ETSI dissector could trigger a zero division (CVE-2012-4285).

The MongoDB dissector could go into a large loop (CVE-2012-4287).

The XTP dissector could go into an infinite loop (CVE-2012-4288).

The AFP dissector could go into a large loop (CVE-2012-4289).

The RTPS2 dissector could overflow a buffer (CVE-2012-4296).

The GSM RLC MAC dissector could overflow a buffer (CVE-2012-4297).

The CIP dissector could exhaust system memory (CVE-2012-4291).

The STUN dissector could crash (CVE-2012-4292).

The EtherCAT Mailbox dissector could abort (CVE-2012-4293).

The CTDB dissector could go into a large loop (CVE-2012-4290).

Alerts:
Scientific Linux SLSA-2013:1569-2 wireshark 2013-12-09
Oracle ELSA-2013-1569 wireshark 2013-11-26
Red Hat RHSA-2013:1569-02 wireshark 2013-11-21
Gentoo GLSA 201308-05:02 wireshark 2013-08-30
Gentoo 201308-05 wireshark 2013-08-28
Mandriva MDVSA-2013:055 wireshark 2013-04-05
Oracle ELSA-2013-0125 wireshark 2013-01-12
Scientific Linux SL-wire-20130116 wireshark 2013-01-16
Debian DSA-2590-1 wireshark 2012-12-26
openSUSE openSUSE-SU-2012:1067-1 wireshark 2012-08-30
Fedora FEDORA-2012-12085 wireshark 2012-08-27
Fedora FEDORA-2012-12091 wireshark 2012-08-27
openSUSE openSUSE-SU-2012:1035-1 wireshark 2012-08-24
Mageia MGASA-2012-0226 wireshark 2012-08-18
Mandriva MDVSA-2012:135 wireshark 2012-08-16
Mandriva MDVSA-2012:134 wireshark 2012-08-16

Comments (none posted)

xen: denial of service

Package(s):xen CVE #(s):CVE-2012-3433
Created:August 20, 2012 Updated:September 14, 2012
Description: From the Debian advisory:

A guest kernel can cause the host to become unresponsive for a period of time, potentially leading to a DoS. Since an attacker with full control in the guest can impact the host, this vulnerability is considered high impact.

Alerts:
Gentoo 201309-24 xen 2013-09-27
openSUSE openSUSE-SU-2012:1172-1 Xen 2012-09-14
openSUSE openSUSE-SU-2012:1174-1 Xen 2012-09-14
SUSE SUSE-SU-2012:1044-1 Xen 2012-08-27
SUSE SUSE-SU-2012:1043-1 Xen and libvirt 2012-08-27
Debian DSA-2531-1 xen 2012-08-18
Fedora FEDORA-2012-11785 xen 2012-08-21
Fedora FEDORA-2012-11755 xen 2012-08-21

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.6-rc3, released on August 22. Linus says: "Shortlog appended, there's nothing here that makes me go 'OMG! Scary!' or makes me want to particularly mention it separately. All just random updates and fixes."

Previously, 3.6-rc2 was released on August 16. "Anyway, with all that said, things don't seem too bad. Yes, I ignored a few pull requests, but I have to say that there weren't all that many of those, and the rest looked pretty calm. Sure, there's 330+ commits in there, but considering that it's been two weeks, that's about expected (or even a bit low) for early -rc's. Yes, 3.5 may have been much less for -rc2, but that was unusual."

Stable updates: 2.6.34.13 and 3.2.28 were both released on August 20.

Comments (1 posted)

Quotes of the week

Our power consumption is worse than under other operating systems almost entirely because only one of our three GPU drivers implements any kind of useful power management.
Matthew Garrett

Moving 'policy' into user-space has been an utter failure, mostly because there's not a single project/subsystem responsible for getting a good result to users. This is why I resist "policy should not be in the kernel" meme here.
Ingo Molnar

"inline" is now a vague, pathetic and useless thing. The problem is that the reader just doesn't *know* whether or not the writer really wanted it to be inlined.

If we have carefully made a decision to inline a function, we should (now) use __always_inline. If we have carefully made a decision to not inline a function, we should use noinline. If we don't care, we should omit all such markings.

This leaves no place for "inline"?

Andrew Morton

Copy and paste is the #1 cause for subtle bugs.
Thomas Gleixner

Comments (23 posted)

Long-term support for the 3.4 kernel

Greg Kroah-Hartman has announced that the 3.4 kernel will receive stable updates for a period of at least two years. It joins 3.0 (which has at least one more year of support) on the long-term support list.

Full Story (comments: 3)

Kernel development news

The return of power-aware scheduling

By Jonathan Corbet
August 21, 2012
Years of work to improve power utilization in Linux have made one thing clear: efficient power behavior must be implemented throughout the system. That certainly includes the CPU scheduler, but the kernel's scheduler currently has little in the way of logic aimed at minimizing power use. A recent proposal has started a discussion on how the scheduler might be made more power-aware. But, as this discussion shows, there is no single, straightforward answer to the question of how power-aware scheduling should be done.

Interestingly, the scheduler did have power-aware logic from 2.6.18 through 3.4. There was a sysfs knob (sched_mc_power_savings) that would cause the scheduler to try to group runnable processes onto the smallest possible number of cores, allowing others to go idle. That code was removed in 3.5 because it never worked very well and nobody was putting any effort into improving it. The result was the removal of some rather unloved code, but it also left the scheduler with no power awareness at all. Given the level of interest in power savings in almost every environment, having a power-unaware scheduler seems less than optimal; it was only a matter of time until somebody tried to put together a better solution.
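For reference, that knob lived in sysfs on pre-3.5 kernels; writing a non-zero value asked the scheduler to pack runnable tasks onto fewer cores:

    # kernels 2.6.18 through 3.4 only; the file is gone as of 3.5
    echo 1 > /sys/devices/system/cpu/sched_mc_power_savings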

Alex Shi started off the conversation with a rough proposal on how power awareness might be added back to the scheduler. This proposal envisions two modes, called "power" and "performance," that would be used by the scheduler to guide its decisions. Some of the first debate centered around how that policy would be chosen, with some developers suggesting that "performance" could be used while on AC power and "power" when on battery power. But that policy entirely ignores an important constituency: data centers. Operators of data centers are becoming increasingly concerned about power usage and its associated costs; many of them are likely to want to run in a lower-power mode regardless of where the power is coming from. The obvious conclusion is that the kernel needs to provide a mechanism by which the mode can be chosen; the policy can then be decided by the system administrator.

The harder question is: what would that policy decision actually do? The old power code tried to cause some cores, at least, to go completely idle so that they could go into a sleep state. The proposal from Alex takes a different approach. Alex claims that trying to idle a subset of the CPUs in the system is not going to save much power; instead, it is best to spread the runnable processes across the system as widely as possible and try to get to a point where all CPUs can go idle. That seems to be the best approach, on x86-class processors, anyway. On that architecture, no processor can go into a deep sleep state unless all of them do; having even a single processor running will keep the others in a less efficient sleep state. A single processor also keeps associated hardware — the memory controller, for example — in a powered-up state. The first CPU is by far the most expensive one; bringing in additional CPUs has a much lower incremental cost.

So the general rule seems to be: keep all of the processors busy as long as there is work to be done. This approach should lead to the quickest processing and best cache utilization; it also gives the best power utilization. In other words, the best policy for power savings looks a lot like the best policy for performance. That conclusion came as a surprise to some, but it makes some sense; as Arjan van de Ven put it:

So in reality, the very first thing that helps power, is to run software efficiently. Anything else is completely secondary. If placement policy leads to a placement that's different from the most efficient placement, you're already burning extra power...

So why bother with multiple scheduling modes in the first place? Naturally enough, there are some complications that enter this picture and make it a little bit less neat. The first of these is that spreading load across processors only helps if the new processors are actually put to work for a substantial period of time, for values of "substantial" around 100μs. For any shorter period, the cost of bringing the CPU out of even a shallow sleep exceeds the savings gained from running a process there. So extra CPUs should not be brought into play for short-lived tasks. Properly implementing that policy is likely to require that the kernel gain a better understanding of the behavior of the processes running in any given workload.

There is also still scope for some differences of behavior between the two modes. In a performance-oriented mode, the scheduler might balance tasks more aggressively, trying to keep the load the same on all processors. In a power-savings mode, processes might stay a bit more tightly packed onto a smaller number of CPUs, especially processes that have an observed history of running for very short periods of time.

But the conversation has, arguably, only barely touched on the biggest complication of all. There was a lot of talk of what the optimal behavior is for current-generation x86 processors, but that is far from the only environment in which Linux runs. ARM processors have a complex set of facilities for power management, allowing much finer control over which parts of the system have power and clocks at any given time. The ARM world is also pushing the boundaries with asymmetric architectures like big.LITTLE; figuring out the optimal task placement for systems with more than one type of CPU is not going to be an easy task.

The problem is thus architecture-specific; optimal behavior on one architecture may yield poor results on another. But the eventual solution needs to work on all of the important architectures supported by Linux. And, preferably, it should be easily modifiable to work on future versions of those architectures, since the way to get the best power utilization is likely to change over time. That suggests that the mechanism currently used to describe architecture-specific details to the scheduler (scheduling domains) needs to grow the ability to describe parameters relevant to power management as well. An architecture-independent scheduler could then use those parameters to guide its behavior. That scheduler will also need a better understanding of process behavior; the almost-ready per-entity load tracking patch set may help in this regard.

Designing and implementing these changes is clearly not going to be a short-term job. It will require a fair amount of cooperation between the core scheduler developers and those working on specific architectures. But, given how long we have been without power management support in the scheduler, and given that the bulk of the real power savings are to be had elsewhere (in drivers and in user space, for example), we can wait a little longer while a proper scheduler solution is worked out.

Comments (3 posted)

Link-time optimization for the kernel

By Jonathan Corbet
August 21, 2012
The kernel tends to place an upper limit on how quickly any given workload can run, so it is unsurprising that kernel developers are always on the lookout for ways to make the system go faster. Significant amounts of work can be put into optimizations that, on the surface, seem small. So when the opportunity comes to make the kernel go faster without the need to rewrite any performance-critical code paths, there will naturally be a fair amount of interest. Whether the "link-time optimization" (LTO) feature supported by recent versions of GCC is such an opportunity or not is yet to be proved, but Andi Kleen is determined to find out.

The idea behind LTO is to examine the entire program after the individual files have been compiled and exploit any additional optimization opportunities that appear. The most significant of those opportunities appears to be the inlining of small functions across object files. The compiler can also be more aggressive about detecting and eliminating unused code and data. Under the hood, LTO works by dumping the compiler's intermediate representation (the "GIMPLE" code) into the resulting object file whenever a source file is compiled. The actual LTO stage is then carried out by loading all of the GIMPLE code into a single in-core image and rewriting the (presumably) further-optimized object code.
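At the command-line level, the whole mechanism is driven by a single GCC flag; here is a minimal sketch of the two-stage process just described:

    # compile: each object file carries GIMPLE alongside the machine code
    gcc -O2 -flto -c a.c
    gcc -O2 -flto -c b.c

    # link: the intermediate code for the whole program is re-optimized
    gcc -O2 -flto -o prog a.o b.o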

The LTO feature first appeared in GCC 4.5, but it has only really started to become useful in the 4.7 release. It still has a number of limitations; one of those is that all of the object files involved must be compiled with the same set of command-line options. That limitation turns out to be a problem with the kernel, as will be seen below.

Andi's LTO patch set weighs in at 74 changesets — not a small or unintrusive change. But it turns out that most of the changes have the same basic scope: ensuring that the compiler knows that specific symbols are needed even if they appear to be unused; that prevents the LTO stage from optimizing them away. For example, symbols exported to modules may not have any callers in the core kernel itself, but they need to be preserved for modules that may be loaded later. To that end, Andi's first patch defines a new attribute (__visible) used to mark such symbols; most of the remaining patches are dedicated to the addition of __visible attributes where they are needed.

Beyond that, there is a small set of fixes for specific problems encountered when building kernels with LTO. It seems that functions with long argument lists can get their arguments corrupted if the functions are inlined during the LTO stage; avoiding that requires marking the functions noinline. Andi complains "I wish there was a generic way to handle this. Seems like a ticking time bomb problem." In general, he acknowledges the possibility that LTO may introduce new, optimization-related bugs into the kernel; finding all of those could be a challenge.

Then there is the requirement that all files be built with the same set of options. Current kernels are not built that way; different options are used in different parts of the tree. In some places, this problem can be worked around by disabling specific optimizations that depend on different compiler flags than are used in the rest of the kernel. In others, though, features must simply be disabled to use LTO. These include the "modversions" feature (allowing kernel modules to be used with more than one kernel version) and the function tracer. Modversions seems to be fixable; getting ftrace to work may require changes to GCC, though.

It is also necessary, of course, to change the build system to use the GCC LTO feature. As of this writing, one must have a current GCC release; it is also necessary to install a development version of the binutils package for LTO to work. Even a minimal kernel requires about 4GB of memory for the LTO pass; an "allyesconfig" build could require as much as 9GB. Given that, the use of 32-bit systems for LTO kernel builds is out of the question; it is still possible, of course, to build a 32-bit kernel on a 64-bit system. The build will also take between two and four times as long as it does without LTO. So developers are unlikely to make much use of LTO for their own work, but it might be of interest to distributors and others who are building production kernels.

The fact that most people will not want to do LTO builds actually poses a bit of a problem. Given the potential for LTO to introduce subtle bugs, due either to optimization-related misunderstandings or simple bugs in the new LTO feature itself, widespread testing is clearly called for before LTO is used for production kernels. But if developers and testers are unwilling to do such heavyweight builds, that testing may be hard to come by. That will make it harder to achieve the level of confidence that will be needed before LTO-built kernels can be used in real-world settings.

Given the above challenges, the size of the patch set, and the ongoing maintenance burden of keeping LTO working, one might well wonder if it is all worth it. And that comes down entirely to the numbers: how much faster does the kernel get when LTO is used? Hard numbers are not readily available at this time; the LTO patch set is new and there are still a lot of things to be fixed. Andi reports that runs of the "hackbench" benchmark gain about 5%, while kernel builds don't change much at all. Some networking benchmarks improve by as much as 18%. There are also some unspecified "minor regressions." The numbers are rough, but Andi believes they are encouraging enough to justify further work; he also expects the LTO implementation in GCC to improve over time.

Andi also suggests that, in the long term, LTO could help to improve the quality of the kernel code base by eliminating the need to put inline functions into include files.

All told, this is a patch set in a very early stage of development; it seems unlikely to be proposed for merging into a near-term kernel, even as an experimental feature. In the longer term, though, it could lead to faster kernels; use of LTO in the kernel could also help to drive improvements in the GCC implementation that would benefit all projects. So it is an effort that is worth keeping an eye on.

Comments (46 posted)

Ask a kernel developer: maintainer workflow

August 22, 2012

This article was contributed by Greg Kroah-Hartman.

In this edition of "ask a kernel developer", I answer a multi-part question about kernel subsystem maintenance from a new maintainer. The workflow that I use to handle patches in the USB subsystem is used as an example to hopefully provide a guide for those who are new to the maintainer role.

As always, if you have unanswered questions relating to technical or procedural issues in Linux kernel development, ask them in the comment section, or email them directly to me. I will try to get to them in another installment down the road.

I have some questions about what I am supposed to be doing at different points of the release cycle. -rc1 and -rc2 are spelled out in Documentation/HOWTO, and I have a decent idea that patches I accept should be smaller and fix more critical bugs as the -rcX's roll out. The big question is what do I do with all of the other patches that come at random times?

First off, thanks so much for agreeing to maintain a kernel subsystem. Without maintainers like you, the Linux kernel development process would be much more chaotic and hard to navigate. I will try to explain how I have set up my development workflow and how I maintain the different subsystems I am in charge of. That example can help you determine how you wish to manage your own development trees, and how to handle incoming patches from developers.

To answer the question: yes, you will receive patches at any point in the release cycle, but not all of them can be sent on to Linus right away; whether a patch is ready to go up depends on where we are in that cycle. I'll go into more detail below, but for now, realize that, in my opinion, you should not require the other developers to time their submissions to the release cycle; instead, you should hold onto patches and send them upstream when the time is right. I think it is the maintainer's job to do the buffering.

How best do I organize my pull-request branches so that developers know which they can pull as dependencies, and which are for-next? I don't want to over-organize it, but I do want to make it easy for board submitters to test from my trees. Should my pull-request branches be long-lived, or should I kill them and create new ones after each cycle?

It's best to stick with a simple scheme for branches, work with that for a while, and then, if you find it too limiting, grow from there. I have only two branches of my own in my git trees: one to feed to Linus for the current release cycle, and one for the next release cycle. Those can be seen, along with a branch that simply tracks Linus, in the USB git tree on kernel.org, which shows three branches:

  • master, which tracks Linus's tree

  • usb-linus, which contains patches to go to Linus for this release cycle

  • usb-next, which contains the patches to go to Linus for the next release cycle.

Both the usb-linus and usb-next branches are included in the nightly linux-next releases as well. That gives me and the USB developers quick feedback in case there are merge issues with other development trees, or if there are build issues on other architectures that I missed.

I receive patches from lots of different developers all the time. All patches, after they pass an initial "is this sane" glance, get copied to a mailbox that I call TODO. Every few days, depending on my workload, I go through that mailbox and pick out all of the patches that are to be applied to the various trees I am responsible for. For this example, I'll search for anything that touches the USB tree and copy those messages to a temporary local mailbox on the filesystem called s. (I name my local mailboxes for ease of typing, not for any other good reason.)

After digging all of the USB patches out (which is really a simple filter for all threads that have the "drivers/usb" string in them), I take a closer look at the patches in the s mailbox.
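
The filter itself isn't shown above; as a rough sketch of the idea, something along these lines (using formail, from the procmail package, to split the mbox one message at a time) would do the job:

    $ formail -s sh -c 'msg=$(cat); printf "%s" "$msg" |
          grep -q "drivers/usb" && printf "%s\n\n" "$msg" >> s' < TODO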

First I look to find anything that would be applicable to Linus's current tree. This is usually a bug fix for something that was introduced during this merge window, or a regression for systems that were previously working just fine. I pick those out and save them to another temporary mailbox called s1.

Now it's time to start testing to see if the patches actually apply to the tree. I go into a directory that contains my usb tree and check to see what branch I am on:

    $ cd linux/work/usb
    $ git b
      master     6dab7ed Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm
    * usb-linus  8f057d7 gpu/mfd/usb: Fix USB randconfig problems
      usb-next   26f944b usb: hcd: use *resource_size_t* for specifying resource data
      work-linus 8f057d7 gpu/mfd/usb: Fix USB randconfig problems
      work-next  26f944b usb: hcd: use *resource_size_t* for specifying resource data

Note, I have the following aliases in my ~/.gitconfig file:
    [alias]
	dc = describe --contains
	fp = format-patch -k -M -N
	b = branch -v

This enables me to use git b to see the current branch much more easily, git fp to format patches in the style I need, and git dc to determine exactly which release contains a specific git commit.
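
For example, the dc alias answers the "which release contains this commit?" question directly (the commit ID and output here are purely illustrative):

    $ git dc 8f057d7
    v3.6-rc2~8^2~3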

As you can see from the list of branches, I have local branches that mirror the public usb-linus and usb-next branches, called work-linus and work-next. I do my testing and development work in these local branches, and only when I feel they are "good enough" do I push them to the public-facing branches and then out to kernel.org.

So, back to work. As I am working on the two patches that are to be sent to Linus first, let's change to the local working version of that branch:

    $ git checkout work-linus
    Switched to branch 'work-linus'

Then a quick sanity check to verify that the patches in s1 really will apply to this tree (sadly, they often do not):

    $ p1 < ../s1
    patching file drivers/usb/core/endpoint.c
    patching file drivers/usb/core/quirks.c
    patching file drivers/usb/core/sysfs.c
    Hunk #2 FAILED at 210.
    1 out of 2 hunks FAILED -- saving rejects to file drivers/usb/core/sysfs.c.rej
    patching file drivers/usb/storage/transport.c
    patching file include/linux/usb/quirks.h

(Note, the 'p1' command is really:

    patch -p1 -g1 --dry-run
that I set up in my .alias file years ago as I quickly got tired of typing the full thing out.)
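
In shell-alias form, that is simply:

    alias p1='patch -p1 -g1 --dry-run'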

Here is an example of patches that will not apply to the work-linus branch, but it turns out that this was my fault. They were generated against the linux-next branch, and really should be queued up for the next merge window, not for this release.

So, let's switch back to the work-next branch, as that is where the patches really belong:

    $ git checkout work-next
    Switched to branch 'work-next'

And see if they apply there properly:

    $ p1 < ../s1
    patching file drivers/usb/core/endpoint.c
    patching file drivers/usb/core/quirks.c
    patching file drivers/usb/core/sysfs.c
    patching file drivers/usb/storage/transport.c
    patching file include/linux/usb/quirks.h

Much better.

Then I look at the patches themselves again in my email client, and edit anything that needs to be cleaned up. The changes could be in the Subject line, in the body of the patch description, or in any other spots that need to be touched up. With developers who send patches all the time, little or no editing is usually needed, but, unfortunately, with others I end up fixing up this type of "metadata" all the time.

After the patches look clean, and I've reviewed them once more for anything strange or suspicious, I do one last sanity check by running the checkpatch.pl tool:

    $ ./scripts/checkpatch.pl ../s1
    total: 0 errors, 0 warnings, 73 lines checked

    ../s1 has no obvious style problems and is ready for submission.

All looks good, so let's apply the patches to the branch and see if the build works properly:

    $ git am -s ../s1
    Applying: usb/endpoint: Set release callback in the struct device_type \
              instead of in the device itself directly
    Applying: usb: convert USB_QUIRK_RESET_MORPHS to USB_QUIRK_RESET
    $ make -j8

If everything builds, then it's time to test the patches. That can range from installing the changed kernel and ensuring that everything still works properly (and that the new modifications work as they claim to), all the way down to doing nothing more than verifying that the build didn't break, when I do not have the hardware that the changed driver controls.

After this, if everything looks sane, it's time to push the patches to the public kernel.org repository, and to notify the developers that their patches were applied and where they can be found. This I do with a script called do.sh that has grown over the years; it was originally based on a script that Andrew Morton uses to notify developers when he applies their patches. You can find a copy of it, and of the rest of the helper scripts I use for kernel development, in my gregkh-linux GitHub tree.

The script does the following:

  • generates a patch for every changeset in the local branch that is not in the usb-next branch
  • emails the developer that this patch has now been applied and where it can be found
  • merges the branch to the local usb-next branch
  • pushes the branch to the public git.kernel.org repository
  • pushes the branch to a local backup server that is on write-only media
  • switches back to the work-next branch

With that, I'm free to delete the s1 mailbox, and start all over with more patches.
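
As an illustrative sketch only (the real do.sh is more elaborate, and the remote names "korg" and "backup" here are placeholders), those steps map onto git commands like this:

    #!/bin/sh
    # Sketch of the flow described above, not the actual do.sh.
    for c in $(git rev-list --reverse usb-next..work-next); do
        author=$(git log -1 --format='%ae' "$c")
        git format-patch -k -M -N -1 -o applied/ "$c"
        # ... mail the patch just written to applied/ to $author ...
    done
    git checkout usb-next
    git merge work-next          # bring in the tested patches
    git push korg usb-next       # public git.kernel.org repository
    git push backup usb-next     # backup server
    git checkout work-next       # back to the working branch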

After this, people do sometimes find problems with patches that need to be fixed up. But, since my trees are public, I can't rebase them—otherwise any developer who had previously pulled my branches would get messed up. Instead, I sometimes revert patches, or apply fix-up patches on top of the current tree to resolve issues. It isn't the cleanest solution at times, but it is better to do this than rebase a public tree, which is something that no one should ever do.
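
In git terms, the history-preserving fix is a new commit on top (the commit ID is illustrative, and "korg" is the placeholder remote name from the sketch above):

    $ git revert 1234abcd        # new commit that undoes the bad one
    $ git push korg usb-next     # public history only moves forward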

Hopefully this description gives you an idea of how you can manage your trees and the patches sent to you in a way that makes things easier for yourself, the linux-next maintainer, and any developer who relies on your tree.

Comments (14 posted)

Patches and updates

Kernel trees

Architecture-specific

Build system

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Debian looks at OpenRC

By Jake Edge
August 22, 2012

While Debian has discussed systemd—and Upstart—over the past year or more, that's not the whole story: another potential init replacement has appeared on the debian-devel mailing list. OpenRC is a Gentoo Linux project that was proposed as an alternative to the venerable System V init (sysvinit) that is currently the Debian default. That proposal spawned a long thread, even by debian-devel standards, and a more recent revival of the topic is adding more to the discussion. Though OpenRC has some features that sysvinit lacks, it doesn't bring as many new features as systemd or Upstart do, which makes some in the Debian community wonder whether it makes sense to add yet another init replacement into the mix.

OpenRC developer Patrick Lauer suggested that Debian look at OpenRC back in April. It is, he said, a "modern, slim, userfriendly init system with minimal dependencies". It would add support for stateful services (e.g. only one instance will be running at a given time), and dependency-based init scripts, without requiring all of what something like systemd requires ("dbus? udev? on my server?! and you expect a linux 3.0+ kernel? waaah!"). It would be a step up from sysvinit, while still in keeping with the "Unix way". In addition, it supports both Linux and the BSDs, which would eliminate one of the bigger gripes against systemd.

But an incremental improvement to init is not what some are looking for. To many, sysvinit and other shell-script-based solutions have not kept up with the changing hardware and kernel environment, so an event-based init is the right way forward. As Arto Jantunen put it:

Reliability in the case of modern kernels and modern hardware means event based, not static. The hardware in a modern computer comes and goes as it pleases (usb devices being the worst example, but scanning for pci or sata busses and loading drivers isn't exactly instant in all cases either), and the kernel has little choice in the matter. It can either sleep until "everything is surely detected by now" before passing control to userspace, or pass control and the problem along (by providing event notification when the device set changes). The kernel made its choice about this years ago, and we have been living on borrowed time and kludges since then.

As might be expected, there are plenty of folks who don't quite see things that way. While there are vocal advocates of systemd—and rather less vocal Upstart advocates—there are numerous opponents as well. OpenRC might provide something of a middle ground as Roger Leigh described:

While as others have mentioned that ideally a more dynamic init system such as systemd or upstart is where I think the general consensus is (we all know that sysvinit/insserv is flawed in many ways, even if we can't agree on what should replace it), there is always the possibility of having OpenRC as a sysvinit alternative in the interim, or potentially as a systemd/upstart alternative longer term.

To that end, Leigh started looking more closely at OpenRC, with an eye toward packaging it for Debian. One problem that he noted early on was the lack of support for LSB dependencies in the init scripts. The LSB headers are comments that specify the runtime dependencies for each init script. OpenRC has its own dependency system, but Leigh believed that LSB dependency handling could be added to OpenRC.
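
For reference, those headers are a structured comment block at the top of each script; a typical block, for a hypothetical mydaemon service, looks like this:

    ### BEGIN INIT INFO
    # Provides:          mydaemon
    # Required-Start:    $network $remote_fs
    # Required-Stop:     $network $remote_fs
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Start and stop the mydaemon service
    ### END INIT INFO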

Over the intervening months, that is exactly what happened. On August 9, Benda Xu posted an intent to package (ITP) for OpenRC, which restarted the discussion. Leigh noted that Xu had gotten OpenRC to work with the LSB-based Debian init scripts, so that it could be a replacement for the sysv-rc package (which handles changing runlevels, starting and stopping services, and so on), while still using the init binary and scripts provided by sysvinit underneath. In addition, the OpenRC upstream is working on ways to allow other tools to access its dependency information, which would allow systemd or others to use OpenRC scripts. He concluded:

Working on getting OpenRC to work on Debian is not without value. For me, the entire point of the exercise is to explore the [feasibility] to port it and evaluate it as an alternative/replacement for sysv-rc; this is almost completely orthogonal to work on systemd/upstart, which will for the most part be unaffected by this.

Supporting multiple init systems is not without a cost, of course. There are now (or soon will be) at least four different kinds of configuration for init "scripts" (sysvinit, OpenRC, systemd, Upstart). While systemd and Upstart can use existing init scripts, and OpenRC is getting there as well, doing so loses much of the benefit of the alternatives. To some, there is simply an impedance mismatch between static dependency-based systems and those that are event-driven—though systemd advocates might not completely agree with the "event-driven" characterization. As Russ Allbery put it:

And lest someone think this is a theoretical exercise, we *frequently* get bugs filed against packages like OpenAFS, the Kerberos KDCs, or OpenLDAP that are boot-order-dependent and network-dependent and either don't start or start in unuseful ways or at unuseful times because of lack of event notification for when interfaces are *actually* ready or when network devices are *really* available.

Allbery said that these kinds of problems were not easily solvable with the existing init scripts: "The alternative is to add [significant] additional complexity to every package like those listed above that needs the network to loop and retry if the network isn't available when it first starts." That would be a "huge waste of effort".

One of the potential blockers for systemd, though, has been its reliance on Linux-only features, which makes it problematic for Debian GNU/kFreeBSD (and Debian GNU/Hurd down the road). OpenRC might not provide all of the features that systemd (and Upstart) do, but it could be enough of an upgrade to sysvinit that it makes sense to make that switch. That might actually pave the way for an event-driven init default for Debian GNU/Linux as Philip Hands described:

As a largely disinterested observer, it seems that this might at least provide a route to being able to provide enough support of the the features that make the systemd/upstart folk dizzy with excitement, such that non-linux platforms don't end up acting as a blocker for one of those two to be adopted for linux, while OpenRC covers non-linux enough so that init-agnostic start-up scripts can work anywhere.

At least some in the Debian community are particularly annoyed by the systemd team's unwillingness to take patches for portability to kernels beyond Linux. That led Adam Borowski to jokingly dismiss OpenRC because it lacks "a hostile upstream". More seriously, Leigh pointed out that OpenRC uses some of the same features as systemd, but does so with portability in mind:

OpenRC can (on Linux) use cgroups and hence do some of the more advanced stuff that systemd does. Yet it still runs on other platforms. This is in part due to the fact that OpenRC is written to be portable, while the systemd developers have an [astoundingly] bad attitude with respect to this. It would be perfectly possible for systemd to support other platforms if they really wanted to; it probably wouldn't even be that hard.

Others see it somewhat differently (of course). Maintaining a package for multiple platforms has its costs, and for a low-level package like systemd those costs may be rather high. It's not that the systemd upstream is "hostile", according to Matthias Klumpp, but that systemd is difficult to port and its developers don't want to maintain an #ifdef-heavy code base. Instead, the systemd folks suggest forking systemd and maintaining a parallel repository for any ports. But that isn't easy, Klumpp said: "So far nobody has created a non-Linux fork of systemd, and the reason is mainly that it is too much work."

There is also the underlying question of just how much "choice" there should be in a distribution's init system. Setting aside the "Linux is about choice" disagreements that always seem to arise in these kinds of discussions, there is a real question about how many different options Debian can and should support. As Allbery noted, Debian does not support switching to a different C library, for example. But Faidon Liambotis countered that that was only because no one had ever tried to show the "viability and usefulness" of switching to something other than glibc. Furthermore, things like kFreeBSD or building Debian with LLVM did not come about by some kind of consensus; rather, they happened because someone decided to make them work.

For init systems, though, Leigh believes that if OpenRC proves to be a viable replacement, it should supplant sysv-rc, rather than providing a choice. It wouldn't resolve the question of defaulting to an event-driven init (for Linux at least), but it would allow the rest of the Debian community to "get on with life while the upstart and systemd folk take chunks out of one another for a decade or so", as Hands put it.

While Linux may not be about choice exactly, its users are certainly accustomed to being able to fairly easily switch between different technologies: distributions, kernels, desktops, mail servers, web browsers, and so on. In some respects, Debian users are even more acclimated to a wide variety of choices. Its package repository is renowned for its breadth, and the distribution as a whole seems intent on providing choices whenever it is technically feasible. It is too soon to say for sure, but the addition of OpenRC may well provide a bridge that would upgrade init for those who aren't convinced of the "event-driven future", while staying out of the way of the systemd and Upstart efforts.

Comments (25 posted)

Brief items

Distribution quote of the week

Nurse Euca will run before any install and take everyone’s temperature, offer an aspirin or a splint where needed — or will let you know if one of your requirements is Dead On Arrival. (“I’m sorry, Doctor, but em1 appears to be in septic shock. I recommend against resuscitation.”) (That’s totally gonna be an error message, btw.)
-- Greg DeKoenigsberg

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

6.0: The beginning of the end for AV Linux (The H)

Glen MacArthur recently released AV Linux 6.0. AV Linux is a popular distribution for audio and video production. The H reports that 6.0 will be the final release. "While the project has received positive feedback from users, MacArthur says that version 6.0 will be the final release of AV Linux for a number of reasons, most notably a lack of donations. The distribution is being provided "as-is" and will not be updated and maintained, although the developer notes that tech support will be provided in the project's forums for one year. "Obviously this will be a disappointment to some users, however before you walk away I urge you to try the latest ISO and let it prove it's own worth," said MacArthur, adding that, "people who want to install and create multimedia will not be disappointed... people who live for the next software update will be better served by KXStudio or Dream Studio, both excellent projects in their own right."

Comments (13 posted)

Raspberry Pi now comes in Firefox OS flavour (The Register)

The Register reports that Mozilla's Firefox OS has been successfully ported to run on the diminutive Raspberry Pi platform. The port was apparently done by a Nokia employee, but is a side project: "Romashin seems to have undertaken this work off his own bat. So let’s stick to lame puns on “pie”, shall we, rather than wondering what Redmond will say about Nokia playing with an OS other than Windows Phone."

Comments (5 posted)

Page editor: Rebecca Sobol

Development

GUADEC: Work in the pipeline for GNOME

By Nathan Willis
August 22, 2012

GUADEC incorporated a blend of old and new business; there were status reports and updates from various GNOME projects and teams, but there were also a lot of sessions devoted to discussing new components and ideas for the coming development cycle, and for GNOME's long-term future. Some were wild concepts, of course, but not every new scheme was a radical departure — many were just solid bits of engineering that will make life easier for developers (and for users) over the next few releases.

Smooth G

For example, one of the week's largest crowds gathered for Owen Taylor's talk about enabling jitter-free animations in GTK+ applications. Smooth animations are not a new idea, but it has taken a while for the right approach to fall into place. Taylor addressed one specific type of animation: 2D redraws, of the kind that are commonly found when dragging or re-sizing an application window, or when translating an object across the screen. GTK+ and GNOME applications have never performed all that well in these circumstances, with tearing and jumpy updates being among the common complaints.

[Owen Taylor]

Taylor identified the root cause of such unpleasant visual artifacts as a lack of synchronization between the application and the compositor (following a lengthy investigation, documented on his blog). Historically, compositors attempted to draw new frames whenever they happened to arrive from the application. But this resulted in uneven timing. At times an application might generate frames faster than the display's refresh rate (say, 60 frames per second), causing some frames to be dropped; at other times it might produce frames too slowly (either by consuming too much CPU or due to system load); and at still other times it might deliver a frame too late to be drawn (missing the buffer swap during the display's vertical blanking interval). The result is redraws that are unevenly spaced, so they appear jumpy to the eye. The compositor can attempt to be smart about which frames to draw and which to drop, but there has never been a mechanism for applications and compositors to keep in step with one another.

Taylor's solution is to introduce a frame synchronization protocol, which allows the application and the compositor to agree on redraws. The protocol centers on _NET_WM_SYNC_REQUEST_COUNTER, a counter (managed by the X Synchronization Extension, and visible to both the application and the compositor) which the application increments to an odd value whenever it begins to draw a new frame of the animation. When the application finishes drawing the frame, it increments the counter again, making it even, and the compositor can draw the update to the screen. When the update is complete, the compositor sends a _NET_WM_FRAME_DRAWN message back to the application. This synchronization scheme does not enable faster frame rates, but it ensures that the compositor draws updates as fast as the application can produce them, be that 30 frames per second, 40, or any other number — and that the application does not start new frames before the compositor is ready for them.

Taylor also observed that there are side benefits to this scheme, including that frames are dropped only when the compositor fails to keep up, and that it becomes possible to benchmark compositor performance independently; that should enable future work on measuring and improving it. Taylor implemented the frame synchronization protocol in the Mutter window manager used by GNOME, but he also posted it to the window manager specification list, where it drew positive reactions from KWin developers and others. He had demonstration animations on hand illustrating the smoother results with frame synchronization active; the demos included window dragging and rescaling, but there is still some work to be done to provide more, and easier-to-use, animated effects in GTK+ itself.

To the trees

Colin Walters's OSTree does not extend new features to the GNOME environment or applications, but it is designed to make life simpler for both developers and users. The concept is to replace the package-centric installation model with a Git-like repository of the entire filesystem tree — which can be cloned and updated on the client machine. Installing a new operating system is a matter of cloning the repository, and updating it is a matter of pulling in the changes. But unlike a simple OS ghosting setup, OSTree can retain multiple, named versions of the tree and boot into any of them. That allows developers to do things like maintain experimental builds, roll back to earlier versions to try to reproduce bugs, and ensure that the entire development team can boot an identical set of components.

[Colin Walters]

In his talk, Walters described another feature that OSTree would provide to the core GNOME team: the ability to bisect regressions down to a single commit. As he described it, a regression means an unintentional break in functionality. GNOME has historically had problems identifying and fixing regressions because the vast majority of GNOME users do not install the environment directly: they install packages delivered by their distribution. That separates the discovery of the regression from the commit that caused it by a considerable gulf — both of time and activity. First is the commit, he said, followed by time, then the creation of a tarball, followed by more time, then the package, still more time, the release of the package to the repository, yet more time, installation of the package, updating the filesystem, then finally a reboot, after which the user notices the regression.

If developers and users could immediately see the changes, Walters said, they would find and fix regressions much faster. Walters's ostbuild is an auxiliary build tool that makes this possible. It watches a git repository and creates binary builds stored in an OSTree repository based on each commit. Because ostbuild only re-builds changed components (and OSTree only stores changed portions of the system), it does not use an excessive amount of space, but more importantly it allows developers to track down exactly which commit caused a regression.

The resulting improved feedback rate is one benefit, he said, but OSTree also makes reverting regressions simple: a new commit that fixes the regression can be deployed from the repository, and users can simply boot into a pre-regression OS until a fix is available. Such an option is impossible with traditional packages like those used in RPM and Debian systems, he said, because they rely heavily on the version numbers assigned to packages — and on the requirement that version numbers strictly increase over time. OSTree and ostbuild are also fully atomic (side-stepping several problems common in package managers), they eliminate the overhead and headaches of working with GNOME's existing build system jhbuild, and they make it possible to incorporate continuous integration testing into GNOME development.

On the other hand, Walters cautioned that there are a number of issues with using OSTree that have yet to be worked out. For starters, there is no way to push out security updates (as distinct from any other commit), which could prove annoying for system administrators. It would also be necessary to find a way to integrate the OSTree distribution model with existing governance structures that define policy and longer-term strategic decision-making — and in a related issue, without the version numbers required by packages, it would be hard to do marketing and branding to highlight a new release. At a more technical level, there is not yet a preferred way to install applications from outside sources (although OSTree itself is agnostic to how applications are installed), configuration files in /etc make automatically rolling back to a previous version risky, and as Walters put it, the whole system is "barely documented."

Despite such shortcomings, Walters has been using OSTree for GNOME development for several months, via a service running at ostree.gnome.org.

Other bits

Plenty of other sessions dealt with GNOME's immediate and near-term future. Emmanuele Bassi addressed the plans for GTK+ and Clutter, the two most prominent GUI toolkits used in GNOME. One of the most frequently asked questions is whether the two should be merged into a single toolkit, given that in many ways their features are complementary. Bassi's answer is that they need to remain separate, but that both need to be adapted to work better together. Clutter 2.0 is still in development; only after that will the plan for GTK+ 4.0 (including changes targeting better Clutter integration) take concrete form.

Alejandro Piñeiro Iglesias outlined GNOME's accessibility plans, including focus-tracking in the magnifier tool. Tim Muller discussed GStreamer's future, including more GPU support and improved memory management. By and large, major changes like Walters's OSTree were a rarity; most of the work that goes into each successive GNOME release consists of incremental improvements, even if (as in the case of Taylor's animation work) the result fixes a long-standing issue.

[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC.]

Comments (10 posted)

Brief items

Quotes of the week

I dislike seeing “review not granted” or “review canceled” in my bugmail.

Even if the reviewer provides helpful comments and points out things that you missed, those few words are the first thing you see about your patch. Part of me understands that these headlines are informative messages, that they are not judgements on me as a person, merely some sort of judgement on the patch that I have written. But the language doesn’t encourage that perspective. Permissions are not granted, entrance is not granted, your favorite show on television is canceled, and so forth. These are not heartening words: they are words that shut you out, that indicate your contribution was not desirable.

Nathan Froyd

The best and worst part of documentation strings is making something sensible out of 80 characters—doubly hard when the name of a function is the Lomb Normalized Periodogram. That's 27 characters right there!
Ben Lewis

Comments (2 posted)

GDB 7.5 released

Version 7.5 of the GDB debugger is out. New features include support for the Go language, a number of new targets (including the x32 ABI), better SystemTap integration, reverse debugging on ARM, and more.

Full Story (comments: 19)

Git v1.7.12 available

Git version 1.7.12 has been released, with several new features making their debut. Included are support for XDG-compatible $HOME/.config/ configuration files, more informative error messages, and an improved git apply that can "wiggle the base version and perform three-way merge when a patch does not exactly apply to the version you have."

Full Story (comments: none)

Bugfixes for grep and coreutils

Jim Meyering wrote in to announce two bugfix releases for the otherwise quite stable grep and coreutils packages:

Sure, there are always bug fixes, but at least for these two packages, bugs usually involve rarely-used corner cases, either involving odd combinations of options or evolving conditions like new file system types, new kernel behavior, race conditions, etc.

However, in the last week, we learned of bugs in two tools. These bugs are not like the others, in that they are relatively serious. Each is about two years old.

The relevant reports note that "grep -i '^$' could exit 0 (i.e., report a match) in a multi-byte locale, even though there was no match, and the command generated no output," and that "sort -u could fail to output one or more result lines" or read freed memory.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

MariaDB: Disappearing test cases or did another part of MySQL just become closed source?

The MariaDB blog reports that Oracle has stopped bundling new test cases with the MySQL source. Evidently the revision history is also no longer public. "MySQL test cases were always an important part of the MySQL source tree. They were particularly useful for storage engine developers and for other people extending MySQL, for example, at Facebook, Twitter, and Taobao. But also for Linux distributions which add their patches to the base MySQL, and even to users, who don’t modify the sources — they still want to confirm that a particular bug was fixed or that their custom-built binary has no obvious flaws."

Comments (39 posted)

Baron: The bug system I wish I had

On his blog, Mozilla's David Baron describes a number of changes he would like to see in Bugzilla functionality, including several ideas about tracking bug metadata (such as what needs to be done next, workarounds, and assessing the expected behavior). "One of the difficult aspects of designing something like this, however, is the tradeoff between the cost of maintaining metadata and the desire to get work done quickly. There are currently many bugs in Bugzilla that have a bunch of fields that are just left at their defaults (e.g., severity, priority), and in many cases that's fine because we don't have a need to maintain these fields. But once a bug gets complicated enough, it's useful to be able to keep the discussion organized."

Comments (4 posted)

Gey: Open Source Instruments: I give up

On his blog, Nils Gey laments the lack of open source instrument files. He was attempting to create music where all parts (tools and instruments) were freely available so that anyone could learn from and modify the music. "Until a commercial company release their old instruments as open source or some rich guy hires several audio technicians, a whole orchestra and software developers for approx. one year and then gives it all away for free I see nothing on the horizon here. And the Salamander Piano and G-Town are very good as well, even better as single instruments than Sonatina. But not all compositions are for 'Piano, Anvils, Stomps and Fake Glass Bowls'."

Comments (30 posted)

Page editor: Nathan Willis

Announcements

Brief items

Help Ken Starks and HeliOS

Ken Starks is the founder of the HeliOS project, an effort to get Linux computers into the hands of school children. Ken is ill (more information on the HeliOS blog), and in need of funds for medication and surgery. Thomas A. Knight has set up a donations campaign to help Ken out. Surplus funds will go to the HeliOS project.

Comments (none posted)

Baserock secret-volcano release and Slab hardware

Lars Wirzenius wrote in to call attention to two recent releases from his projects. The first is "secret volcano," the first stable release of Baserock, "a method and toolset for developing embedded Linux systems in a way that we believe is going to be much better than anything currently out there." The second is an ARM-based server product designed for Baserock development.

Full Story (comments: none)

Articles of interest

Kamp: A Generation Lost in the Bazaar

Here's a troll of sorts by Poul-Henning Kamp, posted to the ACM Queue site. "That is the sorry reality of the bazaar [Eric] Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT 'professionals' who wouldn't recognize sound IT architecture if you hit them over the head with it. It is hard to believe today, but under this embarrassing mess lies the ruins of the beautiful cathedral of Unix, deservedly famous for its simplicity of design, its economy of features, and its elegance of execution." Perhaps it's just venting by somebody who got left behind, but perhaps he has a point: are we too focused on the accumulation of features at the expense of the design of the system as a whole?

Comments (190 posted)

New Books

Super Scratch Programming Adventure!--New from No Starch Press

No Starch Press has released "Super Scratch Programming Adventure!" "This book is a translation from a Chinese edition by the Learning through Engineering, Art, and Design (LEAD) Project, an educational initiative established to encourage the development of creative thinking through the use of technology."

Full Story (comments: none)

Calls for Presentations

PyCon ZA 2012 - Call for Speakers

PyCon ZA will take place October 4-5 in Cape Town, South Africa. The call for proposals deadline is September 15.

Full Story (comments: none)

Upcoming Events

LPI supports Software Freedom Day in Kenya with Linux Essentials

The Linux Professional Institute (LPI) and LPI-East Africa will host a "Linux Essentials" exam lab during Software Freedom Day, September 1, 2012 in Nairobi, Kenya. "LPI-East Africa is also co-sponsoring an "Open Source Developer Challenge" during the event with the Linux Professional Association of Kenya."

Full Story (comments: none)

DjangoCon Keynotes Announced

DjangoCon will be held September 3-8, 2012 in Washington, D.C. The keynote speakers have been announced. "The conference, themed "the Django community and ecosystem," showcases an array of tutorials, two tracks of talks over three days, lightning talks and a development sprint on topics such as creating dynamic applications, debugging live python web apps, internationalization, PostgreSQL and design tips."

Full Story (comments: none)

PyCon Ireland 2012

PyCon Ireland 2012 will take place October 13-14 in Dublin, Ireland. A list of speakers and talks is now available. They are looking for volunteers to help with sprints, workshops and tutorials.

Full Story (comments: none)

LCA2013 Cloud Computing Miniconfs Announced

LCA (linux.conf.au) will take place January 28-February 2, 2013 in Canberra, Australia. The first three miniconferences will be "Cloud Infrastructure, Distributed Storage and High Availability", "OpenStack" and "MobileFOSS".

Full Story (comments: none)

Events: August 23, 2012 to October 22, 2012

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
August 25 | Debian Day 2012 Costa Rica | San José, Costa Rica
August 27-28 | GStreamer conference | San Diego, CA, USA
August 27-29 | Kernel Summit | San Diego, CA, USA
August 27-28 | XenSummit North America 2012 | San Diego, CA, USA
August 28-30 | Ubuntu Developer Week | IRC
August 29-31 | 2012 Linux Plumbers Conference | San Diego, CA, USA
August 29-31 | LinuxCon North America | San Diego, CA, USA
August 30-31 | Linux Security Summit | San Diego, CA, USA
August 31-September 2 | Electromagnetic Field | Milton Keynes, UK
September 1 | Panel Discussion Indonesia Linux Conference 2012 | Malang, Indonesia
September 1-2 | Kiwi PyCon 2012 | Dunedin, New Zealand
September 1-2 | VideoLAN Dev Days 2012 | Paris, France
September 3-8 | DjangoCon US | Washington, DC, USA
September 3-4 | Foundations of Open Media Standards and Software | Paris, France
September 4-5 | Magnolia Conference 2012 | Basel, Switzerland
September 8-9 | Hardening Server Indonesia Linux Conference 2012 | Malang, Indonesia
September 10-13 | International Conference on Open Source Systems | Hammamet, Tunisia
September 14-16 | Debian Bug Squashing Party | Berlin, Germany
September 14-21 | Debian FTPMaster sprint | Fulda, Germany
September 14-16 | KPLI Meeting Indonesia Linux Conference 2012 | Malang, Indonesia
September 15-16 | PyTexas 2012 | College Station, TX, USA
September 15-16 | Bitcoin Conference | London, UK
September 17-19 | Postgres Open | Chicago, IL, USA
September 17-20 | SNIA Storage Developers' Conference | Santa Clara, CA, USA
September 18-21 | SUSECon | Orlando, Florida, US
September 19-21 | 2012 X.Org Developer Conference | Nürnberg, Germany
September 19-20 | Automotive Linux Summit 2012 | Gaydon/Warwickshire, UK
September 21 | Kernel Recipes | Paris, France
September 21-23 | openSUSE Summit | Orlando, FL, USA
September 24-25 | OpenCms Days | Cologne, Germany
September 24-27 | GNU Radio Conference | Atlanta, USA
September 27-28 | PuppetConf | San Francisco, US
September 27-29 | YAPC::Asia | Tokyo, Japan
September 28 | LPI Forum | Warsaw, Poland
September 28-30 | Ohio LinuxFest 2012 | Columbus, OH, USA
September 28-30 | PyCon India 2012 | Bengaluru, India
September 28-October 1 | PyCon UK 2012 | Coventry, West Midlands, UK
October 2-4 | Velocity Europe | London, England
October 4-5 | PyCon South Africa 2012 | Cape Town, South Africa
October 5-6 | T3CON12 | Stuttgart, Germany
October 6-8 | GNOME Boston Summit 2012 | Cambridge, MA, USA
October 11-12 | Korea Linux Forum 2012 | Seoul, South Korea
October 12-13 | Open Source Developer's Conference / France | Paris, France
October 13-14 | Debian BSP in Alcester (Warwickshire, UK) | Alcester, Warwickshire, UK
October 13 | 2012 Columbus Code Camp | Columbus, OH, USA
October 13-14 | PyCon Ireland 2012 | Dublin, Ireland
October 13-14 | Debian Bug Squashing Party in Utrecht | Utrecht, Netherlands
October 13-15 | FUDCon:Paris 2012 | Paris, France
October 15-18 | OpenStack Summit | San Diego, CA, USA
October 15-18 | Linux Driver Verification Workshop | Amirandes, Heraklion, Crete
October 17-19 | LibreOffice Conference | Berlin, Germany
October 17-19 | MonkeySpace | Boston, MA, USA
October 18-20 | 14th Real Time Linux Workshop | Chapel Hill, NC, USA
October 20-21 | PyCon Ukraine 2012 | Kyiv, Ukraine
October 20-21 | Gentoo miniconf | Prague, Czech Republic
October 20-21 | PyCarolinas 2012 | Chapel Hill, NC, USA
October 20-21 | LinuxDays | Prague, Czech Republic
October 20-23 | openSUSE Conference 2012 | Prague, Czech Republic

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds