At GUADEC, the "GNOME OS" concept was discussed off and on several times during the course of the week. The first mention of the subject came in a talk from Igalia's Juan José Sanchez Penas and Xan Lopez on the first day of the event. Their talk A bright future for GNOME dealt largely with what the GNOME project needed to do to address the mobile and embedded space. In that context, GNOME's current build and release system — which is focused solely on the desktop computing experience — offers nothing for mobile device makers to build on.
But, they said, if GNOME were to start producing a bootable OS image as one of its "deliverables," device makers would have a starting point that they could adapt to their own hardware. Although they did not provide specifics, they said that Igalia has spoken to mobile device makers who are not satisfied with the current market offering of Apple's iOS and Google's Android. GNOME has already done a lot of design and technical work to make GNOME Shell and other components touch-screen capable, they observed, but it remains bound to traditional PC hardware. A mobile-friendly GNOME would have a leg up on competing open source projects like Tizen, webOS, and Firefox OS, which have all had to "start from scratch." Their definition of "scratch" is not entirely clear, but it is certainly common for new Linux-based mobile platforms to write their own applications and supporting frameworks.
Although Sanchez and Lopez spoke of the benefits of having an installable GNOME for use as a base platform for mobile device makers, that was not the only reason the GNOME OS buzzword came up over the course of the week. The other — and perhaps more frequently-raised — issue is that GNOME has essentially never been presented as an end-user ready product. The cause is clear enough; as Colin Walters discussed in his talk, most Linux users get their software through a distribution's package manager. The trouble from GNOME's perspective is that distribution packages are typically delivered six months after GNOME drops its stable release, so when bug reports arrive they are almost a full development cycle behind. In addition, every distribution makes enough changes that whatever bug reports users do send in are difficult to triage and diagnose.
Making a bootable GNOME image one of the pieces in each GNOME release would allow users to try the unaltered packages sooner, and provide faster and better feedback to the project. It would also allow GNOME to develop an SDK for application developers who are interested in writing distribution-neutral GNOME code. Sanchez and Lopez proposed setting an "ambitious plan for 3.8 through 3.12" that would culminate in a mobile-capable release for GNOME 4.0. That time frame equates to two years using GNOME's current release schedule — not immediate, but not too far off to plan. Post-4.0, they proposed planning a GNOME SDK and working on application distribution channels and other components that a mobile GNOME ecosystem might require.
Allan Day addressed both the improved-testing-and-feedback rationale and the improve-GNOME-for-application-developers goal on his blog shortly after GUADEC. Nevertheless, there are still those who conflate the plan with a desire to transform GNOME into a full-fledged Linux distribution, a confusion that was evident in audience members' questions at GUADEC, too. It ought to be clear that GNOME would need to add a significant number of developers (not to mention packagers and infrastructure) to support a complete distribution, but perhaps "GNOME OS" is simply a poor choice of terminology. Sanchez and Lopez did refer to GNOME OS as a "distribution" in their talk, but when an audience member asked about it, they clarified that use of the term was a slip not meant to be taken absolutely.
Admittedly, there are those in and around the GNOME project who have more ambitious goals (Lennart Poettering had a session I was unable to attend that dealt with integrating GNOME components more directly with the kernel), but they are the exception. At its core, the idea is really about bridging the existing gap between the project and its users — as well as the gap between the project and application developers — in order to collaborate better with them. Given the number of times in recent years that the project has run into end-user backlash over design changes (in particular those instances that seem to revolve around a perceived lack-of-responsiveness to feedback), that would seem to be an admirable goal.
But the GNOME OS discussion has a subtext, which is the perception that GNOME as a project no longer has a long-term goal. On the one hand, that means that the original goal of producing a quality free software desktop environment has largely (or perhaps even completely) succeeded. But it also means that GNOME as a project is searching for a new target. There are plenty of people who feel that mobile devices are the answer; others (like Lionel Dricot) contend that online services are the new frontier, while still others (like Eitan Isaacson) believe that targeting high-end workstation users is best.
The vision question also arose at the GNOME Foundation general meeting, which kicked off with the Release Team asking attendees what they wanted the Release Team's role to be. Specifically, the team asked whether or not the project ought to have a Technical Board to set the long-term vision and to make technical decisions. The team reported that it felt like some members of the project expected it to fill such a role, but that driving development was not its mission.
The resulting discussion was an interesting one; GNOME's culture has been "individual maintainers rule their modules" for a long time, but that concept does not extend well into a long-term roadmap. Bastien Nocera pointed out that in years past, it was often a single individual — such as Jeff Waugh — who either set or articulated the vision for GNOME. Since Waugh's departure, no one has replaced him in that function, although Nocera pointed out that Jon McCann's UI demos have served as a de facto substitute in recent years.
Others pointed out that while vision was an important topic on its own, practical matters still dominate, such as making the final call on which version of Python to support. A Technical Board should make such a decision (which affects many modules), but it is hardly a matter of "vision." Clearly individual GNOME developers are producing high-quality work and driving the project forward. But focusing that energy, whether into GNOME OS or toward another goal, is a task that the project is still working out.
[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC. The event was deftly organized and smoothly run from start to finish, sported a universally high-caliber program, and drew an enthusiastic crowd at every turn. Plus, as the photographs in the story above hint, A Coruña was an inspiringly scenic location in which to spend a week discussing and learning about open source. Thanks to everyone who put in time and energy making the conference a success.]
Dan Luedtke's answer is "yes"; he has implemented a new filesystem called "Lanyard" (or "LanyFS") intended for use on removable devices. He claims better performance and scalability than VFAT along with a native Linux implementation. The code shows its early-stage nature — there are a lot of things that would need to be fixed before it could be considered for inclusion into the mainline kernel — but the mainline is clearly where Dan would like it to go. The rest of the development community is not entirely convinced that we need a new filesystem for this use case, though.
The first question is: why not stick with VFAT? For all of its troubles, it has worked well enough for a long time. The biggest motivator for a change, arguably, is the 4GB limit on file size. One can deal with poor performance, especially when the real bottleneck is likely to be the device itself. But if one wants to store a sufficiently large file on the device, VFAT will simply fail. Such files are increasingly common, so users are running into this problem. The exFAT filesystem format is held out as an alternative, but it is far more proprietary than VFAT. Given that VFAT has already been the subject of lawsuits, vendors will think carefully before switching to exFAT; Sharp has licensed the filesystem for Android devices, but there do not appear to be a whole lot of other takers at this time.
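The 4GB ceiling is a direct consequence of the on-disk format: FAT32 directory entries record a file's size in an unsigned 32-bit field, so no single file can exceed 2³² − 1 bytes. A quick illustrative check (the DVD-image figure is just a representative example of a file that is too large):

```python
# FAT32 stores each file's size in an unsigned 32-bit field,
# capping any single file just below 4 GiB.
MAX_FAT32_FILE = 2**32 - 1           # 4294967295 bytes
print(MAX_FAT32_FILE / 2**30)        # just under 4.0 GiB

# A single-layer DVD image (roughly 4.7 GB) already will not fit:
dvd_image = 4_700_000_000
print(dvd_image > MAX_FAT32_FILE)    # True
```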
Given increasing networking speeds, one could certainly consider just using the network to move a file that is too large for VFAT. On a local network this approach might well be faster than using a removable drive. Setting up network transfers is not always easy, though; most computers are, by default, configured in ways that do not allow random strangers to dump large files on their drives. Getting around that obstacle is likely to be too much even for moderately skillful users. Use of a third-party site to transfer files is workable when the files are small; even if it is possible for very large files, it's not something that will be tolerably fast on most networks.
Removable drives, instead, are easy, so the "sneakernet" approach to file transfer is likely to stay with us for some time. Does that mean that we need a new filesystem format to better support this use? Filesystem developer Ted Ts'o thinks not:
That is an interesting thought: Linux is now strong and prevalent enough that we can simply expect the industry to pick up our way of doing things. That approach has not always worked out in the past, but things might truly be different this time around. Increasingly, devices like music players, handsets, and digital cameras run Linux internally; these gadgets already are, to a first approximation, removable storage devices with a bit of extra hardware. Other devices, such as televisions, also tend to run Linux internally. Supporting a native Linux filesystem on these devices should be a relatively easy thing to do. It would be faster (assuming the underlying storage isn't severely optimized for VFAT only), more feature-rich, and lacking in patent aggressors. There is very little, in other words, not to like.
Well, there would be a few small problems. There are still some pesky users out there with non-Linux systems that might want to access the filesystems on their devices. In many cases, the increasing use of the MTP protocol could sidestep that question altogether; indeed, recent MTP-using Android devices are likely using it to export an ext4 filesystem. There would still be cases where users on these other platforms would want to mount filesystems directly, though, especially on pure storage devices; bringing proper implementations of Linux filesystems to those platforms is, evidently, not as easy as one might think.
Filesystems like ext4 also were not designed with removable devices in mind. They tend not to be all that robust against unexpected removal of the device unless fairly aggressive flushing of data is used (in fairness, VFAT filesystems are also easily corrupted that way). The file ownership model used by Linux filesystems tends not to translate well to removable devices, since one system's user IDs typically have no meaning elsewhere. So something like the user and group mount options patch may be required to make things work well. Most Linux filesystems have not been designed around the very large pages and erase blocks used on flash devices and, thus, do not perform as well as they could; see this article for lots of details. These are issues that can be worked out, certainly, but they remain in need of working out at this time.
There is one other complication: according to Arnd Bergmann there is another filesystem waiting in the wings:
Needless to say, such an entry has the potential to stir things up a bit. A filesystem designed with input from both "a major flash vendor" and a developer like Arnd should work well indeed on small removable devices and should be well integrated into Linux. This manufacturer could also employ the "include a windows driver in a small partition on the device" trick, making interoperability with most Windows systems Just Work. Putting the filesystem code into the Linux kernel would make support readily available on mobile devices. This scheme might just succeed.
So what we may see is not Linux pushing one of its native filesystem formats onto the world. Instead, the world might just adopt a new format that happened to be well supported in Linux first. That could be the best of all worlds: we would have a way to interoperate on removable drives that is free, scalable, and widely supported. Getting there may well be worth the trouble of adding yet another filesystem type.

News outlets report that Google has filed a case against Apple with the US International Trade Commission.
In short, Google is trying to use seven of its patents (just acquired from Motorola Mobility) to block the import of Apple's products into the US. Those of us who fear the effect of software patents on free software might be forgiven for feeling that it is only just for Apple to be on the receiving end of the sort of attacks it has launched against Android. But Google's transformation into a patent aggressor may not bode well in the long term, regardless of how the current cases end up.
So what is Google claiming? The seven patents asserted against Apple are:
(Credit is due to Florian Mueller, who found and posted the specific patents at issue).
As is so often the case, there is not much in these patents that appears to be particularly novel or worthy of protection. Once one concludes that a particular problem (moving video playback from the handset to the television, say) is in need of solution, the form of the solution becomes fairly obvious. The patents asserted by Apple against Android seem trivial, but it is hard to come up with a way to say that Google's patents are less so.
If one is concerned about attacks against Android and other platforms based on free software, one might be tempted to hope that Google will find some success against Apple and, in so doing, deter further attacks on the platform. The mobile patent wars could be declared to be a draw, and the companies involved could get back to their real business: running on the consumer electronics product treadmill and trying to create better products to sell to their customers. Barring real reform of the patent laws in the US, that might well be a best-case outcome.
What seems more likely, though, is that the companies involved, having shown that they can make each other hurt, will come to some sort of understanding involving the sharing of patents and, perhaps, the passing of undisclosed amounts of cash between some of the parties. Such an agreement would presumably make the world safer for Android and for at least some of the manufacturers who use Android in their products. But it's not at all clear that the situation would improve for free software as a whole, or for anybody who is outside of this agreement and who wants to break into the mobile market.
A worst-case scenario could involve Google asserting these patents (and others from the massive pile it acquired from Motorola) against devices based on Tizen, Nemo, Firefox OS, or other free platforms. Unlike some companies, Google has not pledged not to attack free software projects with its patents. Such an attack would certainly be widely considered to be "evil," but the sad fact is that, in an extended fight, one tends to become more like one's enemy. Having found that it can further its goals with patent attacks (assuming that is, indeed, the outcome), Google may find it hard to resist making more of them in the future.
In the end, that may be the environment we are stuck with until the software patent situation can be addressed. Until then, it will be impossible to achieve a certain level of success in the software area and not be subject to patent attacks, either from trolls or from competitors. Given the nature of the game, it is hard to fault Google for playing hardball. Hopefully, the company's recent suggestions that software patents should be eliminated entirely are sincere and we are not witnessing the birth of another patent problem.
When a system is compromised, the attackers may try to cover their tracks so that the administrator is not alerted to the attack. One way for an attacker to hide is by removing log file entries that might lead an administrator (or a log file analyzer) to notice. A new feature in the systemd journal, "forward secure sealing" (FSS) is meant to detect log file tampering.
Traditionally, administrators have written log files to external systems across the network or to a local printer—though paper is notoriously hard to grep—to defeat log file tampering. As long as the other system is not compromised, and log file lines are written immediately, an attacker can't help but leave their "fingerprints" behind. But FSS provides a way to at least detect tampering using only a single system, though it won't provide all of the assurances that external logging can.
Systemd developer Lennart Poettering announced FSS on August 20. The basic idea is that the binary logs handled by the systemd journal can be "sealed" at regular time intervals. That seal is a cryptographic operation on the log data such that any tampering prior to the seal can be detected. So long as a sealing operation happens before the attacker gets a chance to tamper with the logs, their fingerprints will be sealed with the rest of the log data. They can still delete the log files entirely, but that is likely to be noticed as well.
The algorithm for FSS is based on "Forward Secure Pseudo Random Generators" (FSPRG), which comes from some post-doctoral research by Poettering's brother Bertram. The paper on FSPRG has not been published but will be soon, according to (Lennart) Poettering.
The announcement on Google+ and its long comment thread do give some details, however. FSS is based on two keys that are generated using:
journalctl --setup-keys

One key is the "sealing key," which is kept on the system, and the other is the "verification key," which should be securely stored elsewhere. Using the FSPRG mechanism, a new sealing key is generated periodically using a non-reversible process. The old key is then securely deleted from the system after the change.
The verification key can be used to calculate the sealing key for any given time range. That means that the attacker can only access the current sealing key (which will presumably be used for the next sealing operation), while the administrator can reliably generate any sealing key to verify previous log file seals. Changing log file entries prior to the last seal will result in a verification failure.
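The actual FSPRG construction is more sophisticated, but the forward-secure property can be illustrated with a simple hash-chain model. Everything here (the key values, the epoch scheme, the use of HMAC as the seal) is an illustrative simplification, not systemd's real implementation:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """Derive the next epoch's sealing key. SHA-256 is one-way, so an
    attacker holding the current key cannot recover earlier ones."""
    return hashlib.sha256(b"evolve" + key).digest()

def sealing_key(verification_key: bytes, epoch: int) -> bytes:
    """The verifier, who holds the initial (verification) key, can
    re-derive the sealing key for any epoch by walking forward."""
    k = verification_key
    for _ in range(epoch):
        k = evolve(k)
    return k

def seal(key: bytes, log_data: bytes) -> bytes:
    """Model a seal as a MAC over the epoch's log data."""
    return hmac.new(key, log_data, hashlib.sha256).digest()

# A stand-in for a randomly generated verification key.
vkey = b"\x01" * 32

# The system seals epoch 5's log entries with that epoch's key,
# then evolves the key and securely discards the old one.
k = sealing_key(vkey, 5)
tag = seal(k, b"log entries for epoch 5")

# Later, the administrator re-derives the epoch-5 key from the
# safely stored verification key and checks the seal.
expected = seal(sealing_key(vkey, 5), b"log entries for epoch 5")
assert hmac.compare_digest(tag, expected)

# Any tampering with the sealed entries fails verification.
tampered = seal(sealing_key(vkey, 5), b"log entries for epoch 5 (edited)")
assert not hmac.compare_digest(tag, tampered)
```

An attacker who compromises the system during epoch 6 holds only the epoch-6 key; because `evolve()` cannot be run backward, the seals over epochs 0 through 5 remain verifiable.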
As a bell—or perhaps a whistle—the key generator can create a QR code of the verification key, which can be scanned so that the key doesn't have to be typed in.
Anything that happens after the system is compromised is under control of the attacker, as was pointed out multiple times in the comments. That means that local logs cannot be relied on after that point, but it also applies to remotely stored—or even printed—log files. The latter two methods do protect against an attacker simply deleting the local log files, though.
By default, FSS will seal the logs every 15 minutes, but that can be changed at key generation time with a flag: "--interval=10s" for example. The system clock time is used in the generation of each new sealing key, which is why the interval must be specified when the keys are generated. The default value surprisingly leaves a rather large window for an attacker who immediately turns to altering the log file, though. One also wonders if subtle (or not so subtle) manipulations of the system clock might be a way to subvert or otherwise interfere with the key generation.
Securely deleting the old sealing key is handled by setting the FS_SECRM_FL and FS_NOCOW_FL file attributes, which may or may not be implemented by the underlying filesystem. That could potentially lead to leaks of previous sealing keys, which would allow an attacker to make changes to earlier entries. Obviously, losing control of the verification key means that all bets are off as well.
The code is available already in the systemd Git repository. Poettering notes that it will also be available in Fedora 18.
FSS is an interesting feature that will likely prove useful for some administrators. It certainly doesn't solve all of the problems with detecting attackers or compromised systems, but it could definitely help by raising red flags. There is more to do, of course, starting with a security audit of the code—more eyes can only be helpful in ferreting out any holes in the algorithm or implementation. Once that's done, administrators can feel more confident that their log files aren't undetectably changing out from under them—at least if they are using the systemd journal.
Created: August 16, 2012; Updated: January 10, 2013
From the Slackware advisory:
Patched to fix a security flaw in the file-local variables code. When the Emacs user option `enable-local-variables' is set to `:safe' (the default value is t), Emacs should automatically refuse to evaluate `eval' forms in file-local variable sections. Due to the bug, Emacs instead automatically evaluates such `eval' forms. Thus, if the user changes the value of `enable-local-variables' to `:safe', visiting a malicious file can cause automatic execution of arbitrary Emacs Lisp code with the permissions of the user. Bug discovered by Paul Ling.
Created: August 16, 2012; Updated: August 23, 2012
From the Red Hat advisory:
Specially-crafted SWF content could cause flash-plugin to crash or, potentially, execute arbitrary code when a victim loads a page containing the malicious SWF content. (CVE-2012-1535)
Created: August 17, 2012; Updated: March 11, 2013
From the Red Hat advisory:
It was discovered that the GNU Debugger (gdb) would load untrusted files from the current working directory when .debug_gdb_scripts was defined. While this was a design decision, it is an insecure one and users who do not pre-inspect untrusted files may execute arbitrary code with their privileges.
Package(s): gimp; CVE #(s): CVE-2012-3403 CVE-2012-3481
Created: August 20, 2012; Updated: September 4, 2012
Description: From the Red Hat advisory:
A heap-based buffer overflow flaw was found in the GIMP's KiSS CEL file format plug-in. An attacker could create a specially-crafted KiSS palette file that, when opened, could cause the CEL plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2012-3403)
An integer overflow flaw, leading to a heap-based buffer overflow, was found in the GIMP's GIF image format plug-in. An attacker could create a specially-crafted GIF image file that, when opened, could cause the GIF plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2012-3481)
Package(s): gimp; CVE #(s): CVE-2012-3402 CVE-2009-3909
Created: August 20, 2012; Updated: September 28, 2012
Description: From the Red Hat advisory:
Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in the GIMP's Adobe Photoshop (PSD) image file plug-in. An attacker could create a specially-crafted PSD image file that, when opened, could cause the PSD plug-in to crash or, potentially, execute arbitrary code with the privileges of the user running the GIMP. (CVE-2009-3909, CVE-2012-3402)
Created: August 20, 2012; Updated: August 28, 2012
Description: From the Red Hat bugzilla:
Multiple integer overflows, leading to stack-based buffer overflows, were found in various stdlib functions of GNU libc (strtod, strtof, strtold, strtod_l and related routines). If an application, using the affected stdlib functions, did not perform user-level sanitization of provided inputs, a local attacker could use this flaw to cause such an application to crash or, potentially, execute arbitrary code with the privileges of the user running the application.
Package(s): glpi; CVE #(s): CVE-2012-4002 CVE-2012-4003
Created: August 16, 2012; Updated: August 30, 2012
From the Mandriva advisory:
Multiple cross-site request forgery (CSRF) and cross-site scripting (XSS) flaws have been found and corrected in GLPI (CVE-2012-4002, CVE-2012-4003).
Created: August 22, 2012; Updated: April 10, 2013
Description: From the Ubuntu advisory:
Tom Lane discovered that ImageMagick would not always properly allocate memory. If a user or automated system using ImageMagick were tricked into opening a specially crafted PNG image, an attacker could exploit this to cause a denial of service or possibly execute code with the privileges of the user invoking the program.
Created: August 22, 2012; Updated: August 22, 2012
Description: From the Debian advisory:
Sébastien Bocahu discovered that the reverse proxy add forward module for the Apache webserver is vulnerable to a denial of service attack through a single crafted request with many headers.
Created: August 21, 2012; Updated: August 22, 2012
Description: From the CVE entry:
virt/disk/api.py in OpenStack Compute (Nova) 2012.1.x before 2012.1.2 and Folsom before Folsom-3 allows remote authenticated users to overwrite arbitrary files via a symlink attack on a file in an image that uses a symlink that is only readable by root. NOTE: this vulnerability exists because of an incomplete fix for CVE-2012-3361.
Package(s): pcp; CVE #(s): CVE-2012-3418 CVE-2012-3419 CVE-2012-3420 CVE-2012-3421
Created: August 20, 2012; Updated: September 4, 2012
Description: From the Red Hat bugzilla entries:
 Florian Weimer of the Red Hat Product Security Team discovered multiple integer and heap-based buffer overflow flaws in PCP (Performance Co-Pilot) libpcp protocol decoding functions. These flaws could lead to daemon crashes or the execution of arbitrary code with root privileges. Many of these flaws can be exploited without requiring the attacker to be authenticated. (CVE-2012-3418)
 Florian Weimer of the Red Hat Product Security Team discovered that pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) exports part of the /proc file system, including privileged information that could be used to aid in bypassing ASLR, as well as full commandline information on running programs. (CVE-2012-3419)
 Florian Weimer of the Red Hat Product Security Team discovered two memory leaks in libpcp that can be abused by an unauthenticated remote attacker to crash pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) or to consume enough memory to trigger the OOM killer, which may have impact on other processes. (CVE-2012-3420)
 Florian Weimer of the Red Hat Product Security Team discovered a denial of service flaw in pmcd (the PCP (Performance Co-Pilot) performance metrics collector daemon) due to incorrect event-driven programming. Because the pduread() function in libpcp performs a select locally, waiting for more client data, an unauthenticated remote attacker could send individual bytes one by one, avoiding the timeout, and blocking pmcd in order to prevent it from responding to other legitimate requests. (CVE-2012-3421)
Created: August 17, 2012; Updated: August 29, 2012
From the phpMyAdmin advisory:
Using a crafted table name, it was possible to produce an XSS:
1) On the Database Structure page, creating a new table with a crafted name.
2) On the Database Structure page, using the Empty and Drop links of the crafted table name.
3) On the Table Operations page of a crafted table, using the 'Empty the table (TRUNCATE)' and 'Delete the table (DROP)' links.
4) On the Triggers page of a database containing tables with a crafted name, when opening the 'Add Trigger' popup.
5) When creating a trigger for a table with a crafted name, with an invalid definition.
Having crafted data in a database table, it was possible to produce an XSS:
6) When visualizing GIS data, having a crafted label name.
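The root cause in each of these cases is the familiar one: attacker-controlled strings (here, table names) emitted into HTML without escaping. A minimal sketch of the problem and the fix, with a hypothetical crafted name standing in for the advisory's examples:

```python
import html

# A hypothetical crafted table name of the sort the advisory describes.
table_name = '<script>alert(document.cookie)</script>'

# Interpolating it directly into markup reproduces the XSS:
unsafe = f'<a href="#">Drop table {table_name}</a>'
assert '<script>' in unsafe

# Escaping at output time neutralizes it:
safe = f'<a href="#">Drop table {html.escape(table_name)}</a>'
assert '<script>' not in safe
print(safe)
```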
Package(s): postgresql; CVE #(s): CVE-2012-3488 CVE-2012-3489
Created: August 20, 2012; Updated: September 28, 2012
Description: From the postgresql advisory:
This security release fixes a vulnerability in the built-in XML functionality, and a vulnerability in the XSLT functionality supplied by the optional XML2 extension. Both vulnerabilities allow reading of arbitrary files by any authenticated database user, and the XSLT vulnerability allows writing files as well. The fixes cause limited backwards compatibility issues.
Created: August 20, 2012; Updated: August 22, 2012
Description: From the Fedora advisory:
A flaw was found in the way Red Eclipse handled config files. In cube2-engine games, game maps can be transmitted either from the server to a client, or from client to client. These maps include a config file (mapname.cfg) in "cubescript" format, which allows for an attacker to send a malicious script via a new map. This map must either be chosen by an administrator on the server, or created in co-operative editing mode. A malicious script could then be used to read or write to any files that the user running the client has access to when the victim loads a map with the malicious configuration file.
Created: August 16, 2012; Updated: September 11, 2012
From the Debian advisory:
Henrik Erkkonen discovered that rssh, a restricted shell for SSH, does not properly restrict shell access.
Package(s): wireshark; CVE #(s): CVE-2012-4285 CVE-2012-4287 CVE-2012-4288 CVE-2012-4289 CVE-2012-4296 CVE-2012-4297 CVE-2012-4291 CVE-2012-4292 CVE-2012-4293 CVE-2012-4290
Created: August 16, 2012; Updated: December 26, 2012
From the Mandriva advisory:
Multiple vulnerabilities were found and corrected in Wireshark:
The DCP ETSI dissector could trigger a zero division (CVE-2012-4285).
The MongoDB dissector could go into a large loop (CVE-2012-4287).
The XTP dissector could go into an infinite loop (CVE-2012-4288).
The AFP dissector could go into a large loop (CVE-2012-4289).
The RTPS2 dissector could overflow a buffer (CVE-2012-4296).
The GSM RLC MAC dissector could overflow a buffer (CVE-2012-4297).
The CIP dissector could exhaust system memory (CVE-2012-4291).
The STUN dissector could crash (CVE-2012-4292).
The EtherCAT Mailbox dissector could abort (CVE-2012-4293).
The CTDB dissector could go into a large loop (CVE-2012-4290).
Created: August 20, 2012; Updated: September 14, 2012
Description: From the Debian advisory:
A guest kernel can cause the host to become unresponsive for a period of time, potentially leading to a DoS. Since an attacker with full control of the guest can impact the host, this vulnerability is considered high impact.
Page editor: Jake Edge
Brief items

The 3.6-rc3 prepatch was released on August 22. Linus says: "Shortlog appended, there's nothing here that makes me go 'OMG! Scary!' or makes me want to particularly mention it separately. All just random updates and fixes."
Previously, 3.6-rc2 was released on August 16. "Anyway, with all that said, things don't seem too bad. Yes, I ignored a few pull requests, but I have to say that there weren't all that many of those, and the rest looked pretty calm. Sure, there's 330+ commits in there, but considering that it's been two weeks, that's about expected (or even a bit low) for early -rc's. Yes, 3.5 may have been much less for -rc2, but that was unusual."
If we have carefully made a decision to inline a function, we should (now) use __always_inline. If we have carefully made a decision to not inline a function, we should use noinline. If we don't care, we should omit all such markings.
This leaves no place for "inline"?
Kernel development news
Interestingly, the scheduler did have power-aware logic from 2.6.18 through 3.4. There was a sysctl knob (sched_mc_power_savings) that would cause the scheduler to try to group runnable processes onto the smallest possible number of cores, allowing others to go idle. That code was removed in 3.5 because it never worked very well and nobody was putting any effort into improving it. The result was the removal of some rather unloved code, but it also left the scheduler with no power awareness at all. Given the level of interest in power savings in almost every environment, having a power-unaware scheduler seems less than optimal; it was only a matter of time until somebody tried to put together a better solution.
Alex Shi started off the conversation with a rough proposal on how power awareness might be added back to the scheduler. This proposal envisions two modes, called "power" and "performance," that would be used by the scheduler to guide its decisions. Some of the first debate centered around how that policy would be chosen, with some developers suggesting that "performance" could be used while on AC power and "power" when on battery power. But that policy entirely ignores an important constituency: data centers. Operators of data centers are becoming increasingly concerned about power usage and its associated costs; many of them are likely to want to run in a lower-power mode regardless of where the power is coming from. The obvious conclusion is that the kernel needs to provide a mechanism by which the mode can be chosen; the policy can then be decided by the system administrator.
The harder question is: what would that policy decision actually do? The old power code tried to cause at least some cores to go completely idle so that they could enter a sleep state. Alex's proposal takes a different approach: he claims that trying to idle a subset of the CPUs in the system is not going to save much power; instead, it is best to spread the runnable processes across the system as widely as possible and try to get to a point where all CPUs can go idle. That seems to be the best approach on x86-class processors, anyway. On that architecture, no processor can go into a deep sleep state unless all of them do; having even a single processor running will keep the others in a less efficient sleep state. A single running processor also keeps associated hardware — the memory controller, for example — in a powered-up state. The first CPU is by far the most expensive one; bringing in additional CPUs has a much lower incremental cost.
So the general rule seems to be: keep all of the processors busy as long as there is work to be done. This approach should lead to the quickest processing and best cache utilization; it also gives the best power utilization. In other words, the best policy for power savings looks a lot like the best policy for performance. That conclusion came as a surprise to some, but it makes some sense; as Arjan van de Ven put it:
So why bother with multiple scheduling modes in the first place? Naturally enough, there are some complications that enter this picture and make it a little bit less neat. The first of these is that spreading load across processors only helps if the new processors are actually put to work for a substantial period of time, for values of "substantial" around 100μs. For any shorter period, the cost of bringing the CPU out of even a shallow sleep exceeds the savings gained from running a process there. So extra CPUs should not be brought into play for short-lived tasks. Properly implementing that policy is likely to require that the kernel gain a better understanding of the behavior of the processes running in any given workload.
There is also still scope for some differences of behavior between the two modes. In a performance-oriented mode, the scheduler might balance tasks more aggressively, trying to keep the load the same on all processors. In a power-savings mode, processes might stay a bit more tightly packed onto a smaller number of CPUs, especially processes that have an observed history of running for very short periods of time.
But the conversation has, arguably, only barely touched on the biggest complication of all. There was a lot of talk of what the optimal behavior is for current-generation x86 processors, but that is far from the only environment in which Linux runs. ARM processors have a complex set of facilities for power management, allowing much finer control over which parts of the system have power and clocks at any given time. The ARM world is also pushing the boundaries with asymmetric architectures like big.LITTLE; figuring out the optimal task placement for systems with more than one type of CPU is not going to be an easy task.
The problem is thus architecture-specific; optimal behavior on one architecture may yield poor results on another. But the eventual solution needs to work on all of the important architectures supported by Linux. And, preferably, it should be easily modifiable to work on future versions of those architectures, since the way to get the best power utilization is likely to change over time. That suggests that the mechanism currently used to describe architecture-specific details to the scheduler (scheduling domains) needs to grow the ability to describe parameters relevant to power management as well. An architecture-independent scheduler could then use those parameters to guide its behavior. That scheduler will also need a better understanding of process behavior; the almost-ready per-entity load tracking patch set may help in this regard.
Designing and implementing these changes is clearly not going to be a short-term job. It will require a fair amount of cooperation between the core scheduler developers and those working on specific architectures. But, given how long we have been without power management support in the scheduler, and given that the bulk of the real power savings are to be had elsewhere (in drivers and in user space, for example), we can wait a little longer while a proper scheduler solution is worked out.
The idea behind LTO is to examine the entire program after the individual files have been compiled and exploit any additional optimization opportunities that appear. The most significant of those opportunities appears to be the inlining of small functions across object files. The compiler can also be more aggressive about detecting and eliminating unused code and data. Under the hood, LTO works by dumping the compiler's intermediate representation (the "GIMPLE" code) into the resulting object file whenever a source file is compiled. The actual LTO stage is then carried out by loading all of the GIMPLE code into a single in-core image and rewriting the (presumably) further-optimized object code.
The LTO feature first appeared in GCC 4.5, but it has only really started to become useful in the 4.7 release. It still has a number of limitations; one of those is that all of the object files involved must be compiled with the same set of command-line options. That limitation turns out to be a problem with the kernel, as will be seen below.
Andi's LTO patch set weighs in at 74 changesets — not a small or unintrusive change. But it turns out that most of the changes have the same basic scope: ensuring that the compiler knows that specific symbols are needed even if they appear to be unused; that prevents the LTO stage from optimizing them away. For example, symbols exported to modules may not have any callers in the core kernel itself, but they need to be preserved for modules that may be loaded later. To that end, Andi's first patch defines a new attribute (__visible) used to mark such symbols; most of the remaining patches are dedicated to the addition of __visible attributes where they are needed.
Beyond that, there is a small set of fixes for specific problems encountered when building kernels with LTO. It seems that functions with long argument lists can get their arguments corrupted if the functions are inlined during the LTO stage; avoiding that requires marking the functions noinline. Andi complains "I wish there was a generic way to handle this. Seems like a ticking time bomb problem." In general, he acknowledges the possibility that LTO may introduce new, optimization-related bugs into the kernel; finding all of those could be a challenge.
Then there is the requirement that all files be built with the same set of options. Current kernels are not built that way; different options are used in different parts of the tree. In some places, this problem can be worked around by disabling specific optimizations that depend on different compiler flags than are used in the rest of the kernel. In others, though, features must simply be disabled to use LTO. These include the "modversions" feature (allowing kernel modules to be used with more than one kernel version) and the function tracer. Modversions seems to be fixable; getting ftrace to work may require changes to GCC, though.
It is also necessary, of course, to change the build system to use the GCC LTO feature. As of this writing, one must have a current GCC release; it is also necessary to install a development version of the binutils package for LTO to work. Even a minimal kernel requires about 4GB of memory for the LTO pass; an "allyesconfig" build could require as much as 9GB. Given that, the use of 32-bit systems for LTO kernel builds is out of the question; it is still possible, of course, to build a 32-bit kernel on a 64-bit system. The build will also take between two and four times as long as it does without LTO. So developers are unlikely to make much use of LTO for their own work, but it might be of interest to distributors and others who are building production kernels.
The fact that most people will not want to do LTO builds actually poses a bit of a problem. Given the potential for LTO to introduce subtle bugs, due either to optimization-related misunderstandings or simple bugs in the new LTO feature itself, widespread testing is clearly called for before LTO is used for production kernels. But if developers and testers are unwilling to do such heavyweight builds, that testing may be hard to come by. That will make it harder to achieve the level of confidence that will be needed before LTO-built kernels can be used in real-world settings.
Given the above challenges, the size of the patch set, and the ongoing maintenance burden of keeping LTO working, one might well wonder if it is all worth it. And that comes down entirely to the numbers: how much faster does the kernel get when LTO is used? Hard numbers are not readily available at this time; the LTO patch set is new and there are still a lot of things to be fixed. Andi reports that runs of the "hackbench" benchmark gain about 5%, while kernel builds don't change much at all. Some networking benchmarks improve as much as 18%. There are also some unspecified "minor regressions." The numbers are rough, but Andi believes they are encouraging enough to justify further work; he also expects the LTO implementation in GCC to improve over time.
Andi also suggests that, in the long term, LTO could help to improve the quality of the kernel code base by eliminating the need to put inline functions into include files.
All told, this is a patch set in a very early stage of development; it seems unlikely to be proposed for merging into a near-term kernel, even as an experimental feature. In the longer term, though, it could lead to faster kernels; use of LTO in the kernel could also help to drive improvements in the GCC implementation that would benefit all projects. So it is an effort that is worth keeping an eye on.
In this edition of "ask a kernel developer", I answer a multi-part question about kernel subsystem maintenance from a new maintainer. The workflow that I use to handle patches in the USB subsystem is used as an example to hopefully provide a guide for those who are new to the maintainer role.
As always, if you have unanswered questions relating to technical or procedural issues in Linux kernel development, ask them in the comment section, or email them directly to me. I will try to get to them in another installment down the road.
I have some questions about what I am supposed to be doing at different points of the release cycle. -rc1 and -rc2 are spelled out in Documentation/HOWTO, and I have a decent idea that patches I accept should be smaller and fix more critical bugs as the -rcX's roll out. The big question is what do I do with all of the other patches that come at random times?
First off, thanks so much for agreeing to maintain a kernel subsystem. Without maintainers like you, the Linux kernel development process would be much more chaotic and hard to navigate. I will try to explain how I have set up my development workflow and how I maintain the different subsystems I am in charge of. That example can help you determine how you wish to manage your own development trees, and how to handle incoming patches from developers.
To answer the question: yes, you will receive patches at any point in the release cycle, but not all of them are appropriate to send to Linus at every point in the cycle. I'll go into more detail below, but for now, realize that, in my opinion, you should not require other developers to wait for different points in the release cycle; instead, you should hold onto patches and send them upstream when they are needed. I think it is the maintainer's job to do the buffering.
How best do I organize my pull-request branches so that developers know which they can pull as dependencies, and which are for-next? I don't want to over-organize it, but do want to make it easy for board submitters to test from my trees. Should my pull-request branches be long-lived, or should I kill them and create new ones after each cycle?
It's best to stick with a simple scheme for branches, work with that for a while, and then if you find that is too limiting, feel free to grow from there. I only have two branches in my git trees, one to feed to Linus for the current release cycle, and one that is for the next release cycle. This can be seen in the USB git tree on kernel.org, which shows three branches:
I receive patches from lots of different developers all the time. All patches, after they pass an initial "is this sane" glance, get copied to a mailbox that I call TODO. Every few days, depending on my workload, I go through the mailbox and pick out all of the patches that are to be applied to various trees I am responsible for. For this example, I'll search on anything that touches the USB tree and copy those messages to a temporary local mailbox on the filesystem called s (I name my local mailboxes for their ease of typing, not for any other good reason.)
After digging all of the USB patches out (which is really a simple filter for all threads that have the "drivers/usb" string in them), I take a closer look at the patches in the s mailbox.
First I look to find anything that would be applicable to Linus's current tree. This is usually a bug fix for something that was introduced during this merge window, or a regression for systems that were previously working just fine. I pick those out and save them to another temporary mailbox called s1.
Now it's time to start testing to see if the patches actually apply to the tree. I go into a directory that contains my usb tree and check to see what branch I am on:
    $ cd linux/work/usb
    $ git b
      master     6dab7ed Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm
    * usb-linus  8f057d7 gpu/mfd/usb: Fix USB randconfig problems
      usb-next   26f944b usb: hcd: use *resource_size_t* for specifying resource data
      work-linus 8f057d7 gpu/mfd/usb: Fix USB randconfig problems
      work-next  26f944b usb: hcd: use *resource_size_t* for specifying resource data

Note, I have the following aliases in my ~/.gitconfig file:
    [alias]
            dc = describe --contains
            fp = format-patch -k -M -N
            b = branch -v

This enables me to use git b to see the current branch more easily, git fp to format patches in the style I need them in, and git dc to determine exactly which release a specific git commit is contained in.
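What the dc alias buys is easiest to see on a throwaway repository (the repository, commit messages, and tag names here are all invented):

```shell
# Build a tiny history with two tagged "releases", then ask which
# release first contained a given commit.
git init -q dc-demo && cd dc-demo
git config user.email demo@example.com
git config user.name  demo
git commit -q --allow-empty -m "feature A"
git tag v1.0-rc1
git commit -q --allow-empty -m "fix for feature A"
git tag v1.0-rc2

# "git dc" is describe --contains: name the earliest tag that
# contains the commit (printed as v1.0-rc1, possibly with a suffix).
git describe --contains HEAD~1
```

On a real kernel tree, the same command answers "which -rc did this fix first ship in?" without hunting through the logs.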
As you can see by the list of branches, I have a local branch that mirrors the public versions of the usb-linus and usb-next branches called work-linus and work-next. I do the testing and development work in these local branches, and only when I feel they are "good enough" do I push them to the public facing branches and then out to kernel.org.
    $ git checkout work-linus
    Switched to branch 'work-linus'
Then a quick sanity check to verify that the patches in s1 really will apply to this tree (sadly, they often do not):
    $ p1 < ../s1
    patching file drivers/usb/core/endpoint.c
    patching file drivers/usb/core/quirks.c
    patching file drivers/usb/core/sysfs.c
    Hunk #2 FAILED at 210.
    1 out of 2 hunks FAILED -- saving rejects to file drivers/usb/core/sysfs.c.rej
    patching file drivers/usb/storage/transport.c
    patching file include/linux/usb/quirks.h
(Note, the p1 command is really:

    patch -p1 -g1 --dry-run

which I set up in my .alias file years ago, as I quickly got tired of typing the full thing out.)
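For those without the alias file, the same shortcut can be reproduced as a small shell function; here is a self-contained sketch against an invented one-line patch:

```shell
# A function equivalent of the p1 alias: a dry-run apply that
# reports what would happen without touching any files.
p1() { patch -p1 -g1 --dry-run "$@"; }

# Fabricate a tree and a patch against it to try the dry run on.
mkdir -p old new
printf 'int answer = 41;\n' > old/calc.c
printf 'int answer = 42;\n' > new/calc.c
diff -u old/calc.c new/calc.c > s1 || true   # diff exits 1 when files differ

cp old/calc.c calc.c   # -p1 strips the leading old/ and new/ components
p1 < s1                # reports whether the patch would apply; changes nothing
```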
Here is an example of patches that will not apply to the work-linus branch, but it turns out that this was my fault. They were generated against the linux-next branch, and really should be queued up for the next merge window, not for this release.
So, let's switch back to the work-next branch, as that is where the patches really belong:
    $ git checkout work-next
    Switched to branch 'work-next'

And see if they apply there properly:
    $ p1 < ../s1
    patching file drivers/usb/core/endpoint.c
    patching file drivers/usb/core/quirks.c
    patching file drivers/usb/core/sysfs.c
    patching file drivers/usb/storage/transport.c
    patching file include/linux/usb/quirks.h

Much better.
Then I look at the patches themselves again in my email client and edit anything that needs to be cleaned up. The changes could be in the Subject, the body of the patch, or anything else that needs to be touched up. With developers who send patches all the time, generally nothing needs changing in this area; with everyone else, unfortunately, I end up editing this type of "metadata" all the time.
After the patches look clean, and I've done a review of them again to verify that I don't notice anything strange or suspicious, I do one last sanity check by running the checkpatch.pl tool:
    $ ./scripts/checkpatch.pl ../s1
    total: 0 errors, 0 warnings, 73 lines checked

    ../s1 has no obvious style problems and is ready for submission.
All looks good, so let's apply the patches to the branch and see if the build works properly:
    $ git am -s ../s1
    Applying: usb/endpoint: Set release callback in the struct device_type \
        instead of in the device itself directly
    Applying: usb: convert USB_QUIRK_RESET_MORPHS to USB_QUIRK_RESET
    $ make -j8

If everything built, then it's time to test the patches. This can range from installing the changed kernel and ensuring that everything still works properly and the new modifications work as they say they should work, to doing nothing more than verifying that the build didn't break if I do not have the hardware that the changed driver controls.
After this, and everything looks sane, it's time to push the patches to the public kernel.org repository, as well as notifying the developer that their patch was applied to the tree and where they can find it. This I do with a script called do.sh that has grown over the years; it was originally based on a script that Andrew Morton uses to notify developers when he applies their patches. You can find a copy of it and the rest of the helper scripts I use for kernel development in my gregkh-linux GitHub tree.
The script does the following:
After this, people do sometimes find problems with patches that need to be fixed up. But, since my trees are public, I can't rebase them—otherwise any developer who had previously pulled my branches would get messed up. Instead, I sometimes revert patches, or apply fix-up patches on top of the current tree to resolve issues. It isn't the cleanest solution at times, but it is better to do this than rebase a public tree, which is something that no one should ever do.
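The revert-rather-than-rebase approach looks like this in miniature (the repository, file, and commit messages are invented for the example):

```shell
git init -q pub-demo && cd pub-demo
git config user.email demo@example.com
git config user.name  demo

echo "stable code" > driver.c
git add driver.c && git commit -qm "good change"
echo "regression" >> driver.c
git commit -qam "bad change"

# Published history stays intact: a new commit undoes the bad one,
# so anyone who already pulled the branch is not left on rewritten history.
git revert --no-edit HEAD
cat driver.c                 # back to "stable code" only
git log --oneline | wc -l    # three commits: good, bad, and the revert
```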
Hopefully, this description gives you an idea how you can manage your trees and the patches sent to you to make things easier for yourself, the linux-next maintainer, and any developer who relies on your tree.
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Page editor: Jonathan Corbet
While Debian has discussed systemd—and Upstart—over the past year or more, that's not the whole story: another potential init replacement has appeared on the debian-devel mailing list. OpenRC is a Gentoo Linux project that was proposed as an alternative to the venerable System V init (sysvinit) that is currently the Debian default. That proposal spawned a long thread, even by debian-devel standards, and a more recent revival of the topic is adding more to the discussion. Though OpenRC has some features that sysvinit lacks, it doesn't bring the number of new features that systemd or Upstart do, so it makes some in the Debian community wonder whether it makes sense to add yet another init replacement into the mix.
OpenRC developer Patrick Lauer suggested that Debian look at OpenRC back in April. It is, he said, a "modern, slim, userfriendly init system with minimal dependencies". It would add support for stateful services (e.g. only one instance will be running at a given time), and dependency-based init scripts, without requiring all of what something like systemd requires ("dbus? udev? on my server?! and you expect a linux 3.0+ kernel? waaah!"). It would be a step up from sysvinit, while still in keeping with the "Unix way". In addition, it supports both Linux and the BSDs, which would eliminate one of the bigger gripes against systemd.
But an incremental improvement to init is not what some are looking for. To many, sysvinit and other shell-script-based solutions have not kept up with the changing hardware and kernel environment, so an event-based init is the right way forward. As Arto Jantunen put it:
As might be expected, there are plenty of folks who don't quite see things that way. While there are vocal advocates of systemd—and rather less vocal Upstart advocates—there are numerous opponents as well. OpenRC might provide something of a middle ground as Roger Leigh described:
To that end, Leigh started looking more closely at OpenRC, with an eye toward packaging it for Debian. One problem that he noted early on was the lack of support for LSB dependencies in the init scripts. The LSB headers are comments that specify the runtime dependencies for each init script. OpenRC has its own dependency system, but Leigh believed that LSB dependency handling could be added to OpenRC.
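For reference, the LSB headers are a comment stanza at the top of an otherwise ordinary init script, along these lines (the daemon here is hypothetical):

```shell
cat > mydaemon.init <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Required-Stop:     $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Hypothetical example daemon
### END INIT INFO
case "$1" in
    start) echo "Starting mydaemon" ;;
    stop)  echo "Stopping mydaemon" ;;
esac
EOF

# Dependency-aware rc systems parse only the comment stanza; to the
# shell itself the stanza is inert and the script runs as usual:
sh mydaemon.init start
```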
Over the intervening months, that is exactly what happened. On August 9, Benda Xu posted an intent to package (ITP) for OpenRC, which restarted the discussion. Leigh noted that Xu had gotten OpenRC to work with the LSB-based Debian init scripts, so that it could be a replacement for the sysv-rc package (which handles changing runlevels, starting and stopping services, and so on), while still using the init and scripts provided by sysvinit underneath. In addition, the OpenRC upstream is working on ways to allow other tools to access its dependencies, which would allow systemd or others to use OpenRC scripts. He concluded:
Supporting multiple init systems is not without a cost, of course. There are now (or soon will be) at least four different kinds of configuration for init "scripts" (sysvinit, OpenRC, systemd, Upstart). While systemd and Upstart can use existing init scripts, and OpenRC is getting there as well, doing so loses much of the benefit of the alternatives. To some, there is simply an impedance mismatch between static dependency-based systems and those that are event-driven—though systemd advocates might not completely agree with the "event-driven" characterization. As Russ Allbery put it:
Allbery said that these kinds of problems were not easily solvable with the existing init scripts: "The alternative is to add [significant] additional complexity to every package like those listed above that needs the network to loop and retry if the network isn't available when it first starts." That would be a "huge waste of effort".
One of the potential blockers for systemd, though, has been its reliance on Linux-only features, which makes it problematic for Debian GNU/kFreeBSD (and Debian GNU/Hurd down the road). OpenRC might not provide all of the features that systemd (and Upstart) do, but it could be enough of an upgrade to sysvinit that it makes sense to make that switch. That might actually pave the way for an event-driven init default for Debian GNU/Linux as Philip Hands described:
At least some in the Debian community are particularly annoyed by the systemd team's unwillingness to take patches for portability to kernels beyond Linux. That led Adam Borowski to jokingly dismiss OpenRC because it lacks "a hostile upstream". More seriously, Leigh pointed out that OpenRC uses some of the same features as systemd, but does so with portability in mind:
Others see it somewhat differently (of course). Maintaining a package for multiple platforms has its costs, and for a low-level package like systemd those costs may be rather high. It's not that the systemd upstream is "hostile", according to Matthias Klumpp, but that systemd is difficult to port and its developers don't want to maintain an #ifdef-heavy code base. Instead, the systemd folks suggest forking systemd and maintaining a parallel repository for any ports. But that isn't easy, Klumpp said: "So far nobody has created a non-Linux fork of systemd, and the reason is mainly that it is too much work."
There is also the underlying question of just how much "choice" there should be in a distribution's init system. Setting aside the "Linux is about choice" disagreements that always seem to arise in these kinds of discussions, there is a real question about how many different options Debian can and should support. As Allbery noted, Debian does not support switching to a different C library, for example. But Faidon Liambotis countered that this was only because no one had ever tried to show the "viability and usefulness" of switching to something other than glibc. Furthermore, things like kFreeBSD or building Debian with LLVM did not come about by some kind of consensus; rather, they happened because someone decided to make them work.
For init systems, though, Leigh believes that if OpenRC proves to be a viable replacement, it should supplant sysv-rc, rather than providing a choice. It wouldn't resolve the question of defaulting to an event-driven init (for Linux at least), but it would allow the rest of the Debian community to "get on with life while the upstart and systemd folk take chunks out of one another for a decade or so", as Hands put it.
While Linux may not be about choice exactly, its users are certainly accustomed to being able to fairly easily switch between different technologies: distributions, kernels, desktops, mail servers, web browsers, and so on. In some respects, Debian users are even more acclimated to a wide variety of choices. Its package repository is renowned for its breadth, and the distribution as a whole seems intent on providing choices whenever it is technically feasible. It is too soon to say for sure, but the addition of OpenRC may well provide a bridge that would upgrade init for those who aren't convinced of the "event-driven future", while staying out of the way of the systemd and Upstart efforts.
Newsletters and articles of interest
The Register reports that Mozilla's Firefox OS has been successfully ported to run on the diminutive Raspberry Pi platform. The port was apparently done by a Nokia employee, but is a side project: "Romashin seems to have undertaken this work off his own bat. So let’s stick to lame puns on “pie”, shall we, rather than wondering what Redmond will say about Nokia playing with an OS other than Windows Phone."
Page editor: Rebecca Sobol
GUADEC incorporated a blend of old and new business; there were status reports and updates from various GNOME projects and teams, but there were also a lot of sessions devoted to discussing new components and ideas for the coming development cycle, and for GNOME's long-term future. Some were wild concepts, of course, but not every new scheme was a radical departure — many were just solid bits of engineering that will make life easier for developers (and for users) over the next few releases.
For example, one of the week's largest crowds gathered for Owen Taylor's talk about enabling jitter-free animations in GTK+ applications. Smooth animations are not a new idea, but it has taken a while for the right approach to fall into place. Taylor addressed one specific type of animation: 2D redraws, of the kind that are commonly found when dragging or re-sizing an application window, or when translating an object across the screen. GTK+ and GNOME applications have never performed all that well in these circumstances, with tearing and jumpy updates being among the common complaints.
Taylor identified the root cause of such unpleasant visual artifacts as a lack of synchronization between the application and the compositor (following a lengthy investigation, documented on his blog). Historically, compositors attempted to draw new frames whenever they happened to arrive from the application, but this resulted in uneven timing. At times an application might generate frames faster than the display's refresh rate (say, 60 frames per second), causing some frames to be dropped; at other times it might produce frames too slowly (either by consuming too much CPU or due to system load); and at still other times it might deliver a frame too late to be drawn (missing a buffer swap during the display's vertical blanking interval). The result is redraws that are unevenly spaced, so they appear jumpy to the eye. The compositor can attempt to be smart about which frames to draw and which to drop, but there has never been a mechanism for applications and compositors to keep in step with one another.
Taylor's solution is to introduce a frame synchronization protocol, which allows the application and the compositor to agree on redraws. The protocol centers on _NET_WM_SYNC_REQUEST_COUNTER, a counter (managed by the X Synchronization Extension, and visible to both the application and the compositor) which the application increments to an odd value whenever it begins drawing a new frame of the animation. When the application finishes drawing the frame, it increments the counter to an even value, and the compositor can draw the update to the screen. When the update is complete, the compositor sends a _NET_WM_FRAME_DRAWN message back to the application. This synchronization scheme does not enable faster frame rates, but it ensures that the compositor draws updates as fast as the application can produce them — be that 30 frames per second, 40, or any other number — and that the application will not draw new frames before the compositor is ready for them.
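The odd/even handshake can be modeled in a few lines of shell (a toy model of the counter parity only; real clients go through the X Synchronization Extension):

```shell
# Odd counter value: the application is mid-frame; even: frame complete.
counter=0

app_begins_frame()    { counter=$((counter + 1)); }   # even -> odd
app_finishes_frame()  { counter=$((counter + 1)); }   # odd -> even
compositor_may_draw() { [ $((counter % 2)) -eq 0 ]; }

app_begins_frame
compositor_may_draw && echo "draw" || echo "wait"   # wait: frame in progress
app_finishes_frame
compositor_may_draw && echo "draw" || echo "wait"   # draw: frame is ready
```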
Taylor also observed that there are side benefits to this scheme, including that frames are dropped only when the compositor fails, and that it is possible to benchmark compositor performance independently. That should enable future work benchmarking and improving compositor performance. Taylor implemented the frame synchronization protocol in the Mutter window manager used by GNOME, but he also posted it to the window manager specification list, where it drew positive reactions from KWin developers and others. He had demonstration animations on hand illustrating the smoother performance on several animation effects when frame synchronization was active. The demos included window dragging and rescaling, but there is still some work to be done to make more and easier-to-use animated effects in GTK+ itself.
Colin Walters's OSTree does not extend new features to the GNOME environment or applications, but it is designed to make life simpler for both developers and users. The concept is to replace the package-centric installation model with a Git-like repository of the entire filesystem tree — which can be cloned and updated on the client machine. Installing a new operating system is a matter of cloning the repository, and updating it is a matter of pulling in the changes. But unlike a simple OS ghosting setup, OSTree can retain multiple, named versions of the tree and boot into any of them. That allows developers to do things like maintain experimental builds, roll back to earlier versions to try to reproduce bugs, and ensure that the entire development team can boot an identical set of components.
In his talk, Walters described another feature that OSTree would provide to the core GNOME team: the ability to bisect regressions down to a single commit. As he described it, a regression means an unintentional break in functionality. GNOME has historically had problems identifying and fixing regressions because the vast majority of GNOME users do not install the environment directly: they install packages delivered by their distribution. That separates the discovery of the regression from the commit that caused it by a considerable gulf — both of time and activity. First is the commit, he said, followed by time, then the creation of a tarball, followed by more time, then the package, still more time, the release of the package to the repository, yet more time, installation of the package, updating the filesystem, then finally a reboot, after which the user notices the regression.
If developers and users could see changes immediately, Walters said, they would find and fix regressions much faster. His auxiliary build tool, ostbuild, makes this possible: it watches a Git repository and, for each commit, creates binary builds stored in an OSTree repository. Because ostbuild rebuilds only the changed components (and OSTree stores only the changed portions of the system), it does not consume an excessive amount of space; more importantly, it allows developers to track down exactly which commit caused a regression.
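With one bootable build per commit, locating the offending commit reduces to a binary search over build history. A sketch, assuming a hypothetical `is_good()` predicate that boots a build and runs whatever test exposes the regression:

```python
# Sketch of bisecting a regression across per-commit builds.
# `builds` is ordered oldest-to-newest; is_good() stands in for
# booting a build and checking for the regression.

def bisect_regression(builds, is_good):
    """Return the first bad build, assuming all good builds precede all bad ones."""
    lo, hi = 0, len(builds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(builds[mid]):
            lo = mid + 1        # regression was introduced later
        else:
            hi = mid            # this build, or an earlier one, is bad
    return builds[lo]

# Hypothetical history where the regression lands at commit "c6":
builds = [f"c{i}" for i in range(10)]
first_bad = bisect_regression(builds, is_good=lambda b: int(b[1:]) < 6)
assert first_bad == "c6"
```

The payoff is logarithmic: pinpointing one commit among a thousand takes about ten boots rather than a six-month-old bug report against a distribution package.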
The resulting improved feedback rate is one benefit, he said, but OSTree also makes reverting regressions simple: a new commit that fixes the regression can be deployed from the repository, and users can simply boot into a pre-regression OS until a fix is available. Such an option is impossible with traditional packages like those used in RPM and Debian systems, he said, because they rely heavily on the version numbers assigned to packages — and the definition that version numbers must strictly increase over time. OSTree and ostbuild are also fully atomic (side-stepping several problems common in package managers), they eliminate the overhead and headaches of working with GNOME's existing build system jhbuild, and they make it possible to incorporate continuous integration testing into GNOME development.
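The atomicity and rollback properties come from the deployment model: each version of the OS lives in its own tree, and switching versions is a single pointer update rather than a long sequence of per-file package operations. A minimal sketch of that idea, using a symlink swap (the directory names are hypothetical, and this simplifies how OSTree actually selects a deployment at boot):

```python
import os
import tempfile

# Sketch of atomic deployment: each OS version lives in its own
# directory, and a "current" symlink selects which one is booted.
# The switch is a single rename(), so the system is never half-updated.

root = tempfile.mkdtemp()
for version in ("tree-a", "tree-b"):
    os.mkdir(os.path.join(root, version))

def deploy(version):
    """Atomically point 'current' at a deployed tree."""
    tmp = os.path.join(root, "current.tmp")
    os.symlink(os.path.join(root, version), tmp)
    os.rename(tmp, os.path.join(root, "current"))  # atomic replace on POSIX

deploy("tree-b")   # upgrade
deploy("tree-a")   # rollback: just repoint the link at the old tree
assert os.readlink(os.path.join(root, "current")).endswith("tree-a")
```

Contrast this with package managers, where an interrupted upgrade can leave a mix of old and new files on disk; here the old tree remains intact until the pointer moves, and moving it back is the whole rollback.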
On the other hand, Walters cautioned that there are a number of issues with using OSTree that have yet to be worked out. For starters, there is no way to push out security updates (as distinct from any other commit), which could prove annoying for system administrators. It would also be necessary to find a way to integrate the OSTree distribution model with existing governance structures that define policy and longer-term strategic decision-making — and in a related issue, without the version numbers required by packages, it would be hard to do marketing and branding to highlight a new release. At a more technical level, there is not yet a preferred way to install applications from outside sources (although OSTree itself is agnostic to how applications are installed), configuration files in /etc make automatically rolling back to a previous version risky, and as Walters put it, the whole system is "barely documented."
Despite such shortcomings, Walters has been using OSTree for GNOME development for several months, via a service running at ostree.gnome.org.
Plenty of other sessions dealt with GNOME's immediate and near-term future. Emmanuele Bassi addressed the plans for GTK+ and Clutter, the two most prominent GUI toolkits used in GNOME. One of the most frequently asked questions is whether the two should be merged into a single toolkit, given that their features are in many ways complementary. Bassi's answer is that they need to remain separate, but that both need to be adapted to work better together. Clutter 2.0 is still in development; only after it is done will the plan for GTK+ 4.0 (including changes targeting better Clutter integration) take concrete form.
Alejandro Piñeiro Iglesias outlined GNOME's accessibility plans, including focus-tracking in the magnifier tool. Tim Muller discussed GStreamer's future, including more GPU support and improved memory management. By and large, major changes like Walters's OSTree were a rarity; most of the work that goes into each successive GNOME release consists of incremental improvements, even if (as in the case of Taylor's animation work) the result fixes a long-standing issue.
[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC.]
Even if the reviewer provides helpful comments and points out things that you missed, those few words are the first thing you see about your patch. Part of me understands that these headlines are informative messages, that they are not judgements on me as a person, merely some sort of judgement on the patch that I have written. But the language doesn’t encourage that perspective. Permissions are not granted, entrance is not granted, your favorite show on television is canceled, and so forth. These are not heartening words: they are words that shut you out, that indicate your contribution was not desirable.
Git version 1.7.12 has been released, with several new features making their debut. Included are support for XDG-compatible $HOME/.config/ configuration files, more informative error messages, and an improved git apply that can "wiggle the base version and perform three-way merge when a patch does not exactly apply to the version you have."
However, in the last week we learned of bugs in two tools, grep and sort. These bugs are not like the others, in that they are relatively serious; each is about two years old.
The relevant reports note that "grep -i '^$' could exit 0 (i.e., report a match) in a multi-byte locale, even though there was no match, and the command generated no output," and that "sort -u could fail to output one or more result lines" or read freed memory.
Newsletters and articles
On his blog, Mozilla's David Baron describes a number of changes he would like to see in Bugzilla functionality, including several ideas about tracking bug metadata (such as what needs to be done next, known workarounds, and the expected behavior). "One of the difficult aspects of designing something like this, however, is the tradeoff between the cost of maintaining metadata and the desire to get work done quickly. There are currently many bugs in Bugzilla that have a bunch of fields that are just left at their defaults (e.g., severity, priority), and in many cases that's fine because we don't have a need to maintain these fields. But once a bug gets complicated enough, it's useful to be able to keep the discussion organized."

Elsewhere, a blogger laments the lack of open source instrument files. He was attempting to create music in which all of the parts (tools and instruments) were freely available, so that anyone could learn from and modify the music. "Until a commercial company release their old instruments as open source or some rich guy hires several audio technicians, a whole orchestra and software developers for approx. one year and then gives it all away for free I see nothing on the horizon here. And the Salamander Piano and G-Town are very good as well, even better as single instruments than Sonatina. But not all compositions are for 'Piano, Anvils, Stomps and Fake Glass Bowls'."
Page editor: Nathan Willis
Brief items

The HeliOS project is an effort to get Linux computers into the hands of school children. Ken is ill (more information on the HeliOS blog) and in need of funds for medication and surgery; Thomas A. Knight has set up a donations campaign to help Ken out. Surplus funds will go to the HeliOS project.
Lars Wirzenius wrote in to call attention to two recent releases from his projects. The first is "secret volcano," the first stable release of Baserock, "a method and toolset for developing embedded Linux systems in a way that we believe is going to be much better than anything currently out there." The second is an ARM-based server product designed for Baserock development.
Articles of interest

Poul-Henning Kamp has posted a troll of sorts to the ACM Queue site. "That is the sorry reality of the bazaar [Eric] Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT 'professionals' who wouldn't recognize sound IT architecture if you hit them over the head with it. It is hard to believe today, but under this embarrassing mess lies the ruins of the beautiful cathedral of Unix, deservedly famous for its simplicity of design, its economy of features, and its elegance of execution." Perhaps it's just venting by somebody who got left behind, but perhaps he has a point: are we too focused on the accumulation of features at the expense of the design of the system as a whole?
New Books

This book is a translation from a Chinese edition by the Learning through Engineering, Art, and Design (LEAD) Project, an educational initiative established to encourage the development of creative thinking through the use of technology.
Calls for Presentations
Upcoming Events

LPI-East Africa is also co-sponsoring an "Open Source Developer Challenge" during the event with the Linux Professional Association of Kenya.

The conference, themed "the Django community and ecosystem," showcases an array of tutorials, two tracks of talks over three days, lightning talks, and a development sprint on topics such as creating dynamic applications, debugging live Python web apps, internationalization, PostgreSQL, and design tips.
August 25: Debian Day 2012 Costa Rica (San José, Costa Rica)
GStreamer conference (San Diego, CA, USA)
Kernel Summit (San Diego, CA, USA)
XenSummit North America 2012 (San Diego, CA, USA)
Ubuntu Developer Week (IRC)
2012 Linux Plumbers Conference (San Diego, CA, USA)
LinuxCon North America (San Diego, CA, USA)
Linux Security Summit (San Diego, CA, USA)
Electromagnetic Field (Milton Keynes, UK)
September 1: Panel Discussion Indonesia Linux Conference 2012 (Malang, Indonesia)
Kiwi PyCon 2012 (Dunedin, New Zealand)
VideoLAN Dev Days 2012 (Paris, France)
DjangoCon US (Washington, DC, USA)
Foundations of Open Media Standards and Software (Paris, France)
Magnolia Conference 2012 (Basel, Switzerland)
Hardening Server Indonesia Linux Conference 2012 (Malang, Indonesia)
International Conference on Open Source Systems (Hammamet, Tunisia)
Debian Bug Squashing Party (Berlin, Germany)
Debian FTPMaster sprint (Fulda, Germany)
KPLI Meeting Indonesia Linux Conference 2012 (Malang, Indonesia)
PyTexas 2012 (College Station, TX, USA)
Bitcoin Conference (London, UK)
Postgres Open (Chicago, IL, USA)
SNIA Storage Developers' Conference (Santa Clara, CA, USA)
SUSECon (Orlando, FL, USA)
2012 X.Org Developer Conference (Nürnberg, Germany)
Automotive Linux Summit 2012 (Gaydon/Warwickshire, UK)
September 21: Kernel Recipes (Paris, France)
openSUSE Summit (Orlando, FL, USA)
OpenCms Days (Cologne, Germany)
GNU Radio Conference (Atlanta, GA, USA)
PuppetConf (San Francisco, CA, USA)
September 28: LPI Forum (Warsaw, Poland)
Ohio LinuxFest 2012 (Columbus, OH, USA)
PyCon India 2012 (Bengaluru, India)
PyCon UK 2012 (Coventry, West Midlands, UK)
Velocity Europe (London, England)
PyCon South Africa 2012 (Cape Town, South Africa)
GNOME Boston Summit 2012 (Cambridge, MA, USA)
Korea Linux Forum 2012 (Seoul, South Korea)
Open Source Developer's Conference / France (Paris, France)
Debian BSP (Alcester, Warwickshire, UK)
October 13: 2012 Columbus Code Camp (Columbus, OH, USA)
PyCon Ireland 2012 (Dublin, Ireland)
Debian Bug Squashing Party in Utrecht (Utrecht, Netherlands)
FUDCon: Paris 2012 (Paris, France)
OpenStack Summit (San Diego, CA, USA)
Linux Driver Verification Workshop (Amirandes, Heraklion, Crete)
LibreOffice Conference (Berlin, Germany)
MonkeySpace (Boston, MA, USA)
14th Real Time Linux Workshop (Chapel Hill, NC, USA)
PyCon Ukraine 2012 (Kyiv, Ukraine)
Gentoo miniconf (Prague, Czech Republic)
PyCarolinas 2012 (Chapel Hill, NC, USA)
LinuxDays (Prague, Czech Republic)
openSUSE Conference 2012 (Prague, Czech Republic)
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds