
LWN.net Weekly Edition for June 5, 2014

Open-source real-time strategy gaming with 0 A.D.

June 4, 2014

This article was contributed by Adam Saunders

If you've played 0 A.D., you've heard the phrase "Ti esti?" ("What is it?" in ancient Greek) a lot. Whether you want your citizens to farm fields, mine metal, or construct buildings, or want your cavalry to engage enemy civilizations in battle, whenever you select your units, they always ask "Ti esti?". As you continue playing, your citizens continue to display fluency in ancient Greek. That's because 0 A.D. is an open-source real-time strategy game — reminiscent of games like Age of Empires — that focuses on historical realism.

This attention to linguistic detail is only one sign of the ambition of Wildfire Games: "an international community of dozens of game developers and gamers, who mostly contribute in their spare time on a volunteer basis" to the development of 0 A.D. Different civilizations in the game not only have different strengths and weaknesses, they are also modeled directly on a particular era in that civilization's history. That same attention to ancient language is also a sign of 0 A.D.'s continued alpha status: civilizations that you'd expect to speak other languages, such as the Gauls, also speak ancient Greek. Nonetheless, the game's playability and fun belie its incomplete state. The recent Alpha 16 release on May 17 makes for a great opportunity to look at the project to date.

[Game start]

0 A.D. began in 2000 as a concept for a game to be built on top of the proprietary Age of Empires II engine, but Wildfire Games later decided to develop it as an entirely new, stand-alone game. 0 A.D. uses its own home-built engine, called Pyrogenesis. In 2009, the project became completely open source, with the code licensed under the GPL and its art assets available under CC-BY-SA. The project is under heavy development, with significant updates coming fairly frequently. The last couple of years have been particularly good for the project. In 2012, Wildfire Games joined the non-profit Software in the Public Interest, which has given the 0 A.D. project a non-profit structure for monetary transactions like paying for development and receiving donations. 2013 saw three alpha releases and an Indiegogo fundraiser that raised over $30,000 toward development costs. As a result, the project has hired a developer for a year's work on the game.

For those not familiar with real-time strategy gameplay, the following is the general flow of 0 A.D. Games have two to four players, in any combination of human and AI players. In the larger games, players can play free-for-all (where everyone is on their own) or form teams (though it may be difficult to coordinate effectively with an AI teammate).

Players start the game on one part of a map, with a few worker units and a "base" building. In the first few minutes, the priority is almost always building an economy: players click on the units and order them to gather resources, and train more workers to speed up growth. From there, the player has a variety of strategic choices to make, all with advantages and trade-offs.

For example, if they would like to launch an early strike on their opponent, they will focus on training basic infantry and perhaps cavalry soldiers as soon as possible. This means delaying technological development (e.g. by ordering workers to construct buildings that allow players to train more advanced, expensive soldiers rather than pursuing technology). If, instead, the player would like to gain an economic advantage, the player could train many workers, and construct buildings that improve resource gathering. This could allow the player to outnumber the opponent later in the game, as the player could then afford more units, but risks vulnerability to an early attack, as producing an army is delayed in favor of improved resource gathering.

All of this activity, like all actions in the game, takes place in "real-time". That is, there is no waiting for one's "turn": one always has the ability to act (e.g. select units, order units, train units from buildings, and research upgrades for buildings). This makes planning, speed, and adaptability valuable skills for a player.

[Combat]

Thus, the three main economic concerns a player has to "macromanage" in this game are resource gathering, technological development, and unit production. Once the player commits to combat, or has to defend from an attack, "micromanagement" skills become crucial. The player can select individual units or group units (and create hotkey shortcuts for them) and order them to attack in the optimal manner. Skilled players who want to work on their economy while also pushing an attack will often briefly pull back their troops to attend to their own base, and then order the soldiers back on the assault when they can give combat their full attention.

0 A.D. differs from other real-time strategy games in its focus on historical realism, its support for multiplayer games without a centralized server (a hosting player must forward UDP port 20595 through any firewall or NAT and disclose their IP address to other players), and certain gameplay-mechanics details. For instance, troops can be grouped into different formations at the click of a button; players' armies can close ranks on defense, or split their units and flank their enemies. Creative players can also use the built-in Scenario Editor to make maps for multiplayer games or for playing against an AI opponent.

Alpha 16 is a significant step up from the previous release. The game now has 14 different localizations, as noted in the release announcement. This is a vast improvement from the English-only Alpha 15 released just a few months earlier; players from Brazil, Germany, and Japan, and many other countries can now enjoy the game in their own language. A new AI opponent, Petra, is noticeably smarter than the prior Aegis AI, particularly on defense. There is a new song for those playing as the Gauls or Britons, and also some graphical enhancements, such as new ships and animals.

The game's system requirements are pretty light: 512 MB of RAM, a 1 GHz single-core processor, the ability to support 1024x768 screen resolution, and a graphics card that can handle 3D hardware acceleration and OpenGL 1.3 will do. Strangely, the system requirements suggest one needs a dedicated graphics card with a minimum of 128 MB memory, but the Intel HD 4000 Graphics on my laptop worked very well. The game even played quite smoothly for me with all graphical enhancements turned on in VirtualBox running Lubuntu 14.04 (since Alpha 16 was not yet packaged for Fedora 20).

[Water effects]

Much is still missing, which is to be expected in an alpha release. On first start-up, 0 A.D. tells the player that the AI is incapable of using naval forces, for example. 0 A.D. can also slow down noticeably in the late game, as more units are built and the demands on the AI increase; improving game performance is a task for the next alpha. There is still no single-player story campaign, only the option to battle the AI ad hoc or play multiplayer matches. I found the multiplayer game lobby to be sparsely populated; I was unable to join a multiplayer game while preparing this review. It is possible to host a game for friends to join through port forwarding, as mentioned above, but I could not find an opponent that way either. However, there is some discussion on the forums about starting regular multiplayer meetups.

0 A.D. has no official final release date, nor an official expected date for a beta release. Nevertheless, a look at the forums shows significant interest from potential contributors, and the difference in quality between alpha releases is palpable. Those with knowledge of C++ and JavaScript can contribute as developers as described on the project's Trac page. There is also space for many other volunteers: people with knowledge of ancient history and languages, voice actors, musicians, and artists. Those interested in supporting the project, but lacking the time to contribute, can donate to assist development.

0 A.D. is an intriguing game, with an interesting future ahead. I look forward to following this ambitious project as it continues development.

Comments (16 posted)

Questioning corporate involvement in GNOME development

By Jonathan Corbet
May 31, 2014
It is a rare free software project that feels it has too many developers; indeed, most could benefit from more development help. One way to get that help is to have a company pay developers to work on a project; the presence of paid developers is often one of the first signs that a particular project is gaining traction. But paid developers often bring with them worries that the company footing the bill will seek to drive the project in undesirable directions. The GNOME project, which is conducting its annual election for its board of directors until June 8, has an opportunity to say that corporate involvement in development has gone too far — or not.

In particular, board candidate Emily Gonyer has taken the position that corporations have too much control over the GNOME project. Her declaration of candidacy is explicit on this subject:

It is my opinion that GNOME has strode too far towards a corporate-driven project and away from its community-led roots. As of now, GNOME is, in my opinion too beholden to a small handful of large corporations which forces the project to ignore large swaths of our users in preference to them. The end result being that GNOME has lost a tremendous portion of its respect and goodwill in the wider free software community. As a member of the GNOME board of directors I will actively work against this tide and towards the more open, community-driven project that GNOME once was and I hope will be again.

After a bit of discussion, it became clear that Emily was concerned about one company in particular:

But for the last several years, Red Hat's wants/needs have trumped what anyone else wants/needs, including the larger user base of GNOME which is what (I believe) has driven it to fracture into so many [desktop environments] over the last 3-4 years.

She also stated that contributions from unpaid developers should be "favored" in some unspecified way. A project like GNOME, she said, should be run and developed by volunteers.

Needless to say, this set of opinions is not shared by everybody in the GNOME development community. Bastien Nocera (a Red Hat developer) made it clear that he found that position insulting. Even Richard Stallman chimed in, saying "We're happy when the developers of free software get paid." But Emily's remarks will certainly resonate with some developers; concerns about corporate involvement in free software projects are more widespread than one might think.

In this case, it is not entirely clear that companies are behind whatever difficulties GNOME may be facing. The GNOME project has clearly struggled in recent years; the proliferation of GNOME forks and ongoing criticism of the project's core decisions make that clear. But it has not been demonstrated that some sort of corporate agenda is behind these problems; it is not in Red Hat's interest, for example, to cause users to flee from its flagship desktop environment. If corporate desires have truly "trumped what anyone else wants/needs", it should be possible to point out specific examples where this trumping has happened, but such examples are not (yet) on offer.

Equally unclear is what can be done about this problem, if, indeed, it is deemed to be a problem. Certainly the GNOME board could, if it were sufficiently determined, manage to reduce the amount of company involvement in GNOME development. That does not seem like anybody's idea of the path to happiness and the Year of the Linux Desktop, though. So one would have to attack the problem at the other end by trying to increase the level of volunteer contributions. The GNOME project appears to work hard already at attracting new developers; examples include its Google Summer of Code participation, the Outreach Program for Women, and numerous conferences around the world. There is undoubtedly more that could be done to bring in new developers, but it is hard to fault the project for its current efforts.

Another option, suggested by former GNOME executive director Stormy Peters, would be to increase corporate participation by bringing in support from a wider range of companies. Involvement from more companies would serve to reduce the influence of any given member of the group. That seems like the sort of task the board of directors should be concerned with.

For the curious, Dave Neary and Vanessa David performed a survey of corporate involvement in GNOME development back in 2010. Their report [PDF] showed that unpaid developers, while making up about 70% of the development community, accounted for just under 25% of the contributions to the project; a group of about a dozen companies, led by Red Hat, accounted for the bulk of the rest. How that picture may have changed since 2010 is unclear; no followup survey has been done thus far. But things probably have not shifted to the point that any single corporation has a dominating influence over the development of the GNOME project as a whole.

And that is important. When a project is controlled by a single company, that company's needs will almost certainly win out over anything that the wider community may want to do. One need only look at Android for a classic example; company-dominated projects can still be valuable free software, but they tend not to be community-driven. If GNOME were to be controlled by a single company, it might well go in directions that would not be welcomed by its development community. Some people, it seems, feel that one company has indeed reached a level of control where it is able to take the project in unwelcome directions.

When one reads the discussion among the candidates for the board, there is one topic that stands out by its absence: with the exception of Emily, none of the candidates have expressed any discomfort with the direction of the GNOME project or the functioning of its community. Perhaps that is appropriate; there may be no cause for concern. But, again, the forks and ongoing controversies suggest that the project might want to be asking itself whether all of its decisions have been wise. Emily may or may not have found the correct target when she named corporate involvement, but she may be doing the project a favor by asking, in a high-profile way, whether something might be wrong.

In any case, the GNOME community now has an opportunity to make a statement about corporate participation and the direction of GNOME development. If enough GNOME developers are sympathetic to Emily's position, they will elect her to the board and she will be able to push for change, though there are limits to what the board (which is not empowered to make technical decisions) can do. Her chances are reasonably good; there are eleven candidates for the eight available positions. Voting continues through June 8, with the results to be announced on the 10th.

Comments (102 posted)

Page editor: Nathan Willis

Security

Static security analysis of Tizen apps

By Nathan Willis
June 4, 2014

TDC 2014

At the 2014 Tizen Developer Conference (TDC) in San Francisco, Dan Wallach of Rice University delivered a presentation about his ongoing work to perform security analysis of third-party apps written for Tizen mobile devices. While static analysis can never catch every violation of security rules, a framework that helps automate the process would simplify any "app store" review process and provide side benefits to app and platform developers.

Wallach's research is partly funded by Samsung, he disclosed at the beginning of his talk, but his presentation in no way indicates that he speaks on the company's behalf. Wallach works at Rice's Computer Security Lab, and co-leads a team of researchers and PhD students on this project. The goal is to perform automated, static analysis of mobile Tizen apps submitted to the app store, in order to reduce the amount of time that each app demands of a human reviewer.

Analysis of Tizen apps

Wallach's team is only investigating one piece of the app review process, of course. Tizen offers two development frameworks: HTML5/JavaScript and native code; Wallach's team is looking at native apps while someone else is doing similar work on analysis of HTML5 apps. But Tizen apps can also be compiled with either a GCC or LLVM toolchain; Wallach's team is focusing on analysis of the LLVM-generated apps.

[Dan Wallach at TDC 2014]

The reason is that the LLVM toolchain produces cross-platform LLVM "bitcode" executables (which are intended to run on either Intel or ARM hardware, both architectures that are supported by Tizen). The LLVM bitcode is an intermediate representation of the compiled program, designed to be optimized, he said, but that also means it is ripe for other forms of analysis. It preserves the high-level semantics of the program, unlike machine code, and it can be generated from multiple languages (unlike Java bytecode). Finally, it supports multiple architectures (which is of particular interest to the Tizen project), so one analysis framework would assist a lot of developers.

The security analysis performed by the team is static, he explained, meaning that the submitted app is not executed. Instead, the framework examines the program structure, flagging potential security problems. The basic idea is to mark points where app data is "tainted"; a potential information leak, for example, would involve both a "source" call (e.g., requesting the device's GPS location) and a "sink" (e.g., sending some data to a remote server).

Obviously, if there is no conditional branching in the program flow between the source and the sink, it is trivial to determine if the program copies the GPS location into the data structure sent to the remote server, which could be a violation of security policy. Static analysis has to cope with the conditional flows, Wallach said. Historically, static analysis has been a conservative approach: assuming all branches in the code are taken, which can lead to false positives. It must also deal with other complexities such as pointer aliasing and object-oriented method dispatching, both of which introduce a lot of unknowns to cope with. On the other hand, he said, the opposite approach—dynamic analysis—tends to produce false negatives.
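Conservative source-to-sink reachability can be sketched in a few lines. The toy below (the node names and the control-flow graph are invented for illustration, not the Rice team's tooling) assumes every branch may execute, which is exactly the conservative choice that produces false positives:

```python
# Toy taint analysis: each node in a small control-flow graph lists its
# successors, and we ask whether a "source" can reach a "sink" assuming
# ALL branches may be taken.

def taint_reaches(cfg, source, sink):
    """Depth-first search from source; True if sink is reachable."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(cfg.get(node, []))
    return False

# Hypothetical app: get_gps is the source, send_net is the sink.
# Even if the "if_debug" branch never fires at runtime, static analysis
# assumes it might, so the flow is flagged anyway.
cfg = {
    "get_gps":   ["if_debug"],
    "if_debug":  ["send_net", "log_local"],  # both branches assumed taken
    "log_local": [],
    "send_net":  [],
}
print(taint_reaches(cfg, "get_gps", "send_net"))  # True: flagged as a leak
```

A dynamic analyzer, by contrast, would only see the branches actually executed, which is why it tends toward false negatives instead.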

Control-flow analysis is not a new problem, of course, and there are existing approaches to reducing the potentially overwhelming number of false positives. The techniques do require some knowledge of the platform: knowing what the most likely vulnerabilities are, knowing what special cases should be allowed, and so on.

The group at Rice has been working for the past year to build an LLVM information-flow analysis engine for Tizen, though the project is far from finished. As of now, the group's work can statically analyze LLVM-bitcode Tizen apps, although Wallach pointed out that it does not analyze the Tizen libraries or kernel for vulnerabilities. Samsung provided the team with 30 apps as test cases, and the analysis found only one privacy leak; just as importantly, it produced no false positives or false negatives. 30 apps (and 30 known apps, for that matter) is not a large test set, of course; an audience member asked Wallach if he would be interested in analyzing the questioner's company's library of 300 apps, to which Wallach replied "that would be fantastic."

The framework that the team is developing is intended to fit into the Tizen app-store submission process, so that new submissions are analyzed and a report is generated for a human analyst to look over. Security problems caught would send the app back to the developer for further work, but could also be used to help the Tizen project refine its security policies. Wallach noted that this approach differs from both Apple's app store review process and Google's; Apple's is secretive but apparently labor-intensive and slow, while Google's is known to be 100% automated and fast, but allows problematic apps to slip through.

What's next

The next stage of the ongoing research, Wallach said, is to perform similar analysis of the Tizen libraries. Everyone likes to think about their platform APIs as being discrete, orthogonal entities, he said—Bluetooth, Networking, Filesystem, etc.—each of which mediates access to some specific kernel-level feature. But in reality, they overlap quite a bit and share access to the same resources. In many cases they are interdependent, too.

That makes it difficult to construct strict security policies, as Tizen would like to do. For example, he described a well-known 1996-era Java vulnerability. The URL handler was intended to let an application retrieve a remote URL, but the URL-handling code also understood other protocol schemes like file:, which created a vulnerability. But simply blacklisting file: was not sufficient, since applications should be allowed to create and retrieve temporary files (for caching and so on).
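The fragility of scheme blacklisting is easy to demonstrate. The sketch below is a hypothetical policy check, not the 1996 Java code; the temp-directory path is an invented example:

```python
from urllib.parse import urlparse

def naive_block(url):
    # Fragile blacklist: misses case variants like "FILE:///etc/passwd".
    return not url.startswith("file:")

def policy_allows(url, temp_dir="/tmp/myapp"):
    # Safer: parse and normalize the scheme, and permit file: access
    # only inside the app's own temporary directory.
    p = urlparse(url)
    if p.scheme.lower() != "file":
        return True          # remote schemes handled by other policy
    return p.path.startswith(temp_dir + "/")

print(naive_block("FILE:///etc/passwd"))          # True: blacklist bypassed
print(policy_allows("FILE:///etc/passwd"))        # False: blocked
print(policy_allows("file:///tmp/myapp/cache1"))  # True: app-owned temp file
```

The point of the exception list is the same as in Wallach's example: a blanket ban breaks legitimate caching, while a naive string match is trivially evaded.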

The group's plan is to perform its static analysis on the Tizen libraries to map all of the platform APIs to kernel calls, then use that mapping to verify (and update) Tizen's Smack security policies. But the analysis will have other benefits, too. It can be adapted into the Tizen build process, so that any changes to the libraries that have security-policy implications can be flagged automatically. It will also allow higher-level APIs to be written while forbidding direct calls to lower-level dependencies.

Subsequently, the group may also annotate code with #pragma statements to denote exceptions to the security policy that a human auditor has determined are safe. The example he gave was an app doing a filesystem read to a cache file that it owns; an auditor can mark this with

    #pragma SecurityAudited(OpenFile)

and eliminate a false-positive result in the automated analysis step. The group also intends to perform an information-flow analysis of the entire Tizen library set, and to analyze Tizen's multiple inter-process communication (IPC) mechanisms. When he was first approached about working with Tizen, Wallach recalled, he asked "what's your IPC mechanism?" and was a bit surprised at the answer: "which one?" Comparable work in Android, he noted, has uncovered and fixed many security holes.

Eventually, he concluded, as the Tizen app ecosystem grows, the "known bad behavior" patterns will emerge, and analysis can focus more directly on them. In the meantime, static analysis frameworks like the one his team is developing will help reduce the pressure put on Tizen's Smack policy maintainers (by consolidating and simplifying rules), and will help developers write safer apps. It may even be possible, he said, to eliminate the need to manually write app "manifest" files (which list the permissions that an app requires); if the analyzer can determine which privileges the app actually uses, that is better than a list of what it claims it needs.
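Inferring a manifest from analysis results might look roughly like this; the API names and privilege map are invented for illustration and are not real Tizen identifiers:

```python
# Hypothetical mapping from observed API calls to the privileges
# they require (invented names, not Tizen's).
PRIVILEGE_OF = {
    "get_gps_position": "location",
    "open_socket":      "internet",
    "read_contact":     "contacts",
}

def infer_manifest(observed_calls):
    """Privileges the app actually uses, per the analyzer's call list."""
    return sorted({PRIVILEGE_OF[c] for c in observed_calls if c in PRIVILEGE_OF})

declared = ["location", "internet", "contacts", "camera"]   # what the app claims
used = infer_manifest(["get_gps_position", "open_socket"])  # what analysis saw
print(used)                                   # ['internet', 'location']
print(sorted(set(declared) - set(used)))      # over-declared: ['camera', 'contacts']
```

Comparing the declared and inferred sets makes over-declared permissions visible to a reviewer, which is the advantage Wallach described over trusting the developer's list.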

There is always a risk that making the security analysis process public will have the side effect of showing app developers "what they can get away with," he admitted, but that trade-off is one that the project and app-store maintainers will have to make. Wallach's group has also not yet made its code available to the public; for the project to have a significant impact on Tizen (rather than just on Samsung's app store), such a release will, no doubt, eventually be needed by the rest of the community.

[The author would like to thank the Tizen Association for travel assistance to attend TDC 2014.]

Comments (3 posted)

Brief items

Security quotes of the week

The implications of this gladden my "right to be forgotten" hating heart. If you're an EU user searching for Joe Blow, and the EU has forced removal of a search result related to him on, say, google.fr, the warning notice informing you that results have been removed for that search give you an immediate cue that you might want to head over to google.com to see what the EU censorship bureaucrats deemed unfit for your eyes. In essence, it's a built in Streisand Effect, courtesy of the EU itself! Before this, you might not even have noticed the result in question among other results for that search.
— Lauren Weinstein

I think it provides a vivid illustration of how invasive this technology is and how the courts regulate its use. It’s one thing to have a generic description of how it’s used; it’s another thing to read a first-hand account of how people are walking up to people’s doors and windows sending powerful signals to [cell phones] inside. This transcript illustrates both the fact that bystanders' phones were being tracked and that the police operating the device knew that’s what the device was doing.
— ACLU attorney Nathan Freed Wessler on cell phone tracking devices known as "stingrays"

The question that remains is this: What should we expect in the future -- are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes -- many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

— Bruce Schneier

Of course, we in the real world know that shaved apes like us never saw a system we didn't want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet. Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS [Secret Service] in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.
— Charlie Stross

Comments (32 posted)

Making end-to-end encryption easier to use (Google Online Security Blog)

The Google Online Security Blog has announced the alpha release of an OpenPGP-compliant end-to-end encryption extension for the Chrome/Chromium browser. "While end-to-end encryption tools like PGP and GnuPG have been around for a long time, they require a great deal of technical know-how and manual effort to use. To help make this kind of encryption a bit easier, we’re releasing code for a new Chrome extension that uses OpenPGP, an open standard supported by many existing encryption tools. However, you won’t find the End-to-End extension in the Chrome Web Store quite yet; we’re just sharing the code today so that the community can test and evaluate it, helping us make sure that it’s as secure as it needs to be before people start relying on it. (And we mean it: our Vulnerability Reward Program offers financial awards for finding security bugs in Google code, including End-to-End.)"

Comments (18 posted)

Critical new bug in crypto library leaves Linux, apps open to drive-by attacks (Ars Technica)

Ars Technica reports on a buffer overflow in GnuTLS, which is an alternative to OpenSSL for SSL/TLS support. The length checks for the session ID in the ServerHello message were not correct, which allowed the overflow. "Maliciously configured servers can exploit the bug by sending malformed data to devices as they establish encrypted HTTPS connections. Devices that rely on an unpatched version of GnuTLS can then be remotely hijacked by malicious code of the attacker's choosing, security researchers who examined the fix warned. The bug wasn't patched until Friday [May 30], with the release of GnuTLS versions 3.1.25, 3.2.15, and 3.3.4. While the patch has been available for three days, it will protect people only when the GnuTLS-dependent software they use has incorporated it. With literally hundreds of packages dependent on the library, that may take time." This analysis shows how the bug could be exploited for arbitrary code execution.
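The underlying bug pattern, trusting an attacker-supplied length field, is easy to illustrate outside of C. This toy parser is not GnuTLS's actual code and its message layout is simplified to a single length byte, but it shows the checks whose absence made the overflow possible (TLS caps session IDs at 32 bytes):

```python
def parse_session_id(hello: bytes) -> bytes:
    # Toy layout: one length byte followed by the session ID itself.
    if not hello:
        raise ValueError("empty message")
    n = hello[0]
    # Reject lengths over the 32-byte TLS maximum, and lengths that
    # would run past the end of the received buffer.
    if n > 32 or n > len(hello) - 1:
        raise ValueError("bad session ID length: %d" % n)
    return hello[1:1 + n]

print(parse_session_id(b"\x04ABCD"))   # b'ABCD'
# A length byte of 200 with only a few payload bytes is rejected here,
# instead of driving a copy into a fixed-size buffer as in the C bug.
```

Skipping either comparison is what lets a malicious server turn a malformed ServerHello into an out-of-bounds write.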

Comments (13 posted)

Patch All The Things! New "Cupid" Technique Exploits Heartbleed Bug (PCMagazine)

Cupid is an exploit for the Heartbleed bug in OpenSSL that can target both servers and endpoints running Linux and Android, reports PCMagazine. "Luis Grangeia, a researcher at SysValue, created a proof-of-concept code library that he calls "Cupid." Cupid consists of two patches to existing Linux code libraries. One allows an "evil server" to exploit Heartbleed on vulnerable Linux and Android clients, while the other allows an "evil client" to attack Linux servers. Grangeia has made the source code freely available, in hopes that other researchers will join in to learn more about just what kind of attacks are possible."

Comments (6 posted)

New vulnerabilities

chkrootkit: privilege escalation

Package(s): chkrootkit  CVE #(s): CVE-2014-0476
Created: June 4, 2014  Updated: June 13, 2014
Description: From the Debian advisory:

Thomas Stangner discovered a vulnerability in chkrootkit, a rootkit detector, which may allow local attackers to gain root access when /tmp is mounted without the noexec option.

Alerts:
Mageia MGASA-2014-0249 chkrootkit 2014-06-04
Ubuntu USN-2230-1 chkrootkit 2014-06-04
Debian DSA-2945-1 chkrootkit 2014-06-03
Fedora FEDORA-2014-7090 chkrootkit 2014-06-13
Fedora FEDORA-2014-7071 chkrootkit 2014-06-13
Mandriva MDVSA-2014:122 chkrootkit 2014-06-11

Comments (none posted)

chromium-browser: multiple vulnerabilities

Package(s): chromium-browser  CVE #(s): CVE-2014-1743 CVE-2014-1744 CVE-2014-1745 CVE-2014-1746 CVE-2014-1747 CVE-2014-1748 CVE-2014-1749 CVE-2014-3152
Created: June 2, 2014  Updated: March 30, 2016
Description: From the CVE entries:

Use-after-free vulnerability in the StyleElement::removedFromDocument function in core/dom/StyleElement.cpp in Blink, as used in Google Chrome before 35.0.1916.114, allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via crafted JavaScript code that triggers tree mutation. (CVE-2014-1743)

Integer overflow in the AudioInputRendererHost::OnCreateStream function in content/browser/renderer_host/media/audio_input_renderer_host.cc in Google Chrome before 35.0.1916.114 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger a large shared-memory allocation. (CVE-2014-1744)

Use-after-free vulnerability in the SVG implementation in Blink, as used in Google Chrome before 35.0.1916.114, allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger removal of an SVGFontFaceElement object, related to core/svg/SVGFontFaceElement.cpp. (CVE-2014-1745)

The InMemoryUrlProtocol::Read function in media/filters/in_memory_url_protocol.cc in Google Chrome before 35.0.1916.114 relies on an insufficiently large integer data type, which allows remote attackers to cause a denial of service (out-of-bounds read) via vectors that trigger use of a large buffer. (CVE-2014-1746)

Cross-site scripting (XSS) vulnerability in the DocumentLoader::maybeCreateArchive function in core/loader/DocumentLoader.cpp in Blink, as used in Google Chrome before 35.0.1916.114, allows remote attackers to inject arbitrary web script or HTML via crafted MHTML content, aka "Universal XSS (UXSS)." (CVE-2014-1747)

The ScrollView::paint function in platform/scroll/ScrollView.cpp in Blink, as used in Google Chrome before 35.0.1916.114, allows remote attackers to spoof the UI by extending scrollbar painting into the parent frame. (CVE-2014-1748)

Multiple unspecified vulnerabilities in Google Chrome before 35.0.1916.114 allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2014-1749)

Integer underflow in the LCodeGen::PrepareKeyedOperand function in arm/lithium-codegen-arm.cc in Google V8 before 3.25.28.16, as used in Google Chrome before 35.0.1916.114, allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger a negative key value. (CVE-2014-3152)

Alerts:
Gentoo 201612-41 webkit-gtk 2016-12-13
openSUSE openSUSE-SU-2016:0915-1 webkitgtk 2016-03-30
Fedora FEDORA-2016-9ec1850fff webkitgtk 2016-03-29
Mageia MGASA-2016-0120 webkit 2016-03-25
Fedora FEDORA-2016-5d6d75dbea webkitgtk 2016-03-22
Ubuntu USN-2937-1 webkitgtk 2016-03-21
Fedora FEDORA-2016-1a7f7ffb58 webkitgtk3 2016-03-21
Fedora FEDORA-2015-6845 v8 2015-05-08
Fedora FEDORA-2015-6908 v8 2015-05-08
Mageia MGASA-2014-0413 chromium-browser-stable 2014-10-09
Gentoo 201408-16 chromium 2014-08-30
Ubuntu USN-2298-1 oxide-qt 2014-07-23
Debian DSA-2939-1 chromium-browser 2014-05-31
openSUSE openSUSE-SU-2014:0783-1 chromium 2014-06-12

Comments (none posted)

emacs: multiple vulnerabilities

Package(s):emacs CVE #(s):CVE-2014-3421 CVE-2014-3422 CVE-2014-3423 CVE-2014-3424
Created:May 30, 2014 Updated:March 29, 2015
Description:

From the Red Hat bug report:

Steve Kemp discovered multiple temporary file handling issues in Emacs. A local attacker could use these flaws to perform symbolic link attacks against users running Emacs. Original report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747100

CVE-2014-3421 was assigned to the issue in lisp/gnus/gnus-fun.el Upstream fix: http://lists.gnu.org/archive/html/emacs-diffs/2014-05/msg00055.html

CVE-2014-3422 was assigned to the issue in lisp/emacs-lisp/find-gc.el Upstream fix: http://lists.gnu.org/archive/html/emacs-diffs/2014-05/msg00056.html

CVE-2014-3423 was assigned to the issue in lisp/net/browse-url.el (this one does not currently have a fix) Upstream note: http://lists.gnu.org/archive/html/emacs-diffs/2014-05/msg00057.html

CVE-2014-3424 was assigned to the issue in lisp/net/tramp.el Upstream fix: http://lists.gnu.org/archive/html/emacs-diffs/2014-05/msg00060.html

Alerts:
Mandriva MDVSA-2015:117 emacs 2015-03-29
openSUSE openSUSE-SU-2014:1460-1 emacs 2014-11-20
Mageia MGASA-2014-0250 emacs 2014-06-06
Fedora FEDORA-2014-6554 emacs 2014-05-29
Mandriva MDVSA-2014:118 emacs 2014-06-10

Comments (none posted)

gnutls: code execution

Package(s):gnutls26 CVE #(s):CVE-2014-3466
Created:June 2, 2014 Updated:July 24, 2014
Description: From the Debian advisory:

Joonas Kuorilehto discovered that GNU TLS performed insufficient validation of session IDs during TLS/SSL handshakes. A malicious server could use this to execute arbitrary code or perform denial or service.
(The advisory's "denial or service" is presumably a typo for "denial of service.")

This Red Hat bug report has some more information.

Alerts:
Mandriva MDVSA-2015:072 gnutls 2015-03-27
Oracle ELSA-2014-0684 gnutls 2014-07-23
SUSE SUSE-SU-2014:0800-1 GnuTLS 2014-06-16
Fedora FEDORA-2014-6963 mingw-gnutls 2014-06-10
Fedora FEDORA-2014-6953 mingw-gnutls 2014-06-10
Fedora FEDORA-2014-6881 gnutls 2014-06-10
Slackware SSA:2014-156-01 gnutls 2014-06-05
openSUSE openSUSE-SU-2014:0767-1 gnutls 2014-06-06
openSUSE openSUSE-SU-2014:0763-1 gnutls 2014-06-06
SUSE SUSE-SU-2014:0758-1 gnutls 2014-06-05
Scientific Linux SLSA-2014:0595-1 gnutls 2014-06-03
Scientific Linux SLSA-2014:0594-1 gnutls 2014-06-03
Oracle ELSA-2014-0594 gnutls 2014-06-03
Oracle ELSA-2014-0595 gnutls 2014-06-03
Fedora FEDORA-2014-6891 gnutls 2014-06-04
CentOS CESA-2014:0594 gnutls 2014-06-04
CentOS CESA-2014:0595 gnutls 2014-06-04
Red Hat RHSA-2014:0595-01 gnutls 2014-06-03
Red Hat RHSA-2014:0594-01 gnutls 2014-06-03
Mageia MGASA-2014-0248 gnutls 2014-06-02
Ubuntu USN-2229-1 gnutls26 2014-06-02
Debian DSA-2944-1 gnutls26 2014-06-01
Mandriva MDVSA-2014:109 gnutls 2014-06-09
Mandriva MDVSA-2014:108 gnutls 2014-06-09
SUSE SUSE-SU-2014:0788-2 GnuTLS 2014-06-13
Gentoo 201406-09 gnutls 2014-06-13
SUSE SUSE-SU-2014:0758-2 GnuTLS 2014-06-13
SUSE SUSE-SU-2014:0788-1 GnuTLS 2014-06-13
Red Hat RHSA-2014:0684-01 gnutls 2014-06-10

Comments (none posted)

gnutls: NULL pointer dereference flaw

Package(s):gnutls CVE #(s):CVE-2014-3465
Created:June 3, 2014 Updated:July 24, 2014
Description: From the Mageia advisory:

A NULL pointer dereference flaw was discovered in GnuTLS's gnutls_x509_dn_oid_name(). The function, when called with the GNUTLS_X509_DN_OID_RETURN_OID flag, should not return NULL to its caller. However, it could previously return NULL when parsed X.509 certificates included specific OIDs.

Alerts:
Mandriva MDVSA-2015:072 gnutls 2015-03-27
Oracle ELSA-2014-0684 gnutls 2014-07-23
Slackware SSA:2014-156-01 gnutls 2014-06-05
openSUSE openSUSE-SU-2014:0767-1 gnutls 2014-06-06
openSUSE openSUSE-SU-2014:0763-1 gnutls 2014-06-06
Mageia MGASA-2014-0248 gnutls 2014-06-02
Mandriva MDVSA-2014:108 gnutls 2014-06-09
Gentoo 201406-09 gnutls 2014-06-13
Red Hat RHSA-2014:0684-01 gnutls 2014-06-10

Comments (none posted)

java: insecure random numbers

Package(s):IBM Java 6 CVE #(s):CVE-2014-0878
Created:May 30, 2014 Updated:June 4, 2014
Description: From the Novell bug entry:

The IBMSecureRandom component in the IBMJCE and IBMSecureRandom cryptographic providers in IBM SDK Java Technology Edition 5.0 before Service Refresh 16 FP6, 6 before Service Refresh 16, 6.0.1 before Service Refresh 8, 7 before Service Refresh 7, and 7R1 before Service Refresh 1 makes it easier for context-dependent attackers to defeat cryptographic protection mechanisms by predicting the random number generator's output.

Alerts:
SUSE SUSE-SU-2014:0733-2 IBM Java 7 2014-06-02
SUSE SUSE-SU-2014:0728-3 IBM Java 6 2014-06-03
SUSE SUSE-SU-2014:0733-1 IBM Java 7 2014-05-30
SUSE SUSE-SU-2014:0728-2 IBM Java 6 2014-05-30

Comments (none posted)

libarchive: multiple vulnerabilities

Package(s):libarchive CVE #(s):CVE-2010-4666 CVE-2011-1779
Created:June 2, 2014 Updated:June 4, 2014
Description: From the CVE entries:

Buffer overflow in libarchive 3.0 pre-release code allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via a crafted CAB file, which is not properly handled during the reading of Huffman code data within LZX compressed data. (CVE-2010-4666)

Multiple use-after-free vulnerabilities in libarchive 2.8.4 and 2.8.5 allow remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via a crafted (1) TAR archive or (2) ISO9660 image. (CVE-2011-1779)

Alerts:
Gentoo 201406-02 libarchive 2014-06-01

Comments (none posted)

libtasn1: multiple vulnerabilities

Package(s):libtasn1 CVE #(s):CVE-2014-3467 CVE-2014-3468 CVE-2014-3469
Created:June 3, 2014 Updated:March 29, 2015
Description: From the Mageia advisory:

Multiple buffer boundary check issues were discovered in libtasn1 library, causing it to read beyond the boundary of an allocated buffer. An untrusted ASN.1 input could cause an application using the library to crash (CVE-2014-3467).

It was discovered that libtasn1 library function asn1_get_bit_der() could incorrectly report negative bit length of the value read from ASN.1 input. This could possibly lead to an out of bounds access in an application using libtasn1, for example in case if application tried to terminate read value with NUL byte (CVE-2014-3468).

A NULL pointer dereference flaw was found in libtasn1's asn1_read_value_type() / asn1_read_value() function. If an application called the function with a NULL value for an ivalue argument to determine the amount of memory needed to store data to be read from the ASN.1 input, libtasn1 could incorrectly attempt to dereference the NULL pointer, causing an application using the library to crash (CVE-2014-3469).

Alerts:
Mandriva MDVSA-2015:116 libtasn1 2015-03-29
Debian DSA-3056-1 libtasn1-3 2014-10-26
Gentoo 201408-09 libtasn1 2014-08-29
SUSE SUSE-SU-2014:0931-1 libtasn1 2014-07-24
Oracle ELSA-2014-0687 libtasn1 2014-07-23
Ubuntu USN-2294-1 libtasn1-3, libtasn1-6 2014-07-22
SUSE SUSE-SU-2014:0800-1 GnuTLS 2014-06-16
Red Hat RHSA-2014:0687-01 libtasn1 2014-06-10
Fedora FEDORA-2014-6919 libtasn1 2014-06-10
Mandriva MDVSA-2014:107 libtasn1 2014-06-09
Slackware SSA:2014-156-02 libtasn1 2014-06-05
Slackware SSA:2014-156-01 gnutls 2014-06-05
SUSE SUSE-SU-2014:0758-1 gnutls 2014-06-05
Scientific Linux SLSA-2014:0596-1 libtasn1 2014-06-03
Scientific Linux SLSA-2014:0594-1 gnutls 2014-06-03
Oracle ELSA-2014-0596 libtasn1 2014-06-03
Oracle ELSA-2014-0594 gnutls 2014-06-03
Fedora FEDORA-2014-6895 libtasn1 2014-06-04
CentOS CESA-2014:0596 libtasn1 2014-06-04
CentOS CESA-2014:0594 gnutls 2014-06-04
Red Hat RHSA-2014:0596-01 libtasn1 2014-06-03
Red Hat RHSA-2014:0594-01 gnutls 2014-06-03
Mageia MGASA-2014-0247 libtasn1 2014-06-02
SUSE SUSE-SU-2014:0788-2 GnuTLS 2014-06-13
SUSE SUSE-SU-2014:0758-2 GnuTLS 2014-06-13
SUSE SUSE-SU-2014:0788-1 GnuTLS 2014-06-13

Comments (none posted)

moodle: information leak

Package(s):moodle CVE #(s):CVE-2014-0217
Created:May 30, 2014 Updated:June 4, 2014
Description:

From the Moodle security alert:

Description: Access to files linked on HTML blocks on the My home page was not being checked in the correct context allowing access to unauthenticated users.

Issue summary: Files linked in HTML blocks on My home are available to non authenticated users

Alerts:
Fedora FEDORA-2014-10802 moodle 2014-09-25
Fedora FEDORA-2014-6585 moodle 2014-05-29
Fedora FEDORA-2014-6577 moodle 2014-05-29

Comments (none posted)

openstack-foreman-installer: insecure defaults

Package(s):openstack-foreman-installer CVE #(s):CVE-2013-6470
Created:May 30, 2014 Updated:June 4, 2014
Description:

From the Red Hat advisory:

It was discovered that the Qpid configuration created by openstack-foreman-installer did not have authentication enabled when run with default settings in standalone mode. An attacker able to establish a TCP connection to Qpid could access any OpenStack back end using Qpid (for example, nova) without any authentication.

Alerts:
Red Hat RHSA-2014:0517-01 openstack-foreman-installer 2014-05-29

Comments (none posted)

openstack-heat-templates: multiple vulnerabilities

Package(s):openstack-heat-templates CVE #(s):CVE-2014-0040 CVE-2014-0041 CVE-2014-0042
Created:May 30, 2014 Updated:June 4, 2014
Description:

From the Red Hat advisory:

It was discovered that certain heat templates used HTTP to insecurely download packages and signing keys via Yum. An attacker could use this flaw to conduct man-in-the-middle attacks to prevent essential security updates from being installed on the system. (CVE-2014-0040)

It was found that certain heat templates disabled SSL protection for various Yum repositories (sslverify=false). An attacker could use this flaw to conduct man-in-the-middle attacks to prevent essential security updates from being installed on the system. (CVE-2014-0041)

It was discovered that certain heat templates disabled GPG signature checking of packages via Yum (gpgcheck=0). An attacker could use this flaw to conduct man-in-the-middle attacks to install arbitrary packages on the system. (CVE-2014-0042)
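
For contrast, a Yum repository stanza with the protections the advisory describes would look something like this (the repository name, URL, and key path are placeholders, not taken from the affected templates):

```ini
[example-updates]
name=Example updates repository (placeholder)
# HTTPS, so a man-in-the-middle cannot substitute packages or metadata
baseurl=https://repo.example.com/updates/
# verify the server certificate (the flawed templates set sslverify=false)
sslverify=1
# require GPG signatures on packages (the flawed templates set gpgcheck=0)
gpgcheck=1
gpgkey=https://repo.example.com/RPM-GPG-KEY-example
```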

Alerts:
Red Hat RHSA-2014:0579-01 openstack-heat-templates 2014-05-29

Comments (none posted)

openstack-neutron: privilege escalation

Package(s):openstack-neutron CVE #(s):CVE-2013-6433
Created:May 30, 2014 Updated:October 1, 2014
Description:

From the Red Hat advisory:

It was discovered that the default sudo configuration provided in OpenStack Networking, which is specific to the openstack-neutron package shipped by Red Hat, did not correctly specify a configuration file for rootwrap, potentially allowing an unauthenticated user to escalate their privileges.

Alerts:
Red Hat RHSA-2014:1339-01 openstack-neutron 2014-09-30
Ubuntu USN-2255-1 neutron 2014-06-25
Red Hat RHSA-2014:0516-01 openstack-neutron 2014-05-29

Comments (none posted)

openstack-nova: unintended file access

Package(s):openstack-nova CVE #(s):CVE-2014-0134
Created:May 30, 2014 Updated:June 4, 2014
Description:

From the Red Hat advisory:

It was found that overwriting the disk inside of an instance with a malicious image, and then switching the instance to rescue mode, could potentially allow an authenticated user to access arbitrary files on the compute host depending on the file permissions and SELinux constraints of those files. Only setups that used libvirt to spawn instances and which had the use of cow images disabled ("use_cow_images = False" in nova configuration) were affected.

Alerts:
Ubuntu USN-2247-1 nova 2014-06-17
Red Hat RHSA-2014:0578-01 openstack-nova 2014-05-29

Comments (none posted)

php5: denial of service

Package(s):php5 CVE #(s):CVE-2014-0237 CVE-2014-0238
Created:June 2, 2014 Updated:July 7, 2014
Description: From the CVE entries:

The cdf_unpack_summary_info function in cdf.c in the Fileinfo component in PHP before 5.4.29 and 5.5.x before 5.5.13 allows remote attackers to cause a denial of service (performance degradation) by triggering many file_printf calls. (CVE-2014-0237)

The cdf_read_property_info function in cdf.c in the Fileinfo component in PHP before 5.4.29 and 5.5.x before 5.5.13 allows remote attackers to cause a denial of service (infinite loop or out-of-bounds memory access) via a vector that (1) has zero length or (2) is too long. (CVE-2014-0238)

Alerts:
Scientific Linux SLSA-2015:2155-7 file 2015-12-21
Oracle ELSA-2015-2155 file 2015-11-23
Red Hat RHSA-2015:2155-07 file 2015-11-19
Mandriva MDVSA-2015:080 php 2015-03-28
Debian-LTS DLA-145-1 php5 2015-01-31
Scientific Linux SLSA-2014:1606-2 file 2014-11-03
Red Hat RHSA-2014:1766-01 php55-php 2014-10-30
Red Hat RHSA-2014:1765-01 php54-php 2014-10-30
Red Hat RHSA-2014:1606-02 file 2014-10-14
Debian DSA-3021-2 file 2014-09-10
Debian DSA-3021-1 file 2014-09-09
Gentoo 201408-11 php 2014-08-29
Oracle ELSA-2014-1606 file 2014-10-16
Scientific Linux SLSA-2014:1012-1 php53 and php 2014-08-06
CentOS CESA-2014:1013 php 2014-08-06
CentOS CESA-2014:1012 php53 2014-08-06
Oracle ELSA-2014-1013 php 2014-08-06
Oracle ELSA-2014-1012 php53 2014-08-06
Oracle ELSA-2014-1012 php53 2014-08-06
CentOS CESA-2014:1012 php53 2014-08-06
Red Hat RHSA-2014:1012-01 php53 2014-08-06
Fedora FEDORA-2014-7992 file 2014-07-05
SUSE SUSE-SU-2014:0869-1 php53 2014-07-04
Red Hat RHSA-2014:1013-01 php 2014-08-06
Ubuntu USN-2254-2 php5 2014-06-25
Ubuntu USN-2254-1 php5 2014-06-23
Fedora FEDORA-2014-6904 php-phpunit-PHPUnit-MockObject 2014-06-17
Fedora FEDORA-2014-6901 php-phpunit-PHPUnit-MockObject 2014-06-17
Fedora FEDORA-2014-6904 php-doctrine-orm 2014-06-17
Fedora FEDORA-2014-6901 php-doctrine-orm 2014-06-17
Fedora FEDORA-2014-6904 php 2014-06-17
Fedora FEDORA-2014-6901 php 2014-06-17
Slackware SSA:2014-160-01 php 2014-06-09
Mandriva MDVSA-2014:115 php 2014-06-10
Mageia MGASA-2014-0258 php 2014-06-06
Mageia MGASA-2014-0252 file 2014-06-06
Debian DSA-2943-1 php5 2014-06-01
Mandriva MDVSA-2014:116 file 2014-06-10
openSUSE openSUSE-SU-2014:0786-1 php5 2014-06-12
openSUSE openSUSE-SU-2014:0784-1 php5 2014-06-12

Comments (none posted)

policycoreutils: privilege escalation

Package(s):policycoreutils CVE #(s):CVE-2014-3215
Created:May 30, 2014 Updated:March 29, 2015
Description:

From the CVE entry:

seunshare in policycoreutils 2.2.5 is owned by root with 4755 permissions, and executes programs in a way that changes the relationship between the setuid system call and the getresuid saved set-user-ID value, which makes it easier for local users to gain privileges by leveraging a program that mistakenly expected that it could permanently drop privileges.

Alerts:
Oracle ELSA-2015-3064 kernel 3.8.13 2015-07-31
Oracle ELSA-2015-3064 kernel 3.8.13 2015-07-31
Oracle ELSA-2015-3035 kernel 2015-05-13
Oracle ELSA-2015-3035 kernel 2015-05-13
Oracle ELSA-2015-3036 kernel 2015-05-13
Oracle ELSA-2015-3036 kernel 2015-05-13
Oracle ELSA-2015-3034 Unbreakable Enterprise kernel 2015-04-23
Oracle ELSA-2015-3034 Unbreakable Enterprise kernel 2015-04-23
Oracle ELSA-2015-3033 Unbreakable Enterprise kernel 2015-04-23
Oracle ELSA-2015-3033 Unbreakable Enterprise kernel 2015-04-23
Oracle ELSA-2015-3032 Unbreakable Enterprise kernel 2015-04-23
Oracle ELSA-2015-3032 Unbreakable Enterprise kernel 2015-04-23
Scientific Linux SLSA-2015:0864-1 kernel 2015-04-21
Oracle ELSA-2015-0864 kernel 2015-04-21
CentOS CESA-2015:0864 kernel 2015-04-22
Red Hat RHSA-2015:0864-01 kernel 2015-04-21
Mandriva MDVSA-2015:156 libcap-ng 2015-03-29
Gentoo 201412-44 policycoreutils 2014-12-26
Mageia MGASA-2014-0251 libcap-ng 2014-06-06
openSUSE openSUSE-SU-2014:0749-1 libcap-ng 2014-06-03
openSUSE openSUSE-SU-2014:0736-1 policycoreutils 2014-05-30
Mandriva MDVSA-2014:117 libcap-ng 2014-06-10

Comments (none posted)

smb4k: credential cache leak

Package(s):smb4k CVE #(s):CVE-2014-2581
Created:June 3, 2014 Updated:June 23, 2014
Description: From the Smb4K 1.1.1 release notes:

Fixed potential security issue reported by Heiner Markert. Do not allow the cruid option to be entered via the "Additional options" line edit. Also, implement a check in Smb4KMountJob::createMountAction() that removes the cruid option from the custom options returned by Smb4KSettings::customCIFSOptions().

Alerts:
Mageia MGASA-2014-0271 smb4k 2014-06-20
Fedora FEDORA-2014-6255 smb4k 2014-06-02
Fedora FEDORA-2014-6258 smb4k 2014-06-02

Comments (none posted)

typo3-src: multiple vulnerabilities

Package(s):typo3-src CVE #(s):
Created:June 2, 2014 Updated:February 23, 2015
Description: From the Typo3 advisory:

It has been discovered that TYPO3 CMS is vulnerable to Cross-Site Scripting, Insecure Unserialize, Improper Session Invalidation, Authentication Bypass, Information Disclosure and Host Spoofing.

Failing to properly validate the HTTP host header, TYPO3 CMS is susceptible to host spoofing. TYPO3 uses the HTTP host header to generate absolute URLs in several places like 404 handling, http(s) enforcement, password reset links and many more. Since the host header itself is provided by the client it can be forged to any value, even in a name-based virtual hosts environment.

Alerts:
Debian DSA-2942-1 typo3-src 2014-06-01

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.15-rc8, which was released on June 1. At that time, Linus Torvalds also opened the merge window for 3.16. He is trying to avoid having the merge window open during an upcoming family vacation. "So let's try to see how well that works - the last weeks of the release tends to be me just waiting around to make sure nothing bad is happening, so doing this kind of overlapping development *should* work fine. Maybe it works so well that we'll end up doing it in the future even if there *isn't* some kind of scheduling conflict that makes me want to start the merge window before I'm 100% comfortable doing the release for the previous version." See our merge window article for more information about what has been merged so far.

Stable updates: The 3.14.5 and 3.10.41 stable kernels were released on May 31. There are no stable updates in the review process as of this writing.

Comments (none posted)

Kernel development news

3.16 merge window, part 1

By Jake Edge
June 4, 2014

The merge window for the 3.16 kernel might be showing us a glimpse of a future where kernel releases happen even more frequently than they do today. By opening the window for 3.16 before the final release of the 3.15 kernel, Linus Torvalds may have shaved a week off the time between the two kernels. The length of kernel development cycles has generally trended downward, but has leveled off between 60 and 70 days for recent releases. While Torvalds's reason for overlapping the development cycles of two kernels—a family vacation—may not recur anytime soon, he may find that some parallelism in kernel development suits his purposes moving forward.

So, unlike previous merge windows, Torvalds is juggling two branches for a week—or possibly longer if serious problems pop up in -rc8. There is the mainline (or "master") branch on his tree that is accumulating the—hopefully small—fixes that are going into 3.15. In addition, he is managing a "next" branch that is collecting all of the changes bound for 3.16 (i.e. the merge window changes). Once 3.15 is released, he will presumably merge next to master and keep on merging from there.

As he said in the -rc8 release announcement, this part of the development cycle is typically fairly boring for Torvalds and the rest of the kernel hackers. Normally, Torvalds is "just waiting around to make sure nothing bad is happening" for the last few weeks of each cycle. If this "experiment" works well, one—or even two—week overlaps between kernel cycles could become a regular occurrence. That could increase the already frenetic pace of kernel development substantially.

As of this writing, Torvalds has pulled 5348 non-merge changes for 3.16 (and 54 into the mainline after the v3.15-rc8 tag). Since we are in uncharted territory, it is a little hard to say for sure when the merge window will close, but one could guess that it will close before he leaves on vacation, so an -rc1 on or about June 15 seems just about right.

Changes visible to users include:

  • Xen on ARM systems now supports suspend and resume.
  • SMP support has been added for Marvell Armada 375 and 38x SoCs. SMP has been reworked for the Allwinner A31 SoC.
  • The Goldfish virtual platform now has 64-bit support.
  • Early debug serial consoles have been made generic, and support for early consoles on the pl011 serial port has been added.
  • KVM on s390 gained some optimizations, support for migration, and GDB support.
  • KVM has added initial little-endian support for POWER8. The project has also done MIPS user-space interface and virtualized timer work along with adding support for nested fully-virtualized Xen guests on x86 hosts.
  • ACPI video will now default to using native backlight drivers, rather than the ACPI backlight interface, "which should generally help systems with broken Win8 BIOSes", Rafael Wysocki said in the pull request.
  • New hardware support includes:
    • Systems and processors: Support for several ARM system-on-chips (SoCs) has been added via device tree bindings, including ST Microelectronics STiH407; Freescale i.MX6SX; Samsung EXYNOS 3250, 5260, 5410, 5420, and 5800; and LSI Axxia AXM55xx.
    • Audio: Behringer BCD2000 DJ controllers; NVIDIA Tegra HD Audio controllers; FireWire devices based on the Echo Digital Audio Fireworks board; FireWire devices based on BridgeCo DM1000/DM1100/DM1500 with BeBoB firmware; SoC Audio for Freescale i.MX CPUs; TI STA350 speaker amplifiers; Realtek ALC5651 codecs; Analog Devices ADAU1361 and ADAU1761 codecs; Analog Devices ADAU1381 and ADAU1781 codecs; Cirrus Logic CS42L56 low-power stereo codecs; Intel Baytrail with MAX98090 codecs; Realtek ALC5677 codecs; Google Snow boards.
    • Sensors: AS3935 Franklin lightning sensors; Asahi Kasei AK8963 magnetometers; Invensense MPU6500 gyroscope/accelerometers; Freescale MPL115A2 pressure sensors; Melexis MLX90614 contact-less infrared sensors; Freescale MMA8452Q accelerometers; Nuvoton NCT6683D hardware-monitoring chips.
    • Miscellaneous: SSI (Synchronous Serial Interface, aka McSAAB) protocol support; OMAP3 SSI; Nokia N900 modems; Renesas R-Car PCIe controllers; Maxim MAX77836 Micro-USB interface controllers (MUIC); Analog Devices AD799x analog-to-digital converters (ADC) graduated from staging; Microchip Technology MCP3426, MCP3427, and MCP3428 ADCs; HID device rotation; MEN 16z135 High Speed UARTs; SC16IS7xx serial ports; Exynos 5 USB dual-role device (DRD) PHYs; Maxim MAX3421 HCDs (USB-over-SPI); Marvell Armada 375/38x ARM SOC xHCI host controllers; Qualcomm APQ8064 top-level multiplexing (TLMM) blocks; Qualcomm IPQ8064 TLMM blocks; Cadence SPI controllers; X-POWERS AXP20X PMIC regulators; LTC3589, LTC3589-1, and LTC3589-2 regulators; CPU idle has been added for Cirrus Logic CLPS711X SOCs; Synaptics RMI4 touchpads; HDMI support for OMAP5.

Changes visible to kernel developers include:

  • The m68k architecture now has early_printk() support for more platforms.
  • Lots of cleanup and refactoring has been done in the GPIO subsystem.
  • Much work has gone into the multiqueue block layer; "3.16 will be a feature complete and performant blk-mq", Jens Axboe said in his pull request. Multiqueue SCSI will be coming in 3.17. The Micron PCIe flash driver (mtip32xx) has been converted to multiqueue and those changes were merged as well.
  • Several block layer files have moved from the fs/ and mm/ directories to the block/ directory: bio.c, bio-integrity.c, bounce.c, and ioprio.c.
  • Samsung Exynos ARM SoCs now support multi-cluster power management, which allows big.LITTLE CPU switching. There is also support for multi-platform kernels incorporating Exynos, though there is still some driver work to do.
  • CONFIG_USB_DEBUG has been removed and all USB drivers have been converted to use the dynamic debug interface.
  • The smp_mb__{before,after}_{atomic,clear}_{dec,inc,bit}() family of memory-barrier functions has been substantially reduced, to just two: smp_mb__{before,after}_atomic().

Next week's edition will pick up any merges made after this report. If there are any significant merges after that, we'll write those up for the following week as well.

Comments (2 posted)

Locking and pinning

By Jonathan Corbet
June 4, 2014
The kernel has long supported the concept of locking pages into physical memory; the mlock() system call is one way to accomplish that. But it turns out that there is more than one way to fix memory in place, and some of those ways have to behave differently than others. The result is confusion with resource accounting and suboptimal memory-management behavior in current kernels. A patch set from Peter Zijlstra may soon straighten things out by formalizing a second type of page locking under the name "pinning."

One of the problems with memory locking is that it doesn't quite meet the needs of all users. A page that has been locked into memory with a call like mlock() is required to always be physically present in the system's RAM. At a superficial level, locked pages should thus never cause a page fault when accessed by an application. But there is nothing that requires a locked page to always be present in the same place; the kernel is free to move a locked page if the need arises. Migrating a page will cause a soft page fault (one that is resolved without any I/O) the next time an application tries to access that page. Most of the time, that is not a problem, but developers of hard real-time applications go far out of their way to avoid even the small amount of latency caused by a soft fault. These developers would like a firmer form of locking that is guaranteed to never cause page faults. The kernel does not currently provide that level of memory locking.

Locking also fails to meet the needs of various in-kernel users. In particular, kernel code that uses a range of memory as a DMA buffer needs to know that said memory will not be moved. As a result, the locking mechanism has never been used for these pages; instead, they are fixed in place by incrementing their reference counts or through a call to get_user_pages(). Such pages are effectively fixed in place, though there is no way for the kernel to know that they may be nailed down for a long time.

There is an interesting question that arises with these informally locked pages, though: how do they interact with the resource limit mechanism? The kernel allows an administrator to place an upper bound on the number of pages that a user is able to lock into memory. But, in some cases, the creation of a DMA buffer shared with user space is the result of an application's request. So users can, for all practical purposes, lock pages in memory via actions like the creation of remote DMA (RDMA) buffers; those pages are not currently counted against the limit on locked pages. This irritates administrators and developers who want the limit on locked pages to apply to all locked pages, not just some of them.

These "back door" locked pages also create another sort of problem. Normally, the memory management subsystem goes out of its way to separate pages that can be moved from those that are fixed in place. But, in this case, the pages are often allocated as normal anonymous memory — movable pages, in other words. Fixing them in place makes them unmovable. At that point, they will be in the way any time the memory management code tries to create contiguous ranges of memory by shifting pages around; they are in a place reserved for movable pages, but, being unmovable, they cannot be moved out of the way to make the creation of larger blocks possible.

Peter's patch set tries to address all of these problems — or, at least, to show how they could be addressed. It creates a formal notion of a "pinned" page, being a page that must remain in memory at its current physical location. Pinned pages are kept in a separate virtual memory area (VMA), which is marked with the VM_PINNED flag. Within the kernel, pages can be pinned with the new mm_mpin() function:

    int mm_mpin(unsigned long start, size_t len);

This function will pin the pages in memory, but only if the calling process's resource limits allow it. Kernel code that needs to access the pinned memory directly will still need to call get_user_pages(), of course; that call should be done after the call to mm_mpin().

One of the longer-term goals (not part of this patch set) is to make this memory-pinning functionality available to user space. A new mpin() system call would function like mlock(), but with the additional guarantee that the page would never be moved and, thus, would never generate page faults on access. Adding this functionality would mostly appear to be a matter of setting up the system call plumbing.

Another currently unimplemented feature is the migration of the pages to be pinned prior to nailing them down. The mm_mpin() call makes it clear that the pages involved will not be movable in the near future. It would thus make sense for the kernel to shift them out of a movable zone (if that is where they are currently located) and into one of the ranges of memory reserved for non-movable pages. That would prevent pinned pages from interfering with memory compaction and, thus, would facilitate the creation of larger blocks of free memory in those pages' original location.

Finally, putting pinned pages under their own VMA makes it relatively easy to keep track of them. So pinned pages can be counted against the locked-pages limit, eliminating that particular loophole.

Thus far, nobody seems to be overly bothered by this patch set. In previous discussions, there have been concerns that changing the accounting of locked pages could cause regressions on some systems where users are running close to their limits. There are few ways around that problem, though; one could continue to leave pinned pages out of the equation or, perhaps, create a separate limit for them. Neither option has a great deal of appeal, so it may just be that this change will go through as-is.

Comments (none posted)

Another attempt at power-aware scheduling

By Jonathan Corbet
June 4, 2014
Numerous attempts to improve the power efficiency of the kernel's CPU scheduler have been made in recent years. Most of these attempts have taken the form of adding heuristics to the scheduler ("group small tasks onto just a few CPUs," for example) that, it was hoped, would lead to more efficient use of the system's resources. These attempts have run aground for a number of reasons, including the fact that they tend to work for only a subset of the range of workloads and systems out there and their lack of integration with other CPU-related power management subsystems, including the CPU frequency and CPU idle governors. At the power-aware scheduling mini-summit in 2013, a call was made for a more organized approach to the problem. Half a year or so later, some of the results are starting to appear.

In particular, Morten Rasmussen's Energy cost model for energy-aware scheduling patch set was posted on May 23. This patch set is more of a demonstration-of-concept than something suitable for merging, but it does show the kind of thinking that is going into power-aware scheduling now. Heuristics have been replaced with an attempt to measure and calculate what the power cost of each scheduling decision will be.

The patch set starts by creating a new data structure to describe the available computing capacity of each CPU and the power cost of running at each capacity. If a given CPU can operate at three independent frequencies, this data structure will contain a three-element array describing the power cost of running at each frequency and the associated computing capacity that will be available. There are no specific units associated with either number; as long as they are consistent across the system, things will work.
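As a rough illustration, such a table might look like the following C sketch. The structure, field names, and numbers here are invented for illustration; they are not the identifiers used in Morten's patch set.

```c
#include <stddef.h>

/* One operating point: the compute capacity available at a given
 * frequency and the power cost of running there.  The units are
 * arbitrary, but must be consistent across the system. */
struct capacity_state {
	unsigned long cap;	/* compute capacity at this frequency */
	unsigned long power;	/* power cost at this frequency */
};

struct energy_profile {
	size_t nr_states;
	const struct capacity_state *states;	/* sorted by rising capacity */
};

/* A made-up three-state CPU: low, medium, and high frequency. */
static const struct capacity_state example_states[] = {
	{  300, 100 },
	{  600, 250 },
	{ 1000, 600 },
};
static const struct energy_profile example_cpu = { 3, example_states };

/* Return the power cost of the lowest operating point that can supply
 * the requested capacity; fall back to the highest state if none can. */
static unsigned long power_for_capacity(const struct energy_profile *ep,
					unsigned long wanted)
{
	size_t i;

	for (i = 0; i < ep->nr_states; i++)
		if (ep->states[i].cap >= wanted)
			return ep->states[i].power;
	return ep->states[ep->nr_states - 1].power;
}
```

Note the nonlinearity in the example table: the last 400 units of capacity cost more power than the first 600, which is exactly the kind of tradeoff an energy-aware placement decision needs to see.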

On a simple system, the cost/capacity array will be the same for each CPU. But things can quickly get more complicated than that. Asymmetric systems (big.LITTLE systems, for example) have both low-power and high-power CPUs offering a wide range of capacities. On larger systems, CPUs are organized into packages and NUMA nodes; the power cost of running two CPUs on the same package will be quite a bit less than the cost of keeping two packages powered up. So the cost/capacity array must be maintained at every level of the scheduling domain hierarchy (which matches the hardware topology), and scheduling decisions must take into account the associated cost at every level.

In the current patch set, this data structure is hard coded for a specific ARM processor. One of the many items on the "to do" list is to create this data structure automatically, either from data found in the firmware or from a device tree. Either way, some architecture-specific code will have to be written, but that was not a problem that needed to be solved to test out the concepts behind this patch set.

With this data structure in place, it is possible to create a simple function:

    int energy_diff_util(int cpu, int utilization);

The idea is straightforward enough: return the difference in power consumption that will result from adding a specific load (represented by utilization) to a given CPU. In the real world, though, there are a few difficulties to be dealt with. One of those is that the kernel does not really know how much CPU utilization comes with a specific task. So the patch set has to work with the measured load values, which are not quite the same thing; in particular, load does not take a process's priority into account.

Then there is the little problem that the scheduler does not actually know anything about what the CPU frequency governor is doing with any given CPU. The patch set adds a hack to make the current frequency of each CPU available, and there is an explicit assumption that the governor will make changes to match utilization changes on any given processor. The lack of integration between these subsystems was a major complaint at last year's mini-summit; it is clearly a problem that will need to be addressed as part of any viable power-aware scheduling patch. But, for the time being, it's another detail that can be glossed over while the main concepts are worked out.

There are a number of factors beyond pure CPU usage that can change how much power a given process needs. One of those is CPU wakeups: taking a processor out of a sleep state has an energy cost of its own. It is not possible to know how often a given process will awaken a sleeping CPU, but one can get an approximate measure by tracking how often the process itself wakes up from a sleeping state. If one assumes that some percentage of those wakeups will happen when the CPU itself was sleeping, one can make a guess at how many CPU wakeups will be added if a process is made to run on a given CPU.

So Morten's patch set adds simple process wakeup tracking to get a sense for whether a given process wakes up frequently or rarely. Then, when the time comes to consider running that process on a given CPU, a look at that CPU's current idle time will generate a guess for how many additional wakeups the process would create there. A CPU that is already busy most of the time will not sleep often, so it will suffer fewer wakeups than one that is mostly idle. Factor in the energy cost of waking the CPU (which will depend on just how deeply it is sleeping, another quantity that is currently hard for the scheduler to obtain) and an approximate energy cost associated with wakeups can be calculated.

With that structure in place, it's just a matter of performing the energy calculations for each possible destination when the time comes to pick a CPU for a given task. Iterating through all CPUs could get expensive, so the code tries to quickly narrow things down to one low-level group of CPUs; the lowest-cost CPU in that group is then chosen. In this patch set, find_idlest_cpu() is modified to do this search; other places where task placement decisions are made (load balancing, for example) have not been modified.
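The overall shape of the decision can be sketched as follows. This is a deliberately simplified model, with invented names and an invented cost formula, not code from the patch set: the estimated cost of placing a task on a CPU is the power cost of the added utilization plus the expected number of extra wakeups (the task's wakeup rate scaled by how often that CPU is idle) times the cost of one wakeup.

```c
#include <stddef.h>

struct cpu_stats {
	unsigned long busy_power;	/* power cost of the added utilization */
	unsigned long idle_pct;		/* how often this CPU sleeps, 0-100 */
	unsigned long wakeup_cost;	/* energy cost of one wakeup from idle */
};

/* A task that wakes up often lands more of those wakeups on a
 * mostly-idle CPU than on a mostly-busy one. */
static unsigned long energy_cost(const struct cpu_stats *c,
				 unsigned long task_wakeups)
{
	unsigned long cpu_wakeups = task_wakeups * c->idle_pct / 100;

	return c->busy_power + cpu_wakeups * c->wakeup_cost;
}

/* Pick the cheapest CPU in a group for the given task. */
static size_t cheapest_cpu(const struct cpu_stats *cpus, size_t n,
			   unsigned long task_wakeups)
{
	size_t i, best = 0;

	for (i = 1; i < n; i++)
		if (energy_cost(&cpus[i], task_wakeups) <
		    energy_cost(&cpus[best], task_wakeups))
			best = i;
	return best;
}

/* Two hypothetical CPUs: one mostly idle, one mostly busy. */
static const struct cpu_stats example_group[] = {
	{ 100, 90, 5 },		/* asleep 90% of the time */
	{ 120, 10, 5 },		/* asleep 10% of the time */
};
```

Even in this toy model, a frequently-waking task lands on the busier CPU despite its higher busy-time cost, while a task that rarely wakes goes to the cheaper idle one, which is the behavior the patch set is after.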

The patch set came with a small amount of benchmark information; it shows energy savings from 3% to 50%, depending on the workload, on a big.LITTLE system. As Morten notes, the savings on a fully symmetric system will be smaller. There is also an approximate quadrupling of the time taken to switch tasks; that cost is likely to be seen as unacceptable, but it should also be possible to reduce that cost considerably with some targeted optimization work.

Thus far, discussion of the patch set has been muted. Getting sufficient reviewer attention on power-aware scheduling patches has been a problem in the past. The tighter focus of this patch set should help to make review relatively easy, though, so, with luck, this work will be looked over in the near future. Then we'll have an idea of whether it represents a viable path forward or not.

Comments (1 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Alexei Starovoitov: split BPF out of core networking

Security-related

Miscellaneous

Sebastian Andrzej Siewior: perf to ctf converter

Page editor: Jake Edge

Distributions

Tizen and the Internet of Things

By Nathan Willis
June 4, 2014

TDC 2014

The third annual Tizen Developer Conference was held in San Francisco June 2–4. As in previous years, the program included a lot of updates about the direction the platform itself is heading and practical sessions for application developers. But this year also introduced a new motto for the project, "The OS of Everything." The slogan is an allusion to Tizen's goal of being a Linux platform for a wide variety of consumer devices, and there were a lot of devices on hand (cars, phones, TVs, watches, and cameras). It takes on a different tenor, however, when it is used in conjunction with the Internet of Things (IoT) concept—which several of the speakers addressed.

Like "the cloud," IoT can mean several different things to various groups of people, but some of those people make the argument that connected devices as they currently exist are a fragmented field of single-vendor products. Linux and open-source software clearly have an important role to play in parts of the IoT space, and the Tizen project seems to be positioning itself to be the go-to Linux distribution for manufacturers.

Mark Bryan of the home-automation company iControl first raised the IoT topic in his keynote address. As one would expect from a home-automation vendor, Bryan's perspective on IoT focused on connected appliances, lights, door locks, and similar household features. At present, he said, the marketplace for these products is a mishmash of single-vendor products like the Nest thermostat and isolated services, like those from house-alarm providers and some cable companies. Since there is evidently considerable interest in "smart homes" and all of the individual pieces (open networking standards, inexpensive processors, low-power chipsets) are available, he said, the question becomes: where is the friction that prevents this type of IoT from taking off?

[Mark Bryan at TDC 2014]

Bryan argued that the main point of friction is that all of the smart-home product vendors are still building their own operating system stack from scratch: from the kernel right up to the application layer. His company, he said, has seen this occur multiple times as they are contracted to develop smart-home software. The result is one-off products that are quickly orphaned. When each device offers a unique API, application developers tend to give up quickly, and there is rarely (if ever) any interoperability.

As an experiment, iControl recently decided to write a home-control app for Samsung's Tizen-powered Gear2 smartwatch. The app includes monitoring and management for a wide variety of home systems: illumination, alarms, door locks, appliances, and thermostat. The company completed the project in just two weeks, easily surpassing expectations. Subsequently, he said, he has come to think that a common Linux platform used by multiple vendors is what IoT will eventually adopt, and he thinks Tizen is the best choice.

Of course, home automation is not the only possible meaning of IoT. In a later breakout session, Joe Speed from the Allseen Alliance raised that point specifically. The home automation scenarios Bryan described typically revolve around relaying a status update or a command between a smartphone (or, in the Gear2 case, a smartwatch) and a connected appliance that has little or no built-in user interface. Such systems tend to use remote servers (and "the cloud") to communicate—and not just to smartphone apps. Instead, they also involve sending data back to power companies and other service providers.

But, he said, there is a second, completely different meaning of IoT that focuses on direct, peer-to-peer messaging between the devices themselves—or, as he put it, "proximal" communication. While HTTP tends to be the protocol of choice for relaying information from an appliance to a remote server, he said, it is unsuitable for proximal connections, especially between resource-constrained devices like sensors.

Speed's own background prior to joining Allseen was in automotive computing, which he pointed out was part of the IoT landscape even if it is rarely characterized as such: "a car is a thing," he pointed out. Connected cars can generate an average of 5GB of sensor data per hour on the road, he said, which is prohibitively large for an HTTP transport mechanism. Instead, the project he was on investigated the compact MQTT (for "MQ Telemetry Transport") protocol, which features a publish/subscribe model and a lightweight message format.

Most recently, MQTT has been used by Local Motors for its Tizen-powered "Rally Fighter" connected car. It results in about 93 times the throughput of HTTP (while still running over TCP/IP), he said, and just as importantly, it can be used in road sensors and other embedded devices with very little power. The automotive industry is eyeing it for "next-generation telematics," so that (for example) multiple cars on a stretch of road all using their antilock brakes would be recognized as an indicator of some dangerous condition (such as ice), a fact that could then be relayed to other approaching vehicles.
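The wire-format economy behind those numbers is easy to see by counting bytes. The sketch below just computes the size of a minimal MQTT 3.1.1 QoS-0 PUBLISH packet — one control byte, a base-128 variable-length "remaining length" field, a two-byte topic-length field, the topic, and the payload — with no per-message headers at all; an HTTP request carrying the same payload starts at a hundred bytes or more before its headers are done. (The example topic name is invented.)

```c
#include <stddef.h>

/* Size in bytes of a minimal MQTT 3.1.1 QoS-0 PUBLISH packet for a
 * given topic length and payload length. */
static size_t mqtt_publish_size(size_t topic_len, size_t payload_len)
{
	/* Body: 2-byte topic-length field, topic, payload. */
	size_t remaining = 2 + topic_len + payload_len;
	size_t rl_bytes = 1;
	size_t r = remaining;

	/* The "remaining length" field carries 7 bits per byte. */
	while (r > 127) {
		r >>= 7;
		rl_bytes++;
	}
	/* One fixed-header control byte, then the varint, then the body. */
	return 1 + rl_bytes + remaining;
}
```

Publishing a two-byte reading to a hypothetical topic like "car/1/speed" (11 characters) costs 17 bytes on the wire, total.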

Speed showed a demonstration of the Rally Fighter's remote connectivity by using his phone to activate the headlights, blinkers, and windshield wipers of a car sitting at a different conference venue in Florida. A coworker provided a live video feed of the car responding to the phone's commands.

There were several other talks over the course of the conference that dealt with IoT and Tizen but, like Speed's and Bryan's, they generally focused on what would be possible further down the line and offered standalone engineering samples and experiments as demonstrations. There is a compelling-sounding case to be made for many of the IoT scenarios—after all, most of them start with automating some everyday process or problem (such as noticing that the refrigerator needs a new water filter).

But Tizen is just beginning to branch out into the IoT space; there is still quite a bit of work to be done persuading device makers to adopt Tizen as their base Linux distribution. In the event's first keynote address, Intel's Imad Sousou set out a rough roadmap for the project. At present, he said, the project is supporting three classes of device: automotive, smartphones, and wearable devices (i.e., smartwatches). In the next few months, three more will be added: televisions, cameras, and home appliances. Samsung's J.D. Choi also addressed IoT in his keynote talk, announcing that Tizen would be participating in a new "Open Smart Home" project and a related W3C group to define HTML5 APIs for connected appliances.

That represents an ambitious goal for a project as young as Tizen is; TDC is in its third year, and this is the first event at which there were consumer devices on display that are already available for purchase: the Gear2 watch and NX300M camera (which is a Samsung product, even though when it was launched "camera" was not one of Tizen's official device profiles). Samsung announced more devices—the Samsung Z smartphone and an as-yet unnamed family of smart TVs—which was big news for the project.

But IoT may be a trickier device class to support; as Bryan and Speed illustrated, it encompasses a much wider array of hardware performing a less-well-defined set of operations. Perhaps IoT is still in the early stages of defining itself; if so, then Tizen could either be instrumental in helping industry players figure out the way forward, or it may have a hard time meeting the needs of such a diverse assortment of products. Time, as always, will tell.

[The author would like to thank the Tizen Association for travel assistance to attend TDC 2014.]

Comments (none posted)

Brief items

Distribution quote of the week

A more reasonable approach would be for the Council to permit the tree to contain at most 6 wrong lines at any given time. That way any developer wishing to add a new wrong line must previously fix an existing wrong line.
-- Ciaran McCreesh

Comments (1 posted)

Linux Mint 17 released

The Linux Mint project has released version 17 "Qiana" in Cinnamon and MATE editions. Qiana is a long-term support release, so it will be supported until 2019. See the new features pages for Cinnamon and MATE for some details. Here are the release notes for Cinnamon and MATE, where a few known issues are listed.

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Bergeron: Introducing the new Fedora Project Leader, and some parting thoughts.

In a lengthy message to the fedora-announce mailing list, outgoing Fedora Project Leader (FPL) Robyn Bergeron has described the role of the FPL and why turnover in that position (and other, similar leadership roles) is desirable. She also announced that the new FPL will be Matthew Miller: "Of course, Matthew is no newcomer to the Fedora Project, having been around since the *LITERAL DAWN OF FEDORA TIME* -- he was an early contributor to the Fedora Legacy project, and helped to organize early FUDCons in his area of the world, at Boston University. Since joining Red Hat in 2012, he's been responsible for the Cloud efforts in Fedora, and as the previous wrangler for that team, I was thrilled when he came on board and was willing and able to start driving forward some of the initiatives and wishlist items that team was working on. What started out small has since grown into a vision for the future, and I'm confident in Matthew's ability to lead the Fedora Project forward into its next 10 years of innovative thinking."

Full Story (comments: 1)

Miller: First Thoughts as Fedora Project Leader

Matthew Miller shares his thoughts on becoming the new Fedora Project Leader. "I’m proud to have been part of the Fedora community since the early days. I’m grateful to have been given the opportunity to work on Fedora as my full-time job for the past year and a half. And now, I’m excited to be stepping into a new place within the community as Fedora Project Leader. These are incredible times in computing and in free and open source software, and we have incredible things going on in Fedora to match — the next years are full of opportunity and growth for the whole project and community, and I’m thrilled to be in a position to help."

Comments (none posted)

A backpacking trip around SEA sparked this business idea (e27)

Webconverger is a Debian based web platform for kiosks and thin clients. e27 talks with Kai Hendry, chief programmer and founder of Webconverger.

What were the challenges in developing Webconverger?

In general, I had to be very focussed for long periods of time; that was probably the biggest challenge. I never had ambitions for Webconverger to be a business when I first started. I simply saw a problem that needed solving. It’s just that over time, people started asking me to do things with Webconverger. I offered it as a free, open source solution. In the end, I had people come to me asking for customisation. When that happened, I automated it and started a business based around Webconverger.

Comments (none posted)

Page editor: Rebecca Sobol

Development

PGCon 2014: Clustering and VODKA

June 4, 2014

This article was contributed by Josh Berkus

The eighth annual PostgreSQL developer conference, known as PGCon, concluded on May 24th in Ottawa, Canada. This event has stretched into five days of meetings, talks, and discussions for 230 members of the PostgreSQL core community, which consists of both contributors and database administrators. PGCon serves to focus the whole PostgreSQL development community on deciding what's going to be in next year's PostgreSQL release as well as on showing off new features that contributors have developed. This year's conference included meetings of the main PostgreSQL team as well as for the Postgres-XC team, a keynote by Dr. Richard Hipp, and new code to put VODKA in your database.

[Developer meeting group photo]

In many ways, this year's conference was about hammering out the details of many of the new ideas introduced at last year's conference, where Postgres-XC 1.0, the new JSON storage format, and the new data change streaming replication method were all introduced. While some of these features have code in PostgreSQL 9.4 beta, all of them need further work and development, and that's what people were in Ottawa to discuss. They were also there to discuss satellites, code forks, and SQLite.

ESA data and Postgres-XC

PGCon week started out with meetings of the developers most concerned with clustering and horizontal scalability for PostgreSQL: the Postgres-XC meeting and the clustering summit. Both events were sponsored by NTT, the Japanese telecom.

Postgres-XC is a fork of PostgreSQL that is intended to support high-consistency transactional clustering for write scalability in order to support workloads that need a very high volume of small writes of valuable data. Examples of this workload include stock trading, airline bookings, and cell phone routing; it is the same use-case that is filled by Oracle RAC. While this was the working meeting of the Postgres-XC developers, two things made it interesting this year: a presentation by Krzysztof Nienartowicz of the European Space Agency (ESA), and the announcement of the Postgres-XL fork of Postgres-XC.

Nienartowicz presented on the Gaia sky survey satellite project that is soon to be deployed by the ESA. It will stay in the sun-Earth L2 Lagrangian point and use a special mirror arrangement projecting into digital receptors in order to take broad survey images of stars and other celestial objects at a higher resolution than has ever been done before. The ESA's plan is to scan the whole sky over a period of five years, section by section, recording every visible object an average of 80 times in order to record motion as well as position, and to categorize and identify over one billion objects.

The satellite will download gigabytes of data per day from its 938 million pixel camera, eventually yielding several hundred terabytes. This presents the ESA with a tough data-management problem. Right now, ESA is using PostgreSQL for mapping, metadata, and categorization because the Gaia team likes PostgreSQL's analytic capabilities. In particular, the team has designed machine-learning algorithms that use Java, which is run both outside the database (connecting via OpenJPA) and inside it using PL/Java. These algorithms help scientists by doing some automated classification of objects based on distance, behavior, motion, and luminosity. PostgreSQL also allows them to collaborate with the many astronomical centers across Europe by distributing complex data as PostgreSQL database backups.

However, the data will soon outgrow what's reasonably possible to manage with mainstream PostgreSQL. For that reason, the ESA is planning to build a large Postgres-XC cluster in order to be able to do analytics across a larger number of machines. Their team plans to contribute to the Postgres-XC project as well, so that project can meet ESA's needs for data scale.

Postgres-XC forked

The proposal which dominated the rest of the Postgres-XC meeting, however, was the presentation by Mason Sharp on his fork of Postgres-XC, called Postgres-XL. Previously, Sharp had used his clustering knowledge from the Stado project (formerly "ExtenDB", then "GridSQL") to create a proprietary fork of Postgres-XC called StormDB. That fork was the product of a startup launched in 2012, which operated for two years before being purchased by fellow PostgreSQL-based clustering startup Translattice. Sharp was then permitted by Translattice to open-source StormDB as "Postgres-XL" in April 2014.

Postgres-XL differs from Postgres-XC in several ways. First, it has a better logo. It is slightly more focused on the analytics use case than the transaction-processing use case, including pushing more work down to the individual nodes through changes in how query plans are optimized. More importantly, Sharp has been able to drop Postgres-XC's requirement to send all queries through a few "controller" nodes by allowing each cluster node to behave as its own controller. Postgres-XL has also added some "multi-tenant" features aimed at running it as a public service, so that user data is strictly segregated.

The biggest difference in the fork, though, is that Sharp chose to emphasize stability and eliminating bugs over adding new features. For the last couple of years, Postgres-XC has been very focused on adding as many core PostgreSQL features as possible and constantly merging new source code from the upstream project. Postgres-XL, in contrast, is still based on PostgreSQL 9.2 (the previous version), and has disabled several features, such as triggers and auto-degrade of transaction control, which had been a source of reliability issues for Postgres-XC.

As good forks do, this provoked a lot of discussion and re-evaluation among the Postgres-XC developers and users. After much discussion, the Postgres-XC team decided that they would emphasize stability and eliminating bugs in the next release. It remains to be seen whether the two projects will merge, though. For one thing, Translattice chose to open-source Postgres-XL under the Mozilla Public License rather than the PostgreSQL License.

Developer meeting

Another PGCon event is the annual PostgreSQL Developer meeting, which is where the project hackers coordinate and discuss strategy and projects for the next year. Among the highlights from this meeting were:

Simon Riggs discussed his work with the European Union's (EU) AXLE Project. AXLE stands for "Analytics on eXtremely Large European data". This EU-funded project is focusing on "operational business intelligence", meaning analytics of current business and government data in the EU. The EU government has chosen to do this as a mostly open-source effort, and has selected PostgreSQL as the primary database to be used for the project.

Various AXLE projects will be contributing features and tools for PostgreSQL over the next two years. Among their goals are: security and encryption suitable for medical data, better performance and support for very large tables, GPU and FPGA integration for query execution, and analytic SQL functions.

The developers also discussed making several changes to the current CommitFest process, which the PostgreSQL project uses to manage new patches for the database. First, there will be a new, Django-based application for patch management; PostgreSQL has chosen to "roll its own" rather than using Gerrit in order to achieve tighter integration with the postgresql.org mailing list archives. Second, there will be five CommitFests over nine months in the upcoming year instead of four over seven months, making the development part of the year longer and shortening the overly long beta period. The committers hope that this will take some time pressure off of the development cycle, as well as giving contributors more time to respond to user feedback from the previous year's release.

Finally, Peter Geoghegan suggested that the project allow pull requests instead of requiring patches for new feature submission. The other developers raised various problems with this idea, the largest of which were issues around use of rebase, merge, and squash merge. None of them were thought to be satisfactory for the level of history which PostgreSQL wants to maintain. Merge retains too much extraneous activity from continuously merging from upstream as well as from minor bug-fix commits, while rebase and squash merge can eliminate all development history on large features, preventing committers from evaluating which alternate approaches the submitter already tried. For now, PostgreSQL retains a patch-based submission process.

The developers then had a long discussion about making PostgreSQL scale to more cores and more RAM. Some of the various obstacles to this were discussed. For example, there are currently two locks in PostgreSQL's dedicated memory, the "buffer free list" lock and the "buffer mapping lock", that are highly contended; ways to make them less of a bottleneck were proposed. In one of the most promising, Andres Freund proposed eliminating the lightweight lock used by read-only transactions on the buffer. The developers also plan to use perf for additional profiling of the "clock-sweep" code that frees memory buffers in PostgreSQL's private cache.

Another big way to improve this, Freund proposed, is to use atomic operations in the CPU rather than spinlocks where reasonable for various operations. Developers discussed how to handle older platforms which don't support atomic operations; whether it makes more sense to auto-degrade them to spinlocks, or whether to de-support those platforms (such as ARM6 and PA-RISC) entirely. The next step is to assemble a chart of which atomic operations are supported on which platforms.

Other topics were discussed at the meeting, such as data access auditing and eliminating requirements to log in as the superuser. There was a long discussion about how to avoid some of the bugs that appeared in PostgreSQL 9.3, which has had more critical patch updates than any release in a decade. The project will also be considering whether it is reasonable to emulate the Linux Foundation model of having a couple of committers paid by a PostgreSQL non-profit to do review and maintenance work on PostgreSQL full-time.

SQLite and PostgreSQL

[Dr. Richard Hipp]

This year's keynote was delivered by Dr. Richard Hipp, the inventor of SQLite, which is a widely-used embedded SQL database. SQLite was created in 2000, and today this open-source SQL database is part of over 500,000 applications and devices, including the iPhone, Firefox, Dropbox, Adobe Lightroom, and the Android TV.

It might seem strange to have the founder of a different database system give the keynote to a PostgreSQL conference, but Hipp was invited because of his well-known respect for PostgreSQL, and because many users consider SQLite to be "embedded Postgres". He explained how, when he created SQLite, the syntax and semantics were originally based on PostgreSQL 6.5. He chose PostgreSQL because unlike other SQL databases at the time, it always returned correct results and didn't crash. Even today, "WWPD" for "What Would Postgres Do" is a mantra of the SQLite development team.

Also, both database systems share a love of SQL and are quite complementary. While PostgreSQL is a scalable server database, SQLite is a replacement for data file storage for applications. Hipp called it "a replacement for fopen()". Instead of a "pile of files", the database offers a clean and consistent data storage interface which is more resistant to corruption and more versatile than application-specific XML and binary files. Hipp went on to suggest that several existing programs, such as OpenOffice and Git, would be significantly improved by using SQLite instead of their current file format. To demonstrate this, he created a web page that takes the PostgreSQL Git history, converts it to SQLite, and then offers it for searches and analytics that are not possible with the native Git files.

The big disagreement between PostgreSQL and SQLite relates to data types. While PostgreSQL has a complex and strictly enforced type system, SQLite uses an undefined type for all data, which can store strings, numbers and other values, much like variables in languages like Perl, Python, and PHP. This difference sparked some discussion between Hipp and a few members of the audience after the talk.

Hipp went on to explain how, despite recent trends, SQL would endure and replace current non-relational database approaches. He cited the evidence of Google's recent return to SQL with BigQuery and SQL interfaces for Hadoop, and quoted Fred Brooks, Rob Pike, and Linus Torvalds in support of the idea of formal data structures. He also called the current "NoSQL" databases "postmodern databases" because they embrace an "absence of objective truth".

Indexing with VODKA

Of course, while the PostgreSQL project may love SQL, recently it has been seeing JSON on the side. The project's team of Russian advanced indexing experts, Teodor Sigaev, Oleg Bartunov, and Alexander Korotkov, presented their latest innovations to the other developers. These new ideas, which include a new indexing data structure and a new query syntax, center around PostgreSQL's new binary JSON data type, JSONB.

[Sigaev and Bartunov]

First, however, they also presented some of their benchmarking work using the JSONB type and indexes that will be released with version 9.4. For these tests, they loaded 1.2 million bookmarks from the old Delicious database in JSON form into a PostgreSQL 9.4 database, and into a MongoDB 2.6.0 database to make comparisons. Search times for a single key between MongoDB and PostgreSQL were similar: one second vs. 0.7 seconds. However, it took 17 times as long to load the data into MongoDB, and the resulting database was 50% larger.

Bartunov and Sigaev had added "GIN" indexes to PostgreSQL in 2006. GIN stands for "Generalized Inverted iNdex", and is similar in structure and function to the indexes used for searching in Apache Lucene and Elasticsearch. Their new index is designed for better searching of nested data structures, and is also based on a "to do" item from the original GIN submission message. They named the new indexing method "VODKA", a recursive acronym that stands for "VODKA Optimized Dendriform Keys Array". VODKA replaces some of the B-tree structures inside GIN indexes with a more generalized pointer arrangement based on SP-GiST, another index type they added, in PostgreSQL 9.2.

Most importantly, of course, it allows PostgreSQL users to type: CREATE INDEX ... USING VODKA.

While VODKA indexes will be useful for certain kinds of spatial queries, their primary use is expected to be searching JSONB data. To support this, the team has also developed a new matching syntax and set of operators for JSON which they call "jsquery", a name which will probably need to change to avoid confusion. Jsquery combines with VODKA indexes to support fast index searches for keys and values deep inside nested JSONB values stored in PostgreSQL tables. While PostgreSQL 9.4 will allow searching for nested keys and values inside JSONB (a back-port of jsquery is available for 9.4), it limits how complex those expressions can be while still using fast index searches; VODKA removes those limitations.

The jsquery syntax looks a lot like PostgreSQL's existing full text search syntax, which is unsurprising since both have the same inventors. For example:

    SELECT jsonb_col FROM table1 WHERE jsonb_col @@ 'a.b @> [1,2]';

That query asks "tell me if you have a key 'a' which contains a key 'b' which contains an array with at least the values (1,2)". It would return true for '{"a": {"b": [1,2,3]}}', but false for '{"a": {"e": [2,5]}}' or '{"a": {"b": [1,3]}}'.
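The semantics of that containment test can be sketched in plain Python. The matches() helper below is purely hypothetical (it is not part of jsquery or PostgreSQL); it only emulates the behavior just described, walking a path of keys and then checking array containment as the '@>' operator does:

```python
def matches(doc, path, values):
    """Follow `path` keys into the nested dict `doc`, then test that
    the final value is an array containing every element of `values`."""
    for key in path:
        if not isinstance(doc, dict) or key not in doc:
            return False
        doc = doc[key]
    return isinstance(doc, list) and all(v in doc for v in values)

print(matches({"a": {"b": [1, 2, 3]}}, ["a", "b"], [1, 2]))  # True
print(matches({"a": {"e": [2, 5]}},    ["a", "b"], [1, 2]))  # False
print(matches({"a": {"b": [1, 3]}},    ["a", "b"], [1, 2]))  # False
```

The point of VODKA is that PostgreSQL can answer this kind of question from an index rather than by scanning and testing every row this way.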

They concluded by discussing some of the roadblocks they are facing in current VODKA development, such as index cleanup and an inability to collect statistics on data distribution. This discussion continued at the unconference which took place on Saturday, at the end of PGCon. There was also some discussion about the proposed jsquery syntax, which some developers felt was too different from established JSON query technologies.

Other sessions and the unconference

Of course, there were many other sessions in addition to those mentioned above. There were several presentations about PostgreSQL's new JSON features, including one by the pgRest team from Taiwan, who showed off a complete Firebase/MongoDB replacement using PostgreSQL and V8 JavaScript. Other talks covered improving test coverage for PostgreSQL, why it's taken so long to implement UPSERT, analyzing core dumps, using PostgreSQL in the Russian Parliament, and how to program the new streaming replication.

The conference then wound up with the second annual PostgreSQL Unconference, which allowed the contributors and users to discuss some of the many ideas and issues which had come up during the developer meeting and the main conference. Participants talked about data warehousing and the extension system, and a Hitachi staff member discussed the design of its PostgreSQL-based appliance. While half the participants in the unconference were code contributors to PostgreSQL, the other half weren't. These users were excited to have the chance to directly influence the course of development, as explained by Shaun Thomas.

The biggest focus of the day, similar to last year, was discussions about "pluggable storage" for PostgreSQL in order to support column stores, append-only storage, and other non-mainstream options. This topic was introduced by the staff of CitusData, based on limitations they encountered with Foreign Data Wrappers and their cstore_fdw extension, which they released earlier this year. Unfortunately, progress has been slow on pluggable storage due to the many difficult changes required to the code.

If last year's PGCon was revolutionary, introducing many of the new developments which would change how the database is used, this year's conference was all about turning those changes into production code. Certainly anyone whose job centers around PostgreSQL should try to attend PGCon. If you couldn't make it, though, slides and audio will be online soon at the PGCon web site.

Comments (18 posted)

Brief items

Quotes of the week

The problem with the W3C standards is that they only discuss abstract resources, like "telephony." "Telephony" ... okay ... show me one?
— Casey Schaufler at Tizen Developer Conference 2014, explaining the shortcomings of a security policy written in W3C terminology.

The only reason I used OLinuXino is that it is so much more convenient when the manufacturer is in the same time zone as your apartment.
— Leon Anavi (who lives in Bulgaria) at Tizen Developer Conference 2014, explaining his choice of Allwinner SBC prototyping hardware.

Comments (none posted)

Buildroot 2014.05 available

Version 2014.05 of buildroot has been released. Notable changes in this update are the addition of Musl C library support, GCC 4.9.x and Glibc 2.19 support, and updates to Linaro external toolchains. The Python infrastructure also supports Python 3, there have been 77 new packages added, and Kconfig can be used to specify minimum kernel header versions. And on top of everything else, there is a new web site.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Mozilla to build WebRTC chat into Firefox

At the Mozilla "Future Releases" blog, Chad Weiner announces a new feature just added to the latest Firefox Nightly builds: WebRTC-powered audio/video chat functionality. The feature "aims to connect everyone with a WebRTC-enabled browser. And that’s all you will need. No plug-ins, no downloads. If you have a browser, a camera and a mic, you’ll be able to make audio and video calls to anyone else with an enabled browser. It will eventually work across all of your devices and operating systems. And we’ll be adding lots more features in the future as we roll it out to more users." Cross-browser multimedia chat has been demonstrated with WebRTC before, of course, but the functionality has not been built in. Firefox will evidently use OpenTok, a WebRTC application platform, in its implementation.

Comments (21 posted)

Page editor: Nathan Willis

Announcements

Brief items

Code.org to receive part of LF individual membership dues

The Linux Foundation is holding its biannual membership drive. This time around LF will donate $25 to Code.org for every individual member who joins during June. "The Linux Foundation and Code.org share common values that include increasing contributions to computer science through education and training. Code.org is also a non-profit organization and is dedicated to expanding participation in computer science by making it available in more schools and increasing participation by women and underrepresented students of color. The organization’s vision is that every student in every school should have the opportunity to learn computer programming. The Linux Foundation is offering an opportunity to individuals who want to extend their support for Linux to include increasing opportunities for people to learn programming of all types."

Full Story (comments: none)

Articles of interest

Free Software Supporter - Issue 74, May 2014

The Free Software Foundation newsletter for May covers the partnership between Mozilla and Adobe to support DRM, the International Day Against DRM, FSF certification of the Tehnoetic wireless USB adapter, an interview with Ciaran Gultnieks of F-Droid, an FSF job opening, the FSF statement on the Court of Appeals ruling in Oracle v. Google, and several other topics.

Full Story (comments: none)

FSFE Newsletter – June 2014

The Free Software Foundation Europe newsletter for June looks at privacy and Gmail, the security of Android apps, DRM, and much more.

Full Story (comments: none)

The unexpected outcome of the Open Source Seed Initiative's licensing debate (Opensource.com)

Over at Opensource.com, Jack Kloppenburg—one of the founders of the Open Source Seed Initiative (OSSI) that is trying to apply open source ideas to the genetic material in plant seeds—describes the switch from a licensing approach to that of a "pledge". "In February of 2014, OSSI made the hard but considered decision to abandon efforts to develop a legally defensible license and to shift to a pledge. This moves OSSI’s discourse and action from the legal field to the terrain of norms and ethics. We have found this shift to be stimulating, reinvigorating, and productive. The licensing approach was pulling us into a policing and bureaucratic orientation that was not congenial. Although our pledge is likely not legally binding, it is easily transmissible, it is viral, it is an uncompromising commitment to free exchange and use, and it is a very effective tool for outreach and education."

Comments (24 posted)

New Books

New from No Starch Press: "Penetration Testing"

No Starch Press has released "Penetration Testing" by Georgia Weidman.

Full Story (comments: none)

Calls for Presentations

CFP Deadlines: June 5, 2014 to August 4, 2014

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

CFP deadline: Event, event dates, location

June 6: Open Source Backup Conference, September 22-23, Köln, Germany
June 6: Ubuntu Online Summit 06-2014, June 10-12, online
June 20: Linux Security Summit 2014, August 18-19, Chicago, IL, USA
June 30: Open Source Monitoring Conference, November 18-20, Nuremberg, Germany
July 1: BalCCon 2k14, September 5-7, Novi Sad, Serbia
July 4: Free Society Conference and Nordic Summit, October 31-November 2, Gothenburg, Sweden
July 5: Jesień Linuksowa, November 7-9, Szczyrk, Poland
July 7: Debian Conference 2014, August 23-31, Portland, OR, USA
July 11: CloudOpen Europe, October 13-15, Düsseldorf, Germany
July 11: Embedded Linux Conference Europe, October 13-15, Düsseldorf, Germany
July 11: LinuxCon Europe, October 13-15, Düsseldorf, Germany
July 11: Linux Plumbers Conference, October 15-17, Düsseldorf, Germany
July 14: GNU Hackers' Meeting 2014, August 15-17, Munich, Germany
July 15: Firebird Conference 2014, October 24-25, Prague, Czech Republic
July 20: linux.conf.au 2015, January 12-16, Auckland, New Zealand
July 21: PostgreSQL Conference Europe 2014, October 21-24, Madrid, Spain
July 24: Qt Developer Days 2014 Europe, October 6-8, Berlin, Germany
July 24: Ohio LinuxFest 2014, October 24-26, Columbus, Ohio, USA
July 25: Lustre Administrators and Developers workshop, September 22-23, Reims, France
July 27: KVM Forum 2014, October 14-16, Düsseldorf, Germany
July 27: Seattle GNU/Linux Conference, October 24-25, Seattle, WA, USA
July 30: GStreamer Conference, October 16-17, Düsseldorf, Germany
July 31: Free Software and Open Source Symposium, October 23-24, Toronto, Canada
August 1: CentOS Dojo Cologne, August 4, Cologne, Germany

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

LinuxCon/CloudOpen North America Schedule Announced

The Linux Foundation has announced the schedule and program for LinuxCon and CloudOpen North America, taking place August 20-22, 2014 in Chicago, IL. "Featuring more than 140 sessions and keynotes, LinuxCon and CloudOpen are co-located with a Community Development Workshop presented by community expert Jono Bacon, the Linux Kernel Summit, Linux Security Summit, #MesosCon, OpenDaylight Project Mini-Summit, a UEFI Mini-Summit, and Xen Project User Summit. CloudOpen is the only technical conference that focuses on the open cloud and those projects that comprise it all in one place, including CloudStack, Ceph, Gluster, KVM, OpenStack, Puppet, SaltStack, Xen Project and more."

Full Story (comments: none)

Events: June 5, 2014 to August 4, 2014

The following event listing is taken from the LWN.net Calendar.

Date(s): Event, location

June 9-10: Erlang User Conference 2014, Stockholm, Sweden
June 9-10: DockerCon, San Francisco, CA, USA
June 10-12: Ubuntu Online Summit 06-2014, online
June 10-11: Distro Recipes 2014 (canceled), Paris, France
June 13-14: Texas Linux Fest 2014, Austin, TX, USA
June 13-15: State of the Map EU 2014, Karlsruhe, Germany
June 13-15: DjangoVillage, Orvieto, Italy
June 17-20: 2014 USENIX Federated Conferences Week, Philadelphia, PA, USA
June 19-20: USENIX Annual Technical Conference, Philadelphia, PA, USA
June 20-22: SouthEast LinuxFest, Charlotte, NC, USA
June 21-28: YAPC North America, Orlando, FL, USA
June 21-22: AdaCamp Portland, Portland, OR, USA
June 23-24: LF Enterprise End User Summit, New York, NY, USA
June 24-27: Open Source Bridge, Portland, OR, USA
July 1-2: Automotive Linux Summit, Tokyo, Japan
July 5-11: Libre Software Meeting, Montpellier, France
July 5-6: Tails HackFest 2014, Paris, France
July 6-12: SciPy 2014, Austin, Texas, USA
July 8: CHAR(14), near Milton Keynes, UK
July 9: PGDay UK, near Milton Keynes, UK
July 14-16: 2014 Ottawa Linux Symposium, Ottawa, Canada
July 18-20: GNU Tools Cauldron 2014, Cambridge, England, UK
July 19-20: Conference for Open Source Coders, Users and Promoters, Taipei, Taiwan
July 20-24: OSCON 2014, Portland, OR, USA
July 21-27: EuroPython 2014, Berlin, Germany
July 26-August 1: Gnome Users and Developers Annual Conference, Strasbourg, France
August 1-3: PyCon Australia, Brisbane, Australia

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds