
LWN.net Weekly Edition for October 17, 2013

Web fonts, open source, and industry disruption

By Nathan Willis
October 16, 2013

ATypI Conference

If there was any lingering doubt that open fonts are causing an irreversible upheaval in the graphic design and typographic industries, one need look no further than the ATypI 2013 conference in Amsterdam to dispel the idea. Within the type community, the availability of open fonts has been a serious point of disagreement for years, with critics espousing complaints that will sound familiar to those in the free software world. For example: open-licensed fonts will put professionals out of work, quality will plummet, and companies that use open fonts in their products are exploiting naive developers who do not know the value of their output. These critiques were heard at the 2013 conference, but a series of presentations indicated that the tide is turning—albeit slowly—toward acceptance.

The rarely discussed complication in assessing the rise of the open font movement is that there are actually two simultaneous shifts taking place: the increasing availability of open fonts and the increasing use of web fonts delivered to browsers using HTTP and CSS. The two shifts overlap frequently, but they are separable: services like Adobe TypeKit are happy to serve proprietary fonts for web pages, and open fonts can easily be used in print and offline documents. Put them together, however, and one has a system of delivering fonts that has nothing to do with the type industry as it has existed for the past three decades or so. Such changes are inevitable over time, but there is still a lot of disagreement about how they are playing out, as well as how much is being gained or lost.

There were actually two panel discussions at ATypI 2013 dedicated to open fonts. The first was moderated by Thomas Phinney from Extensis, and was humorously titled "Free fonts: threat or menace?" The second was moderated by Victor Gaultney of SIL International—co-author of the Open Font License (OFL)—and was intended to address how open fonts allow for collaborative work. This topic was overshadowed in the second panel session, however, by many of the same criticisms voiced in the first. The fundamental point of contention is whether open fonts constitute a net gain or a net loss for type designers—where the gain or loss is most frequently measured first in financial terms, and secondarily by the overall quality level of the world's typography. Both panels touched on these issues.

[Open font panel 2]

The biggest target is Google Fonts, which is by far the largest purveyor of web fonts, and which exclusively serves fonts under open licenses. In the first panel session, Phinney took the position that Google Fonts has done a poor job at quality control—incorporating fonts that are not simply of mediocre aesthetic quality, but also suffer from serious technical drawbacks like irregular letter spacing. As the most public face of the web-font revolution, the analysis goes, Google ought to do a better job weeding out low-quality material.

The counterargument is that Google Fonts is intentionally casting a wide net, making a large variety of resources available to users without attempting to act as a "gatekeeper" on matters of taste—to many, the traditional gatekeepers of typographic taste are seen as out-of-date regarding the needs of the web and other new technology, if not outright elitist. David Kuettel, manager of the Google Fonts service, described this as a data-driven approach. On the first day of the conference, he presented a new report about web font usage based on the company's analysis of the top one million web sites (by Alexa ranking). The results show a phenomenal increase in the use of web fonts: more than 35% of the top million sites use web fonts, as do 62% of the top 100 sites.

The total usage numbers are hard to comprehend: the most-watched video on YouTube (Gangnam Style) has 1.7 billion hits, while the most-used web font (Open Sans) has been served 139.5 billion times. By comparing pages with the Internet Archive's Wayback Machine, the analysis made it clear that web fonts are rapidly supplanting Flash and images as the preferred way to deliver custom text to users.

A surge in demand should be good news for typographers, of course—but there is a catch: the fonts served by Google Fonts pay no royalties to the type designer. The Google Fonts team's position is that it has played a primary role in kickstarting web-font usage—thus generating demand for other web-font services—even if (as Kuettel put it) Google has not yet figured out how to monetize web fonts. Kuettel said he believes the company will find a way to compensate web font designers, perhaps when it makes Google Fonts an option within its AdSense program.

The loudest critic (literally) of this position at ATypI was Bruno Maag of type foundry Dalton Maag, who, many will remember, was commissioned to create the open Ubuntu Font. Maag vociferously criticized Google Fonts during the audience-question portion of the second panel, saying that Google was reported to pay a flat fee of $6500 for a font family; creating such a family requires 400 hours of work, he said, so "how can Google expect someone to earn a living at $15 per hour?" There was applause from many sections of the audience.

Kuettel responded by saying that Google Fonts is still in the process of growing from a "20 percent time" project into a full-fledged service, a process that has required his team to sell the product managers of other Google services (such as Blogger, Google Docs, and now AdSense) on the value of integrating Google Fonts, one product at a time. He said that he thinks the service will figure out how to better compensate designers in the next few years, and that he also thinks Google services like AdSense will eventually be able to use commercial web font services, which would let those services set their own pricing plans.

It is certainly disconcerting for anyone to hear "just wait five years and the money will sort itself out." Kuettel made a comment along those lines that was met with grumbling from the audience. But not everyone found Maag's protest compelling; some called his 400-hour figure into serious question, while others denounced the wage-earning calculations as "a first world problem."

More interesting, however, were several comments about how business models have changed and will continue to change. Panelist Eben Sorkin, an ATypI board member with several open fonts published via Google Fonts, said that he has received requests for commissioned type design work through his open font releases. Furthermore, he said, the old model of selling licenses for digital fonts has basically meant that type designers sold only to professional graphic design studios—which is a very small audience. Web fonts may mean that the price of a license goes down, but they also make it possible to sell licenses to potentially everyone on the web.

Finally, Adam Twardoch (lead developer of the proprietary font editor FontLab) commented from the audience that open fonts provide yet another business opportunity, because anyone can be hired to improve or extend the product. With proprietary fonts, he said, if you ask the designer to create an additional weight and the designer says "I don't have time, ask me again in a few months" you are simply out of luck.

Sorkin's and Twardoch's comments no doubt echo the experiences that other sectors of the open source software business have gone through in recent years. Old business models fade, but new ones appear; they may prove most upsetting to established players, but eventually the majority of those players learn to adapt. Case in point: Adobe recently launched its Edge Web Fonts service, which offers a selection of open fonts from the Google Fonts service that Adobe type designers have enhanced and polished.

Phinney subsequently wrote a blog post with a more in-depth analysis of the open font and web font situation. It is an informative read, particularly with regard to the "quality gap" that Phinney reports in open fonts as compared to proprietary fonts. To a degree, of course, "quality" is in the eye of the beholder, and Phinney's post has spawned a lengthy and ongoing debate about that subject on the Open Font Library mailing list. For those interested in exploring the Google Fonts team's analysis of web font adoption, the data set is available online (anonymized for privacy reasons); Kuettel has posted a guide to getting started with it.

Regardless of where they expect the prices of font licenses to go or what they think of the predicted business models around open fonts, the majority of type designers at ATypI do seem to agree on one thing: for the first two decades of its existence, typography on the web was pretty terrible because it was limited to the so-called "web-safe fonts." That has now changed, considerably for the better, and wherever the industry heads now, open fonts will be part of it.

Comments (25 posted)

Unanswered questions about fonts and open source

By Nathan Willis
October 16, 2013

ATypI Conference

The annual ATypI conference is, historically, a bit more technical than some other typographic conferences, and the 2013 event in Amsterdam was no exception. Working letterpress shops still exist, but these days fonts are implemented, tested, delivered, and rendered almost entirely in software. As several talks showed, open source principles are having a big impact on how fonts are designed and released, but there are still pain points in need of attention.

Old tech and new tech

One of the recurring themes was that the existing tools and practices for designing electronic page layout are woefully underpowered. No fewer than three speakers (Nick Sherman, Claus Eggers Sørensen, and ATypI president John D. Berry) presented sessions arguing that web design copies too many design patterns from the print world without fixing the problems that dynamic on-screen rendering could fix.

[John D. Berry]

For example, Berry noted that the majority of "tablet-friendly" web site designs simply re-flow text when the device is rotated from portrait to landscape orientation. In most cases this means extra-long lines of text, which are demonstrably harder to read. The proper thing to do in wide-screen orientation is split the text into two columns—and in fact APIs already exist that can detect device orientation, but web site designs do not take advantage of them. Similarly, most ebook readers have a feature that inverts the colors of the display, but they do not make the corresponding increase in line spacing that testing shows is needed to maintain readability for white-on-black text. Sørensen commented that web browsers give users the ability to increase or decrease the font size on a page, but that there is no corresponding way to make typographic adjustments (for example, to hyphenation) to keep blocks of text looking good when the size changes.

Berry's proposed solution to these problems was to start an advocacy group that will push for standards that take such typographic concerns to heart. He calls the group Scripta, and has put a brief landing page online at typoinstitute.org. Prior to the announcement, he got several big names to sign on to the project, including type designer Matthew Carter and author Cory Doctorow. Reaction to the announcement was mixed; some agreed that, more than 30 years after the debut of TeX, it was high time that page layout on the web received close attention. Others, however, thought that the name and presentation of Scripta sounded too much like an "old guard" approach that would be hard-pressed to make itself seem relevant to web developers and browser makers. On that point, Berry said he had intentionally taken a conservative approach to attract buy-in from traditional publishing experts, but that he would be happy to reconsider the messaging moving forward.

Coders

A different approach was advocated by several ATypI speakers who essentially argued that type designers simply need to become software developers. Leading that charge was keynote speaker Petr van Blokland, who criticized designers who are content to live "on the island of existing tools." He cited Adobe's InDesign desktop publishing application as an example: the application has no scripting interface because designers simply have not asked for one—while plenty of other applications have been adding Python scripting support over the last fifteen years.

Similarly, Van Blokland said, anyone who still works on static single files rather than version-preserving databases is "stupid." How is it that there is sufficient room on the Internet for all of the porn in the world, he asked, but type designers still work from a single file? Programmers solved this problem for themselves with Git, he said, but type designers are not working on their version of Git.

A later session by Cyrus Highsmith and David Jonathan Ross reiterated the importance of bridging the gap between type design and programming, although in less confrontational words. Highsmith and Ross showed several type projects that incorporate code into the final delivered product. For example, one typeface was designed to include ornamented drop-caps for use at the start of a document. But too ornate an illuminated letter can be indecipherable at small sizes, so the final font includes several versions optimized for different text sizes—and it includes JavaScript that switches between the versions as the window is resized. That sort of feature might historically be considered an add-on, they said, but as the web becomes the most important publishing platform, it should be considered an integral part of the product.

Solutions

On the whole, both the call for improved web typography tools and the call for type designers to learn software development were repeated enough to show that the industry considers both issues to be high priorities. But there were also positive signs that progress is being made on each front.

For example, Mark Barratt presented a session on the role that annotations play in defining a book. His premise was that there was something lost when annotations changed from being notes written from one scribe to another (as was common in the era of hand-copied books) to static footnotes in digitally typeset books today. As was the case with several of the other speakers, Barratt lamented that the web had not improved on this situation, despite the fact that it makes interactive discussions so easy. But there was hope for change, he said, and compared several tools for live web site annotation.

Several projects have attempted to use the HTML5 <aside> tag for this purpose, he said, although they suffer from incompatible and confusing browser implementations. He currently finds the hypothes.is project to be the best web annotation implementation. Although it requires a browser plug-in (which some might see as an inconvenience), the project is built around an open standard and is free software. Users can add comments to any page they read, without requiring permission or a time investment by the site owner, and everyone has the ability to see anyone else's annotations.

On the improving-software-tools front, several free software developers were intrigued by Van Blokland's comment that Git was unsuitable for font development, and a discussion followed during the coffee break. Van Blokland's dissatisfaction with Git came down to two points. First, most software tools have standardized on the XML-based Unified Font Object (UFO) file format created by (among others) Petr's brother Erik van Blokland, but many of them re-order XML objects when writing out UFO files—which, in turn, means "changes" are recorded to files even when nothing has actually changed. Second, Git tools are still optimized for showing diffs between text files, but the changes of interest between two versions of a UFO are most likely to be visual differences; a rendering of the changes is what users need, rather than a list of (x,y) coordinates that have changed.

The first problem should be solvable by serializing the UFO data in a canonical form before it is committed to the repository, of course; what is needed is a pre-commit hook (a sketch appears below). The second problem is a bit more work to solve, but it should be doable, too. Indeed, there are similar projects to do a "visual diff" comparison for SVG files. After the discussion, Van Blokland seemed much happier with the prospects for integrating Git with font development—although he would probably point out that it was the presence of software developers in the audience that helped find the right solutions.
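For the curious, here is a minimal sketch (in Python) of what such a pre-commit hook could look like. It is illustrative only: the canonicalization rule used here (sorting sibling elements by tag and "name" attribute) is an assumption, since which orderings are semantically insignificant varies between the UFO sub-formats and would need to be checked per file type.

    #!/usr/bin/env python
    # Hypothetical pre-commit hook: re-serialize staged UFO XML files in
    # a canonical order, so that tool-specific reordering does not show
    # up as spurious changes in Git.
    import subprocess
    import xml.etree.ElementTree as ET

    def canonicalize(path):
        tree = ET.parse(path)

        def sort_children(elem):
            for child in elem:
                sort_children(child)
            # Assumes sibling order carries no meaning at this level;
            # true for glyph lists, but not guaranteed everywhere.
            elem[:] = sorted(elem, key=lambda e: (e.tag, e.get("name", "")))

        sort_children(tree.getroot())
        tree.write(path, encoding="UTF-8", xml_declaration=True)

    if __name__ == "__main__":
        staged = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only"]).decode().split()
        for name in (n for n in staged if n.endswith((".glif", ".plist"))):
            canonicalize(name)
            subprocess.check_call(["git", "add", name])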

There were plenty of other open source projects on display at ATypI Amsterdam—everything from major infrastructure projects like FreeType to small, one-person tools—although it is still quite common to see code released with nothing but an informal "you are free to use this" non-license attached. Of course, that is an issue all too common where the web is concerned, and it is not likely to disappear worldwide overnight. Perhaps if the type development community starts to take a more active role in the web standards process and develops the habit of releasing more software alongside digital fonts, that will change.

Comments (3 posted)

Rationalizing Python packaging

By Jonathan Corbet
October 16, 2013
The Python language comes with a long list of nice features, in keeping with the language's "batteries included" mantra. One battery that is noticeably absent, though, is a comprehensive mechanism for the building, distribution, and installation of Python packages. That leaves packagers and users to choose among a variety of third-party tools or just give up and solve the whole problem themselves. The good news is that Python 3.4 is likely to solve this problem, but Python 2 users may still have to go battery shopping on their own.

Python packaging has long been recognized as a problem for users of the language. There is an extensive collection of add-on modules in the Python Package Index (PyPI), but there is no standard way for a user to obtain one of those modules (and, crucially, any other modules it depends on) and install it on their system. The distutils package — the engine behind the nearly omnipresent setup.py files found in modules — can handle some of the mechanics of installation, but it is showing its age and lacks features. Distutils2 is a fork of distutils intended to solve many of the problems there, but this project appears to have run out of steam. Setuptools is a newer approach found on many systems, but it has a long list of problems of its own. Distribute is "a deprecated fork" of Setuptools. And so on; one does not need to look for long to see that the situation is messy — and that's without looking at the variety of package formats ("egg," "wheel," etc.) out there.
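For reference, the distutils interface at the center of all this is small; a typical setup.py for a pure-Python module looks something like the following sketch (the project and module names are placeholders):

    # setup.py for a hypothetical single-module project
    from distutils.core import setup

    setup(
        name="example",            # placeholder distribution name
        version="0.1",
        description="An example module",
        py_modules=["example"],    # installs example.py
    )

Running "python setup.py install" handles the mechanics of installation; dependency resolution and distribution are precisely the parts that distutils leaves unsolved.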

For a while, the plan was to complete work on distutils2 and merge the result into the Python 3.3 release. But, in June 2012, that effort collapsed when it became clear that the work would not be anywhere near complete in time. The results were a 3.3 release without an improved packaging story, an epic email thread on the nature of the problem and what should be done about it, and a virtual halt to distutils2 work.

PEP 453

Well over one year later, a solution appears to be in sight; it takes the form of PEP 453, which, barring some unforeseen glitch, should be officially approved in the near future. This proposal, written by Donald Stufft and Nick Coghlan, charts the path toward better Python package management.

One might start by wondering why such a thing is needed in the first place. Linux users, of course, already have systems with nice package management built into them. But the world is full of users of other operating systems that lack comprehensive packaging systems. And, even on Linux, even on Debian, one is unlikely to find distribution packages for all 35,690 modules found in PyPI, so Linux users, too, are likely to have to install modules outside of the distribution's packaging system. It would seem that there is a place for a package distribution mechanism for Python modules, much like the Perl community has long had with CPAN.

PEP 453 calls for that mechanism to be built on PyPI using the pip installer. Pip, which is already in wide use, avoids a number of the problems found in its predecessors (though pip is based on Setuptools—a dependency that is expected to go away over time). It does not attempt to solve the whole problem, so complicated programs with non-Python dependencies may still end up needing a more comprehensive tool like Buildout or conda. But, for most users, pip should be more than adequate. And, by designating pip as the officially recommended installer, the PEP should help to direct resources toward improving pip and porting modules to it.
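In practice, the recommended workflow is the one pip users already know; the package name here is a placeholder:

    $ pip install somepackage            # fetch it, with dependencies, from PyPI
    $ pip install --upgrade somepackage  # move to the latest release
    $ pip uninstall somepackage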

Pip will become a part of the standard Python distribution, but in an interesting way. A copy of pip will be hidden away deep within the Python library; it can then be installed into the system using the (also included) ensurepip module. Anybody installing their own version of Python can optionally use ensurepip to install pip; otherwise they can get it independently or (for Linux users) rely on the version shipped by the distributor. Python will also include a bundle of certificate-authority certificates to verify package sources, though the PEP envisions distributors wanting to replace that with their own central CA certificate collection. For as long as pip needs Setuptools, that will be bundled as well.
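Under the PEP, bootstrapping the bundled copy should look roughly like this (the module invocations are as specified in PEP 453; the installed package is, again, a placeholder):

    $ python3.4 -m ensurepip             # install the bundled pip
    $ python3.4 -m pip install requests  # pip is then usable as usual

The same bootstrap is available from Python code via ensurepip.bootstrap().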

This scheme thus calls for pip to be distributed with Python, but it will not strictly become a part of Python. It will remain an independently developed project that, it is expected, will advance more quickly than Python and make more frequent releases. Python's 18-month cycle was seen as being far too slow for a developing utility like pip, so the two will not be tied together. There is a plan to include updated versions of pip in Python maintenance releases, though, to ensure that security fixes get out to users eventually.

Pip for Python 2

Perhaps the most controversial part of earlier versions of this PEP was a plan to include a version of ensurepip in the next Python 3.3 and 2.7 releases as well. The motivation for this idea is clear enough: if pip is to be the standard Python package manager, it would be nice to make it easily available to all Python users. As much as the Python developers would like to see everybody using Python 3, they have a realistic view of how long it will really take for users — especially those with existing, working applications — to move off Python 2. Putting ensurepip into (say) Python 2.7.6 would make it easier for Python 2 developers to work with the official packaging system.

On the other hand, Python 2 is currently being maintained under a strict "no new features" policy; adding ensurepip would require an explicit exception that, some developers fear, could open the floodgates for similar requests from developers of other modules. There are also worries that, once ensurepip goes in, some versions of Python 2.7 will have different feature sets than others, creating confusion for application developers and users. And, though they were not in the majority, some developers clearly do not want to do anything that might encourage developers to stay with Python 2 for any longer than necessary. These concerns led to substantial opposition to adding ensurepip to point releases of older Python versions.

The end result is a compromise: the documentation for Python 3.3 and 2.7 will be updated to anoint pip as the standard package manager, but no other changes will be made to those versions—for now. Nick has stated his intent to put together a separate PEP revisiting the idea of bundling pip with Python 2.7, to be considered once the (relatively uncontroversial) question of getting pip into the 3.4 release is resolved.

Assuming there are no major disagreements, that resolution should happen soon. It needs to: the Python 3.4 release schedule calls for the first beta release — and associated feature freeze — to happen on November 24. The actual 3.4 release is currently planned for late February; after that, Python developers and users should have a standardized packaging and distribution scheme for the first time. "Better late than never" certainly applies in this case.

Comments (22 posted)

Page editor: Jonathan Corbet

Security

Browser fingerprinting

By Jake Edge
October 16, 2013

Browser fingerprinting is apparently on the rise, at least partly to help advertisers evade "do not track" cookie restrictions. The technique is not new; the Electronic Frontier Foundation (EFF) popularized it with the announcement of its Panopticlick fingerprinting tool back in 2010. But marketing firms and advertisers are preparing for a future where fewer web users are willing to be tracked via cookies—or are just out to pick up that extra few percent of the more savvy surfers.

The idea is simple: browsers expose a fair amount of identifying information in normal operation. That includes the User-Agent string and the Accept string (which are both included in HTTP headers). Browser capability queries via JavaScript add an enormous amount of identifying information, including screen size and color depth, time zone, browser plugins, system fonts, and so on. Those last two, in particular, are often fairly distinctive; it all adds up to a unique (or nearly unique) fingerprint for a particular user's browser.

Tracking a user across multiple sites then just becomes an exercise in matching fingerprints. That suggests that those who do not wish to be subject to that kind of tracking should seek to have their browser look as common as possible. If your browser is unique in the 3.5 million that Panopticlick has seen (as mine is when JavaScript is enabled for eff.org), it will be far easier to track than a browser that has the same fingerprint as one in every 28,286—how my browser appears with JavaScript disabled. But even the latter may not be much of a defense if some other information (e.g. IP address, timing correlations) is added into the mix.
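A quick back-of-the-envelope calculation (mine, not the EFF's) puts those numbers in Panopticlick's own units of "bits of identifying information": a fingerprint shared by one in N browsers conveys log2(N) bits.

    import math

    def identifying_bits(one_in_n):
        # A fingerprint shared by one in N browsers conveys log2(N) bits.
        return math.log(one_in_n, 2)

    print(identifying_bits(28286))     # ~14.8 bits with JavaScript disabled
    print(identifying_bits(3500000))   # at least ~21.7 bits if unique among 3.5M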

Clearly disabling JavaScript wherever possible—NoScript to the rescue—will help, but it isn't always possible to do so (and it seems to get less and less possible as time goes on). Limiting what kinds of information are available via JavaScript (and CSS) may help. The biggest differences between my browser and the others Panopticlick has tested are in the plugins and the fonts, for example.

Tor has taken that approach with its Tor Browser Bundle. It limits the properties that users can change, limits plugins, and only allows a certain number of fonts to be presented. That increases the set of browsers with the same fingerprint, though there is a balance to be struck: if Tor browser fingerprints are too similar, identifying Tor users becomes easier, which may carry its own risks.

Much like the "do not track" effort for cookie-based tracking, the advertising trade groups are trying to come up with an opt-out scheme for non-cookie tracking. As the article mentions, that's both good and bad: it will be nice to have a way to turn off that tracking (at least for compliant advertisers), but it may also legitimize fingerprinting as a way to track users.

It may not only be advertisers who are using fingerprinting, however, and the US National Security Agency (NSA) or other secret services are unlikely to pay attention to any kind of opt-out mechanism. As ars technica noted, the NSA specifically mentioned better fingerprinting in a now-famous presentation entitled "Tor Stinks".

In addition, some security researchers in Belgium have looked into the practice of fingerprinting to see how widespread it is today. Their paper [PDF] claims that 97 sites among the top 10,000 were using JavaScript fingerprinting, and, amusingly, 404 among the top million sites were using the technique. The full list is not available, evidently due to legal concerns, but Orbitz, T-Mobile UK, PokerStrategy.com, and others were listed as using fingerprinting. What, exactly, they use it for is unclear.

Like many things on the internet, fingerprinting is essentially an escalation. Cookie-based tracking may be getting less effective, so something was needed to fill that information gap. Spam and malware have followed similar paths: countermeasures get deployed, and the spammers and malware purveyors change tactics to work around them.

Tracking is in something of a category of its own. One could argue that web site owners are providing a free service that costs them real money, so they should be able to do what they wish with the information gathered (actively or passively) from connections made to their sites. Most would agree that goes too far, but tracking is not as clearly "wrong" as malware or spam. But, for privacy purposes, preventing tracking is critical. One expects that we will be seeing more anti-fingerprinting efforts in browsers before too long.

Comments (6 posted)

Brief items

Security quotes of the week

Whenever non-cryptographers come up with cryptographic algorithms based on some novel problem that's hard in their area of research, invariably there are pretty easy cryptographic attacks.

So consider this a good research exercise for all budding cryptanalysts out there.

Bruce Schneier

If whistleblowers don’t dare reveal crimes and lies, we lose the last shred of effective control over our government and institutions. That’s why surveillance that enables the state to find out who has talked with a reporter is too much surveillance — too much for democracy to endure.
Richard Stallman in a long Wired essay

To see why, consider two companies, which we’ll call Lavabit and Guavabit. At Lavabit, an employee, on receiving a court order, copies user data and gives it to an outside party—in this case, the government. Meanwhile, over at Guavabit, an employee, on receiving a bribe or extortion threat from a drug cartel, copies user data and gives it to an outside party—in this case, the drug cartel.

From a purely technological standpoint, these two scenarios are exactly the same: an employee copies user data and gives it to an outside party. Only two things are different: the employee’s motivation, and the destination of the data after it leaves the company. Neither of these differences is visible to the company’s technology—it can’t read the employee’s mind to learn the motivation, and it can’t tell where the data will go once it has been extracted from the company’s system. Technical measures that prevent one access scenario will unavoidably prevent the other one.

Ed Felten

This is performing a strcmp between the string pointer at offset 0xD0 inside the http_request_t structure and the string “xmlset_roodkcableoj28840ybtide”; if the strings match, the check_login function call is skipped and alpha_auth_check returns 1 (authentication OK).
/dev/ttyS0 finds a backdoor in D-Link firmware
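For the curious, the finding reduces to a single header; a hedged sketch using Python's requests library (the router address is hypothetical) would look like this:

    import requests

    # The magic string from the firmware's strcmp(); a request carrying it
    # as the User-Agent reportedly bypasses check_login() entirely.
    BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"

    response = requests.get("http://192.168.0.1/",   # hypothetical address
                            headers={"User-Agent": BACKDOOR_UA})
    print(response.status_code)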

Comments (none posted)

Google: Going beyond vulnerability rewards

Google is now offering between $500 and $3,133.70 for security improvements to core free software. That includes projects like OpenSSH, OpenSSL, BIND, libjpeg, Blink, Chromium, the Linux kernel, and more. Expansion into toolchains, web servers, SMTP servers, and VPN software is planned. Patches should be submitted to the upstream project and, once they are merged, to Google for evaluation. The official rules have more details. "So we decided to try something new: provide financial incentives for down-to-earth, proactive improvements that go beyond merely fixing a known security bug. Whether you want to switch to a more secure allocator, to add privilege separation, to clean up a bunch of sketchy calls to strcat(), or even just to enable ASLR - we want to help!"

Comments (22 posted)

Why Android SSL was downgraded from AES256-SHA to RC4-MD5 in late 2010 (op-co.de)

A site called "op-co.de" has a look at how Android chooses SSL ciphers and explains why a shift was made to a less secure cipher in the 2.3 release. "So what the fine Google engineers did to reduce our security was merely to copy what was there, defined by the inventors of Java!"

Comments (21 posted)

New vulnerabilities

clutter: authentication bypass

Package(s): clutter     CVE #(s): CVE-2013-2190
Created: October 11, 2013     Updated: October 18, 2013
Description:

From the Novell bugzilla entry:

A security flaw was found in the way Clutter, an open source software library for creating rich graphical user interfaces, used to manage translation of hierarchy events in certain circumstances (when underlying device disappeared, causing XIQueryDevice query to throw an error). Physically proximate attackers could use this flaw for example to obtain unauthorized access to gnome-shell session right after system resume (due to gnome-shell crash).

Alerts:
Mandriva MDVSA-2013:255 clutter 2013-10-18
Mageia MGASA-2013-0312 clutter 2013-10-17
openSUSE openSUSE-SU-2013:1540-1 clutter 2013-10-10

Comments (none posted)

drupal: multiple vulnerabilities

Package(s): drupal6     CVE #(s): CVE-2012-0825 CVE-2012-0826 CVE-2012-5652 CVE-2013-0244 CVE-2013-0245
Created: October 14, 2013     Updated: October 16, 2013
Description: From the Debian advisory:

Multiple vulnerabilities have been fixed in the Drupal content management framework, resulting in information disclosure, insufficient validation, cross-site scripting and cross-site request forgery.

Alerts:
Debian DSA-2776-1 drupal6 2013-10-11

Comments (none posted)

ejabberd: SSLv2 and weak cipher use

Package(s): ejabberd     CVE #(s): (none)
Created: October 11, 2013     Updated: October 16, 2013
Description:

From the Debian advisory:

It was discovered that ejabberd, a Jabber/XMPP server, uses SSLv2 and weak ciphers for communication, which are considered insecure. The software offers no runtime configuration options to disable these. This update disables the use of SSLv2 and weak ciphers.

Alerts: (No alerts in the database for this vulnerability)

Comments (none posted)

elinks: does not properly verify SSL certificates

Package(s): elinks     CVE #(s): (none)
Created: October 14, 2013     Updated: January 22, 2014
Description: From the Red Hat bugzilla:

A Debian bug report indicated that Links does not properly verify SSL certificates. If you visit a web site with an expired SSL certificate, Links will only display "SSL error" without any indication as to what the error was. This, in and of itself, is not a flaw however when testing, I found that when you go to a site with a valid SSL certificate, but for a different hostname (for example, if you go to https://alias.foo.com which might be a CNAME or a proxy for https://foo.com) Links will connect without any errors or warnings. Doing the same in a browser like Google Chrome, however, reports "You attempted to reach alias.foo.com, but instead you actually reached a server identifying itself as foo.com." and allows you to either proceed or not, before loading the site.

Alerts:
Mandriva MDVSA-2014:019 elinks 2014-01-22
Mageia MGASA-2014-0014 elinks 2014-01-21
Fedora FEDORA-2013-18404 elinks 2013-10-14
Fedora FEDORA-2013-18347 elinks 2013-10-14

Comments (1 posted)

gnupg: denial of service

Package(s): gnupg     CVE #(s): CVE-2013-4402
Created: October 10, 2013     Updated: November 13, 2013
Description:

From the Mageia advisory:

Special crafted input data may be used to cause a denial of service against GPG. GPG can be forced to recursively parse certain parts of OpenPGP messages ad infinitum (CVE-2013-4402).

Alerts:
Gentoo 201402-24 gnupg 2014-02-21
Fedora FEDORA-2013-18647 gnupg 2013-11-13
Fedora FEDORA-2013-18814 libgpg-error 2013-10-26
Fedora FEDORA-2013-18814 gnupg2 2013-10-26
Scientific Linux SLSA-2013:1459-1 gnupg2 2013-10-24
Scientific Linux SLSA-2013:1458-1 gnupg 2013-10-24
Oracle ELSA-2013-1459 gnupg2 2013-10-24
Oracle ELSA-2013-1459 gnupg2 2013-10-24
Oracle ELSA-2013-1458 gnupg 2013-10-24
CentOS CESA-2013:1459 gnupg2 2013-10-25
CentOS CESA-2013:1459 gnupg2 2013-10-24
CentOS CESA-2013:1458 gnupg 2013-10-25
Red Hat RHSA-2013:1459-01 gnupg2 2013-10-24
Red Hat RHSA-2013:1458-01 gnupg 2013-10-24
openSUSE openSUSE-SU-2013:1552-1 gpg2 2013-10-16
Slackware SSA:2013-287-04 libgpg 2013-10-14
Slackware SSA:2013-287-02 gnupg2 2013-10-14
Slackware SSA:2013-287-01 gnupg 2013-10-14
openSUSE openSUSE-SU-2013:1546-1 gpg2 2013-10-14
Fedora FEDORA-2013-18807 gnupg2 2013-10-14
Fedora FEDORA-2013-18676 gnupg 2013-10-12
Debian DSA-2774-1 gnupg2 2013-10-10
Debian DSA-2773-1 gnupg 2013-10-10
Ubuntu USN-1987-1 gnupg, gnupg2 2013-10-09
Mandriva MDVSA-2013:247 gnupg 2013-10-10
Mageia MGASA-2013-0299 gnupg2 2013-10-10
Mageia MGASA-2013-0303 gnupg 2013-10-10

Comments (none posted)

icu: denial of service

Package(s): icu     CVE #(s): CVE-2013-2924
Created: October 16, 2013     Updated: June 10, 2014
Description: From the CVE entry:

Use-after-free vulnerability in International Components for Unicode (ICU), as used in Google Chrome before 30.0.1599.66 and other products, allows remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors.

Alerts:
Fedora FEDORA-2014-6858 mingw-icu 2014-06-10
Fedora FEDORA-2014-6828 mingw-icu 2014-06-10
openSUSE openSUSE-SU-2014:0065-1 chromium 2014-01-15
openSUSE openSUSE-SU-2013:1861-1 chromium 2013-12-12
Gentoo 201402-14 icu 2014-02-10
Mandriva MDVSA-2013:258 icu 2013-10-28
Mageia MGASA-2013-0316 icu 2013-10-25
Mageia MGASA-2013-0315 icu 2013-10-25
Fedora FEDORA-2013-18771 icu 2013-10-26
Fedora FEDORA-2013-18774 icu 2013-10-26
Debian DSA-2786-1 icu 2013-10-27
Debian DSA-2785-1 chromium-browser 2013-10-26
openSUSE openSUSE-SU-2013:1556-1 chromium 2013-10-16
Ubuntu USN-1989-1 icu 2013-10-15

Comments (none posted)

kernel: denial of service

Package(s): kernel     CVE #(s): CVE-2013-4387
Created: October 10, 2013     Updated: June 9, 2014
Description:

From the Red Hat bugzilla entry:

Linux kernel built with the IPv6 protocol(CONFIG_IPV6) support and an Ethernet driver(ex. virtio-net) which has a UDP Fragmentation Offload(UFO) feature ON is vulnerable to NULL pointer dereference flaw. It could occur while sending a large messages over an IPv6 connection. Though the crash occurs while sending messages, it could be triggered by a remote client by requesting larger data from a server.

An unprivileged user/program could use this flaw to crash the kernel, resulting in DoS.

Alerts:
Ubuntu USN-2233-1 kernel 2014-06-05
Ubuntu USN-2234-1 EC2 kernel 2014-06-05
SUSE SUSE-SU-2014:0536-1 Linux kernel 2014-04-16
Red Hat RHSA-2014:0284-01 kernel 2014-03-11
Mageia MGASA-2013-0375 kernel-vserver 2013-12-18
Mageia MGASA-2013-0373 kernel-tmb 2013-12-18
Mageia MGASA-2013-0374 kernel-rt 2013-12-18
Mageia MGASA-2013-0372 kernel-linus 2013-12-18
Mageia MGASA-2013-0371 kernel 2013-12-17
Scientific Linux SLSA-2013:1645-2 kernel 2013-12-16
Ubuntu USN-2050-1 linux-ti-omap4 2013-12-07
Ubuntu USN-2049-1 kernel 2013-12-07
Ubuntu USN-2039-1 linux-ti-omap4 2013-12-03
Ubuntu USN-2041-1 linux-lts-raring 2013-12-03
Ubuntu USN-2045-1 kernel 2013-12-03
Ubuntu USN-2038-1 kernel 2013-12-03
Oracle ELSA-2013-2584 kernel 2013-11-28
Oracle ELSA-2013-2584 kernel 2013-11-28
Oracle ELSA-2013-2583 kernel 2013-11-28
Mageia MGASA-2013-0342 kernel 2013-11-22
Red Hat RHSA-2013:1645-02 kernel 2013-11-21
Ubuntu USN-2024-1 linux-ti-omap4 2013-11-08
Ubuntu USN-2022-1 linux-ti-omap4 2013-11-08
Mageia MGASA-2013-0346 kernel-vserver 2013-11-22
Mageia MGASA-2013-0344 kernel-tmb 2013-11-22
Mageia MGASA-2013-0345 kernel-rt 2013-11-22
Ubuntu USN-2019-1 linux-lts-quantal 2013-11-08
Ubuntu USN-2021-1 kernel 2013-11-08
Mandriva MDVSA-2013:265 kernel 2013-11-10
Red Hat RHSA-2013:1490-01 kernel-rt 2013-10-31
Oracle ELSA-2013-1645 kernel 2013-11-26
Mageia MGASA-2013-0343 kernel-linus 2013-11-22
Fedora FEDORA-2013-18822 kernel 2013-10-18
Fedora FEDORA-2013-18820 kernel 2013-10-14
Fedora FEDORA-2013-18364 kernel 2013-10-10

Comments (none posted)

libapache2-mod-fcgid: code execution

Package(s): libapache2-mod-fcgid     CVE #(s): CVE-2013-4365
Created: October 14, 2013     Updated: February 10, 2014
Description: From the Debian advisory:

Robert Matthews discovered that the Apache FCGID module, a FastCGI implementation for Apache HTTP Server, fails to perform adequate boundary checks on user-supplied input. This may allow a remote attacker to cause a heap-based buffer overflow, resulting in a denial of service or potentially allowing the execution of arbitrary code.

Alerts:
Gentoo 201402-09 mod_fcgid 2014-02-07
SUSE SUSE-SU-2013:1667-1 apache2-mod_fcgid 2013-11-13
openSUSE openSUSE-SU-2013:1664-1 apache2-mod_fcgid 2013-11-13
openSUSE openSUSE-SU-2013:1613-1 apache2-mod_fcgid 2013-10-30
openSUSE openSUSE-SU-2013:1609-1 apache2-mod_fcgid 2013-10-30
Fedora FEDORA-2013-18686 mod_fcgid 2013-10-18
Fedora FEDORA-2013-18638 mod_fcgid 2013-10-18
Mandriva MDVSA-2013:256 apache-mod_fcgid 2013-10-18
Mageia MGASA-2013-0313 apache-mod_fcgid 2013-10-17
Debian DSA-2778-1 libapache2-mod-fcgid 2013-10-11

Comments (none posted)

libtar: code execution

Package(s): libtar     CVE #(s): CVE-2013-4397
Created: October 11, 2013     Updated: February 21, 2014
Description:

From the Red Hat advisory:

Two heap-based buffer overflow flaws were found in the way libtar handled certain archives. If a user were tricked into expanding a specially-crafted archive, it could cause the libtar executable or an application using libtar to crash or, potentially, execute arbitrary code. (CVE-2013-4397)

Note: This issue only affected 32-bit builds of libtar.

Alerts:
Gentoo 201402-19 libtar 2014-02-21
Debian DSA-2817-1 libtar 2013-12-14
Fedora FEDORA-2013-18808 libtar 2013-10-21
Fedora FEDORA-2013-18785 libtar 2013-10-19
Mandriva MDVSA-2013:253 libtar 2013-10-18
Mageia MGASA-2013-0309 libtar 2013-10-17
CentOS CESA-2013:1418 libtar 2013-10-11
Scientific Linux SLSA-2013:1418-1 libtar 2013-10-10
Oracle ELSA-2013-1418 libtar 2013-10-10
Red Hat RHSA-2013:1418-01 libtar 2013-10-10

Comments (none posted)

mozilla-nss: unspecified impact

Package(s): mozilla-nss     CVE #(s): CVE-2013-1739
Created: October 11, 2013     Updated: December 17, 2013
Description:

From the Novell bugzilla entry:

Bug 894370 - (CVE-2013-1739) Avoid uninitialized data read in the event of a decryption failure.

[ NSS bug 894370 is closed at the time of this writing. ]

Alerts:
Gentoo 201406-19 nss 2014-06-22
Scientific Linux SLSA-2013:1829-1 nss, nspr, and nss-util 2013-12-13
Oracle ELSA-2013-1829 nss, nspr, and nss-util 2013-12-12
CentOS CESA-2013:1829 nspr 2013-12-13
CentOS CESA-2013:1829 nss 2013-12-13
CentOS CESA-2013:1829 nss-util 2013-12-13
Red Hat RHSA-2013:1829-01 nss, nspr, and nss-util 2013-12-12
Scientific Linux SLSA-2013:1791-1 nss and nspr 2013-12-09
Oracle ELSA-2013-1791 nss, nspr 2013-12-05
CentOS CESA-2013:1791 nspr 2013-12-05
CentOS CESA-2013:1791 nss 2013-12-05
Red Hat RHSA-2013:1791-01 nss, nspr 2013-12-05
Mandriva MDVSA-2013:269 firefox 2013-11-20
SUSE SUSE-SU-2013:1678-1 Mozilla Firefox 2013-11-15
Mandriva MDVSA-2013:270 nss 2013-11-20
Mageia MGASA-2013-0320 firefox 2013-11-09
Debian DSA-2790-1 nss 2013-11-02
Ubuntu USN-2030-1 nss 2013-11-18
Fedora FEDORA-2013-20448 xulrunner 2013-11-01
Fedora FEDORA-2013-20448 firefox 2013-11-01
Ubuntu USN-2010-1 thunderbird 2013-10-31
Mandriva MDVSA-2013:264 firefox 2013-10-31
Ubuntu USN-2009-1 firefox 2013-10-29
Mandriva MDVSA-2013:257 nss 2013-10-23
openSUSE openSUSE-SU-2013:1539-1 mozilla-nss 2013-10-10
openSUSE openSUSE-SU-2013:1542-1 mozilla-nss 2013-10-10

Comments (none posted)

php-pecl-xhprof: cross-site scripting

Package(s): php-pecl-xhprof     CVE #(s): (none)
Created: October 10, 2013     Updated: October 16, 2013
Description:

From the Fedora advisory:

Fix reflected XSS with run parameter.

Alerts:
Fedora FEDORA-2013-18049 php-pecl-xhprof 2013-10-10
Fedora FEDORA-2013-18094 php-pecl-xhprof 2013-10-10

Comments (none posted)

polarssl: insecure RSA private key

Package(s): polarssl     CVE #(s): CVE-2013-5915
Created: October 14, 2013     Updated: June 20, 2014
Description: From the PolarSSL advisory:

The researchers Cyril Arnaud and Pierre-Alain Fouque investigated the PolarSSL RSA implementation and discovered a bias in the implementation of the Montgomery multiplication that we used. For which they then show that it can be used to mount an attack on the RSA key. Although their test attack is done on a local system, there seems to be enough indication that this can properly be performed from a remote system as well.

Alerts:
Fedora FEDORA-2014-7261 polarssl 2014-06-19
Fedora FEDORA-2014-7263 polarssl 2014-06-19
Mageia MGASA-2013-0353 polarssl 2013-11-30
Debian DSA-2782-1 polarssl 2013-10-20
Gentoo 201310-10 polarssl 2013-10-17
Fedora FEDORA-2013-18251 polarssl 2013-10-14
Fedora FEDORA-2013-18228 polarssl 2013-10-14

Comments (none posted)

quagga: code execution

Package(s): quagga     CVE #(s): CVE-2013-2236
Created: October 10, 2013     Updated: November 26, 2013
Description:

From the quagga-dev bug report:

While processing the received LSAs, we crash with gdb backtrace points to memcpy called from new_msg_lsa_change_notify. By code review, I see that we memcpy into a buffer with a length we learned from the input, not governed by the length of the available buffer. In my patch, I suggest that we govern the memcpy by the length of the available buffer.

Alerts:
Ubuntu USN-2941-1 quagga 2016-03-24
Debian DSA-2803-1 quagga 2013-11-26
Mandriva MDVSA-2013:254 quagga 2013-10-18
Mageia MGASA-2013-0310 quagga 2013-10-17
Gentoo 201310-08 quagga 2013-10-10

Comments (none posted)

qemu: privilege escalation

Package(s): qemu     CVE #(s): CVE-2013-4344
Created: October 14, 2013     Updated: February 7, 2014
Description: From the CVE entry:

Buffer overflow in the SCSI implementation in QEMU, as used in Xen, when a SCSI controller has more than 256 attached devices, allows local users to gain privileges via a small transfer buffer in a REPORT LUNS command.

Alerts:
openSUSE openSUSE-SU-2014:1281-1 xen 2014-10-09
openSUSE openSUSE-SU-2014:1279-1 xen 2014-10-09
Debian DSA-2933-1 qemu-kvm 2014-05-19
Debian DSA-2932-1 qemu 2014-05-19
SUSE SUSE-SU-2014:0623-1 kvm 2014-05-08
Ubuntu USN-2092-1 qemu, qemu-kvm 2014-01-30
Scientific Linux SLSA-2013:1553-2 qemu-kvm 2013-12-09
openSUSE openSUSE-SU-2014:0200-1 QEMU 2014-02-06
Oracle ELSA-2013-1553 qemu-kvm 2013-11-27
Mageia MGASA-2013-0341 qemu 2013-11-22
Red Hat RHSA-2013:1553-02 qemu-kvm 2013-11-21
Fedora FEDORA-2013-18493 qemu 2013-10-14

Comments (none posted)

systemd: multiple vulnerabilities

Package(s): systemd     CVE #(s): CVE-2013-4391 CVE-2013-4394
Created: October 14, 2013     Updated: December 13, 2016
Description: From the Debian advisory:

Multiple security issues in systemd have been discovered by Sebastian Krahmer and Florian Weimer: Insecure interaction with DBUS could lead to the bypass of Policykit restrictions and privilege escalation or denial of service through an integer overflow in journald and missing input sanitising in the processing of X keyboard extension (XKB) files.

Alerts:
Gentoo 201612-34 systemd 2016-12-13
Debian DSA-2777-1 systemd 2013-10-11

Comments (none posted)

typo3-src: cross-site scripting

Package(s): typo3-src     CVE #(s): CVE-2013-1464
Created: October 11, 2013     Updated: October 16, 2013
Description:

From the Debian advisory:

Markus Pieton and Vytautas Paulikas discovered that the embedded video and audio player in the TYPO3 web content management system is [susceptible] to cross-site-scripting.

Alerts:
Debian DSA-2772-1 typo3-src 2013-10-10

Comments (none posted)

xen: information leak

Package(s): xen     CVE #(s): CVE-2013-4355 CVE-2013-4361
Created: October 14, 2013     Updated: December 9, 2013
Description: From the CVE entries:

Xen 4.3.x and earlier does not properly handle certain errors, which allows local HVM guests to obtain hypervisor stack memory via a (1) port or (2) memory mapped I/O write or (3) other unspecified operations related to addresses without associated memory. (CVE-2013-4355)

The fbld instruction emulation in Xen 3.3.x through 4.3.x does not use the correct variable for the source effective address, which allows local HVM guests to obtain hypervisor stack information by reading the values used by the instruction. (CVE-2013-4361)

Alerts:
Debian DSA-3006-1 xen 2014-08-18
Gentoo 201407-03 xen 2014-07-16
SUSE SUSE-SU-2014:0470-1 Xen 2014-04-01
SUSE SUSE-SU-2014:0446-1 Xen 2014-03-25
SUSE SUSE-SU-2014:0411-1 Xen 2014-03-20
openSUSE openSUSE-SU-2013:1953-1 xen 2013-12-25
Scientific Linux SLSA-2013:1790-1 kernel 2013-12-09
Oracle ELSA-2013-1790 kernel 2013-12-06
Oracle ELSA-2013-1790 kernel 2013-12-06
CentOS CESA-2013:1790 kernel 2013-12-06
Red Hat RHSA-2013:1790-01 kernel 2013-12-05
CentOS CESA-2013:X013 xen 2013-11-25
openSUSE openSUSE-SU-2013:1636-1 xen 2013-11-07
Fedora FEDORA-2013-18378 xen 2013-10-14
Fedora FEDORA-2013-18373 xen 2013-10-14

Comments (none posted)

xorg-server: code execution

Package(s): xorg-server     CVE #(s): CVE-2013-4396
Created: October 15, 2013     Updated: October 31, 2013
Description: From the CVE entry:

Use-after-free vulnerability in the doImageText function in dix/dixfonts.c in the xorg-server module before 1.14.4 in X.Org X11 allows remote authenticated users to cause a denial of service (daemon crash) or possibly execute arbitrary code via a crafted ImageText request that triggers memory-allocation failure.

Alerts:
Fedora FEDORA-2015-3948 nx-libs 2015-03-26
Fedora FEDORA-2015-3964 nx-libs 2015-03-26
Gentoo 201405-07 xorg-server 2014-05-15
Oracle ELSA-2013-1620 xorg-x11-server 2013-11-27
openSUSE openSUSE-SU-2013:1614-1 xorg-x11-server 2013-10-30
openSUSE openSUSE-SU-2013:1610-1 xorg-x11-server 2013-10-30
Mandriva MDVSA-2013:260 x11-server 2013-10-28
Mandriva MDVSA-2013:259 x11-server 2013-10-28
Mageia MGASA-2013-0317 x11-server 2013-10-25
Debian DSA-2784-1 xorg-server 2013-10-22
Ubuntu USN-1990-1 xorg-server, xorg-server-lts-quantal, xorg-server-lts-raring 2013-10-17
CentOS CESA-2013:1426 xorg-x11-server 2013-10-16
Scientific Linux SLSA-2013:1426-1 xorg-x11-server 2013-10-16
Oracle ELSA-2013-1426 xorg-x11-server 2013-10-15
Oracle ELSA-2013-1426 xorg-x11-server 2013-10-15
Red Hat RHSA-2013:1426-01 xorg-x11-server 2013-10-15
Slackware SSA:2013-287-05 xorg 2013-10-14

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.12-rc5, released on October 13. Linus notes that things are calming down and seems generally happy.

Stable updates: 3.11.5, 3.10.16, 3.4.66, and 3.0.100 were released on October 13. There is probably only one more 3.0.x update to be expected before that kernel goes unsupported; 3.0 users should be thinking about moving on.

Comments (none posted)

Quote of the week

Single core systems are becoming a historic curiosity, we should justify every piece of extra complexity we add for them.
Ingo Molnar

Comments (3 posted)

Linux Foundation Technical Advisory Board Elections

Elections for (half of) the members of the Linux Foundation's Technical Advisory Board will be held as a part of the 2013 Kernel Summit in Edinburgh, probably on the evening of October 23. The nomination process is open now; anybody with an interest in serving on the board should get their nomination in soon.

Full Story (comments: none)

Kernel development news

Mount point removal and renaming

By Jake Edge
October 16, 2013

Mounting a filesystem is typically an operation restricted to the root user (or a process with CAP_SYS_ADMIN). There are ways to allow regular users to mount certain filesystems (e.g. removable devices like CDs or USB sticks), but that needs to be set up in advance by an administrator. In addition, bind mounts, which mount a portion of an already-mounted filesystem in another location, always require privileges. User namespaces will allow any user to be root inside their own namespace—and thus be able to mount files and filesystems in (currently) unexpected ways. As might be guessed, that can lead to some surprising behavior that a patch set from Eric W. Biederman is trying to address.

The problem crops up when someone tries to delete or rename a file or directory that has been used as a mount point elsewhere. A user only needs read access to a file (and execute permissions to the directories in the path) to be able to use it as a mount point, which means that users can mount filesystems over files they don't own. When the owner of the file (or directory) goes to remove it, they get an EBUSY error—for no obvious reason. Biederman has proposed changing that with a set of patches that would allow the unlink or rename to proceed and to quietly unmount anything mounted there.

For example, if two users were to set up new mount and user namespaces ("user1" creates "ns1", "user2" creates "ns2"), the existing kernel would give the following behavior:

    ns1$ ls foo
    f1   f2
    ns1$ mount --bind foo /tmp/user2/bar

Over in the other namespace, user2 tries to remove their temporary directory:

    ns2$ ls /tmp/user2/bar
    ns2$ rmdir /tmp/user2/bar
    rmdir: failed to remove ‘bar’: Device or resource busy

The visibility of mounts in other mount namespaces is part of the problem. A user getting an EBUSY when they attempt to remove their own directory may not even be able to determine why they are getting the error. They may not be able to see the mount on top of their file because it was made in another namespace. Coupled with user namespaces, this would allow unprivileged users to perform a denial of service attack against other users—including those more privileged.

Biederman's patches first add mount tracking to the virtual filesystem (VFS) layer. That will allow the later patches to find any mounts associated with a particular mount point. Using that, all of the mounts for a given directory entry (dentry) can be unmounted, which is exactly what is done when a mount point is deleted or renamed.

The idea was generally greeted favorably, but Linus Torvalds raised an issue: some programs are written to expect that rmdir() on a non-empty directory has no side effects, as it just returns ENOTEMPTY. The existing behavior is to return EBUSY if the directory is a mount point, but under Biederman's patches, any mount on the directory would be unmounted before determining whether the directory is empty and can be removed. That essentially adds a side effect to rmdir() even if it fails.
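The distinction is visible from user space as a pair of errno values; here is a short Python sketch of the semantics under discussion (reusing the path from the earlier example):

    import errno
    import os

    try:
        os.rmdir("/tmp/user2/bar")
    except OSError as e:
        if e.errno == errno.EBUSY:
            # Existing behavior: the directory is a mount point.
            print("mount point busy")
        elif e.errno == errno.ENOTEMPTY:
            # The "no side effects" failure mode programs rely on.
            print("directory not empty")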

In addition, depending on the mount propagation settings, the mount in another namespace might be visible. So, a user looking at "their" directory may actually be seeing files that were mounted by another user. But if they try to delete the directory (or some program does), it might succeed because the underlying mount point directory is empty, which may violate the principle of least surprise.

Torvalds was not at all sure that any application cares, but was concerned that it made the change to the semantics larger than needed. He also had a suggestion for a way forward:

That said, I like the _concept_ of being able to remove a mount-point and the mount just goes away. But I do think that for sanity sake, it should have something like "if one of the mounts is in the current namespace, return -EBUSY". IOW, the patch-series would make the VFS layer _able_ to remove mount-points, but a normal rmdir() when something is mounted in that namespace would fail in order to give legacy behavior.

Biederman agreed and proposed another patch that would cause rmdir() to fail with an EBUSY if there is a mount on the directory in the current mount namespace. Mounts in other mount namespaces would continue to be unmounted in that case. But there were some questions raised about whether renaming mount points (or unlink()ing file mount points) should get the same treatment.

Serge E. Hallyn asked: "Do you think we should do the same thing for over-mounted file at vfs_unlink()?" In other words: if the mount is atop a file that is removed (unlink()ed), rather than a directory, should the same rule be applied? The question was eventually broadened to include rename() as well. At first, Biederman thought the rules should only apply to rmdir(), believing that the permissions in the enclosing directories should be sufficient to avoid any problems with the other two operations. But after some discussion with Miklos Szeredi and Andy Lutomirski, he changed his mind. For consistency, as well as alleviating a race condition in earlier (pre-UMOUNT_NOFOLLOW) versions of the fusermount command, "the most practical path I can see is to block unlink, rename, and rmdir if there is a mount in the local namespace".

The fusermount race comes about because of its attempt to ensure that the mount point it is unmounting does not change out from under it. A malicious user could replace the mount point with a symbolic link to some other filesystem, which the root-privileged fusermount would happily unmount. Earlier, Biederman had seen that problem as an insurmountable hurdle to his approach for fixing the rmdir() problem. But, not allowing mount point renames eliminates most of the concern with the fusermount race condition. There are still unlikely scenarios where an older fusermount binary and a newer kernel could be subverted to unmount any filesystem, but Szeredi, who is the FUSE maintainer, is not overly worried. It should be noted that there are other ways to "win" that race even in existing kernels (by renaming a parent directory of the mount point, for example).

New patches reflecting the changes suggested by various reviewers were posted on October 15. Biederman is targeting the 3.13 kernel, so there is some more time for reviewers to weigh in. It is a change that interested folks should be paying attention to, as it does subtly change the longtime behavior of the kernel.

It is, in some ways, another example of the unintended consequences of user namespaces. If user namespaces are not enabled, the problem is essentially just a source of potential confusion; it only becomes a denial of service when they are enabled. But, if distributions are to ever enable user namespaces, these kinds of problems need to be found and fixed.

Comments (3 posted)

Revisiting CPU hotplug locking

By Jonathan Corbet
October 16, 2013
Last week's Kernel Page included an article on a new CPU hotplugging locking mechanism designed to minimize the overhead of "read-locking" the set of available CPUs on the system. That article remains valid as a description of a clever and elaborate special-purpose locking system, but it seems unlikely that it describes code that will be merged into the mainline. Further discussion — along with an intervention by Linus — has caused this particular project to take a new direction.

The CPU hotplug locking patch was designed with a couple of requirements in mind: (1) actual CPU hotplug operations are rare, so that is where the locking overhead should be concentrated, and (2) as the number of CPUs in commonly used systems grows, it is no longer acceptable to iterate over the full set of CPUs with preemption disabled. That is why get_online_cpus() was designed to be cheap, but also to serve as a sort of sleeping lock. Both of those requirements came into question once other developers started looking at the patch set.
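
A typical critical section looks something like this sketch, where the per-CPU update is a hypothetical stand-in for the caller's real work:

    int cpu;

    get_online_cpus();              /* block CPU hotplug operations */
    for_each_online_cpu(cpu)
        update_cpu_state(cpu);      /* may sleep; the CPU cannot go away */
    put_online_cpus();              /* allow hotplug again */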

CPU hotplugging as a rare action

Peter Zijlstra's patch set (examined last week), in response to the above-mentioned requirements, went out of its way to minimize the cost of calls to get_online_cpus() and put_online_cpus() — the locking functions that ensure that no changes will be made to the set of online CPUs during the critical section. Interestingly, one of the first questions came from Ingo Molnar, who thought that get_online_cpus() still wasn't cheap enough. He suggested that read-locking the set of online CPUs should cost nothing, while actual hotplug operations should avoid contention by freezing all tasks in the system. Freezing all tasks is an expensive operation, but, as Ingo put it:

Actual CPU hot unplugging and replugging is _ridiculously_ rare in a system, I don't understand how we tolerate _any_ overhead from this utter slowpath.

It was then pointed out (in the LWN comments too) that Android systems use CPU hotplug as a crude form of CPU power management. Ingo dismissed that use as "very broken to begin with", saying that proper power-aware scheduling should be used instead. That may be true, but it doesn't change the fact that hotplugging is used that way — or that the kernel lacks proper power-aware scheduling at the moment anyway. Paul McKenney posted an interesting look at the situation, noting that CPU hotplugging can serve as an effective defense against scheduler bugs that could otherwise ruin a system's battery life.

The end result is that, for the next few years at least, CPU hotplugging as a power management technique seems likely to stay around. So, while it still makes sense to put the expense of the necessary locking on that side — actually adding or removing CPUs is not going to be a hugely fast operation in the best of conditions — it would hurt some users to make hotplugging a lot slower.

A different way

This was about the point where Linus came along with a suggestion of his own. Rather than set up complex locking, why not use the normal read-copy-update (RCU) mechanism to protect CPU removals? In short, if a thread sees a bit set indicating that a particular CPU exists, all data associated with that CPU will continue to be valid for as long as the reading thread holds an RCU read lock. When a CPU is removed, the bit can be cleared, but the removal of the associated data would have to wait until after an RCU grace period has passed. This mechanism is used throughout the kernel and is well understood.
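
In sketch form, with invented helpers standing in for the per-CPU data handling, the two sides of that scheme would look like:

    /* Reader side */
    rcu_read_lock();
    if (cpu_online(cpu))
        use_cpu_data(per_cpu_ptr(cpu_stats, cpu));  /* data stays valid */
    rcu_read_unlock();

    /* Removal side */
    set_cpu_online(cpu, false);     /* clear the "CPU exists" bit */
    synchronize_rcu();              /* wait out all current readers */
    free_cpu_data(cpu);             /* nobody can be using it now */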

There is only one problem: holding an RCU read lock requires disabling preemption, essentially putting the holding thread into atomic context. Peter expressed his concerns about disabling preemption in this way. Current get_online_cpus() callers assume they can do things like memory allocation that might sleep; that would not be possible if that code had to run with preemption disabled. The other potential problem is that some systems have a lot of CPUs; keeping preemption disabled while iterating over 4096 CPUs could introduce substantial latencies into the system. For these reasons, Peter thought, disabling preemption was not the right way to solve the hotplug locking problem.

Linus was, to put it mildly, unimpressed by this reasoning. It was, he said, the path to low-quality code. Working with preemption disabled, he said, is just the way things should be done in the core kernel:

Yes, preempt_disable() is harder to use than sleeping locks. You need to do pre-allocation etc. But it is much *much* more efficient.

And in the kernel, we care. We have the resources. Plus, we can also say "if you can't handle it, don't do it". We don't need new features so badly that we are willing to screw up core code.

So the sleeping-lock approach has gone out of favor. But, if disabling preemption is to be used instead, solutions must be found to the atomic context and latency problems mentioned above.

With regard to atomic context, the biggest issue is likely to be memory allocations which, normally, can sleep while the kernel works to free the needed space. There are two ways to handle memory allocations when preemption is disabled. One is to use the GFP_ATOMIC flag, but code using GFP_ATOMIC tends to draw a lot of critical attention from reviewers. The other is to allocate while preemption is still enabled: either pre-allocate the memory before disabling preemption, or temporarily re-enable preemption for long enough to perform the allocation. With either variant, naturally, it is usually necessary to check whether the state of the universe has changed while preemption was enabled. All told, it makes for more complex programming, but, as Linus noted, it can be very efficient.
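
The pre-allocation variant might look like this sketch, where struct foo and the recheck function are hypothetical:

    struct foo *obj;
 retry:
    obj = kmalloc(sizeof(*obj), GFP_KERNEL);    /* may sleep; preemption on */
    if (!obj)
        return -ENOMEM;
    preempt_disable();
    if (universe_has_changed()) {       /* did we race with an update? */
        preempt_enable();
        kfree(obj);
        goto retry;
    }
    /* ... use obj in atomic context ... */
    preempt_enable();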

Latency problems can be addressed by disabling preemption inside the loop that passes over all CPUs, rather than outside of it. So preemption is disabled while any given CPU is being processed, but it is quickly re-enabled (then disabled again) between CPUs. That should eliminate any significant latencies, but, once again, the code needs to be prepared for things changing while preemption is enabled.
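
In other words (again with a hypothetical per-CPU operation, which must itself be safe to call in atomic context):

    int cpu;

    for_each_possible_cpu(cpu) {
        preempt_disable();
        if (cpu_online(cpu))        /* recheck: the set may have changed */
            process_one_cpu(cpu);
        preempt_enable();           /* brief window where preemption can occur */
    }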

Changing CPU hotplug locking along these lines would eliminate the need for the complex locking code that was examined last week. But there is a cost to be paid elsewhere: all code that uses get_online_cpus() must be audited and possibly changed to work under the new regime. Peter has agreed that this approach is workable, though, and he seems willing to carry out this audit. That work appears to be underway as of this writing.

To some observers, this sequence of events highlights the difficulties of kernel programming: a talented developer works to create some tricky code that makes things better, only to be told that the approach is wrong. In truth, early patch postings are often better seen as a characterization of the problem than the final solution. As long as developers are willing to let go of their approach when confronted with something better, things work out for the best for everybody involved. That would appear to be the case here; the resulting kernel will perform better while using code that is simpler and adheres more closely to common programming practices.

Comments (none posted)

A new direction for power-aware scheduling

By Jonathan Corbet
October 15, 2013
Power-aware scheduling attempts to place processes on CPUs in a way that minimizes the system's overall consumption of power. Discussion in this area has been muted since we last looked at it in June, but work has been proceeding. Now a new set of power-aware scheduling patches shows a significant change in direction motivated by criticisms that were aired in June. This particular problem is far from solved, but the shape of the eventual solution may be becoming a bit more clear.

Thus far, most of the power-aware scheduling patches posted to the lists have been focused on task placement — packing "small" processes onto a small number of CPUs to allow others to be powered down, for example. The problem with that approach, as Ingo Molnar complained at the time, was that it failed to recognize that there are several mechanisms used to control CPU power consumption. These include the cpuidle subsystem (which decides when a CPU can sleep and how deeply), the cpufreq subsystem (charged with controlling the clock frequency for CPUs) and various aspects of the scheduler itself. There is no integration between these subsystems; indeed, the scheduler is almost entirely ignorant of what the cpuidle and cpufreq controllers are doing. There are other problems as well: the notion of controlling a CPU's frequency has been effectively rendered obsolete by current processor designs, for example.

In the end, Ingo said that no power-aware scheduling patches would be considered for merging until these problems were solved. In other words, the developers working on these patches needed to solve not just their problem, but the problem of rationalizing and integrating the work that has been done by other developers in preceding years. Such things happen in kernel development; it can be hard on individual developers, but it does result in better code in the long term.

The latest approach

To address this challenge, Morten Rasmussen, who has been working on the big.LITTLE MP scheduler, has taken a step back; his latest power-aware scheduling patch set does not actually introduce much in the way of power-aware scheduling. Instead, it is focused on the creation of an internal API that governs communications between the scheduler and a new type of "power driver" that is meant to eventually replace the cpuidle and cpufreq subsystems. The power driver (there can only be one for all CPUs in the current patch set) provides these operations to the scheduler:

    struct power_driver {
	int (*at_max_capacity)	(int cpu);
	int (*go_faster)	(int cpu, int hint);
	int (*go_slower)	(int cpu, int hint);
	int (*best_wake_cpu)	(void);
	void (*late_callback)	(int cpu);
    };

Two of these methods allow the scheduler to query the power driver: at_max_capacity() asks whether a given processor is running at full speed, while best_wake_cpu() asks which (sleeping) CPU would be the best to wake in response to increasing load. The best_wake_cpu() call can make use of low-level architectural knowledge to determine which CPU would require the least power to bring up; it would, for example, favor CPUs that share power or clock lines with currently running CPUs over those that would require powering up a new package.

The scheduler can provide feedback to the power driver with the go_faster() and go_slower() methods. These calls request higher or lower speed from the given CPU without specifying an actual clock frequency, which isn't really possible on a lot of current processors. The power driver can then instruct the hardware to adopt a power policy that matches what the scheduler is asking for. The hint parameter is not used in the current patch set; its purpose is to indicate how much faster or slower the scheduler would like the CPU to run. Indeed, these calls as a whole are just hints; the power driver is not required to carry out the scheduler's wishes.

Finally, late_callback() exists to allow the power driver to do work that may require sleeping or having interrupts enabled. Most of the functions listed above can be called from within the scheduler at almost any point, so they have to be written to run in atomic context. If they need to do something that cannot be done in that context, they can set the work aside; the scheduler will call late_callback() at a safe time for that work to be done.

The current patch set makes just enough use of these functions to show how they would be used. Whenever the scheduler adds a process to a given CPU's run queue, it checks whether the total load exceeds what the CPU is able to provide; if so, a call to go_faster() will be made to ask for more performance. A similar test is done whenever a process is removed from a CPU; if that CPU is providing more processing power than is needed, go_slower() will be called. A separate test will call go_faster() if the idle time on the CPU is low, even if the computed load suggests that the CPU should not be busy. Rudimentary implementations of go_faster() and go_slower() have been provided; they are simple wrappers around the existing cpufreq driver code.
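
As a rough illustration only (the frequency-setting helper is a placeholder, registration of the driver is not shown, and none of this code comes from the posted patches), such a cpufreq-wrapping power driver might look like:

    #include <linux/cpufreq.h>

    static int simple_at_max_capacity(int cpu)
    {
	/* cpufreq_quick_get*() report frequencies in kHz */
	return cpufreq_quick_get(cpu) >= cpufreq_quick_get_max(cpu);
    }

    static int simple_go_faster(int cpu, int hint)
    {
	/* "hint" is ignored, as in the current patch set */
	return set_cpu_frequency(cpu, cpufreq_quick_get_max(cpu));
    }

    static int simple_go_slower(int cpu, int hint)
    {
	return set_cpu_frequency(cpu, cpufreq_quick_get(cpu) / 2);
    }

    static struct power_driver simple_power_driver = {
	.at_max_capacity = simple_at_max_capacity,
	.go_faster	 = simple_go_faster,
	.go_slower	 = simple_go_slower,
	/* best_wake_cpu() and late_callback() left unimplemented */
    };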

What's coming

The full plan (as described in Morten's Linux Plumbers Conference talk slides [PDF]) calls for the elimination of cpufreq and cpuidle altogether once their functionality has been pulled into the power driver. There will also be several more functions to be provided by the power driver. These include get_best_sleep_cpu() to get a hint for the best CPU to put to sleep, enter_idle() to actually put a CPU into the sleep state, load_scale() to help with the calculation of loads regardless of the CPU's current power state, and task_boost() to give priority to a specific CPU. task_boost() is aimed at systems that provide features like "turbo mode," where one CPU can be overclocked, but only if the others are idle.

The long-term plan also involves bringing back techniques like small-task packing, proper support for big.LITTLE systems, and more. But those goals look distant at the moment; Morten and company must first build a consensus around the proposed architecture. That may yet take some doing; scheduler developer Peter Zijlstra's first response was "I don't see anything except a random bunch of hooks without an over-all picture of how to get less power used." Morten has promised to fill out the story.

Some of these issues may be resolved on October 23, when a half-day minisummit will be held on the topic in Edinburgh. Many of the relevant developers should be there, allowing for quick resolution of a number of the outstanding issues. With luck, your editor will be there too; stay tuned for the next episode in this long-running story.

Comments (3 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 3.12-rc5
Greg KH Linux 3.11.5
Greg KH Linux 3.10.16
Sebastian Andrzej Siewior 3.10.15-rt11
Kamal Mostafa Linux 3.8.13.11
Luis Henriques Linux 3.5.7.23
Greg KH Linux 3.4.66
Greg KH Linux 3.0.100

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Security-related

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Chromium API keys on Debian

By Jake Edge
October 16, 2013

Just over a year ago, Google started requiring an API key to enable certain features in the Chromium browser. For developers, it is presumably a minor nuisance to acquire the key before building the browser—or to simply ignore the features enabled by the key. For Linux distributions, though, a key is pretty much required. But, as Ignacio Areta pointed out on the debian-legal mailing list, the Terms of Service that go along with API keys may not be compatible with free software and with the Debian Free Software Guidelines (DFSG) in particular.

Areta noted that several different distributions he looked at (openSUSE, Arch Linux, and Debian) had their own API key, each with a warning that the key was only for use by the distribution itself. Those wanting to build or distribute their own Chromium using the source provided by the distribution are expected to get their own key. That seems to run counter to the normal expectations for free software.

Beyond that, the Terms of Service (ToS) for using the Google APIs contain language that worried Areta. He was not alone; Ben Finney noted specific sections of the DFSG that seem to be violated by the portions of the ToS that Areta quoted. In particular, the ToS sublicensing language may run afoul of section 8 of the DFSG, while the "no reverse engineering" language may violate section 6, Finney said.

The key in question allows access to nine separate Google services, including Spelling, Suggest, Translate, Geolocation, and Safe Browsing. While some of those features could be considered optional, losing them would certainly degrade the experience of Chromium users compared to that of Chrome users—or even Firefox users—in many cases. It's a little strange to see a feature added that requires a magic key in order to use it, but the restrictions on what can be done with the APIs and keys make it all the more worrisome.

Paul Wise noted that the key strings themselves (which are stored in a Debian makefile) are probably not copyrightable, which means their license may be irrelevant from a copyright standpoint. But he also pointed out the real danger: Google could turn off access to its APIs for that key at any time. Someone using the Debian Chromium source (and key) in a way that violates the ToS might be enough to provoke the search giant into just that kind of response. That would, of course, damage all Debian Chromium users, not just the "guilty" party.

In a followup message, Areta pointed out another troubling clause in the ToS: "You will require your end users to comply with any applicable law and these terms." That seems to indicate that Debian would, somehow, need to require its users to agree to Google's ToS—yet another incompatibility with the expectations for a free software distribution. He also wondered whether other companies might start requiring keys to access their services. "That would be bad", Wise commented.

While Google clearly has the right to limit access to its APIs in any way it chooses, integrating features that depend on them into free software muddies the water considerably. No firm conclusion was reached about what to do in the debian-legal thread, but it's probably not the last we've heard on the issue. One might want to simply write off the API ToS as legalese that is unlikely to be invoked (at least against a distribution's key), but there is a risk in doing so. Some distributions might well write it off, but Debian has a reputation as a stickler on legal issues.

One could imagine distributions leaving it to the user to get an API key, perhaps even providing an automated way to do so at package installation time. But that approach leaves a great deal to be desired as well. For one thing, it would allow individual users to be more easily tracked across the APIs—though signing into Google, as Chromium suggests its users do, already makes that fairly easy. That has privacy implications, of course.

There is a balance to be struck here. Toning down the API ToS might be one way forward, as would turning off the Chromium features enabled by the APIs—or dropping Chromium entirely. Perhaps the most likely outcome, at least in the short term, is for distributions to continue on their existing course. Until and unless Google actually starts revoking API access, that seems fairly harmless. The scramble for a solution may only come later—or never. Only time will tell.

Comments (6 posted)

Brief items

Distribution quotes of the week

I think I had it right last night. There is a giant cosmic dance being performed in a tiny little box on my desk on a hill on an island at the bottom of the world.

Or, I’ve been sitting at my computer for too long again.

Tim Serong

The Provo team noted that being in the last timezone meant being pretty lonely. Pizza Master Scott suggested we need to set up a teleportation unit and get everybody physically in one place next time. The openSUSE team is evaluating this option and suggestions for reasonably priced teleportation devices are welcome.
Jos Poortvliet

Comments (none posted)

Debian 7.2 released

The Debian Project has released Debian 7.2, the second update to the current stable version, "Wheezy". As usual, the update adds corrections for security problems and other serious issues.

Full Story (comments: none)

Announcing Fedora 19 ARM remix for Allwinner SOCs release 3

Hans de Goede has announced the third release of his Fedora 19 ARM remix for Allwinner A10, A10s, A13 and A20 based devices. The release is based on the official Fedora 19 for ARM release with u-boot and kernel(s) from the linux-sunxi project.

Full Story (comments: none)

openSUSE 13.1 RC 1 Available

The first release candidate for openSUSE 13.1 is available for testing. "As you might remember, we called for additional testing of btrfs specifically. It won’t be the default in this release but the next generation filesystem has been making steady progress and in the last month, over 25 bugs have been found and fixed. There is still more work to be done, but btrfs should be a safe choice for openSUSE 13.1 users and a good candidate for default filesystem for the next release."

Comments (none posted)

Slackware 14.1 RC1

The October 14 entry in the Slackware current changelog announces the first release candidate for Slackware 14.1. "UEFI (with the exception of Secure Boot, which will have to wait until we have real hardware) should be fully implemented in the installer now, which will detect and warn about common problems, set up the EFI System Partition under /boot/efi, and install ELILO and a UEFI boot entry automatically. There's a new README_UEFI.TXT file with detailed instructions for installing 64-bit Slackware on UEFI."

Comments (none posted)

Whonix Anonymous Operating System Version 7 Released

Whonix has released version 7 of its anonymity-, privacy-, and security-focused operating system. "It's based on the Tor anonymity network, Debian GNU/Linux and the principle of security by isolation. DNS leaks are impossible, and not even malware with root privileges can find out the user's real IP."

Full Story (comments: 2)

Distribution News

Debian GNU/Linux

Bits from the Release Team (Jessie freeze info)

The Debian release team has announced a freeze date of November 5, 2014 for the next stable version, Jessie. Click below for the current freeze policy, the results of a porter roll-call, proposed release goals, and more.

Full Story (comments: none)

Newsletters and articles of interest

Page editor: Rebecca Sobol

Development

Deploying Docker

By Nathan Willis
October 16, 2013

The Docker project is not (as its name might suggest) yet another desktop application launcher. Rather, it is an application deployment system designed to make it easy for programmers to develop an application on one type of machine (such as a work laptop) and package it for predictable use on a variety of other machines (such as a cloud instance or server farm). A Docker package is a container that can be deployed and run in nearly any Linux environment, but it is not a virtual machine image—instead, it is a self-contained copy-on-write duplicate of a master application image. It comes with some awkward technical baggage, but the ideas behind Docker proved compelling enough that one Red Hat developer recently took it upon himself to figure out how to adapt the system for standard Linux distributions.

The Docker site compares its deployment model with the idea of the standard shipping container: just as standard containers allow freight companies to move cargo on any vessel without modifying either the vessel or the payload, a Docker package is designed to be self-sufficient. It can be packaged up anywhere, then unpacked and installed anywhere. Each Docker application runs as its own LXC process container, with full network and resource isolation from the others. Cgroups and namespaces must also be enabled, although Docker will run with some cgroups (such as the memory controller) disabled.

The same claims would be made about VM deployments, of course. Where Docker differs is that it only containerizes the application and its dependencies; the process container runs directly on the host OS. This makes for smaller packages, less overhead, and faster application restarts. Multiple copies of each Docker container can be deployed together on the same server, sharing libraries. In this scenario, Docker again saves space by storing the containers on a union file system (specifically, AnotherUnionFS or AUFS). Only the differences between the various copies of the application need to be saved, so read-only portions of each application take up zero additional space.

Because each Docker container is a copy of the same original, starting up an additional instance of the application is also far faster than spinning up another VM. The AUFS filesystem can also be exploited when rolling out application updates, overwriting only the changed portions of the application.

Docker development is sponsored by dotCloud, a platform-as-a-service vendor. The system undoubtedly makes for a good base on which to build a cloud service, since the marginal savings over a VM approach continue to scale up as more and more applications are deployed. Nevertheless, it has attracted attention from cloudless developers as well, who would like to see it available on general-purpose machines running Red Hat Enterprise Linux (RHEL), Ubuntu, and other large distributions. On those systems, the cloud-scalability advantages might not be a significant factor, but the other ease-of-deployment features could still prove helpful.

The problem is that Docker relies on AUFS for so much of its functionality. AUFS is not part of the mainline Linux kernel; it is a patch set. Furthermore, the only major distribution to build AUFS support in the kernels it ships is Ubuntu—and Ubuntu has announced plans to deprecate it in favor of an upstream replacement.

That problem evidently got Red Hat's Alex Larsson thinking. Larsson wrote a blog post on October 15 describing his recent effort to get Docker working on top of a different filesystem. The obvious solution might be to use overlayfs, which seems to be on track for landing in the upstream kernel, but Larsson felt that it was not yet ready for use. Similarly, Btrfs has copy-on-write functionality, but Btrfs is not yet stable enough for many users.

Eventually, Larsson decided to try porting Docker to use Logical Volume Manager (LVM) thin provisioning with a little trickery from the kernel device-mapper: he creates a copy-on-write block device pool with LVM thin provisioning, then creates an ext4 "base" device on top of it. Each Docker container is then created as an LVM snapshot of that base device. The LVM thin provisioning means that the copy-on-write device only takes up space for the actual files it uses; since the LVM snapshots are layered on top of the base image, they automatically reuse the files below.

Larsson reports that there are still some issues to work out (such as how to overcome the various maximum sizes on the filesystems used), but he hopes to have the work land in time for the upcoming Docker 0.7 release. That may not make Docker ready for deployment on vanilla RHEL machines any time soon, but it is a step in the right direction. Larsson said he thinks the long-term solution will probably be to port Docker over to overlayfs, but in the meantime, the LVM-based approach looks like a good—and clever—workaround.

Comments (11 posted)

Brief items

Quote of the week

Every time a company uses “proprietary” to describe how good a feature is I vomit. “Square’s proprietary, industry-leading risk detection”
Alex Gaynor

Comments (6 posted)

Wayland and Weston 1.3 released

Version 1.3 of the Wayland protocol and Weston reference compositor have been released. In the release announcement, Kristian Høgsberg says that there isn't much that's new in the Wayland release, which is a sign of its maturation; it brings support for new pixel formats, additional documentation, language-binding support, a few bug fixes, and more. The Weston cycle was more active, with the addition of hardware-accelerated screen capture, libhybris support, support for multiple input devices of the same type, better touch support, new launching options, and more. "We're going to try something new for 1.4 - we'll do an alpha release a month before the scheduled release. I'm looking at Jan 15, 2014 as the release date for 1.4.0, and we'll do an alpha release on Dec 16. The motivation here is to get a snapshot out a bit earlier so we can start testing earlier and hopefully uncover bugs earlier."

Comments (19 posted)

Samba 4.1.0 released

The Samba 4.1.0 release is out. This version adds improved SMB2 and SMB3 protocol support, encrypted transport over SMB3, server-side copy operations, Btrfs filesystem integration, and more. This release also removes the long-unloved SWAT web-based administration tool.

Full Story (comments: 1)

Cinnamon 2.0 released

Version 2.0 of the Cinnamon desktop has been released. There are some improvements in features like edge tiling, edge snapping, user account management, and more, but much of the work in this release appears to have been done to the supporting infrastructure. "Prior to version 2.0, and similar to Shell or Unity, Cinnamon was a frontend on top of the GNOME desktop. In version 2.0, and similar to MATE or Xfce, Cinnamon is an entire desktop environment built on GNOME technologies. It still uses toolkits and libraries such as GTK or Clutter and it is still compatible with all GNOME applications, but it no longer requires GNOME itself to be installed. It now communicates directly with its own backend services, libraries and daemons: cinnamon-desktop, cinnamon-session and cinnamon-settings-daemon."

Comments (55 posted)

PyPy's new garbage collector

The PyPy status blog has a detailed description of the new incremental garbage collector adopted by this performance-oriented Python interpreter project. "The main issue is that splitting the major collections means that the main program is actually running between the pieces, and so it can change the pointers in the objects to point to other objects. This is not a problem for sweeping: dead objects will remain dead whatever the main program does. However, it is a problem for marking. Let us see why."

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

XBMC DevCon 2013 LiveBlog

Nathan Betzen of XBMC has posted a round-up of the events at the project's recent XBMC DevCon. The article was started as a live-blogging piece, but the conference has now concluded, leaving Betzen's account as a good record for those unable to attend. Issues discussed at the event include revising the settings system, 3D support, and adding the ability for multiple XBMC instances on a network to share content with each other.

Comments (none posted)

Page editor: Nathan Willis

Announcements

Brief items

OIN Surpasses 600 Licensees in its Patent Non-Aggression Community

Open Invention Network (OIN) has announced that it has surpassed the milestone of 600 members in its patent non-aggression community. ""Since our formation, in addition to dramatically growing the number of participants in the OIN community, we have thoughtfully expanded the scope of our Linux System definition and developed important programs like Linux Defenders," said Keith Bergelt, CEO of Open Invention Network. "We will continue to improve as we fulfill our mission of providing Linux and open source developers, distributors and users the freedom to innovate and operate.""

Full Story (comments: none)

Slides and videos from the 2013 Linux Plumbers Conference

The Linux Plumbers Conference organizers have put up a page of slides and videos from the 2013 event. There is a lot of interesting material to be found there for those of us who could not attend the conference — or even for those who were there.

Comments (none posted)

Articles of interest

FSFE Newsletter - October 2013

The Free Software Foundation Europe's newsletter for October covers GNU's 30th anniversary, local Fellowship activities, human rights and communication surveillance, CERN's Open Hardware License, and several other topics.

Full Story (comments: none)

Calls for Presentations

FOSDEM14: Graphics DevRoom: call for speakers.

There will be a graphics devroom at this year's FOSDEM (Free and Open Source Developer's European Meeting). FOSDEM will take place February 1-2, 2014 in Brussels, Belgium. This announcement is the call for speakers for the graphics devroom.

Full Story (comments: none)

CFP Deadlines: October 17, 2013 to December 16, 2013

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline     Event dates              Event                                         Location
November 1   January 6                Sysadmin Miniconf at Linux.conf.au 2014       Perth, Australia
November 4   December 10–December 11  2013 Workshop on Spacecraft Flight Software   Pasadena, USA
November 15  March 18–March 20        FLOSS UK 'DEVOPS'                             Brighton, England, UK
November 22  March 22–March 23        LibrePlanet 2014                              Cambridge, MA, USA
November 24  December 13–December 15  SciPy India 2013                              Bombay, India
December 1   February 7–February 9    devconf.cz                                    Brno, Czech Republic
December 1   March 6–March 7          Erlang SF Factory Bay Area 2014               San Francisco, CA, USA
December 2   January 17–January 18    QtDay Italy                                   Florence, Italy
December 3   February 21–February 23  conf.kde.in 2014                              Gandhinagar, India
December 15  February 21–February 23  Southern California Linux Expo                Los Angeles, CA, USA

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Real World Cryptography Workshop

The Real World Cryptography Workshop will be held in New York City, NY January 13-15, 2014. "The Real World Cryptography Workshop aims to bring together cryptography researchers with developers implementing cryptography in real-world systems. The main goal of the workshop is to strengthen the dialogue between these two groups. Topics covered will focus on uses of cryptography in real-world environments such as the Internet, the cloud, and embedded devices." Registration is required and closes December 20, 2013. (Thanks to Geoffrey Thomas)

Comments (none posted)

Events: October 17, 2013 to December 16, 2013

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event                                                      Location
October 14–October 19    PyCon.DE 2013                                              Cologne, Germany
October 17–October 20    PyCon PL                                                   Szczyrk, Poland
October 19               Central PA Open Source Conference                          Lancaster, PA, USA
October 19               Hong Kong Open Source Conference 2013                      Hong Kong, China
October 20               Enlightenment Developer Day 2013                           Edinburgh, Scotland, UK
October 21–October 23    KVM Forum                                                  Edinburgh, UK
October 21–October 23    LinuxCon Europe 2013                                       Edinburgh, UK
October 21–October 23    Open Source Developers Conference                          Auckland, New Zealand
October 22–October 24    Hack.lu 2013                                               Luxembourg, Luxembourg
October 22–October 23    GStreamer Conference                                       Edinburgh, UK
October 23               TracingSummit2013                                          Edinburgh, UK
October 23–October 25    Linux Kernel Summit 2013                                   Edinburgh, UK
October 23–October 24    Open Source Monitoring Conference                          Nuremberg, Germany
October 24–October 25    Embedded Linux Conference Europe                           Edinburgh, UK
October 24–October 25    Xen Project Developer Summit                               Edinburgh, UK
October 24–October 25    Automotive Linux Summit Fall 2013                          Edinburgh, UK
October 25–October 27    Blender Conference 2013                                    Amsterdam, Netherlands
October 25–October 27    vBSDcon 2013                                               Herndon, Virginia, USA
October 26–October 27    T-DOSE Conference 2013                                     Eindhoven, Netherlands
October 26–October 27    PostgreSQL Conference China 2013                           Hangzhou, China
October 28–November 1    Linaro Connect USA 2013                                    Santa Clara, CA, USA
October 28–October 31    15th Real Time Linux Workshop                              Lugano, Switzerland
October 29–November 1    PostgreSQL Conference Europe 2013                          Dublin, Ireland
November 3–November 8    27th Large Installation System Administration Conference   Washington DC, USA
November 5–November 8    OpenStack Summit                                           Hong Kong, Hong Kong
November 6–November 7    2013 LLVM Developers' Meeting                              San Francisco, CA, USA
November 8               PGConf.DE 2013                                             Oberhausen, Germany
November 8               CentOS Dojo and Community Day                              Madrid, Spain
November 8–November 10   FSCONS 2013                                                Göteborg, Sweden
November 9–November 11   Mini DebConf Taiwan 2013                                   Taipei, Taiwan
November 9–November 10   OpenRheinRuhr                                              Oberhausen, Germany
November 13–November 14  Korea Linux Forum                                          Seoul, South Korea
November 14–November 17  Mini-DebConf UK                                            Cambridge, UK
November 15–November 16  Linux Informationstage Oldenburg                           Oldenburg, Germany
November 15–November 17  openSUSE Summit 2013                                       Lake Buena Vista, FL, USA
November 17–November 21  Supercomputing                                             Denver, CO, USA
November 18–November 21  2013 Linux Symposium                                       Ottawa, Canada
November 22–November 24  Python Conference Spain 2013                               Madrid, Spain
November 25              Firebird Tour: Prague                                      Prague, Czech Republic
November 28              Puppet Camp                                                Munich, Germany
November 30–December 1   OpenPhoenux Hardware and Software Workshop                 Munich, Germany
December 6               CentOS Dojo                                                Austin, TX, USA
December 10–December 11  2013 Workshop on Spacecraft Flight Software                Pasadena, USA
December 13–December 15  SciPy India 2013                                           Bombay, India

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds