LWN.net Weekly Edition for May 22, 2014
PostgreSQL 9.4 beta: Binary JSON and Data Change Streaming
It's May, which means that it's time for a new PostgreSQL beta release. As with each annual release, PostgreSQL 9.4 has a few dozen new features addressing the various ways people use the database system. Among the features ready for testing are: index and aggregation performance improvements, materialized views, ALTER SYSTEM SET, dynamically loadable background workers, new array-manipulation syntax, security barrier views, and time-delayed standbys. While users all have their own favorites among the new features, this article will focus on two features that have received the most attention: the new JSONB type, and Data Change Streaming.
JSONB
JSON, which stands for "JavaScript Object Notation", is a serialization format originally designed to let JavaScript programs store data on disk or in a cache. It is, more or less, a subset of YAML, and consists of keys, values, and lists. Over the last few years JSON has become a kind of lingua franca for data interchange between programs written in different languages, largely replacing XML for new applications.
PostgreSQL 9.4 introduces a new data type called "JSONB" for "binary JSON". The name is somewhat deceptive; since JSON is a text serialization format, it's not binary anything. What JSONB does is implement storage for semantic JSON in a specialized compressible tree structure based on the PostgreSQL extension HStore. This binary structure allows the implementation of new features that make PostgreSQL JSON much more useful: indexing, transformation, sorting, path search, and matching keys and values. It also means better performance on some operations.
Version 9.2 introduced JSON text support to PostgreSQL. However, that data was stored as text, which had several disadvantages. All operations had to re-parse the JSON, even purely internal ones. It was also impossible to sort JSON values, or to determine the equivalence of two JSON strings since order and white space were preserved. For backward compatibility reasons, however, the original text JSON type will be retained in PostgreSQL, which requires users to choose between the two types. The old JSON type will continue to be useful for users who need to preserve white space and key ordering.
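As a quick illustration of the difference (the table and values here are invented for this example, and the output alignment is approximate): the json type stores the input text verbatim, while jsonb stores the parsed structure, so extra white space and key order are not preserved:
CREATE TABLE jsontest (j json, jb jsonb);
INSERT INTO jsontest VALUES ('{"b": 2,  "a": 1}', '{"b": 2,  "a": 1}');
SELECT j, jb FROM jsontest;
         j          |        jb
--------------------+------------------
 {"b": 2,  "a": 1}  | {"a": 1, "b": 2}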
Indexing JSONB
The biggest advantage of JSONB is the ability to create general indexes on columns of JSONB values, which can then be used for almost any search on that data. These are GIN (Generalized INverted) indexes, which have also received a substantial (50% or more) performance boost for 9.4. As an example, imagine that we have this table of publication data for a set of books:
    Table "public.booksdata"
  Column  |  Type
----------+--------
 title    | citext
 isbn     | isbn
 pubinfo  | jsonb
We can create a GIN index on the pubinfo column:
CREATE INDEX booksdex ON booksdata USING GIN (pubinfo);
And then do path queries and matching against that data, which will use the index:
SELECT title, isbn, pubinfo #> '{ "whlscost" }' as cost
  FROM booksdata
 WHERE pubinfo @> '{ "publisher" : "Avon" }';
As with other exotic data types, the new features are supported by a mix of functions and operators which are available inside the SQL interface. For example, "@>" used above means "contains", and "#>" means "extract path". In this case, a "path" is a chain of hierarchical keys, such as "publisher, format, edition". Path queries will become even more useful in PostgreSQL 9.5, when wildcards will be supported.
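To make that concrete, here are two more queries against the hypothetical booksdata table above; the "?" operator tests whether a top-level key is present, and "->>" extracts a single value as text rather than as JSON:
SELECT title FROM booksdata WHERE pubinfo ? 'format';
SELECT title, pubinfo ->> 'publisher' AS publisher FROM booksdata;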
The new type also supports doing element extraction, comparisons, and sorting, which allows treating JSON values like other kinds of data values. For example, given a set of heterogeneous JSON strings, we can get them to sort in a way which makes intuitive sense:
SELECT some_json FROM sortjson ORDER BY some_json;

       some_json
------------------------
 {"a": 1, "b": "carol"}
 {"a": 1, "b": "mel"}
 {"a": 3, "b": "alice"}
 {"b": "alice", "c": 2}
Version 9.4 also introduces several aggregation functions, which let you "roll up" JSON from an entire column:
SELECT title, json_agg(pubinfo #>> '{ "published_on" }') as pub_dates
  FROM booksdata
 WHERE title = 'Sphere'
 GROUP BY title;

 title  |          pub_dates
--------+------------------------------
 Sphere | ["1999-03-04", "1998-07-04"]
This takes the publication date from a set of JSON documents, and aggregates them into a JSON array.
Utility and purpose of JSONB
There have been many questions inside and outside the PostgreSQL community (for example, on Reddit) as to why an advanced relational database is implementing functionality more associated with non-relational, or "NoSQL", databases. Some commenters, in particular, see JSONB as an abandonment of the relational model. So, why did the project implement it?
The first answer is that, historically, the PostgreSQL project has continually added new data types to support new kinds of data which need storage, indexing, and manipulation. Over the last 16 years, this has included IP address types, arrays and matrices, spatial data types, XML data, and others. So adding support for storing data in JSON format is just a continuation of that.
Perhaps more importantly, JSON supports having the application add new attributes to the data at runtime. One of the limitations of the SQL model has long been the lack of a safe way for a user application to add columns in response to a user action, such as data from a configuration control panel. PostgreSQL 8.3 previously bridged this gap with the HStore indexed key-value data type. JSONB is a furtherance of that concept that supports not only key-value data but also hierarchical data.
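A small sketch of what that flexibility looks like in practice (the values are invented, and it assumes the isbn column in the table above is nullable): documents with different sets of keys can live side by side in the same jsonb column, and rows that lack a given key simply return NULL for it:
INSERT INTO booksdata (title, pubinfo)
    VALUES ('Airframe', '{"publisher": "Knopf", "format": "hardcover"}'),
           ('Congo',    '{"publisher": "Knopf", "language": "en"}');
SELECT title, pubinfo ->> 'format' AS format FROM booksdata;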
Existing PostgreSQL users also wanted richer JSON features to support their use of JSON as an output format and an API for web applications that interact with the database. This allows construction of simple two- or three-tier web and mobile applications that get back data as JSON. This data can then be passed through directly to a JavaScript client without further manipulation. In particular, the new operators and functions that come with JSONB will enable building more of these kinds of applications, faster.
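For instance, one way (among several) to hand an entire result set back as a single JSON document is to combine the row_to_json() and json_agg() functions, both of which predate 9.4:
SELECT json_agg(row_to_json(b)) AS books
  FROM (SELECT title, pubinfo FROM booksdata
         WHERE pubinfo @> '{"publisher": "Avon"}') b;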
Comparisons to NoSQL
Of course, the other big reason to implement indexed JSON storage inside PostgreSQL is competition with new, non-relational databases — the so-called NoSQL databases. Many of these databases use JSON as a storage or API format, which is appealing to web developers who are already familiar with it.
With JSONB, PostgreSQL now implements a large chunk of the JSON data manipulation features available in databases like MongoDB and CouchDB, with comparable performance. This is all implemented without sacrificing the database's relational capabilities, reliability, or multi-core performance. The project is betting that it can lure back users who like JSON but are unhappy with some of the limitations of the new databases.
The PostgreSQL project will need to implement more to become really attractive for use as a NoSQL database, though. A fast API that doesn't require SQL would be the first step, and projects like pgRest, Mongolike, and Mongres have all been working on this. A second requirement is much more difficult: sharding and horizontal scalability; this may be addressed by the recently released Postgres-XL.
Other features remain on the "to do" list for PostgreSQL's JSON support that the developers plan to address, including: offering ways to update a single key in a large JSON document, wildcard support for path queries, and faster and more versatile indexing. That last item will be addressed as part of the VODKA project (yes, really), which will be launched at pgCon this week. One only wonders how long it will take PostgreSQL to support WHISKEY and RUM indexing as well.
Data Change Streaming
PostgreSQL's built-in replication has proved to be extremely popular and robust, allowing many users to improve redundancy and scale. However, it is limited to replicating the entire database in only one direction, which constrains the kind of scale-out architectures users can build with it. It also requires both master and replica to be running the same PostgreSQL version, preventing upgrade-by-replication.
There have been other systems to work around this, such as Slony-I, Bucardo, and Londiste. However, all of these limit both throughput and the changes application developers are allowed to make to the database. As a result, they have been unattractive for large scale-out infrastructures.
Data Change Streaming, a 9.4 feature added by 2nd Quadrant developer Andres Freund, provides a C API for listening to PostgreSQL's binary replication stream and extracting row changes and SQL statements from it. This feature has also been called "Changeset Extraction" and "Logical Decoding" at various points. This release also includes pg_recvlogical, a command-line utility that connects to PostgreSQL replication and writes out data changes to STDOUT or a file.
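The same machinery is exposed through SQL-level functions as well, which makes it easy to experiment without setting up a replication connection. A minimal session might look like the following sketch; it assumes the server has been started with wal_level = logical and a nonzero max_replication_slots, and it uses the test_decoding example output plugin that ships with 9.4:
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
INSERT INTO booksdata (title, pubinfo) VALUES ('Timeline', '{"publisher": "Knopf"}');
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);
SELECT pg_drop_replication_slot('test_slot');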
Data Change Streaming can become a game-changing feature for PostgreSQL, allowing development of sophisticated multi-directional replication (otherwise known as "multi-master") and automated sharding systems without sacrificing per-node performance. Realizing that potential, though, will require the construction of an entire layer of tools on top of Data Change Streaming before most users are able to utilize it. The Slony-I project is already writing code to use the new API.
The biggest "to do" for the next version is to find a way to capture Data Definition Language (DDL) statements, the commands the user sends to create and modify table definitions, which are not represented in the data change stream in 9.4. Fixing this is a high priority for 9.5, so that the replication system doesn't interfere with continuous integration and development pushes, which may include such statements.
Other Features
As with other PostgreSQL releases, this one contains a large number of other features. Among them are:
ALTER SYSTEM SET: this new statement allows setting PostgreSQL configuration variables in the configuration file over a database connection. This enables easier auto-tuning and management of large numbers of PostgreSQL instances.
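A brief sketch of how that looks in practice (the parameter and value here are arbitrary); the setting is written to the postgresql.auto.conf file and takes effect after a configuration reload, or a restart for parameters that require one:
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();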
REFRESH MATERIALIZED VIEW CONCURRENTLY: this allows users to append or rebuild "materialized views" in the background while other users are still querying the old version. As materialized views are large, complex reporting queries whose results have been stored for quick reference, this will make PostgreSQL more useful as an analytics and decision support database.
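A hypothetical example, reusing the booksdata table from earlier; note that the CONCURRENTLY form requires a unique index on the materialized view:
CREATE MATERIALIZED VIEW pub_counts AS
    SELECT pubinfo ->> 'publisher' AS publisher, count(*) AS titles
      FROM booksdata GROUP BY 1;
CREATE UNIQUE INDEX ON pub_counts (publisher);
REFRESH MATERIALIZED VIEW CONCURRENTLY pub_counts;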
Dynamic background workers: version 9.3 introduced the idea of the "background worker", a daemon that would start and stop with PostgreSQL and handle background tasks. Now, 9.4 adds the ability to make these workers dynamically loadable, which means that they can be launched in response to server tasks, permitting asynchronous activity, parallelism, and deferred maintenance.
This is just the first beta release, but the list of features is expected to be stable between this and the final release. Multiple betas will be released over the next four months, culminating in a final release sometime in September. Development on PostgreSQL version 9.5 will start in June.
[ Josh Berkus is a member of the PostgreSQL core team. ]
US Supreme Court decisions make patent trolling riskier
While the recent decision for Oracle on the copyrighting of APIs may be distressing to software developers, the Supreme Court of the US (SCOTUS) offered some comfort on a different issue a few weeks ago. The court dealt a significant blow to patent trolling.
Exhibiting an awareness of frivolous litigation plaguing the patent system, SCOTUS chose to hear oral arguments in two cases — Octane v. ICON, and Highmark v. Allcare — that focused on the awarding of legal fees for victorious defendants of weak-to-completely-baseless lawsuits for patent infringement. We looked at the cases in March.
Toward the end of April, SCOTUS made two 9–0 rulings in these cases. And these rulings will likely deter much frivolous patent litigation because they effectively create a threat of major financial loss to an unsuccessful plaintiff. For example, suppose a troll's business model relies on getting settlements of several tens of thousands of dollars from numerous defendants. If someone it threatens stands up to it, and gets a judgment that includes hundreds of thousands of dollars in lawyer's fees, that could make a major impact on the troll.
Writing for a unanimous court, Justice Sotomayor found [PDF] in favor of Octane in Octane v. ICON. The issue in the case was the standard by which a "court in exceptional cases may award reasonable attorney fees to the prevailing party."
Sotomayor began by tracing the history of the rules for attorney's fee awards in patent litigation. The most recent change to the rules, Section 285 of the Patent Act, essentially inserted two words — "exceptional cases" — into those rules. Sotomayor noted that SCOTUS had previously ruled that those two words merely clarify the rules.
Following the addition of an appellate court for all patent matters in
the US — the Court of Appeals for the Federal Circuit (CAFC) — in 1982, the
status quo was largely upheld for over twenty years; that is, "the
Federal Circuit [...] instructed district courts to consider the totality
of the circumstances when making fee determinations under §285"
(page 5). But when the CAFC came across a particular case nine years ago —
Brooks Furniture v. Dutailier — it decided on its own to implement a new
standard: a defendant could only get attorney's fees if the lawsuit was
brought "in subjective bad faith and [...] [it was] objectively baseless"
(page 8).
The Supreme Court has now cast aside that restrictive standard. After looking at dictionary definitions of "exceptional", SCOTUS decided on this standard (pages 7–8): an "exceptional" case "is simply one that stands out from others with respect to the substantive strength of a party's litigating position (considering both the governing law and the facts of the case) or the unreasonable manner in which the case was litigated. District courts may determine whether a case is 'exceptional' in the case-by-case exercise of their discretion, considering the totality of the circumstances."
Accordingly, SCOTUS reversed the CAFC's ruling, and ordered that the case go back to the lower court to resolve the attorney's fee question by following the new standard SCOTUS had established.
As the other case, Highmark v. Allcare, also dealt with attorney's fees and Section 285, the ruling [PDF] was short. The particular issue in the case was how much deference appeals courts should give to district courts that award attorney's fees in patent infringement cases. If higher courts must defer to lower courts on these rulings, it could deter frivolous litigators, because it closes off an avenue for them to keep the threat of a lawsuit alive.
The CAFC ruled that no deference should be awarded to the lower
courts. Speaking again for a unanimous SCOTUS, Sotomayor reversed this
decision in light of the Octane ruling: "Because §285 commits the
determination whether a case is 'exceptional' to the discretion of the
district court, that decision is to be reviewed on appeal for abuse of
discretion" (page 4). This effectively means that a higher court
reversing an award of attorney's fees will become quite uncommon. As with
Octane, SCOTUS ordered the case back to the lower courts, and
for those courts to apply the SCOTUS ruling.
These rulings have already sent some waves through the patent-litigation world. Several experienced patent litigators have expressed a
belief that the anti-patent-troll bill before the US Senate will
now likely die because it also centered around the issue of attorney's fee awards. Some have suggested that, with the broad discretion now clearly granted to them, district courts will feel much more confident in awarding attorney's fees to successful defendants of frivolous patent litigation. Kristen Fries, an experienced patent attorney, stated on the popular patent blog "Anticipate This" that the rulings "may aid in thwarting certain 'patent trolls' from asserting patent claims that are meritless or brought in bad faith."
The US software industry may be able to relax a little. After these rulings, some potential malicious litigators may need to rethink their strategy. That could lead to fewer weak patent suits, which would at least be a step in the right direction.
Security
XMPP switches on mandatory encryption
The global community of Extensible Messaging and Presence Protocol (XMPP) instant-messaging users took a step toward improved security on May 19, when the operators of a large number of XMPP servers began enforcing a new, mandatory-encryption requirement. The move is one part of a larger effort to secure the global XMPP network; since that network is a federated collection of independent servers, the task is not easy. But whether it is ultimately successful or not, the effort increases the availability of encrypted connections for users, and those developing other Internet communication tools can learn by watching XMPP's example.
In October of 2013, XMPP creator Peter Saint-Andre first published
a manifesto calling
for ubiquitous encryption of the XMPP network. XMPP (or, as it was
known beforehand, Jabber) has supported SSL/TLS encryption of
communication channels since the beginning, but that encryption has
always been optional. The manifesto argues that encryption should be
mandatory "out of respect for the users of our software and
services", and lays out a set of policy recommendations for server
and client applications. May 19, 2014 (deemed Open
Discussion Day by the signatories) was the "flip the switch" date set out in the
manifesto, after a series of four one-day tests earlier in the year.
As of the switch-over itself, 70 XMPP server operators and
client-application developers had signed the manifesto. The signatories include
the administrators of a number of public XMPP services and the teams
behind multiple open-source applications (for example, Jitsi, Gajim,
Adium, Miranda NG, ejabberd, Prosody IM, and Tigase).
The main conditions of the manifesto were to support STARTTLS connection
establishment
(including the mandatory cipher suites and the certificate validation
rules of RFC 6125) and to require TLS encryption for all client-to-server and
server-to-server channels. The hard requirement fell to
XMPP service operators, who agreed to reject unencrypted XMPP
connection requests. Clients, for backward-compatibility reasons, can
continue to support unencrypted connections, but they must make encryption
the default.
Several
other details were optional: TLS 1.2 was preferred, but negotiation
for TLS 1.1, TLS 1.0, and SSLv3 was to be supported, while support
for SSLv2 was to be disabled entirely. Likewise, certificate-based
authentication and forward-secrecy cipher options were to be available and
preferred, but fallback to unauthenticated encryption and other
cipher suites had to be supported as well. Finally, signatories agreed to
provide user-configurable options for other security features
(such as cipher selection and forward secrecy).
The manifesto also notes that "ideally" the
implementers should present as much information as possible about the
authentication and encryption status of the channel to the user, and that
services should use certificates from well-known certificate
authorities, although both of these conditions are described as
aspirational goals, rather than as mandates.
In essence, then, the participating XMPP networks are requiring an
encrypted channel for XMPP connections, supporting all of the
recommended options, and making the strongest options the preference.
Nevertheless, this set of requirements does not implement ubiquitous
encryption, nor does it mandate every strong authentication option
possible. The manifesto notes that these remaining requirements are
still to come, and that the Open Discussion Day event was merely step one.
In particular, running the XMPP connection over TLS is not the same
as encrypting the XMPP messages themselves. For that, one would use
the Off-the-Record Messaging (OTR) protocol. The use of TLS also does
not guarantee channel binding (as in RFC 5056), which enables
applications to verify that secure network-layer connections are not
hijacked by other programs elsewhere in the protocol stack, nor does
it mandate secure DNS or application-level server identity
verification.
The XMPP community has invested some of its time in working on
these other pieces of the secure-messaging puzzle, though. The IETF's
XMPP Working Group has written a number of draft proposals in recent
years, covering topics from DNSSEC
for XMPP records to server
identity verification. Some of these drafts have since expired
(in IETF terminology), seemingly without an update or forward progress
in several years. However, the argument in the Open Discussion Day
community is that forcibly migrating the XMPP network to TLS
encryption was a necessary first step; only with that in place is it
possible to make meaningful progress on the remaining challenges.
As for the newly activated encryption requirement, though, it is
already in effect. Since XMPP servers are federated (and since quite
a few of the manifesto signatories run their own server), it is not easy
to estimate how many users have accounts on servers that will reject
unencrypted connection requests. Unless a server advertises how many
users it has, there is no way to know how many accounts each server
represents—much as is the case with email providers. Fortunately, the IM Observatory site provides some tools
for assessing the current state of many clients and servers.
The site provides a tool with which users can test any publicly
reachable XMPP server for client-to-server and server-to-server
encryption. Recent test results are published on a live-updated page,
although only the past few hours are visible at any one time, and
multiple test runs against the same server are not filtered out. Each
server receives a grade from
A (best) to F (worst), based on the same
rubric used by SSL Labs' SSL
Server Rating Guide. The guide takes into account connection protocol
support (e.g., TLS 1.2 is better than TLS 1.1), key-exchange
protocol support (e.g., ephemeral key exchange is better than
non-ephemeral), and cipher strength (measured by key length).
The site also maintains statistics over the total set
of tested servers. As of today, 59.1% of the tested servers
receive an A, while 7.7% receive an F. Most of those in between are
skewed toward the A side of the graph. While that is certainly
positive news, not all of the servers tested offer accounts to the
general public. To that end, the site also provides a list of
open-to-the-public XMPP
servers that can be sorted by grade. Among these public servers, 59
out of 115 scored an A, or about 51.3%.
All in all, toggling the mandatory-encryption switch for XMPP is
clearly a good move from a security standpoint, and the project seems
to have implemented it with an admirable degree of success. One might
be tempted to view it as a template that other development communities
could emulate. In theory, for instance, it would be nice to see
email providers make a concerted push for PGP or S/MIME.
But an uncomfortable caveat accompanies the Open Discussion Day success:
the fact that XMPP is not nearly as widely deployed as email. There
are some large-scale instant-messaging services (like Skype and
Facebook) that offer some degree of XMPP compatibility, but the
overall size of the XMPP network is small compared to the proprietary
alternatives. In fact, Google deactivated its own XMPP compatibility
for Google Talk in 2013.
There are still reportedly millions of XMPP users, but it would be
considerably harder to implement a similar single-day switchover for
other, more widely deployed services. In the early days of TCP/IP,
for example, it was possible
for all implementers to assemble in one room. But the transition from
IPv4 to IPv6 has been many orders of magnitude slower.
Nevertheless, the activation of mandatory XMPP encryption does
demonstrate that when like-minded service operators and application
developers put their minds together, fixing a widespread security
problem is possible. That potentially bodes well for a number of
other Internet security issues, from the certificate-authority problem
to Do Not Track. When concerned parties coordinate their efforts,
they can indeed implement change.
Brief items
Security quotes of the week
That, fundamentally, is surprising. If you gave a super-secret Internet exploitation organization $10 billion annually, you'd expect some magic. And my guess is that there is some, around the edges, that has not become public yet. But that we haven't seen any yet is cause for optimism.
New vulnerabilities
botan: insufficiently random cryptographic base
Package(s): botan    CVE #(s): (none)
Created: May 21, 2014    Updated: May 21, 2014
Description: From the Botan announcement:
Fix a bug in primality testing introduced in 1.8.3 which caused only a single random base, rather than a sequence of random bases, to be used in the Miller-Rabin test. This increased the probability that a non-prime would be accepted, for instance a 1024 bit number would be incorrectly classed as prime with probability around 2^-40. Reported by Jeff Marrison. The key length limit on HMAC has been raised to 512 bytes, allowing the use of very long passphrases with PBKDF2.
charybdis: denial of service
Package(s): charybdis    CVE #(s): CVE-2012-6084
Created: May 19, 2014    Updated: May 21, 2014
Description: From the CVE entry:
modules/m_capab.c in (1) ircd-ratbox before 3.0.8 and (2) Charybdis before 3.4.2 does not properly support capability negotiation during server handshakes, which allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via a malformed request.
chromium-browser: multiple vulnerabilities
Package(s): chromium-browser    CVE #(s): CVE-2014-1740 CVE-2014-1741 CVE-2014-1742
Created: May 19, 2014    Updated: May 21, 2014
Description: From the CVE entries:
Multiple use-after-free vulnerabilities in net/websockets/websocket_job.cc in the WebSockets implementation in Google Chrome before 34.0.1847.137 allow remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to WebSocketJob deletion. (CVE-2014-1740) Multiple integer overflows in the replace-data functionality in the CharacterData interface implementation in core/dom/CharacterData.cpp in Blink, as used in Google Chrome before 34.0.1847.137, allow remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to ranges. (CVE-2014-1741) Use-after-free vulnerability in the FrameSelection::updateAppearance function in core/editing/FrameSelection.cpp in Blink, as used in Google Chrome before 34.0.1847.137, allows remote attackers to cause a denial of service or possibly have unspecified other impact by leveraging improper RenderObject handling. (CVE-2014-1742)
cifs-utils: code execution
Package(s): cifs-utils    CVE #(s): CVE-2014-2830
Created: May 15, 2014    Updated: December 5, 2016
Description: From the Red Hat bugzilla entry:
Sebastian Krahmer discovered a stack-based buffer overflow flaw in cifskey.c, which is used by pam_cifscreds.
clamav: multiple unspecified vulnerabilities
Package(s): clamav    CVE #(s): CVE-2013-7087 CVE-2013-7088 CVE-2013-7089
Created: May 16, 2014    Updated: May 21, 2014
Description: From the Gentoo advisory:
Multiple vulnerabilities have been found in ClamAV, the worst of which could lead to arbitrary code execution.
dovecot: denial of service
Package(s): dovecot    CVE #(s): CVE-2014-3430
Created: May 16, 2014    Updated: March 29, 2015
Description: From the Mandriva advisory:
Dovecot 1.1 before 2.2.13 and dovecot-ee before 2.1.7.7 and 2.2.x before 2.2.12.12 does not properly close old connections, which allows remote attackers to cause a denial of service (resource consumption) via an incomplete SSL/TLS handshake for an IMAP/POP3 connection (CVE-2014-3430).
egroupware: cross site request forgery
Package(s): egroupware    CVE #(s): (none)
Created: May 19, 2014    Updated: May 21, 2014
Description: From the Mageia advisory:
eGroupWare before 1.8.007 allows logged in users with administrative privileges to remotely execute arbitrary commands on the server. It is also vulnerable to a cross site request forgery vulnerability that allows creating new administrative users.
ettercap: code execution
Package(s): ettercap    CVE #(s): CVE-2010-3844
Created: May 19, 2014    Updated: May 21, 2014
Description: From the Gentoo advisory:
A format string flaw in Ettercap could cause a buffer overflow. A remote attacker could entice a user to load a specially crafted configuration file using Ettercap, possibly resulting in execution of arbitrary code with the privileges of the process or a Denial of Service condition. A local attacker could perform symlink attacks to overwrite arbitrary files with the privileges of the user running the application.
kernel: multiple vulnerabilities
Package(s): kernel    CVE #(s): CVE-2014-3144 CVE-2014-3145
Created: May 16, 2014    Updated: June 5, 2014
Description: From the Red Hat bug report:
Linux kernel built with the BPF interpreter support in the networking core is vulnerable to an out of bounds buffer access flaw. It occurs when accessing a netlink attribute from the skb->data buffer. It could lead to DoS via kernel crash or leakage of kernel memory bytes to user space. An unprivileged user/program could use this flaw to crash the system kernel resulting in DoS or leak kernel memory bytes to user space.
kernel: two vulnerabilities
Package(s): kernel    CVE #(s): CVE-2014-0691 CVE-2014-2672
Created: May 19, 2014    Updated: May 21, 2014
Description: From the openSUSE advisory:
cifs: ensure that uncached writes handle unmapped areas correctly (CVE-2014-0691); ath9k: protect tid->sched check (CVE-2014-2672).
libgadu: code execution
Package(s): libgadu    CVE #(s): CVE-2014-3775
Created: May 21, 2014    Updated: July 28, 2014
Description: From the Ubuntu advisory:
It was discovered that libgadu incorrectly handled certain messages from file relay servers. A malicious remote server or a man in the middle could use this issue to cause applications using libgadu to crash, resulting in a denial of service, or possibly execute arbitrary code.
libvirt: information disclosure/denial of service
Package(s): libvirt    CVE #(s): CVE-2014-0179
Created: May 15, 2014    Updated: September 27, 2014
Description: From the openSUSE advisory:
libvirt was patched to prevent expansion of entities when parsing XML files. This vulnerability allowed malicious users to read arbitrary files or cause a denial of service (CVE-2014-0179).
mcrypt: code execution
Package(s): mcrypt    CVE #(s): CVE-2012-4426
Created: May 19, 2014    Updated: May 21, 2014
Description: From the CVE entry:
Multiple format string vulnerabilities in mcrypt 2.6.8 and earlier might allow user-assisted remote attackers to cause a denial of service (crash) or possibly execute arbitrary code via vectors involving (1) errors.c or (2) mcrypt.c.
mono: denial of service
Package(s): mono    CVE #(s): CVE-2012-3543
Created: May 19, 2014    Updated: May 29, 2014
Description: From the Gentoo advisory:
Mono does not properly randomize hash functions for form posts to protect against hash collision attacks. A remote attacker could send specially crafted parameters, possibly resulting in a Denial of Service condition.
moodle: multiple vulnerabilities
Package(s): moodle    CVE #(s): CVE-2014-0213 CVE-2014-0214 CVE-2014-0215 CVE-2014-0216 CVE-2014-0218
Created: May 20, 2014    Updated: May 30, 2014
Description: From the Mageia advisory:
In Moodle before 2.6.3, Session checking was not being performed correctly in Assignment's quick-grading, allowing forged requests to be made unknowingly by authenticated users (CVE-2014-0213). In Moodle before 2.6.3, MoodleMobile web service tokens, created automatically in login/token.php, were not expiring and were valid forever (CVE-2014-0214). In Moodle before 2.6.3, Some student details, including identities, were included in assignment marking pages and would have been revealed to screen readers or through code inspection (CVE-2014-0215). In Moodle before 2.6.3, Access to files linked on HTML blocks on the My home page was not being checked in the correct context, allowing access to unauthenticated users (CVE-2014-0216). In Moodle before 2.6.3, There was a lack of filtering in the URL downloader repository that could have been exploited for XSS (CVE-2014-0218).
owncloud: multiple unspecified vulnerabilities
Package(s): owncloud    CVE #(s): (none)
Created: May 16, 2014    Updated: May 21, 2014
Description: From the Mandriva advisory:
Owncloud versions 5.0.16 and 6.0.3 fix several unspecified security vulnerabilities, as well as many other bugs.
python-django: information disclosure
Package(s): python-django    CVE #(s): CVE-2014-1418
Created: May 15, 2014    Updated: May 27, 2014
Description: From the Ubuntu advisory:
Stephen Stewart, Michael Nelson, Natalia Bidart and James Westby discovered that Django improperly removed Vary and Cache-Control headers from HTTP responses when replying to a request from an Internet Explorer or Chrome Frame client. An attacker may use this to retrieve private data or poison caches. This update removes workarounds for bugs in Internet Explorer 6 and 7. (CVE-2014-1418)
python-django: open redirect attacks
Package(s): python-django    CVE #(s): CVE-2014-3730
Created: May 20, 2014    Updated: May 27, 2014
Description: From the CVE entry:
The django.util.http.is_safe_url function in Django 1.4 before 1.4.13, 1.5 before 1.5.8, 1.6 before 1.6.5, and 1.7 before 1.7b4 does not properly validate URLs, which allows remote attackers to conduct open redirect attacks via a malformed URL, as demonstrated by "http:\\\djangoproject.com."
python-fmn-web: covert redirect
Package(s): python-fmn-web    CVE #(s): (none)
Created: May 21, 2014    Updated: May 21, 2014
Description: From the Fedora advisory:
Fix for Covert Redirect.
qemu: multiple vulnerabilities
Package(s): qemu    CVE #(s): CVE-2014-0182 CVE-2013-4534 CVE-2013-4533 CVE-2013-4535 CVE-2013-4536 CVE-2013-4537 CVE-2013-4538 CVE-2013-4539 CVE-2013-4540 CVE-2013-4541 CVE-2013-4542 CVE-2013-6399 CVE-2013-4531 CVE-2013-4530 CVE-2013-4529 CVE-2013-4527 CVE-2013-4526 CVE-2013-4151 CVE-2013-4150 CVE-2013-4149 CVE-2013-4148
Created: May 16, 2014    Updated: July 25, 2014
Description: | From the Red Hat bug reports: CVE-2014-0182: An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4534: opp->nb_cpus is read from the wire and used to determine how many IRQDest elements to read into opp->dst[]. If the value exceeds the length of opp->dst[], MAX_CPU, opp->dst[] can be overrun with arbitrary data from the wire. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4533: s->rx_level is read from the wire and used to determine how many bytes to subsequently read into s->rx_fifo[]. If s->rx_level exceeds the length of s->rx_fifo[] the buffer can be overrun with arbitrary data from the wire. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4535, CVE-2013-4536: Both virtio-block and virtio-serial read, VirtQueueElements are read in as buffers, and passed to virtqueue_map_sg(), where num_sg is taken from the wire and can force writes to indicies beyond VIRTQUEUE_MAX_SIZE. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4537: s->arglen is taken from wire and used as idx in ssi_sd_transfer(). An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4538: s->cmd_len used as index in ssd0323_transfer() to store 32-bit field. Possible this field might then be supplied by guest to overwrite a return addr somewhere. Same for row/col fields, which are indicies into framebuffer array. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4539: s->precision, nextprecision, function and nextfunction come from wire and are used as idx into resolution[] in TSC_CUT_RESOLUTION. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4540: Within scoop_gpio_handler_update, if prev_level has a high bit set, then we get bit > 16 and that does a buffer overrun. 
An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4541: s->setup_len and s->setup_index are fed into usb_packet_copy as size/offset into s->data_buf, it's possible for invalid state to exploit this to load arbitrary data. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4542: hw/scsi/scsi-bus.c invokes load_request. virtio_scsi_load_request does: qemu_get_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));this probably can make elem invalid, for example, make in_num or out_num out-of-bounds, later leading to buffer overrun. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-6399: vdev->queue_sel is read from the wire, and later used in the emulation code as an index into vdev->vq[]. If the value of vdev->queue_sel exceeds the length of vdev->vq[], currently allocated to be VIRTIO_PCI_QUEUE_MAX elements, subsequent PIO operations such as VIRTIO_PCI_QUEUE_PFN can be used to overrun the buffer with arbitrary data. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4531: cpreg_vmstate_indexes is a VARRAY_INT32. A negative value for cpreg_vmstate_array_len will cause a buffer overflow. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4530: pl022.c did not bounds check tx_fifo_head and rx_fifo_head after loading them from file and before they are used to dereference array. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4529: There are two issues in hw/pci/pcie_aer.c: 1. log_max from remote can be larger than on local then buffer will overrun with data coming from state file. 2. log_num can be larger then we get data corrution again with an overflow but not adversary controlled. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4527: hpet is a VARRAY with a uint8 size but static array of 32 and the index (num_timers ) into this array is not checked for sanity. 
An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4526: Within hw/ide/ahci.c, VARRAY refers to ports which is also loaded. So we use the old version of ports to read the array but then allow any value for ports. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4151: QEMU 1.0 out-of-bounds buffer write in virtio_load@virtio/virtio.c array of vqs has size VIRTIO_PCI_QUEUE_MAX, so on invalid input this will write beyond end of buffer. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4150: QEMU 1.5.0 out-of-bounds buffer write in virtio_net_load()@hw/net/virtio-net.c Number of vqs is max_queues, so if we get invalid input here, for example if max_queues = 2, curr_queues = 3, we get write beyond end of the buffer, with data that comes from wire. This might be used to corrupt qemu memory in hard to predict ways. An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4149: QEMU 1.3.0 out-of-bounds buffer write in virtio_net_load()@hw/net/virtio-net.c An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. CVE-2013-4148: QEMU 1.0 integer conversion in virtio_net_load()@hw/net/virtio-net.c An user able to alter the savevm data (either on the disk or over the wire during migration) could use this flaw to to corrupt QEMU process memory on the (destination) host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
ruby-actionpack: information leak
Package(s): ruby-actionpack-3.2    CVE #(s): CVE-2014-0130
Created: May 16, 2014    Updated: May 28, 2014
Description: From the Debian advisory:
A directory traversal vulnerability in actionpack/lib/abstract_controller/base.rb allows remote attackers to read arbitrary files.
srm: unspecified vulnerability
Package(s): srm    CVE #(s): (none)
Created: May 15, 2014    Updated: May 21, 2014
Description: no information was provided in the Fedora advisory
util-linux: corruption of the /etc/mtab file
Package(s): util-linux    CVE #(s): CVE-2011-1676
Created: May 19, 2014    Updated: May 21, 2014
Description: From the CVE entry:
mount in util-linux 2.19 and earlier does not remove the /etc/mtab.tmp file after a failed attempt to add a mount entry, which allows local users to trigger corruption of the /etc/mtab file via multiple invocations.
x2goserver: privilege escalation
Package(s): x2goserver    CVE #(s): CVE-2013-7383
Created: May 19, 2014    Updated: May 21, 2014
Description: From the Gentoo advisory:
X2Go Server is prone to a local privilege-escalation vulnerability. A local attacker could gain escalated privileges.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.15-rc6, released on May 22. "With rc5 being a couple of days early, and rc6 being several days late, we had almost two weeks in between them. The size of the result is not twice as large, though, hopefully partially because it's getting late in the rc series and things are supposed to be calming down, but presumably also because some submaintainers just didn't send their pull requests because they knew I was off-line. Whatever the reason, things don't look bad." Linus plans to return to the normal Sunday schedule for rc7, presumably on June 1, which might be the last rc for 3.15.
Stable updates: 3.4.91 was released on May 18 with a handful of important fixes, including one (known) important security fix. For those who like their kernels especially stable, 2.6.32.62 was released on May 21; it includes fixes for 39 CVE-numbered bugs.
Quotes of the week (code simplicity edition)
Tux3 posted for review
After years of development and some seeming false starts, the Tux3 filesystem has been posted for review with the hope of getting it into the mainline in the near future. Tux3, covered here in 2008, promises a number of interesting next-generation filesystem features combined with a high level of reliability. This posting is a step forward for Tux3, but it will still probably be some time before it finds its way into the mainline.
The only developer to review the code so far is Dave Chinner, and he was not entirely impressed. There is a lot of stuff to clean up, but Dave is most concerned about various core memory management and filesystem changes that, he says, need to be separated out for review on their own merits. One of the core Tux3 mechanisms, called "page forking," was not well received at the 2013 Storage, Filesystem and Memory Management Summit, and Tux3 developer Daniel Phillips has done little since then to address the criticisms heard there.
Dave is also worried about the "work in progress" nature of a number of promised Tux3 features. Years ago, Btrfs was merged while in an incomplete state in the hope of accelerating development; Dave now says that was a mistake he does not want to repeat:
All told, it adds up to a chilly reception for this new filesystem. Daniel appears to be up to the challenge of getting this code into shape for merging, though. If he follows through, we should start seeing smaller patch sets that will facilitate the review of specific Tux3-related changes. Only after that process completes will it be time to look at getting the filesystem itself into the mainline.
Kernel development news
2038 is closer than it seems
Most LWN readers are likely aware of the doom impending upon us in January 2038, when the time_t type used to store time values (in the form of seconds since January 1, 1970) runs out of bits on 32-bit systems. It may be surprising that developers are increasingly worried about this deadline, which is still nearly 24 years in the future, but there are good reasons to be concerned now. Whether those worries will lead to a solution in the near term remains to be seen; not much has happened since this topic came up last August. But recent discussions have at least shed a bit of light on the forms such a solution might take.
At times, developers have hoped that this problem might solve itself. On 64-bit systems, the time_t type has always been defined as a 64-bit quantity and will not run out of space anytime soon. Given that 64-bit systems appear to be taking over the world — even phone handsets seem likely to make the switch in the next few years — might the best solution be to just wait for 32-bit systems to die out and take the problem with them? A "no action required" solution has an obvious appeal.
There are two problems with that reasoning: (1) 32-bit systems are likely to continue to be made for far longer than most people might expect, and (2) there are 32-bit systems being deployed now that can be expected to have lifetimes of 24 years or longer. 32-bit systems will be useful as cheap microcontrollers for a long time, and, once deployed, they will often be expected to work for many years while being difficult or impossible to update. There are almost certainly systems already deployed that are going to provide unpleasant surprises in 2038.
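For those wondering where the date comes from: a signed 32-bit time_t can count at most 2^31 - 1 = 2,147,483,647 seconds, a little over 68 years, so a counter that starts at the epoch of January 1, 1970 overflows on January 19, 2038 at 03:14:07 UTC.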
Kernel-based solutions
So it would appear to make sense to solve the problem soon, rather than in, say, 2036 or so. There is only one snag: the problem is not all that easy to solve. At least, it is not easy if one is concerned about little details like not breaking existing programs. Since Linux developers at most levels are quite concerned about compatibility, the simplest solutions (such as a BSD-style ABI break) are not seen as being workable. In a recent discussion, John Stultz outlined a couple of alternative approaches, neither of which is without its difficulties.
The first approach would be to change the 32-bit ABI to use a 64-bit version of time_t (related data structures like struct timespec and struct timeval would also change). Old binaries could be supported through a compatibility interface, but newly compiled code would normally use the new ABI. There are some advantages to this approach, starting with the fact that lots of applications could be updated simply by rebuilding them. Since a couple of BSD variants have already taken this path, a number of the worst application problems have already been fixed. Embedded microcontrollers typically run custom distributions built entirely from source; changing the ABI in this way would make it possible to build 2038-capable systems in the near future with a minimum of pain.
On the other hand, the kernel would have to maintain a significant compatibility layer for a long time. Developers are also worried that there will be many applications that store 32-bit time_t values in their own data structures, in on-disk formats, and more. Many of these applications could break in surprising ways, and they could prove to be difficult to fix. There are also some concerns about the runtime cost of using 64-bit time_t values on 32-bit systems. Much of this cost could be mitigated within the kernel by using a different format internally, but applications could slow down as well.
The alternative approach is to simply define a new set of system calls, all of which are defined to use better time formats from the beginning. The new formats could address other irritations at the same time; not everybody likes the separate seconds and nanoseconds fields used in struct timespec, for example. All system calls defined to use the old time_t values would be deprecated, with the idea of removing them, if possible, before 2038.
With this approach, there would be no hard ABI break anytime soon and applications could be migrated gradually. Once again, embedded systems could be built using the new system calls in the relatively near future, while desktop systems could be left alone for another decade or so. And it would be a chance to start over and redesign some longstanding system calls with 21st-century needs in mind.
Defining new system calls has its downsides as well, though. It would push Linux further away from being a POSIX system, and would take us down a path different from the one chosen by the BSD world. There are a lot of system calls to replace, and time_t values show up in other places as well, most notably in a long list of ioctl() calls. Applications would have to be updated, including those running only on 64-bit systems, which would not see much of a benefit from the new system calls. And, undoubtedly, there would be lots of applications using the older system calls that would surface in 2037. So this approach is not an easy solution either.
Including glibc
Discussions of these alternatives went on for a surprisingly long time before Christoph Hellwig made an (in retrospect) obvious suggestion: the C library developers are going to have to be involved in the implementation of any real solution to the year-2038 problem, so perhaps they should be part of the discussion now. For years, communications between the kernel community and the developers of C libraries (including the GNU C library — glibc) have been sporadic at best. The changing of the guard at glibc has made productive conversations easier to have, but changing old habits has proved hard. In any case, it is true that the glibc developers will have to be involved in the design of the solution to this problem; the good news is that such involvement appears likely to happen.
Glibc developers are not known for their love of ABI breaks — or of non-POSIX interfaces for that matter. So, once glibc developer Joseph Myers joined the conversation, the tone shifted a bit toward a solution that would allow a smooth transition while retaining existing POSIX system calls and application compatibility. The plan (which was discussed only in rough form and would need a lot of work yet) looks something like this:
- Create new, 64-bit versions of the affected system calls. So, for example, there would be a gettimeofday64() that returns the time in a struct timeval64. The existing versions of these system calls would be unchanged.
- Glibc would gain a new feature test macro with a name like TIME_BITS. If TIME_BITS=64 on a 32-bit system, a call to gettimeofday() would be remapped to gettimeofday64() within the library. So applications can opt into the new world by building with an appropriate value of TIME_BITS defined (a sketch of what that might look like appears below).
- Eventually, TIME_BITS=64 would become the default, probably after distributions had been shipping in that mode for a while. Even in the 64-bit configuration, compatibility symbols would remain so that older binaries would still work against newer versions of the C library.
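To see what that opt-in might look like in practice, here is a minimal sketch from the application's side. TIME_BITS, gettimeofday64(), and struct timeval64 are only the names used in the rough plan above, not a shipping interface; the point is that the source itself would not change, only the build flags:

    /*
     * Hypothetical build:  cc -DTIME_BITS=64 -o epoch epoch.c
     *
     * Under the sketched plan, defining TIME_BITS=64 would cause glibc to
     * redefine the time types with 64-bit fields and quietly remap this
     * gettimeofday() call to the (hypothetical) gettimeofday64().
     */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval tv;

        if (gettimeofday(&tv, NULL))
            return 1;
        printf("seconds since the epoch: %lld\n", (long long)tv.tv_sec);
        return 0;
    }

The mechanism would mirror the long-established _FILE_OFFSET_BITS macro, which selects 64-bit file offsets on 32-bit systems without requiring source changes.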
Such an approach could allow for a relatively smooth transition to a system that will work in 2038, though, naturally, a number of troublesome details remain. There was talk of remapping ioctl() calls in a similar way, but that looks like a recipe for trouble given just how many of those calls there are and how hard it would be to even find them all. Developers of other C library projects, who often don't wish to maintain the sort of extensive compatibility infrastructure found in glibc, may prefer to take a different approach. And so on.
But, even with its challenges, the existence of a vague plan hashed out with participation from kernel and glibc developers is reason for hope. Maybe, just maybe, some sort of reasonably robust solution to the 2038 problem will be found before it becomes absolutely urgent, and, with luck, before lots of systems that will need to function properly in 2038 are deployed. We have the opportunity to avoid a year-2038 panic at a relatively low cost; if we make use of that opportunity, our future selves will thank us.
Taking the Eudyptula Challenge
Linux kernel development is tricky to learn. Though there are lots of resources available covering many of the procedural and technical facets of kernel development as well as a mailing list for kernel newbies, it can still be difficult to figure out how to get started. So some kind of "challenge" may be just what the penguin ordered: a series of increasingly difficult, focused tasks targeted at kernel development. As it turns out, the Eudyptula Challenge—which has been attracting potential kernel hackers since February—provides exactly that.
As the challenge web page indicates, it was inspired by the Matasano Crypto Challenge (which, sadly, appears to be severely backlogged and not responding to requests to join). The name of the challenge comes from the genus of the Little Penguin (Eudyptula minor) and the pseudonymous person (people?) behind the challenge goes by "Little Penguin" (or "Little" for short).
Getting started
Signing up for the challenge is easy; just send an email to little at eudyptula-challenge.org. In fact, it is so easy that more than 4700 people have done so, according to Little, which is many more than were expected. The first message one receives establishes the "rules" of the challenge as well as assigning an ID used in all of the subsequent email correspondence with Little. After that, the first task of the challenge is sent.
All of the interaction takes place via email, just like Linux kernel development. For the most part, the tasks just require locally building, changing, and running recent kernels. There are a few tasks where participants will need to interact with the wider kernel community, however. A recent rash of cleanup patches in the staging tree may quite possibly be related to the challenge.
I started the challenge on March 3 after seeing a Google+ post about it. I have done some kernel programming along the way. I did a couple of minor patches a few years back and some more extensive hacking many moons ago—back in the 2.4 days—and some before that as well. So I was by no means a complete novice. I have a good background in C programming and try to keep up with the Kernel page of a certain weekly publication. While all of that was helpful, it still was quite a bit of work to complete the twenty tasks that make up the challenge. It took a fair amount of effort, but it was also a lot of fun—and I learned a ton in the process.
While I won't be revealing the details on any of the tasks (and Little tells me that some have been tweaked over time and may be different than the ones I did), I can describe the general nature and subject areas for the tasks. It should be no surprise that the first task is a kernel-style "hello world" of sorts. From there, things get progressively harder, in general, and the final task was fairly seriously ... well ... challenging.
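For those who have never built a module at all, the canonical "hello world" looks something like the following. To be clear, this is the generic textbook example, not the text of the actual task:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        pr_info("Hello, Eudyptula!\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("Goodbye, Eudyptula!\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");

Building it out of tree needs only a one-line kbuild makefile (obj-m += hello.o) and an invocation like make -C /lib/modules/$(uname -r)/build M=$PWD modules; after insmod, the greeting shows up in the kernel log.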
Building both mainline and linux-next kernels figures into most of the tasks. Many of the tasks also require using the assigned ID in some way. Not surprisingly, kernel modules figure prominently in many of the tasks. As you progress through the early tasks, you will learn about creating devices, debugfs, and sysfs. In addition, kernel coding style, checkpatch.pl, sparse, and how to submit patches are all featured.
In the later tasks, things like locking, kernel data structures, different kinds of memory allocation, kernel threads and wait queues, as well as a few big subsystems, such as networking and filesystems, are encountered. It is, in some sense, a grand tour of the kernel and how to write code for it. Which is not to say that it covers everything—that might require a tad more than twenty tasks—but it does get you into lots of different kernel areas. There are even plans to add more tasks once there is a good chunk of people (say 40–50) who have completed the challenge, Little said. At the current rate, that will be several months out.
Participant progress
The challenge is not a race of any kind—unless, perhaps, you are trying to complete it so you can write an article about it—but at least twenty people have completed it in the three months since it was announced. Given the nature of the challenge, it's a bit difficult to pin down how many are still actively working on it, but the statistics on how many have completed certain tasks can give a bit of a picture.
Roughly three-quarters of the 4700 who have signed up have never completed the first task. Presumably many of those were just curious about the challenge without any real intention to participate. But it is entirely possible that some of those folks are still working on that first task and may eventually work their way through the whole thing. On the other hand, though, that means that around 1200 folks have gotten a module built and loaded into their kernel, which is quite an accomplishment for those participants—and for the challenge.
The bulk of the participants are still working on tasks up through number six and there are fewer than 100 working on tasks ten and higher. Some of that may be due to some tasks' queues being slower than others. There is manual review that needs to be done for each task, and some tasks take longer than others to review. It might not have been designed that way, but that sort of simulates the review process for kernel patches, which features unknown wait times. Longer queues may also be caused by participants having to try several times on certain tasks, while other tasks are commonly "solved" the first time.
Often the tasks are described loosely enough that there are many different ways to complete them. Some solutions are better than others, of course, and Little will patiently point participants in the right direction when they choose incorrectly. I found that the more open-ended the task, the more difficult it was for me—not to complete it so much as to choose what to work on. That may just be a character flaw on my part, however. The final task was also a significant jump in difficulty, I thought.
Documentation
One thing that participants are likely to notice is how scattered kernel documentation is. There is, of course, the Documentation directory, but it doesn't cover everything and was actually of fairly limited use to me during the challenge. Google is generally helpful, but there is a huge amount of information out there in various forums, mailing lists, blog posts, weekly news magazines, and so on, some of which is good and useful, some of which is old and possibly semi-useful, and some of which is just plain misleading or wrong. It's not clear what to do about that problem, but it is something that participants (and others learning about the kernel) will encounter.
Based on an April 24 status report that Little sent out, there have been some growing pains. It's clear that the challenge is far more popular than was expected. That has led to longer wait times and some misbehavior from the "convoluted shell scripts that are slow to anger and impossible to debug" (as they are described on the web page). There have also been problems with folks expecting a tutorial, rather than a challenge that will take some real work to complete. Lastly, Little mentioned some people trying to crash or crack the scripts, which is kind of sad, but probably not unexpected.
In any case, the Eudyptula Challenge is a lot of fun, which makes it a great way to learn about kernel development. While it is targeted at relative newbies to the kernel, it wouldn't shock me if even seasoned kernel hackers learned a thing or two along the way. It is self-paced, so you can put in as much or as little time as you wish. There is no pressure (unless self-imposed) to complete anything by any particular deadline—the kernel will still be there in a week or a month or a year. Give the challenge a whirl ... it will be time well spent.
BPF: the universal in-kernel virtual machine
Much of the recent discussion regarding the Ktap dynamic tracing system was focused on the addition of a Lua interpreter and virtual machine to the kernel. Virtual machines seem like an inappropriate component to be running in kernel space. But, in truth, the kernel already contains more than one virtual machine. One of those, the BPF interpreter, has been growing in features and performance; it now looks to be taking on roles beyond its original purpose. In the process, it may result in a net reduction in interpreter code in the kernel.
"BPF" originally stood for "Berkeley packet filter"; it got its start as a simple language for writing packet-filtering code for utilities like tcpdump. Support for BPF in Linux was added by Jay Schulist for the 2.5 development kernel; for most of the time since then, the BPF interpreter has been relatively static, seeing only a few performance tweaks and the addition of a few instructions for access to packet data. Things started to change in the 3.0 release, when Eric Dumazet added a just-in-time compiler to the BPF interpreter. In the 3.5 kernel, the "secure computing" (seccomp) facility was enhanced to support a user-supplied filter for system calls; that filter, too, is written in the BPF language.
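For readers who have never looked at a BPF program directly, the following sketch (not taken from any of the patches discussed here) shows the classic language in its natural habitat: a four-instruction filter, attached to a packet socket with the long-standing SO_ATTACH_FILTER option, that accepts IPv4 frames and drops everything else.

    /* Minimal classic-BPF example; needs CAP_NET_RAW to run. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>

    int main(void)
    {
        struct sock_filter code[] = {
            /* Load the 16-bit EtherType field at offset 12 */
            BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
            /* If it is not IPv4 (0x0800), skip to the "drop" return */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, 0xffffffff),  /* accept packet */
            BPF_STMT(BPF_RET | BPF_K, 0),           /* drop packet */
        };
        struct sock_fprog prog = {
            .len = sizeof(code) / sizeof(code[0]),
            .filter = code,
        };
        int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        if (sock < 0 ||
            setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("packet filter");
            return 1;
        }
        /* The kernel now runs the filter on every packet before
           queuing it to this socket. */
        return 0;
    }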
The 3.15 kernel sees another significant change in BPF. The language has now been split into two variants, "classic BPF" and "internal BPF". The latter expands the set of available registers from two to ten, adds a number of instructions that closely match real hardware instructions, implements 64-bit registers, makes it possible for BPF programs to call a (rigidly controlled) set of kernel functions, and more. Internal BPF is more readily compiled into fast machine code and makes it easier to hook BPF into other subsystems.
For now, at least, internal BPF is entirely hidden from user space. The packet filtering and secure computing interfaces still accept programs in the classic BPF language; these programs are translated into internal BPF before their first execution. The idea seems to be that internal BPF is a kernel-specific implementation detail that might change over time, so chances are it will not be exposed to user space anytime soon. That said, the documentation for internal BPF indicates that one of the goals of the project is to be easier for compilers like GCC and LLVM to generate. Given that any developer attempting to embed LLVM into the kernel has a rather small chance of success, that suggests that there may eventually be a way to load internal BPF directly from user space.
This latter-day work has been done by Alexei Starovoitov, who looks set to continue improving BPF going forward. In 3.15, the just-in-time compiler only understands the classic BPF instruction set; in 3.16, it will be ported over to the internal format instead. Also, for the first time, the secure computing subsystem will be able to take advantage of the just-in-time compiler, speeding the execution of sandboxed programs considerably.
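A seccomp policy is written in the same classic BPF language, and it is exactly this kind of program that stands to gain from the just-in-time compiler. The following deliberately minimal sketch (real policies whitelist system calls rather than blacklisting one) kills the process if it ever calls fork():

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        struct sock_filter filter[] = {
            /* Load the system call number from struct seccomp_data */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* fork()?  Jump over the ALLOW return to the KILL return.
               (A real filter would also check the arch field.) */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_fork, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        /* Required before an unprivileged process may install a filter */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
            perror("seccomp");
            return 1;
        }
        puts("filter installed; fork() is now fatal");
        return 0;
    }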
Sometime after 3.16, use of BPF may be extended further beyond the networking subsystem. Alexei recently posted a patch that uses BPF for tracing filters. This is an interesting change that deletes almost as much code as it adds while improving performance considerably.
The kernel's tracepoint mechanism allows a suitably privileged user to receive detailed tracing information every time execution hits a specific tracepoint in the kernel. As one might imagine, the amount of data that results from some tracepoints can be quite large. The NSA might be able to process such fire-hose-like streams at its new data center (once it's running), but many of the rest of us are likely to want to thin that stream down to something a bit more manageable. That is where the filtering mechanism comes in.
Filters allow the association of a boolean expression with any given tracepoint; the tracepoint only fires if the expression evaluates to true at execution time. An example given in Documentation/trace/events.txt reads like this:
    # cd /sys/kernel/debug/tracing/events/signal/signal_generate
    # echo "((sig >= 10 && sig < 15) || sig == 17) && comm != bash" > filter
With this filter in place, the signal_generate tracepoint will only fire if the specific signal being generated is within the given range and the process generating the signal is not running bash.
Within the tracing subsystem, an expression like the above is parsed and represented as a simple tree with each internal node representing one of the operators. Every time that the tracepoint is encountered, that tree will be walked to evaluate each operation with the specific data values present at the time; should the result be true at the top of the tree, the tracepoint fires and the relevant information is emitted. In other words, the tracing subsystem contains a small parser and interpreter of its own, used for this one specific purpose.
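As a rough illustration of what such an interpreter amounts to (this is illustrative user-space code, not the kernel's rather more involved implementation), the filter above boils down to a tree of predicate nodes and a recursive walk:

    #include <stdbool.h>
    #include <stdio.h>

    enum op { OP_AND, OP_OR, OP_LEAF };

    struct pred {
        enum op op;
        struct pred *left, *right;          /* for AND/OR nodes */
        bool (*test)(const void *record);   /* for leaf comparisons */
    };

    /* Walked every time the tracepoint is hit */
    static bool eval(const struct pred *p, const void *record)
    {
        switch (p->op) {
        case OP_AND:  return eval(p->left, record) && eval(p->right, record);
        case OP_OR:   return eval(p->left, record) || eval(p->right, record);
        case OP_LEAF: return p->test(record);
        }
        return false;
    }

    /* Stand-ins for the leaf comparisons the parser would generate,
       e.g. "sig == 17" and "comm != bash" */
    static bool sig_in_range(const void *record) { (void)record; return true; }
    static bool not_bash(const void *record)     { (void)record; return false; }

    int main(void)
    {
        struct pred l = { OP_LEAF, NULL, NULL, sig_in_range };
        struct pred r = { OP_LEAF, NULL, NULL, not_bash };
        struct pred root = { OP_AND, &l, &r, NULL };

        printf("tracepoint fires: %s\n", eval(&root, NULL) ? "yes" : "no");
        return 0;
    }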
Alexei's patch leaves the parser in place, but removes the interpreter. Instead, the predicate tree produced by the parser is translated into an internal BPF program, then discarded. The BPF is translated to machine code by the just-in-time compiler; the result is then run whenever the tracepoint is encountered. From the benchmarks posted by Alexei with the patch, the result is worth the effort: the execution time for most filters is reduced by a factor of approximately twenty — and sometimes quite a bit more. Given that the overhead of tracing can often hide the very problems that tracing is being used to find, a huge reduction in that overhead can only be welcome.
The patch set was indeed welcomed, but it is unlikely to find its way into the 3.16 kernel. It depends on the other BPF changes headed for 3.16, which are merged into the net-next tree; that tree is not normally used as a dependency for changes elsewhere in the kernel. As a result, merging Alexei's changes into the tracing tree creates compilation failures — an unwelcome result.
The root problem here is that the BPF code, showing its origins, is buried deeply within the networking subsystem. But usage of BPF is no longer limited to networking code; it is being employed in core kernel subsystems like secure computing and tracing as well. So the time has come for BPF to move into a more central location where it can be maintained independently of the networking code. This change is likely to involve more than just a simple file move; there is still a lot of networking-specific code in the BPF interpreter that probably needs to be factored out. It will be a bit of work, but that is normal for a subsystem that is being evolved into a more generally useful facility.
Until that work is done, BPF-related changes to non-networking code are going to be difficult to merge. So moving BPF out of the networking subsystem is the logical next step if BPF is to become the primary virtual machine for interpreted code loaded into the kernel. It makes sense to have only one such machine that, presumably, is well debugged and maintained. There are no other credible contenders for that role, so BPF is almost certainly it, once it has been repackaged as a utility for the whole kernel to use. After that happens, it will be interesting to see what other users for BPF come out of the woodwork.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Fedora mulls providing a local DNSSEC resolver
The Domain Name System (DNS) has never been secure, yet a great many other Internet services would not function without it. That is why the DNS Security Extensions (DNSSEC) were created. But the security of DNSSEC hinges on validating that the information returned from a DNS server has not been tampered with. Any application can perform its own DNS lookups—as is true in non-DNSSEC DNS—but most rely on an external resolver to perform the function on their behalf. Consequently, applications place a considerable amount of trust in the DNS resolver: DNSSEC can safeguard against cache poisoning and other attacks, but the system does not offer protection if the resolver itself is compromised. To provide the most trustworthy DNSSEC service it can, Fedora is considering making a local DNSSEC resolver the default for the forthcoming Fedora 22 release—although most of the changes should land in time for Fedora 21, giving users and developers an opportunity to battle test them.
The proposed change is meant to ensure that there is always a trusted DNSSEC resolver available, even though the average portable computer is used on a variety of untrusted networks (such as on a public WiFi access point). On many of these untrusted networks, DNS server information is provided to client machines by DHCP. At best, a remote DNSSEC resolver on such an untrusted wireless LAN should be regarded with caution, and in practice many public hotspots do not offer DNSSEC at all, which means that DNS queries fall back to unverified DNS. With a DNSSEC resolver running as a local process, however, less trust is placed in unverified systems.
In a nutshell, DNSSEC is designed to permit cryptographic verification of DNS record lookups by having DNS servers digitally sign all of their messages: IP address lookup, MX records, certificate records, and so on. Notably, DNSSEC servers sign—but do not encrypt—DNS messages, although Daniel J. Bernstein's DNSCurve protocol does perform encryption, including encrypting DNSSEC. DNSCurve, however, is not widely supported among the major DNS service providers.
DNSSEC is broadly available, but, for any given domain, it is up to the domain owner to create and deploy the public/private key pair used to sign authoritative DNS messages for the domain. The root and top-level name servers (i.e., .com, .org, .net, etc.) have already implemented DNSSEC, though, as have many domains, and many recursive name servers are already DNSSEC aware.
As is usual for Fedora proposals, program manager Jaroslav Reznik posted the local-DNSSEC-resolver proposal on the Fedora development list on April 29, on behalf of change owners Prasad J. Pandit, Pavel Šimerda, and Tomas Hozza. The listed benefits of implementing the proposal include increased security and privacy for users (in light of how many public networks are not trustworthy), a more reliable DNS-resolving solution than that often found on public networks, and the simple principle of pushing forward with items like DNSSEC and IPv6 adoption. DNSSEC and IPv6 are the future, after all; it would be better to get implementations in working order before they become mandatory.
The change proposes setting up a validating DNSSEC resolver running on the Fedora machine, bound to 127.0.0.1:53. The resolver software to be used has yet to be selected, although unbound was suggested as a likely candidate. Paul Wouters said that the Fedora project has been working with the unbound team on adding necessary features, and it has also been selected by FreeBSD.
With the DNSSEC resolver running on a local port, applications configured to use the system default would automatically send their queries to the local server. By default, Fedora uses NetworkManager to configure DNS and other basic network options; if the machine joins a network on which a DHCP server supplies DNS server recommendations, the local DNSSEC resolver would pick those up as transient resolvers. Naturally, implementing the change would have a ripple effect on a number of other components for some systems. Users who manually edit their /etc/resolv.conf or /etc/sysconfig/network-scripts/ifcfg-* files would need more effort to migrate.
There are a few issues yet to be resolved with the proposal. First, the mailing list discussion revealed that there is not a clear, bright line between "trusted" and "untrusted" networks. A network may provide DNS service for machines in the local domain. In that case, the Fedora system can either choose not to trust the LAN's DNS server (and thus be unable to access local machines), or to trust the LAN's DNS server and run the risk that it will respond to queries with malicious replies. Of course, the root of this problem is that it is not entirely clear how the local DNS resolver (DNSSEC or otherwise) could automatically distinguish between a trusted and untrusted network.
But that is not a problem limited just to DNS or DHCP; which networks are trusted and which are untrusted is a question generally only the user can answer. The user knows which locations (home and office, for example) are trustworthy and which are not (like the neighborhood coffee franchise). There is some risk of confusing the two in NetworkManager's saved network information, such as in the case of multiple networks that use the same SSID (e.g., "linksys," "dd-wrt," or other defaults). But that possibility of confusion is already present. In all likelihood, Fedora will delegate the decision to trust a network or not to the user, as it does for other security questions such as firewall policy.
Matthew Miller asked whether the team working on Fedora's cloud product had been consulted, noting that DNS resolvers are not lightweight processes, and cloud admins prefer to run as few services as possible. Also, the stated justification that Fedora machines are portable/mobile devices moving between networks assumes things that are not true for the cloud product.
There is another major challenge with less clarity about possible solutions, however: Docker. Fedora is moving forward with a plan to offer Docker containerization for applications, but as Alexander Larsson pointed out, Docker (as well as, potentially, some other applications) makes use of network namespaces. Inside the namespace, 127.0.0.1:53 would point to the container itself rather than to the host, so DNS resolution would fail.
Several possible solutions were proposed, each of which has its drawbacks. Colin Walters suggested offering a local API for the resolver (either using Unix sockets or over kdbus); that suggestion was rejected because it would require modifying every program that performs DNS queries. Simo Sorce proposed modifying Docker, providing it with an IP address that would be redirected to the host. The problem there is finding an IP address that would be guaranteed to be available for both the container and the host.
Larsson pointed out that the Docker container and the host both reserve the entire 127.0.0.* address block for loopback usage; finding another usable address would probably mean hijacking an address officially reserved for some other purpose (Sorce suggested something from 192.168.*.*, which is used for similar reasons by libvirt, or 169.254.*.*, the link-local block reserved for hosts that must assign themselves addresses automatically when no other configuration is available).
Chuck Anderson asked whether the DNS resolver could simply listen on another address, such as 127.0.0.53. That would also interfere with the 127.0.0.* loopback reservation already mentioned, but Pandit responded that it would be easier to modify the container to forward 127.0.0.1:53 to the host system. Since it seems like modifying Docker's behavior is required for any solution, how best to proceed is still up in the air. As Sorce had noted earlier, the problem of how an application inside a Docker container might communicate with the host system is akin to the problems tackled by other virtualization projects, so it is certainly not unsolvable.
Although the cloud and Docker cases have yet to be resolved, the change is proposed for deployment a full release cycle away, which should provide plenty of time to explore and test possible solutions. For the Fedora 21 release, users may choose to install and run unbound (or another DNSSEC-validating local resolver), gaining the benefits of DNSSEC validation—as well as always having a relatively trustworthy DNS resolver on hand, regardless of the state of the network itself.
Brief items
Distribution quote of the week
Then Adam called Bob a baby and Charles got upset and David was sarcastic at Edgar and Frank pulled Gabriel's hair and then they all woke up and it had all been a dream and they started crying in the nursery.
Ubuntu 12.10 (Quantal Quetzal) End of Life
Ubuntu 12.10 reached end of life on May 16, 2014. "The supported upgrade path from Ubuntu 12.10 is via Ubuntu 13.10, though we highly recommend that once you've upgraded to 13.10, you continue to upgrade through to 14.04, as 13.10's support will end in July."
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 559 (May 19)
- Five Things in Fedora This Week
- Ubuntu Weekly Newsletter, Issue 368 (May 18)
Bacon: Goodbye Canonical, Hello XPRIZE
Ubuntu Community Manager Jono Bacon has announced that he is leaving that position to become the Senior Director of Community at the XPRIZE Foundation. "Now, I won’t actually be going anywhere. I will still be hanging out on IRC, posting on my social media networks, still responding to email, and will continue to do Bad Voltage and run the Community Leadership Summit. I will continue to be an Ubuntu Member, to use Ubuntu on my desktop and server, and continue to post about and share my thoughts about where Ubuntu is moving forward. I am looking forward in many ways to experiencing the true Ubuntu community experience now I will be on the other side of the garden."
Robyn Bergeron stepping down as Fedora leader
Fedora project leader Robyn Bergeron has announced her intention to step down from the position. "With Fedora 20 well behind us, and Fedora.next on the road ahead, it seems like a natural time to step aside and let new leadership take the reins. Frankly, I shouldn’t even say 'the road ahead' since we’re well-entrenched in the process of establishing the Fedora.next features and processes, and it’s a rather busy time for us all in Fedora-land — but this is precisely why make the transition into new leadership as smooth as possible for the Fedora Project community is so important. It’s a good time for change, and fresh ideas and leadership will be an asset to the community as we go forward, but I also want to make sure it’s not going to distract us from all the very important things we have in the works."
Page editor: Rebecca Sobol
Development
A quick look at Qt 5.3
Version 5.3 of the Qt application framework was released on May 20, bringing with it a number of improvements to mobile-platform support, plus enhanced printing support, toolchain improvements, and several new widget classes. Qt incorporates a large number of libraries for application development, which can at times make for a dizzying list of changes. Fortunately, the new bits found in the 5.3 release can be grouped into a handful of general categories.
Although Linux users may be most familiar with Qt's desktop support through projects like KDE and Calligra, one of Qt's major selling points in recent years has been its cross-platform availability—including mobile device platforms. The 5.3 release pushes forward on this front, adding several new features on Android as well as introducing support for Windows Phone 8 and QNX Neutrino 6.6. QNX support is a feature that Qt's corporate sponsor Digia uses to plug its Qt Enterprise services, but in this case that enterprise offering only adds pre-built binaries of Qt for QNX; other users can still build their own QNX libraries should they choose to do so.
Two new APIs are now available on Android: Qt Bluetooth (for Bluetooth functionality) and Qt Positioning (which provides geolocation services). Android v2.3.3 (Gingerbread, a.k.a. API level 10) or later is required for both. On desktop Linux systems, notably, the Qt Bluetooth API is limited to BlueZ version 4. BlueZ 5 was released in December 2012, so there are some differences compared to what may ship with a recent Linux distribution. BlueZ 4 uses a different API and kernel interface, and it does not support as many Bluetooth profiles; developers will no doubt want to proceed with a careful evaluation of the differences.
The Positioning API is also made available for iOS in this release, along with a few other Apple-related changes. On iOS, Qt now supports multiple input methods, spell-checking, word autocompletion, and clipboard integration.
Printing support received a major overhaul for 5.3. A QPlatformPrintDevice class has been introduced, which enables applications to access the underlying operating system's printers through a uniform cross-platform API. On Linux, this requires CUPS 1.4 or newer (thus dropping support for RHEL 5 and several other older distributions). There are also QPageSize and QPageLayout classes to provide cross-platform control of page size, orientation, and other print features.
Several new Qt classes and modules are introduced with this release. QCameraInfo allows an application to query the availability and specifications of any attached camera devices. Qt WebSockets is, as the name suggests, an implementation of the WebSocket protocol (RFC 6455) for two-way communication between web servers and in-browser applications. Also on the web-application front, Qt WebKit now supports the HTML5 Video <track> element, which is used to connect external timed "tracks" (such as subtitles or closed captioning) to a <video> element, and the IndexedDB API, which is used for lightweight indexed databases.
More generally, but still of interest to web-app developers, Qt 5.3 now supports the SPDY protocol (version 3.0) for reducing HTTP latency.
There are also several updates to the Qt Quick library used to write applications in the JavaScript-based QML language. New are an interactive Calendar widget, and support for passing through mouse events that occur inside a multi-point gesture area for handling as normal (that is, non-gesture) input events. Also, Qt Quick dialogs now support folder shortcuts, including both standard (system-defined) shortcuts and user-bookmarked locations.
There are not a lot of Linux-specific updates in Qt 5.3, although one is significant. Qt now supports XInput2's "smooth scrolling" feature. This is primarily of use to touchpad users, where high-precision scrolling is more readily apparent than it is for mouse wheels and other input devices.
Alongside Qt 5.3 itself, the Qt project has updated its tool set. The Qt Creator IDE has been updated to version 3.1. Among its new features is support for an experimental Clang-based code model; it enables the editor to use code completion and semantic code highlighting. There is also a JavaScript profiler for QML applications and improved support for working in multiple editors simultaneously.
All in all, Qt 5.3 incorporates a number of small but worthwhile changes over the previous release. Qt is using a time-based release schedule these days, but the 5.3 release is an important one. The KDE project is expected to move to Qt 5.3 for its forthcoming Plasma 5 release, and Ubuntu plans to migrate to it for the distribution's 14.10 release. Whether or not the enhancements to Qt make a significant impact on Windows Phone and QNX users, of course, is another matter. But Linux desktop users can expect to be running Qt 5.3 for some time to come.
Brief items
Quote of the week
For this use case, monkey patching is not an incidental feature to be tolerated merely for backwards compatibility reasons: it is a key capability that makes Python an ideal language for me, as it takes ultimate control of what dependencies do away from the original author and places it in my hands as the system integrator. This is a dangerous power, not to be used lightly, but it also grants me the ability to work around critical bugs in dependencies at run time, rather than having to fork and patch the source the way Java developers tend to do.
Wayland and Weston 1.5.0 released
The 1.5.0 releases of the Wayland display protocol and Weston compositor are available. It has been a relatively quiet cycle, especially on the Wayland side, but there are still numerous improvements, including a transition to the new Xwayland server. "The Xwayland code was refactored to be its own X server in the Xorg tree, similar to how Xwin and Xquartz and Xnest work. A lot of the complexity and hacks in the old Xorg based Xwayland was about fighting Xorg trying to be a native display server, discovering input devices and driving the outputs. The goal was to be able to reuse the 2D acceleration code from the various Xorg DDX drivers. With glamor becoming a credible acceleration architecture, we no longer need to jump through those hoops and the new code base is much simpler and cleaner as a result." There is also a change in the maintainer model, with Kristian Høgsberg giving commit privileges to a number of top-level developers.
Nikola v7.0.0 available
Version 7.0.0 of the Python-based static web site generator Nikola has been released. Many theme changes are incorporated into the new release, several settings (such as BLOG_AUTHOR, BLOG_TITLE, BLOG_DESCRIPTION, and LICENSE) can now be translated, and there are new options for controlling RSS generation, logos, and footer generation.
FFmpeg adds support for Magic Lantern raw video
The Planet5D blog posted a brief note highlighting the fact that FFmpeg has begun adding support for the "MLV" raw video format recorded by the open source Magic Lantern firmware for Canon digital cameras. Considering how widely FFmpeg is used by other applications, this could represent the first step toward MLV support in a number of video editors and players.
Adobe releases Source Serif font
Adobe has announced the release of Source Serif, an Open Font License (OFL)-licensed serif font designed as a companion to its popular Source Sans. Design-wise, the letterforms are based on the "transitional" typefaces of Pierre Simon Fournier in the mid 18th century, but the proportions and color (in typographic terminology) are designed for pairing with Source Sans. The source code for Source Serif can be found at the project page.
GPGME 1.5.0 released
Version 1.5.0 of the GPG Made Easy (GPGME) library has been released. Among the new features are support for elliptic-curve cryptography (ECC) algorithms, the ability to use encryption without the default compression step, and the ability for GPGME to locate the GPG engine by using the PATH environment variable, rather than expecting it to be in a hardwired location.
Newsletters and articles
Development newsletters from the past week
- GNU Toolchain Update (May 18)
- LLVM Weekly (May 19)
- OCaml Weekly News (May 20)
- Perl Weekly (May 19)
- PostgreSQL Weekly News (May 18)
- Python Weekly (May 15)
- Ruby Weekly (May 15)
- This Week in Rust (May 18)
- Tor Weekly News (May 21)
Venturi: The browser is dead. Long live the browser!
While the title might make it seem like another comment on the Mozilla/DRM issue, the article by Giorgio Venturi on the Canonical Design blog is actually about redesigning the browser interface for mobile phones. "If content is our king, then recency should be our queen. [...] Similarly, bookmarks are often a meaningless list of webpages, as their value was linked to the specific time when they were taken. For example, let’s imagine we are planning our next holiday and we start bookmarking a few interesting places. We may even create a new ‘holidays’ folder and add the bookmarks to it. However, once the holiday is [over] the bookmarks are still there, they don’t expire once they have lost their value. This happens pretty much every time; old bookmarks and folders will eventually start cluttering our screen and make it difficult to find the information we need. Therefore we redesigned tabs, history and bookmarks to display the most recent information first. Consequently, the display and the retrieval of information is simplified."
Clasen: Introducing GtkInspector
At his blog, Matthias Clasen introduces GtkInspector, the freshly minted GTK+ debugging tool he recently merged into GTK+ itself. "This way, it will be available whenever you run a GTK+ application, and we can develop and improve the debugging tools alongside the toolkit." Inspired by gtkparasite, a third-party debugger, the new tool "lets you explore the widget hierarchy, change properties, tweak theme settings, and so on." A video is provided, demonstrating "interactive picking of widgets for inspection, visual debugging of graphic updates and baseline alignment, changing of widget properties, theme tweaks and general version and environment information", among other features.
Garrett: The desktop and the developer
Matthew Garrett suggests a rethink of the desktop to better suit the needs of contemporary developers. "If the desktop had built-in awareness of the issue tracker then they could be presented with relevant information and options without having to click through two separate applications. If git commits were locally indexed, the developer could find the relevant commit without having to move back to a web browser or open a new terminal to find the local checkout. A simple task that currently involves multiple context switches could be made significantly faster."
Page editor: Nathan Willis
Announcements
Articles of interest
FCC votes for Internet “fast lanes” but could change its mind later (Ars Technica)
The US Federal Communications Commission (FCC) has voted for the so-called "Internet fast lanes", as Ars Technica reports. "In response to earlier complaints, FCC Chairman Tom Wheeler expanded the requests for comment in the NPRM [Notice of Proposed Rulemaking]. For example, the FCC will ask the public whether it should bar paid prioritization completely. It will ask whether the rules should apply to cellular service in addition to fixed broadband, whereas the prior rules mostly applied just to fixed broadband. The NPRM will also ask the public whether the FCC should reclassify broadband as a telecommunications service. This will likely dominate debate over the next few months. Classifying broadband as a telecommunications service would open it up to stricter “common carrier” rules under Title II of the Communications Act. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice."
TechView: Linus Torvalds, Inventor of Linux (Huffington Post)
The Huffington Post has an interview with Linus Torvalds. "I think very few people get to feel like they have actually made a difference, and let me tell you, it's a good feeling to have. I was never very interested in the commercial side, and to me the people and companies who were able to take Linux and use it commercially are the people who did what I simply would never have had the drive to do. And it was needed, and useful, so I'm actually very grateful for the commercial entities: they've allowed me to concentrate on the parts I enjoy."
Calls for Presentations
CFP Deadlines: May 22, 2014 to July 21, 2014
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
May 23 | August 23-August 24 | Free and Open Source Software Conference | St. Augustin (near Bonn), Germany |
May 30 | September 17-September 19 | PostgresOpen 2014 | Chicago, IL, USA |
June 6 | September 22-September 23 | Open Source Backup Conference | Köln, Germany |
June 6 | June 10-June 12 | Ubuntu Online Summit 06-2014 | online, online |
June 20 | August 18-August 19 | Linux Security Summit 2014 | Chicago, IL, USA |
June 30 | November 18-November 20 | Open Source Monitoring Conference | Nuremberg, Germany |
July 1 | September 5-September 7 | BalCCon 2k14 | Novi Sad, Serbia |
July 4 | October 31-November 2 | Free Society Conference and Nordic Summit | Gothenburg, Sweden |
July 5 | November 7-November 9 | Jesień Linuksowa | Szczyrk, Poland |
July 7 | August 23-August 31 | Debian Conference 2014 | Portland, OR, USA |
July 11 | October 13-October 15 | CloudOpen Europe | Düsseldorf, Germany |
July 11 | October 13-October 15 | Embedded Linux Conference Europe | Düsseldorf, Germany |
July 11 | October 13-October 15 | LinuxCon Europe | Düsseldorf, Germany |
July 11 | October 15-October 17 | Linux Plumbers Conference | Düsseldorf, Germany |
July 14 | August 15-August 17 | GNU Hackers' Meeting 2014 | Munich, Germany |
July 15 | October 24-October 25 | Firebird Conference 2014 | Prague, Czech Republic |
July 20 | January 12-January 16 | linux.conf.au 2015 | Auckland, New Zealand |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: May 22, 2014 to July 21, 2014
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
May 20-May 24 | PGCon 2014 | Ottawa, Canada |
May 20-May 22 | LinuxCon Japan | Tokyo, Japan |
May 21-May 22 | Solid 2014 | San Francisco, CA, USA |
May 23-May 25 | FUDCon APAC 2014 | Beijing, China |
May 23-May 25 | PyCon Italia | Florence, Italy |
May 24 | MojoConf 2014 | Oslo, Norway |
May 24-May 25 | GNOME.Asia Summit | Beijing, China |
May 30 | SREcon14 | Santa Clara, CA, USA |
June 2-June 3 | PyCon Russia 2014 | Ekaterinburg, Russia |
June 2-June 4 | Tizen Developer Conference 2014 | San Francisco, CA, USA |
June 9-June 10 | Erlang User Conference 2014 | Stockholm, Sweden |
June 9-June 10 | DockerCon | San Francisco, CA, USA |
June 10-June 12 | Ubuntu Online Summit 06-2014 | online, online |
June 10-June 11 | Distro Recipes 2014 - canceled | Paris, France |
June 13-June 14 | Texas Linux Fest 2014 | Austin, TX, USA |
June 13-June 15 | State of the Map EU 2014 | Karlsruhe, Germany |
June 13-June 15 | DjangoVillage | Orvieto, Italy |
June 17-June 20 | 2014 USENIX Federated Conferences Week | Philadelphia, PA, USA |
June 19-June 20 | USENIX Annual Technical Conference | Philadelphia, PA, USA |
June 20-June 22 | SouthEast LinuxFest | Charlotte, NC, USA |
June 21-June 28 | YAPC North America | Orlando, FL, USA |
June 21-June 22 | AdaCamp Portland | Portland, OR, USA |
June 23-June 24 | LF Enterprise End User Summit | New York, NY, USA |
June 24-June 27 | Open Source Bridge | Portland, OR, USA |
July 1-July 2 | Automotive Linux Summit | Tokyo, Japan |
July 5-July 11 | Libre Software Meeting | Montpellier, France |
July 5-July 6 | Tails HackFest 2014 | Paris, France |
July 6-July 12 | SciPy 2014 | Austin, Texas, USA |
July 8 | CHAR(14) | near Milton Keynes, UK |
July 9 | PGDay UK | near Milton Keynes, UK |
July 14-July 16 | 2014 Ottawa Linux Symposium | Ottawa, Canada |
July 18-July 20 | GNU Tools Cauldron 2014 | Cambridge, England, UK |
July 19-July 20 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
July 20-July 24 | OSCON 2014 | Portland, OR, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol