LWN.net Weekly Edition for July 9, 2015
A better story for multi-core Python
Running the standard Python interpreter in threads on multiple CPU cores has always resulted in a smaller performance gain than one might naively think—or hope for. Because of the CPython global interpreter lock (GIL), only one thread of execution can be running in the interpreter core at any given time. Removing the GIL has long been a topic of discussion in Python circles, and various alternative Python implementations have either removed or worked around the GIL. A recent discussion on the python-ideas mailing list looked at a different approach to providing a better multi-core story for Python.
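The effect is easy to demonstrate with the classic counting benchmark below (a minimal sketch; exact timings vary by machine and Python version). On CPython, splitting the CPU-bound loop across two threads is no faster than running it sequentially, because the GIL serializes the bytecode execution:

    import threading
    import time

    def count(n):
        # A purely CPU-bound loop; it never releases the GIL voluntarily.
        while n:
            n -= 1

    N = 10000000

    start = time.time()
    count(N)
    count(N)
    print("sequential: %.2fs" % (time.time() - start))

    start = time.time()
    t1 = threading.Thread(target=count, args=(N,))
    t2 = threading.Thread(target=count, args=(N,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    # On CPython, this is no faster than the sequential run (often slower).
    print("threaded:   %.2fs" % (time.time() - start))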
In a post that was optimistically titled "solving multi-core Python", Eric Snow outlined an approach that relies not on removing the GIL, but instead on "subinterpreters" and a mechanism to share objects between them. The multi-core problem is partly a public relations problem for the language, Snow said, but it needs solving for that and other, more technical reasons.
Subinterpreters
The basic principle behind Snow's proposal is to take the existing subinterpreter support and to expose it in the Python language as a concurrency mechanism. The subinterpreters would run in separate threads, but would not generally share data with each other, at least implicitly, unlike the typical use of threads. Data would only be exchanged explicitly via channels (similar to those in the Go language). One of the main influences for Snow's thoughts (and for Go's concurrency model) is Tony Hoare's "Communicating Sequential Processes".
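Snow's post did not specify an API, so any code at this stage is speculative. As a purely hypothetical sketch (the subinterpreters module, Channel class, and related names below are invented for illustration), the model might look something like this:

    # Hypothetical API -- no such module exists; the names are invented
    # here to illustrate the subinterpreter-plus-channels model.
    import subinterpreters

    chan = subinterpreters.Channel()    # a Go-style channel (hypothetical)
    interp = subinterpreters.create()   # a new subinterpreter in its own thread

    # Code runs isolated in the subinterpreter; nothing is shared implicitly.
    interp.run("""
    result = sum(range(1000000))
    channel.send(result)                # data is exchanged only explicitly
    """, channel=chan)

    print(chan.recv())                  # receive the result in the main interpreter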
Handling objects shared between subinterpreters is one of the areas that requires more thought, Snow said. One way forward might be to only allow immutable objects to be shared between the subinterpreters. In order to do that, though, it probably makes sense to move the reference counts (used for garbage collection) out of the objects themselves and into a separate table. That would allow the objects themselves to be truly unchanging, which could also help performance in the multi-processing (i.e. fork()) case by avoiding page copies (via copy-on-write) of objects that are simply being referenced again, as Nick Coghlan pointed out.
Another open question is what restrictions should be placed on subinterpreters. If, for example, subinterpreters were not allowed to start new threads, they would be single-threaded and would not require a GIL. Alternatively, the GIL for subinterpreters could be replaced with a "local interpreter lock", with the main GIL used in the main interpreter and to mediate interactions between subinterpreters. There is also the question of using fork() in subinterpreters; in his initial email, Snow suggested disallowing it, but he seemed to rethink that position in the discussion that followed.
The proposal is clearly a kind of early-stage "request for comment" (or "a shot over the bow", as Snow put it), but it did spark quite a bit of discussion and some fairly favorable comments. Yury Selivanov was quite interested in the idea, for example, noting that just being able to share immutable objects would be useful.
Concerns
But Gregory Smith was concerned about the impact of each subinterpreter needing to re-import all of the modules used by the main interpreter, since those would not be shared. That would reduce the effectiveness of Snow's model. On the other hand, though, Smith sees a potential upside as well: "I think a result of it could be to make our subinterpreter support better which would be a good thing." Several suggestions were made for ways to speed up the startup time for subinterpreters or to share more state (such as modules) between the interpreters.
Several in the thread, Devin Jeanpierre for example, believed that the existing, fork()-based concurrency is the right way forward, at least for POSIX systems.
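That existing approach generally means the standard library's multiprocessing module, which on POSIX systems uses fork() to give each worker its own interpreter, and therefore its own GIL. A minimal sketch:

    import multiprocessing

    def work(n):
        # CPU-bound work; each worker process has its own GIL.
        return sum(range(n))

    if __name__ == '__main__':
        with multiprocessing.Pool() as pool:       # forks one worker per core
            print(pool.map(work, [10000000] * 8))  # tasks run truly in parallel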
While fork() does provide that kind of parallelism, it is only available on POSIX systems. It is also somewhat apart from Snow's goal, which is "to make it obvious and undeniable that Python (3.6+) has a good multi-core story" and is partly a matter of public perception. The subinterpreter idea is just a means to that end, he said, and he would be happy to see a different solution if it fulfilled that goal. In the meantime, though, his proposal has some characteristics that multi-processing with fork() lacks.
But Sturla Molden pointed to the lack of fork() for Windows as one of the real reasons behind Snow's proposal: "It then boils down to a workaround for the fact that Windows cannot fork, which makes it particularly bad for running CPython". But, as Snow said, Python cannot ignore Windows. Beyond that, though, even with the "superior" fork() solution available, the perception of multi-core Python is much different.
Molden replied with a long list of answers to the "FUD" that is promulgated about Python and the GIL, but that doesn't really change anything. That is why Snow's goal is to make multi-core support "obvious and undeniable". It also seems that Molden is coming from a scientific/numeric Python background, which is not generally where the complaints about Python's multi-core support originate, as Coghlan noted.
Shared data
The reasoning behind restricting the data shared between interpreters to immutable types (at least initially) can be seen from a question asked by Nathaniel Smith. He wondered how two subinterpreters could share a complicated data structure containing several different types of Python objects.
Snow acknowledged that concern, and suggested avoiding the "trickiness involved" in handling that kind of data by sticking to immutable objects, though "some sort of support for mutable objects" may be added later, he said.
Coghlan summarized Snow's proposal as really being three separate things:
- Filing off enough of the rough edges of the subinterpreter support that we're comfortable giving them a public Python level API that other interpreter implementations can reasonably support
- Providing the primitives needed for safe and efficient message passing between subinterpreters
- Allowing subinterpreters to truly execute in parallel on multicore machines
All 3 of those are useful enhancements in their own right, which offers the prospect of being able to make incremental progress towards the ultimate goal of native Python level support for distributing across multiple cores within a single process.
In addition, Coghlan has published a summary of the state of multi-core Python that looks at the problem along with alternatives and possible solutions. It is an update from an earlier entry in his Python 3 Q&A and is well worth a read to get the background on the issues.
There seems to be enough interest in Snow's proposal that it could be on the radar for Python 3.6 (which is roughly 18 months off). There is a long road before that happens, though. A PEP will have to be written—as will a good bit of code. We also have yet to see what Guido van Rossum's thoughts on the whole idea are, though Snow did mention some discussions with Python's benevolent dictator for life in his initial post. As Nathaniel Smith put it, Snow's approach seems like the "least impossible" one. That is not the same as "possible", of course, but it seems hopeful at least.
A preview of PostgreSQL 9.5
The PostgreSQL developers voted at the 2015 developer meeting to release an alpha version of PostgreSQL 9.5 as soon as possible. As it turns out, that meant July 2nd, so the 9.5 alpha is available now. While this is only a preview release of 9.5, it's full of cool features for database geeks, including "UPSERT", more JSONB utilities, row-level security, and better multicore scaling. However, long-time PostgreSQL users will notice that 9.5 is a bit behind schedule at this point.
Why an alpha?
One question users have is why the PostgreSQL project isn't issuing a beta release at this point, as it normally would in June or July. The answer has to do with the reliability bugs the database has suffered over the last six months. Because of those problems, the PostgreSQL committers have become very cautious about new features that might cause unexpected reliability or security issues, and want to reserve the right to modify or cancel features before the final release. Historically, the project has tried to freeze all APIs by the first beta; the alpha label signals that the APIs are not yet frozen.
The primary reliability issues for the database centered around special transaction counters called "multixacts" that help the database keep track of data visibility when multiple concurrent sessions touch the same rows. Changes made to the multixact mechanism in PostgreSQL 9.3 in order to make foreign keys more efficient had a number of unexpected side effects, including some data-loss bugs that led to multiple update releases within a relatively short period. One of the things that may delay PostgreSQL 9.5 is that developers are still working on the last known issues with multixacts, and plan to fix them before the final release.
One of PostgreSQL's most competitive features is its reputation for ensuring zero data loss. As such, the developers also want to make certain that none of the new features that make changes to data storage or the database transaction log will cause data loss as a side effect. This means more pre-release testing than was done for prior releases. One such at-risk feature is automated transaction log compression to reduce I/O; the other is UPSERT.
UPSERT
One feature which application developers switching from MySQL to PostgreSQL have missed is "INSERT ON CONFLICT UPDATE", otherwise known as UPSERT. This SQL feature allows developers to not worry about whether the row they are adding is new or not, simplifying application programming. It also can eliminate the need for multiple round-trips to the database to check to see if another user has concurrently added the same information.
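For a sense of what that means in practice, here is a rough sketch of the traditional application-side work-around that UPSERT replaces, written with the psycopg2 driver against the users table from the example below (the connection string is illustrative); the retry loop exists because a concurrent session can always win the race between the UPDATE and the INSERT:

    import psycopg2

    conn = psycopg2.connect("dbname=example")   # connection details are illustrative

    def upsert_user(user_id, email, login):
        # Pre-9.5 pattern: try an UPDATE first, fall back to an INSERT, and
        # retry if a concurrent session inserts the same row in between.
        while True:
            with conn.cursor() as cur:
                cur.execute("UPDATE users SET email = %s, login = %s "
                            "WHERE user_id = %s", (email, login, user_id))
                if cur.rowcount:
                    conn.commit()
                    return
                try:
                    cur.execute("INSERT INTO users (user_id, email, login) "
                                "VALUES (%s, %s, %s)", (user_id, email, login))
                    conn.commit()
                    return
                except psycopg2.IntegrityError:
                    conn.rollback()   # lost the race; loop and UPDATE instead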
PostgreSQL finally has UPSERT, thanks to Heroku engineer Peter Geoghegan. The version of the syntax in the alpha looks like this:
INSERT INTO users (user_id, email, login)
VALUES (1447, 'josh@postgresql.org', 'jberkus')
ON CONFLICT (user_id)
DO UPDATE SET email = EXCLUDED.email, login = EXCLUDED.login;
What the above says is: insert this user if it is not already present, but if the user ID is already in the table, then update the email and login fields instead. The special EXCLUDED keyword says "do this only with rows that conflicted", and allows UPSERT to work with batch imports of data as well as single rows.
For other situations, you can also choose to DO NOTHING. For example, imagine that you're collecting "likes" for a social web application; if a user tries to insert the same "like" twice, you simply want to ignore it, like so:
INSERT INTO likes ( user_id, video_id ) VALUES ( 1447, 20135 ) ON CONFLICT DO NOTHING;
Adding UPSERT to PostgreSQL has taken Geoghegan more than two years of work, including a lot of "back to the drawing board" redesigns. One of the reasons for the long development period, as well as the long wait for a feature that has been in MySQL for years, is the project's need to accommodate power users. In addition to the above examples, the new UPSERT supports designating specific table constraints as the conflict target, and creating arbitrarily complex rules for handling the rejected data, including nesting conflict checks through subqueries and WITH clauses. The new feature was also required to work seamlessly with replication and the new row-level security. Also, since UPSERT is not part of the SQL standard, the project spent a lot of time arguing about the desired syntax for the feature.
More than anything, though, the difficult part of developing the feature was making it work correctly in high-concurrency environments. UPSERT needed to yield the correct result and not corrupt data even if 50 users were trying to UPSERT the same row at the same time. This need for "bulletproof concurrency" has been the biggest thing delaying the feature, as well as the biggest reason for concern by committers about it causing future unanticipated issues. Regardless, the long wait has resulted in a much more powerful UPSERT feature than Geoghegan originally specified, so it was probably worth the wait.
PostgreSQL as a document database
While it is adding new SQL features, the project also seems to be hard at work re-implementing itself as a "NoSQL" database competitor. While the project has some grand plans for document database support in future years, version 9.5 includes a bunch of new built-in functions and operators to make PostgreSQL a better JSON document database right away. Various developers have also been creating external tools to make it friendlier to non-relational applications. JSON is a standard serialization format for object data; JSONB is PostgreSQL's binary storage format and data type for such data.
The central new built-in function is jsonb_set(), a function that allows users to update any arbitrary key within a nested JSONB document, for example:
SELECT profile FROM profiles WHERE user_id = 1447;
 '{ "type" : "i", "clubs" : { "chess club" : { "role" : "member" } } }'

UPDATE profiles
   SET profile = jsonb_set(profile, ARRAY['clubs','chess club'],
                           '{ "role" : "chair" }', TRUE)
 WHERE user_id = 1447;

SELECT profile FROM profiles WHERE user_id = 1447;
 '{ "type" : "i", "clubs" : { "chess club" : { "role" : "chair" } } }'
The jsonb_set() statement above would add the '"chess club" : { "role" : "chair" }' document to the user's list of clubs nested inside their profile (which is a JSONB column in the table), or update their chess club membership to "chair" if they were already a member. Since it allows users to modify nested keys "in place" without parsing the entire JSONB document in the application, or installing the PostgreSQL PL/v8 extension to run JavaScript inside the database, this feature allows users to run much more meaningful document database workloads on PostgreSQL. In addition to jsonb_set(), 9.5 includes new functions and operators that support JSONB concatenation, key deletion, and aggregating data in tables into complex JSONB objects.
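For example, the new concatenation ("||") and key-deletion ("-") operators can be exercised from Python with the psycopg2 driver (a hedged sketch; the connection string is illustrative, and recent psycopg2 versions decode jsonb results into Python values):

    import psycopg2

    conn = psycopg2.connect("dbname=example")   # connection details are illustrative
    with conn.cursor() as cur:
        # "||" merges two JSONB values, with the right side taking precedence.
        cur.execute("""SELECT '{"role": "member"}'::jsonb || '{"active": true}'::jsonb""")
        print(cur.fetchone()[0])    # {'role': 'member', 'active': True}

        # "-" deletes a top-level key from a JSONB value.
        cur.execute("""SELECT '{"role": "member", "active": true}'::jsonb - 'active'""")
        print(cur.fetchone()[0])    # {'role': 'member'}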
While the built-in functions allow users to do a lot, they don't support data searches of arbitrary complexity for applications involving large populations of JSONB documents. JsQuery, released this year by PostgreSQL's contributor team from Moscow, adds a special JSONB search language to PostgreSQL, and new indexes to support it. This new language supports wildcards, range searches, and boolean logic. For example, if you wanted to search for the chair of the chess club, you could run this JsQuery:
SELECT user_id FROM profiles WHERE profile @@ 'clubs."chess club".role = chair';
PostgreSQL 9.5 with JsQuery can therefore be used by developers who want to abandon the relational model entirely and just store a collection of documents in the database. Several projects have been created over the last couple of years to take advantage of this and wrap PostgreSQL in a NoSQL API, both to ease migration from MongoDB and other databases, and to allow the creation of "hybrid database applications" with both non-SQL and SQL-based access. One of the most recent of these projects is BedquiltDB by Shane Kilkelly, which supports users who want to use the MongoDB syntax to modify and search data in PostgreSQL.
For users who prefer a fully relational database while supporting a document-oriented API, ToroDB, by Spanish consulting firm 8KData, is also new this year. ToroDB accepts data requests and updates using the MongoDB protocol. Data is automatically decomposed into relational tables and transformed into JSON documents for client requests. At the Big Data Spain conference, developer Álvaro Hernández Tortosa claimed that this kind of hybrid database is more flexible and scales better for very large data sets than pure non-relational approaches.
Regardless of which tools end up being the most successful, it seems that the PostgreSQL community plans to take on a lot of current and future document database workloads. The next couple of years of competition with non-relational databases should be interesting.
Row-level security
For the last three major PostgreSQL releases, the project has been adding features to allow increasingly specific data security controls. These have included column-level permissions, security "labels", integration with SELinux, and, in 9.5, row-level security (RLS). RLS allows administrators to specify rule-based permissions for each individual row in a table; the feature is also known by names like "virtual private database" and "fine-grained security". RLS has been in demand by users with strong security needs around their data, such as credit card processors and healthcare companies.
RLS is disabled by default on PostgreSQL tables. However, it's easy to enable and the syntax is straightforward. For example, say you wanted to allow users to read their own profiles, but not other people's. You could take advantage of the special database variable current_user and check that the current database user matched the login column of the table profiles, like so:
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY read_own_data ON profiles FOR SELECT USING (current_user = login);
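With that policy in place, an ordinary client session sees only its own row. Roughly (a sketch using the psycopg2 driver; the connection details are illustrative, and note that policies do not apply to superusers or the table's owner by default):

    import psycopg2

    # Connect as the ordinary database user "jberkus"; this assumes
    # SELECT privilege has already been granted on the table.
    conn = psycopg2.connect("dbname=example user=jberkus")
    with conn.cursor() as cur:
        cur.execute("SELECT login FROM profiles")
        print(cur.fetchall())   # [('jberkus',)] -- all other rows are filtered out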
Much more sophisticated policies are possible, including arbitrary constraints and setting specific policies for specific database login roles. RLS can also be combined with column permissions to effectively give "cell-level" permissions control.
For the past three years, a lot of the work to bring about RLS has been driven by NEC engineer KaiGai Kohei. In the 9.5 development cycle, that work was taken up by Dean Rasheed and Stephen Frost. While Frost's involvement in security features for PostgreSQL is longstanding, another reason for his interest in RLS became apparent on May 22. On that day, the US National Reconnaissance Office (NRO) announced that it was rolling out a relational, geospatial database solution that would support "multilevel security" — the first of its kind. The partnership to deliver this database consists of Lockheed Martin, Red Hat, Seagate, and Frost's employer, Crunchy Data Solutions.
Multilevel security (MLS) is a design for data control that centers around the idea that different personnel should be able to see different data based on their clearance level. Lower-level staff should not even be aware that high-level data exists, and in some cases should be given misleading data in its place. MLS is popular with intelligence agencies, some of whom have been looking to add it to PostgreSQL as far back as 2006. The NRO, which manages US spy satellites, is an obvious user of such a system.
According to various press releases, Red Hat SELinux security policies combined with PostgreSQL RLS deliver effective MLS for the agency's Centralized Supercomputing Facility. Exact details on the implementation are not yet available, but the NRO seems to be prepared to put all of the code for it into GitHub projects. This seems to be part of a trend in the US Department of Defense to release various components as open source, showing that at least one part of the US government believes that open is also more secure.
Even if you don't work for an intelligence agency, though, there are uses for RLS for securing more mundane data like password tables.
Multicore scalability
An area that the PostgreSQL project works on constantly is multicore scalability. While developers are working on scale-out to multiple servers, users also want to run PostgreSQL on bigger and bigger machines. Version 9.5 will bring substantial improvements in read-request throughput on high-core-count machines, such as the IBM POWER-8 machines on which PostgreSQL 9.5 was tested. As these machines offer 24 cores and 192 hardware threads, they make a good target for multicore scalability. IBM's Stewart Smith has been using the same kind of system to push MySQL up to one million queries per second.
PostgreSQL 9.4 peaked at around 32 concurrent requests, with overall throughput dropping beyond that even if idle cores were available. According to EnterpriseDB engineer Amit Kapila, multicore scalability is a matter of improving lock handling: eliminating as many locks as possible and reducing the cost of the others. To this end, Andres Freund rewrote PostgreSQL's "lightweight locks" (LWLocks) mechanism to use atomic operations, on processors where they are supported, instead of spinlocks. This reduced the CPU contention caused by waiting for locks and sped up the process of acquiring a lock.
To further improve throughput, Robert Haas reduced the amount of time that the database holds locks in order to evict buffers from memory, and increased the number of mapping partitions for buffers from 64 to 128. That work, combined with the LWLock improvement, means that PostgreSQL 9.5 now scales smoothly to 64 concurrent requests and delivers up to double the read-only throughput of 9.4 — increasing from 300,000 transactions per second to over 500,000 in Kapila's tests [PDF]. Note that Kapila is using a different benchmark than Smith is, so the PostgreSQL and MySQL numbers are not directly comparable.
In version 9.5, the developers have also decreased memory requirements per backend and added transaction log compression to improve memory and I/O performance. Work in PostgreSQL 9.6 is now focusing on other areas of less-than-optimal performance on large servers, such as those with large amounts of memory. Memory management on servers with over 256GB of RAM is inefficient, sometimes causing large amounts of RAM to have little or no benefit for users. Ideas to fix this are under discussion.
Conclusion
There are, of course, more features than the above. The foreign data wrappers facility now supports importing remote database schemas, partitioning across multiple foreign tables, and using index scans on the remote database. The SKIP LOCKED query qualifier makes PostgreSQL a better database for storing application queues. Replication and failover has become more reliable with the pg_rewind tool and other changes.
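The queue use case deserves a quick illustration. With SKIP LOCKED, competing workers each claim a different pending job rather than blocking on one another's locks; here is a sketch using the psycopg2 driver and a hypothetical jobs table:

    import psycopg2

    conn = psycopg2.connect("dbname=example")   # connection details are illustrative

    def fetch_job():
        with conn.cursor() as cur:
            # Rows locked by other workers are skipped rather than waited on,
            # so many consumers can drain the queue in parallel.
            cur.execute("""SELECT id, payload FROM jobs
                           ORDER BY id
                           LIMIT 1
                           FOR UPDATE SKIP LOCKED""")
            return cur.fetchone()

    job = fetch_job()
    if job:
        print("processing job", job[0])         # stand-in for real work
        with conn.cursor() as cur:
            cur.execute("DELETE FROM jobs WHERE id = %s", (job[0],))
    conn.commit()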
Version 9.5 also includes a bundle of features targeting "big data" use cases. These include: Block Range Indexes (BRIN) for indexing very large tables, faster data sorting, and data analytics features GROUPING SETS and CUBE. These will be covered in an upcoming article when 9.5 beta 1 is released.
The PostgreSQL project expects to release a beta every month starting in August, until 9.5 is ready for release sometime in late 2015. Historically, the project has released new versions in September, but due to falling behind schedule, mid-to-late October is considered more likely at this point. In the meantime, the alpha release is available for download, including as a Docker container. The PostgreSQL project would like you to try it out for yourself, see if the features are what they're promoted to be, and report a few bugs while you're at it.
Security
OpenOffice and CVE-2015-1774
The Apache Software Foundation requires projects hosted under its umbrella to file quarterly reports to the foundation's board of directors; these reports are meant to enable the board to "evaluate the activity and health of the project". In the case of Apache OpenOffice, the process of writing the quarterly reports tends to be a bit fraught, since it rubs the project's nose in the fact that its health is not all that strong. This time around there is an additional factor in the discussion: the fact that OpenOffice has yet to patch a vulnerability announced back in April.
Jan Iversen announced the drafting of the July report at the end of June. The draft did not mince words with regard to the status of the project in general.
Simon Phipps was quick to suggest that the report was missing one key fact: the vulnerability known as CVE-2015-1774 remains unfixed in the released version (4.1.1) of OpenOffice. This vulnerability, disclosed at the end of April, affects the import filter for Hangul Word Processor (HWP) documents; a lack of input sanitizing there means that an attacker can, by way of a specially crafted HWP document, crash the program and, almost certainly, contrive to execute arbitrary code.
LibreOffice fixed this vulnerability in the 4.3.7 release on April 25. OpenOffice, instead, has limited itself to publishing a workaround that consists of telling users to delete the shared object implementing HWP support. The vulnerability will be fixed, it is promised, in the 4.1.2 release, but, as the draft report notes, "no real work has been done since last report" on getting that release out. So OpenOffice remains vulnerable and will continue to be until, somehow, the project is able to get some "real work" done on producing another release.
The rules for quarterly reports say nothing about highlighting open security issues; indeed, they make no mention of security at all. Simon clearly believes that the lack of action on this issue is relevant to the health of the project as a whole, and, thus, relevant to the report. Dennis Hamilton disagreed, though, saying that "very few users" would be affected by an exploit, and that the publication of a "straightforward mitigation" is sufficient. The failure to fix this vulnerability, he said, should not overshadow the more serious problem of the stalled 4.1.2 release.
For the purposes of the board report, Dennis may well be right; telling the board about this vulnerability will, in the end, protect few users from it. But he may be understating the severity of the vulnerability itself. It does not, as he suggests, just affect a small community of Korean users working with files created by an ancient word processor; instead, it affects anybody who can be convinced to open a file in the HWP format. Such files need not, incidentally, have a .hwp extension. There is no shortage of evidence showing that users will open dodgy email attachments from suspicious sources; there is no reason to believe that their behavior would be different in this case. Rather than affecting a small group, this vulnerability affects all OpenOffice users; given that the project loudly claims to have been downloaded over 100 million times, that is a lot of users.
He is also certainly overstating the "straightforward" nature of a mitigation that (1) must be actively sought out by users and (2) requires performing manual surgery on an OpenOffice installation. Few users, even those who download the program today, will notice that there is a vulnerability requiring action on their part to mitigate. A new release would inspire at least some users to update, but workaround instructions hidden away on their own page will bring about few secured systems — even if the instructions were readily discoverable, which these are not.
The moral of this story is that, whenever any of us uses a piece of software, we are depending on the organization behind it — whether it's a corporation or a free-software development community — to protect us from known vulnerabilities. Projects that are short of developers may not be able to live up to that expectation. At any given time, a typical Linux system probably contains a number of applications that lack security updates because their development community has faded away.
Unfortunately, projects that fall below a critical mass of developers rarely send out an advisory to that effect. OpenOffice is actually nearly unique in this regard as a result of the quarterly report requirement; it has informed the world that it is struggling, even though it did ultimately choose to omit information on this specific vulnerability from its quarterly report. In many other cases, projects simply go dark. Linux users are lucky in that distributors can (and often do) serve as a second line of defense for unmaintained projects; users of other operating systems tend to be on their own. In this case, distributors noticed which way the wind was blowing some time back; few of them ship OpenOffice at all. (Debian's recent decision to move away from libav can be seen as another example of this process in operation). Linux users, thus, will be relatively safe, but it appears that there are many millions of vulnerable users out there with no fix in sight.
Brief items
Security quotes of the week
New vulnerabilities
ansible: two vulnerabilities
Package(s): ansible
CVE #(s): CVE-2015-3908
Created: July 6, 2015
Updated: August 31, 2015
Description: From the Fedora advisory:
Update to 1.9.2. Fixes CVE-2015-3908 (hostname and cert matching in some modules and plugins) and another not-yet-issued CVE on chroot/jail/zone connection plugins, as well as a number of bugfixes. A bit more information can be found on the Ansible security page:
CVE-2015-3908 - Ensure that hostnames match certificate names when using HTTPS - resolved in Ansible 1.9.2
Number pending - Improper symlink handling in zone, jail, and chroot connection plugins could lead to escape from the confined environment - resolved in Ansible 1.9.2
bind: denial of service
Package(s): bind
CVE #(s): CVE-2015-4620
Created: July 8, 2015
Updated: August 3, 2015
Description: From the Arch Linux advisory:
A very uncommon combination of zone data has been found that triggers a bug in BIND, with the result that named will exit with a "REQUIRE" failure in name.c when validating the data returned in answer to a recursive query. This means that a recursive resolver that is performing DNSSEC validation can be deliberately stopped by an attacker who can cause the resolver to perform a query against a maliciously-constructed zone. A remote attacker can crash a bind resolver performing DNSSEC validation by querying it for a specially crafted zone.
cups-filters: code execution
Package(s): cups-filters
CVE #(s): CVE-2015-3258 CVE-2015-3279
Created: July 6, 2015
Updated: December 22, 2015
Description: From the Ubuntu advisory:
Petr Sklenar discovered that the cups-filters texttopdf filter incorrectly handled line sizes. A remote attacker could use this issue to cause a denial of service, or possibly execute arbitrary code as the lp user. (CVE-2015-3258, CVE-2015-3279)
firefox: code execution
Package(s): firefox
CVE #(s): CVE-2015-2726
Created: July 3, 2015
Updated: July 8, 2015
Description: From the Arch advisory:
CVE-2015-2726 (Miscellaneous memory safety hazards): Mozilla developers and community identified and fixed several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.
firefox: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2015-2722 CVE-2015-2724 CVE-2015-2725 CVE-2015-2727 CVE-2015-2728 CVE-2015-2729 CVE-2015-2731 CVE-2015-2733 CVE-2015-2734 CVE-2015-2735 CVE-2015-2736 CVE-2015-2737 CVE-2015-2738 CVE-2015-2739 CVE-2015-2740 CVE-2015-2741 CVE-2015-2743
Created: July 3, 2015
Updated: August 17, 2015
Description: From the Mozilla advisories:
CVE-2015-2724, CVE-2015-2725: Mozilla developers and community identified and fixed several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.
CVE-2015-2722, CVE-2015-2733: Security researcher Looben Yan used the Address Sanitizer tool to discover two related use-after-free vulnerabilities that occur when using XMLHttpRequest in concert with either shared or dedicated workers. These errors occur when the XMLHttpRequest object is attached to a worker but that object is incorrectly deleted while still in use. This results in exploitable crashes.
CVE-2015-2731: Security researcher Herre reported a use-after-free vulnerability when a Content Policy modifies the Document Object Model to remove a DOM object, which is then used afterwards due to an error in microtask implementation. This leads to an exploitable crash.
CVE-2015-2729: Security researcher Holger Fuhrmannek used the Address Sanitizer tool to discover an out-of-bound read while computing an oscillator rendering range in Web Audio. This could allow an attacker to infer the contents of four bytes of memory.
CVE-2015-2728: Security researcher Paul Bandha reported a type confusion error where part of IDBDatabase is read by the Indexed Database Manager and incorrectly used as a pointer when it shouldn't be used as such. This leads to memory corruption and the possibility of an exploitable crash.
CVE-2015-2727: Security researcher Jann Horn reported that when Mozilla Foundation Security Advisory 2015-25 was fixed in Firefox 37, an error was made that caused the fix to not be applied to Firefox 38, effectively causing the bug to be unfixed in Firefox 38 (and Firefox ESR38) once it shipped. As Armin Razmdjou reported for that issue, opening hyperlinks on a page with the mouse and specific keyboard key combinations could allow a Chrome privileged URL to be opened without context restrictions being preserved. This could allow for local files or resources from a known location to be opened with local privileges, bypassing security protections.
CVE-2015-2734, CVE-2015-2735, CVE-2015-2736, CVE-2015-2737, CVE-2015-2738, CVE-2015-2739, CVE-2015-2740: Security researcher Ronald Crane reported seven vulnerabilities affecting released code that he found through code inspection. These included three uses of uninitialized memory, one poor validation leading to an exploitable crash, one read of unowned memory in zip files, and two buffer overflows. These do not all have clear mechanisms to be exploited through web content but are vulnerable if a mechanism can be found to trigger them.
From the Red Hat advisory:
It was found that Firefox skipped key-pinning checks when handling an error that could be overridden by the user (for example an expired certificate error). This flaw allowed a user to override a pinned certificate, which is an action the user should not be able to perform. (CVE-2015-2741)
A flaw was discovered in Mozilla's PDF.js PDF file viewer. When combined with another vulnerability, it could allow execution of arbitrary code with the privileges of the user running Firefox. (CVE-2015-2743)
haproxy: information leak
Package(s): haproxy
CVE #(s): CVE-2015-3281
Created: July 6, 2015
Updated: December 18, 2015
Description: From the Arch Linux advisory:
A vulnerability was found in the handling of HTTP pipelining. In some cases, a client might be able to cause a buffer alignment issue and retrieve uninitialized memory contents that exhibit data from a past request or session. With the proper timing and by requesting files of specific sizes from the backend servers in HTTP pipelining mode, one can trigger a call to a buffer alignment function which was not designed to work with pending output data. The effect is that the output data pointer points to the wrong location in the buffer, causing corruption on the client. It's more visible with chunked encoding and compressed bodies because the client cannot parse the response, but with a regular content-length body, the client will simply retrieve corrupted contents. That's not the worst problem in fact since pipelining is disabled in most clients. The real problem is that it allows the client to sometimes retrieve data from a previous session that remains in the buffer at the location where the output pointer lies. Thus it's an information leak vulnerability.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2015-4001 CVE-2015-4002 CVE-2015-4003
Created: July 7, 2015
Updated: July 8, 2015
Description: From the CVE entries:
Integer signedness error in the oz_hcd_get_desc_cnf function in drivers/staging/ozwpan/ozhcd.c in the OZWPAN driver in the Linux kernel through 4.0.5 allows remote attackers to cause a denial of service (system crash) or possibly execute arbitrary code via a crafted packet. (CVE-2015-4001)
drivers/staging/ozwpan/ozusbsvc1.c in the OZWPAN driver in the Linux kernel through 4.0.5 does not ensure that certain length values are sufficiently large, which allows remote attackers to cause a denial of service (system crash or large loop) or possibly execute arbitrary code via a crafted packet, related to the (1) oz_usb_rx and (2) oz_usb_handle_ep_data functions. (CVE-2015-4002)
The oz_usb_handle_ep_data function in drivers/staging/ozwpan/ozusbsvc1.c in the OZWPAN driver in the Linux kernel through 4.0.5 allows remote attackers to cause a denial of service (divide-by-zero error and system crash) via a crafted packet. (CVE-2015-4003)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2015-4700
Created: July 7, 2015
Updated: July 8, 2015
Description: From the Ubuntu advisory:
Daniel Borkmann reported a kernel crash in the Linux kernel's BPF filter JIT optimization. A local attacker could exploit this flaw to cause a denial of service (system crash).
libxml2: multiple vulnerabilities
Package(s): libxml2
CVE #(s): CVE-2015-1819
Created: July 3, 2015
Updated: September 9, 2015
Description: From the Debian advisory:
(1) CVE-2015-1819 / #782782: Florian Weimer from Red Hat reported an issue against libxml2, where a parser which uses libxml2 chokes on a crafted XML document, allocating gigabytes of data. This is a fine line issue between API misuse and a bug in libxml2. This issue got addressed in libxml2 upstream and the patch has been backported to libxml2 in squeeze-lts.
(2) #782985: Jun Kokatsu reported an out-of-bounds memory access in libxml2. By entering an unclosed html comment the libxml2 parser didn't stop parsing at the end of the buffer, causing random memory to be included in the parsed comment that was returned to the evoking application. In the Shopify application (where this issue was originally discovered), this caused ruby objects from previous http requests to be disclosed in the rendered page.
(3) #783010: Michal Zalewski reported another out-of-bound reads issue in libxml2 that did not cause any crashes but could be detected under ASAN and Valgrind.
linux-ftpd-ssl: segmentation fault
Package(s): linux-ftpd-ssl
CVE #(s): (none)
Created: July 8, 2015
Updated: July 8, 2015
Description: From the Debian LTS advisory:
The issue is due to a case of missing brackets in the patch '500-ssl.diff', which causes the execution of 'fclose(NULL)' and thus displays as a segmentation fault. The error appears while transmogrifying 'linux-ftpd' into 'linux-ftpd-ssl'.
mariadb: man-in-the-middle attack
Package(s): mariadb
CVE #(s): CVE-2015-3152
Created: July 6, 2015
Updated: August 20, 2015
Description: From the oCERT advisory:
A vulnerability has been reported concerning the impossibility for MySQL users (with any major stable version) to enforce an effective SSL/TLS connection that would be immune from man-in-the-middle (MITM) attacks performing a malicious downgrade. While the issue has been addressed in MySQL preview release 5.7.3 in December 2013, it is perceived that the majority of MySQL users are not aware of this limitation and that the issue should be treated as a vulnerability.
The vulnerability lies within the behaviour of the '--ssl' client option, which on affected versions it is being treated as "advisory". Therefore while the option would attempt an SSL/TLS connection to be initiated towards a server, it would not actually require it. This allows a MITM attack to transparently "strip" the SSL/TLS protection. The issue affects the ssl client option whether used directly or triggered automatically by the use of other ssl options ('--ssl-xxx') that imply '--ssl'.
mozilla: two vulnerabilities
Package(s): firefox thunderbird seamonkey nss
CVE #(s): CVE-2015-2721 CVE-2015-2730
Created: July 6, 2015
Updated: September 28, 2015
Description: From the Mageia advisory:
Security researcher Karthikeyan Bhargavan reported an issue in Network Security Services (NSS) where the client allows for a ECDHE_ECDSA exchange where the server does not send its ServerKeyExchange message instead of aborting the handshake. Instead, the NSS client will take the EC key from the ECDSA certificate. This violates the TLS protocol and also has some security implications for forward secrecy. In this situation, the browser thinks it is engaged in an ECDHE exchange, but has been silently downgraded to a non-forward secret mixed-ECDH exchange instead. As a result, if False Start is enabled, the browser will start sending data encrypted under these non-forward-secret connection keys (CVE-2015-2721).
Mozilla community member Watson Ladd reported that the implementation of Elliptical Curve Cryptography (ECC) multiplication for Elliptic Curve Digital Signature Algorithm (ECDSA) signature validation in Network Security Services (NSS) did not handle exceptional cases correctly. This could potentially allow for signature forgery (CVE-2015-2730).
ntp: denial of service
Package(s): ntp
CVE #(s): CVE-2015-5146
Created: July 7, 2015
Updated: September 9, 2015
Description: From the Arch Linux advisory:
Under limited and specific circumstances an attacker can send a crafted remote-configuration packet containing a NUL-byte to cause a vulnerable ntpd instance to crash; several specific conditions must all be true for the attack to succeed. A remote attacker is able to send a specially crafted remote-configuration packet that leads to an application crash, resulting in denial of service.
openssh: restriction bypass
Package(s): openssh
CVE #(s): CVE-2015-5352
Created: July 6, 2015
Updated: July 13, 2015
Description: From the Arch Linux advisory:
When forwarding X11 connections with ForwardX11Trusted=no, connections made after ForwardX11Timeout expired could be permitted and no longer subject to XSECURITY restrictions because of an ineffective timeout check in ssh coupled with "fail open" behaviour in the X11 server when clients attempted connections with expired credentials. This problem was reported by Jann Horn.
A remote attacker is able to bypass the XSECURITY restrictions when forwarding X11 connections by making use of an ineffective timeout check.
owncloud-client: man-in-the-middle attack
Package(s): owncloud-client
CVE #(s): CVE-2015-4456
Created: July 6, 2015
Updated: September 21, 2015
Description: From the Mageia advisory:
ownCloud Desktop Client before 1.8.2 was vulnerable against MITM attacks when used in combination with self-signed certificates.
pcre: information leak
Package(s): pcre
CVE #(s): CVE-2015-5073
Created: July 6, 2015
Updated: July 20, 2015
Description: From the Mageia advisory:
PCRE library is prone to a vulnerability which leads to Heap Overflow. During subpattern calculation of a malformed regular expression, an offset that is used as an array index is fully controlled and can be large enough so that unexpected heap memory regions are accessed.
php: multiple vulnerabilities
Package(s): php
CVE #(s): CVE-2015-4598 CVE-2015-4642 CVE-2015-4643 CVE-2015-4644
Created: July 6, 2015
Updated: August 27, 2015
Description: From the Mageia advisory:
Incorrect handling of paths with NULs (CVE-2015-4598). OS command injection vulnerability in escapeshellarg (CVE-2015-4642). Integer overflow in ftp_genlist() resulting in heap overflow (CVE-2015-4643). Segfault in php_pgsql_meta_data (CVE-2015-4644).
PHP has been updated to version 5.5.26, which fixes multiple bugs and potential security issues. Please see the upstream ChangeLog for details.
polkit: multiple vulnerabilities
Package(s): polkit
CVE #(s): CVE-2015-4625 CVE-2015-3256 CVE-2015-3255 CVE-2015-3218
Created: July 6, 2015
Updated: November 15, 2016
Description: From the Mageia advisory:
Local privilege escalation in polkit before 0.113 due to predictable authentication session cookie values (CVE-2015-4625). Various memory corruption vulnerabilities in polkit before 0.113 in the use of the JavaScript interpreter, possibly leading to local privilege escalation (CVE-2015-3256). Memory corruption vulnerability in polkit before 0.113 in handling duplicate action IDs, possibly leading to local privilege escalation (CVE-2015-3255). Denial of service issue in polkit before 0.113 which allowed any local user to crash polkitd (CVE-2015-3218).
pykerberos: insecure authentication
Package(s): pykerberos
CVE #(s): CVE-2015-3206
Created: July 3, 2015
Updated: August 27, 2015
Description: From the Debian advisory:
The python-kerberos checkPassword() method has been badly insecure in previous releases. It used to do (and still does by default) a kinit (AS-REQ) to ask a KDC for a TGT for the given user principal, and interprets the success or failure of that as indicating whether the password is correct. It does not, however, verify that it actually spoke to a trusted KDC: an attacker may simply reply instead with an AS-REP which matches the password he just gave you.
stunnel4: authentication bypass
Package(s): stunnel4
CVE #(s): CVE-2015-3644
Created: July 3, 2015
Updated: July 28, 2015
Description: From the Debian advisory:
Johan Olofsson discovered an authentication bypass vulnerability in Stunnel, a program designed to work as an universal SSL tunnel for network daemons. When Stunnel in server mode is used with the redirect option and certificate-based authentication is enabled with "verify = 2" or higher, then only the initial connection is redirected to the hosts specified with "redirect". This allows a remote attacker to bypass authentication.
wesnoth: information leak
Package(s): wesnoth
CVE #(s): CVE-2015-5069 CVE-2015-5070
Created: July 3, 2015
Updated: August 24, 2015
Description: From the Arch Linux advisory:
Wesnoth implements a text preprocessing language that is used in conjunction with its own game scripting language. It also has a built-in Lua interpreter and API. Both the Lua API and the preprocessor make use of the same function (filesystem::get_wml_location()) to resolve file paths so that only content from the user's data directory can be read. However, the function did not explicitly disallow files with the .pbl extension. The contents of these files could thus be stored in saved game files or even transmitted directly to other users in a networked game. Among the information that's compromised is a user-defined passphrase used to authenticate uploads to the game's content server.
CVE-2015-5069 and CVE-2015-5070 have been assigned to this vulnerability. Version 1.12.3 included a fix for CVE-2015-5069 only, remaining vulnerable to CVE-2015-5070. Versions 1.12.4 and 1.13.1 contain a more complete fix that addresses both.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.2-rc1, released on July 5. As Linus explains, 4.2 may, in the end, not end up being the development cycle with the most commits ever, but there is still a lot going on. "However, if you count the size in pure number of lines changed, this really seems to be the biggest rc we've ever had, with over a million lines added (and about a quarter million removed). That beats the previous champion (3.11-rc1) that was huge mainly due to Lustre being added to the staging tree." The source of the biggest chunk of those new lines is the new amdgpu graphics driver.
Stable updates: 3.14.47 and 3.10.83 were released on July 6. The 4.1.2, 4.0.8, 3.14.48, and 3.10.84 updates are in the review process as of this writing; they can be expected on or after July 10.
Quotes of the week
Kernel Summit 2015: Call for Proposals
The 2015 Kernel Summit will be held October 26-28 in Seoul, South Korea; the call for discussion proposals is out now. Now would be a good time for those who would like to attend the Summit to come up with a good topic and get the discussion going. Proposals are due by July 31.
Kernel development news
4.2 Merge window part 3
By the time Linus released 4.2-rc1 and closed the merge window on July 5, 12,092 non-merge changesets had been pulled into the mainline kernel repository. That makes 4.2, by your editor's reckoning (but not Linus's — see below), the busiest merge window in the kernel project's history, beating the previous record holder, 3.15, by 58 commits. Even so, Linus doesn't believe that 4.2 will end up being the busiest development cycle for a simple reason: we have gotten better at fixing our code before it goes into the mainline, so fewer fixes are required thereafter. If one assumes that 3.15 had a higher fix rate than 4.2 will, then 4.2 should fall short of 3.15's total.

Such ideas are relatively easy to explore using the numbers, so here is the history from the last few years or so, showing non-merge changesets for each kernel release:
Release   Merge window   Total   %fixes
v3.0       7333           9153   19.9
v3.1       7202           8693   17.2
v3.2      10214          11881   14.0
v3.3       8899          10550   15.6
v3.4       9248          10899   15.1
v3.5       9534          10957   13.0
v3.6       8587          10247   16.2
v3.7      10409          11990   13.2
v3.8      10901          12394   12.0
v3.9      10265          11910   13.8
v3.10     11963          13637   12.3
v3.11      9494          10893   12.8
v3.12      9479          10927   13.3
v3.13     10518          12127   13.3
v3.14     10622          12311   13.7
v3.15     12034          13722   12.3
v3.16     11364          12804   11.2
v3.17     10872          12354   12.0
v3.18      9711          11379   14.7
v3.19     11408          12617    9.6
v4.0       8950          10346   13.5
v4.1      10659          11916   10.5
v4.2      12092              ?      ?
Since the beginning of the 3.x series, the average kernel release has seen 13.6% of its changes pulled after the close of the merge window. In the time between the releases of 3.15 and 4.1, 71,416 changesets were merged, of which 8,452 — 11.8% — came outside of the merge window. So one might conclude that the amount of code arriving outside the merge window has fallen a bit in the last year. If the 11.8% rate holds this time around, 4.2 will finish with 13,709 changesets, 13 short of the total for 3.15.
So, it's possible that 3.15 will remain the busiest development cycle ever, but your editor must conclude that the jury is still out on this one.
In any case, the long-term trend is clear: over time, the kernel development community has indeed gotten better at merging code that does not require fixing later in the development cycle.
Final changes for 4.2
There were just over 1,200 non-merge changesets pulled into the mainline kernel repository since last week's summary. Among those were:
- Large x86-based systems can now defer the initialization of much of main memory, speeding the boot process.
- Some changes affecting how mounts of sysfs and /proc are managed have been merged. Subdirectories that are meant to serve as mount points (e.g. /sys/debug) are now marked as such, and mounts are limited to those directories. Beyond that, new rules have been added to ensure that new mounts of these filesystems (within a container, say) respect the mount flags used with existing mounts. The controversial enforcement of the noexec and nosuid flags has been removed for now, though.
- Synopsys DesignWare ARC HS38 processors are now supported. Other new hardware support includes Dell airplane-mode switches, TI TLC59108 and TLC59116 LED controllers, Maxim max77693 LED flash controllers, Skyworks AAT1290 LED controllers, Broadcom BCM6328 and BCM6358 LED controllers, Kinetic Technologies KTD2692 LED flash controllers, TI CDCE925 programmable clock synthesizers, Hisilicon Hi6220 clocks, STMicroelectronics LPC watchdogs, Conexant Digicolor SoC watchdogs, Dialog DA9062 watchdogs, and Weida HiTech I2C touchscreen controllers.
- The red-black tree implementation now supports "latched trees"; these maintain two copies of the tree structure in parallel and only modify one at a time. The end result is that non-atomic modifications can happen concurrently with lookups without creating confusion. See this commit for the implementation, and this one for some discussion of the latched technique. The first use of this technique is to accelerate module address lookups.
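The latch technique generalizes beyond red-black trees. In rough Python-flavored pseudocode (the kernel implementation is in C, where memory barriers and RCU do the real work; this sketch conveys only the structure of the idea):

    # Keep two copies of a structure; a writer updates one copy at a time,
    # bumping a sequence count so readers know which copy is stable.
    class Latched:
        def __init__(self, make_copy):
            self.copies = [make_copy(), make_copy()]
            self.seq = 0

        def modify(self, op):
            # Writers are assumed to be serialized externally.
            self.seq += 1            # odd: copies[0] is being modified
            op(self.copies[0])
            self.seq += 1            # even: copies[0] is stable again
            op(self.copies[1])       # bring the second copy up to date

        def lookup(self, query):
            while True:
                seq = self.seq
                # When seq is odd, copies[1] is the stable copy; when even, copies[0].
                result = query(self.copies[seq & 1])
                if seq == self.seq:  # no writer interfered; the result is consistent
                    return result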
If recent patterns hold (and Linus doesn't take any more ill-timed vacations), the final 4.2 release can be expected on August 23.
A postscript
Some readers may be wondering why this article claims that 4.2 had the busiest merge window ever, given that Linus said otherwise in the 4.2-rc1 release announcement.
The difference is that Linus is counting merge commits, while your editor does not. As mentioned above, there were 12,092 non-merge changesets pulled before 4.2-rc1, but that number grows to 12,809 changesets when merges are counted; that falls just short of the total (12,826) for 3.15-rc1. Your editor's reasoning for leaving out merges is that they mostly just represent the movement of patches from one branch to another and, thus, differ from "real" development work. No doubt others will have different opinions, though.
Restartable sequences
Concurrent code running in user space is subject to almost all of the same constraints as code running in the kernel. One of those is that cross-CPU operations tend to ruin performance, meaning that data access should be done on a per-CPU basis whenever possible. Unlike kernel code, though, user-space per-CPU code cannot enter atomic context; it, thus, cannot protect itself from being preempted or moved to another CPU. The restartable sequences patch set recently posted by Paul Turner demonstrates one possible solution to that problem by providing a limited sort of atomic context for user-space code.

Imagine maintaining a per-CPU linked list, and needing to insert a new element at the head of that list. Code to do so might look something like this:
new_item->next = list_head[cpu];
list_head[cpu] = new_item;
Such code faces a couple of hazards in a multiprocessing environment. If it is preempted between the two statements above, another process might slip in and insert its own new element; when the original process resumes, it will overwrite list_head[cpu], causing the loss of the item added while it was preempted. If, instead, the process is moved to a different CPU, it could mix up the two CPUs' lists or race with another process running on the original CPU; the result in either case would be a corrupted list and late-night phone calls to the developer.
These situations are easily avoidable by using locks, but locks are expensive even in the absence of contention. The same holds for atomic operations like compare-and-swap; they work, but the result can be unacceptably slow. So developers have long looked for faster alternatives.
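For comparison, a lock-free insertion built on compare-and-swap might look like the following sketch (the list type is invented for illustration); it is correct without locks, but every insertion pays for an atomic instruction whether or not there is any contention:

/* The compare-and-swap approach mentioned above: correct without
 * locks, but every insertion pays for an atomic instruction. */
#include <stdatomic.h>

struct item {
    struct item *next;
};

static void push_cas(_Atomic(struct item *) *list_head,
                     struct item *new_item)
{
    struct item *old = atomic_load(list_head);

    do {
        new_item->next = old;
        /* Retries if another thread changed the head meanwhile;
         * "old" is reloaded automatically on failure. */
    } while (!atomic_compare_exchange_weak(list_head, &old, new_item));
}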
The key observation behind restartable sequences is that the above code shares a specific feature with many other high-performance critical sections, in that it can be divided into two parts: (1) an arbitrary amount of setup work that can be thrown away and redone if need be, and (2) a single instruction that "commits" the operation. The first line in that sequence:
new_item->next = list_head[cpu];
has no visible effect outside the process it is executing in; if that process were preempted after that line, it could just execute it again and all would be well. The second line, though:
list_head[cpu] = new_item;
has effects that are visible to any other process that uses the list head. If the executing process has been preempted or moved in the middle of the sequence, that last line must not be executed lest it corrupt the list. If, instead, the sequence has run uninterrupted, this assignment can be executed with no need for locks or atomic instructions. That, in turn, would make it fast.
A restartable sequence as implemented by Paul's patch is really just a small bit of code stored in a special region of memory; that code implements both the setup and commit stages as described above. If the kernel preempts a process (or moves it to another CPU) while the process is running in that special section, control will jump to a special restart handler. That handler does whatever is needed to restart the sequence; often (as it would be in the linked-list case) it's just a matter of going back to the beginning and starting over.
The sequence must adhere to some restrictions; in particular, the commit operation must be a single instruction and code within the special section cannot invoke any code outside of it. But, if it holds to the rules, a restartable sequence can function as a small critical section without the need for locks or atomic operations. In a sense, restartable sequences can be thought of as a sort of poor developer's transactional memory. If the operation is interrupted before it commits, the work done so far is simply tossed out and it all restarts from the beginning.
Paul's patch adds a new system call:
int restartable_sequences(int op, int flags, long val1, long val2, long val3);
There are two operations that can be passed as the op parameter:
- SYS_RSEQ_SET_CRITICAL sets the critical region; val1
and val2 are the bounds of that region, and val3 is
a pointer to the restart handler (which must be outside of the
region).
- SYS_RSEQ_SET_CPU_POINTER specifies a location (in val1) of an integer variable to hold the current CPU number. This location should be in thread-local storage; it allows each thread to quickly determine which CPU it is running on at any time.
The CPU-number pointer is needed so that code in the critical section can quickly get to the correct per-CPU data; indeed, the restart handler will not be called until this pointer has been set. Only one region for restartable sequences can be established (but it can contain multiple sequences if the restart handler is smart enough), and the region is shared across all threads in a process.
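To make the setup concrete, a thread using this proposed interface might do something like the sketch below. The operation names come from the patch as described above, but the syscall number and constant values are placeholders, and the critical section itself (bounded here by hypothetical rseq_start and rseq_end symbols) would have to be written in assembly:

/* Hypothetical setup for the proposed interface; this syscall is
 * not in mainline, so the syscall number and the constant values
 * below are placeholders, not values from any real header. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

#define __NR_restartable_sequences 326   /* placeholder number */
#define SYS_RSEQ_SET_CRITICAL        0   /* placeholder value  */
#define SYS_RSEQ_SET_CPU_POINTER     1   /* placeholder value  */

extern const char rseq_start[], rseq_end[];  /* asm region bounds */
extern void rseq_restart(void);              /* restart handler   */

static __thread int rseq_cpu = -1;  /* kept current by the kernel */

static int rseq_setup(void)
{
    /* Register the critical region and its restart handler;
     * only one such region is allowed per process. */
    if (syscall(__NR_restartable_sequences, SYS_RSEQ_SET_CRITICAL, 0,
                (long)rseq_start, (long)rseq_end, (long)rseq_restart))
        return -1;

    /* Per thread: tell the kernel where to keep the CPU number;
     * the restart handler is only armed once this has been done. */
    return syscall(__NR_restartable_sequences, SYS_RSEQ_SET_CPU_POINTER,
                   0, (long)&rseq_cpu, 0, 0);
}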
Paul notes that Google is using this code internally now; it was also discussed at the Linux Plumbers Conference [PDF] in 2013. He does not believe it is suitable for mainline inclusion in its current form, though. The single-region limitation does not play well with library code, the critical section must currently be written in assembly, and the interactions with thread-local storage are painful. But, he thinks, it is a reasonable starting place for a discussion on how a proper interface might be designed.
Paul's patch is not the only one in this area; Mathieu Desnoyers posted a patch set with similar goals back in May. Given Linus's reaction, it's safe to say that Mathieu's patch will not be merged anytime soon, but Mathieu did achieve his secondary goal of getting Paul to post his patches. In any case, there is clearly interest in mechanisms that can improve the performance of highly concurrent user-space code, so we will almost certainly see more patches along these lines in the future.
Deferred memory locking
The mlock() and mlockall() system calls are charged with locking a portion (or all) of a process's address space into physical memory. The most common use cases for this functionality are avoiding the latency of a page fault in situations where it cannot be afforded and protecting sensitive data (cryptographic keys, say) from being written out to the swap device. Both system calls assume that the caller wants all of the memory present and locked immediately, but that may not always be the case. As a result, we are likely to see new versions of the memory-locking system calls in the near future.
The idea that a user who has requested the locking of a range of memory doesn't actually want it locked now may seem a little strange; that is what mlock() and mlockall() were created for, after all. The problem with immediate locking, as described by Eric Munson in his patch set, is that faulting in and locking a large address range can take a long time, and much of that time may be wasted if the calling process never actually uses much of that memory. If the cost of a page fault on the first access to a given page is not an issue, deferring the population and locking of a memory range can be a useful way to improve performance.
The cryptographic use case is one where deferred locking might make sense: the buffer to be locked may need to be able to handle a large worst case, but, most of the time, the portion of the buffer that's actually used is quite a bit smaller. If the pages that make up that buffer could only be locked after they are first faulted in, the objective of preventing writeout to the swap device will be met with lower overhead overall. Eric also mentions programs that use small parts of a large buffer, but which cannot know from the outset which parts will be used.
The solution in both cases is to modify mlock() so that it does not fault in all of the pages in the indicated address range. Instead, the range is simply marked as "lock on fault." Whenever a page within that range is faulted in, it will be locked from then on.
The problem is that mlock() has this prototype:
int mlock(const void *addr, size_t len);
There is no way to tell the kernel to not fault the pages in immediately. The natural response is to create a new system call that has a feature that arguably should have been present in mlock() in the first place: a "flags" argument:
int mlock2(const void *addr, size_t len, int flags);
The flags argument has two possibilities: MLOCK_LOCKED (to fault in the pages immediately) or MLOCK_ONFAULT (which only locks pages once they are faulted in). Exactly one of those flags must be present in any mlock2() call.
The mlockall() system call does already have a flags argument; the new MCL_ONFAULT flag has been added to request the new behavior via that interface. There is also a new flag (MAP_LOCKONFAULT) that can be used to get locked-on-fault behavior when creating an address range with mmap().
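As a sketch of how the proposed interface might be used for the large-buffer case described above (the mlock2() declaration and the flag value are placeholders, since nothing has been merged yet):

/* Sketch of lock-on-fault usage; mlock2() has not been merged,
 * so this declaration and the MLOCK_ONFAULT value are placeholders. */
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define MLOCK_ONFAULT 0x02               /* placeholder value */

extern int mlock2(const void *addr, size_t len, int flags);  /* proposed */

/* Reserve a large worst-case buffer, but let each page be locked
 * only when it is first faulted in. */
static void *alloc_locked_on_fault(size_t len)
{
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return NULL;

    if (mlock2(buf, len, MLOCK_ONFAULT) != 0) {
        munmap(buf, len);
        return NULL;
    }
    return buf;   /* pages are locked as they are touched */
}

The same effect could presumably be had in a single step by passing the new MAP_LOCKONFAULT flag to mmap() directly.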
Eric's patch set adds new versions of the corresponding unlock system calls:
int munlock2(const void *addr, size_t len, int flags);
int munlockall2(int flags);
These system calls have the effect of clearing the given flags; the actual unlocking of memory is a side effect if all the flags are cleared. If a region has been locked with MLOCK_ONFAULT, one can call:
munlock2(addr, len, MLOCK_ONFAULT);
to cancel the on-fault locking in the future while leaving currently locked pages in place, or:
munlock2(addr, len, MLOCK_LOCKED|MLOCK_ONFAULT);
to unlock the address range entirely. It is not entirely clear (to your editor, at least) what will happen if munlock2() is called with just the MLOCK_LOCKED flag in this situation. Similar things can be done with munlockall2(); in this case, it is also possible to clear existing flags like MCL_FUTURE.
This patch set has been through a few iterations over the last few months. It has taken Eric a bit of work to convince reviewers of the value of this functionality; review comments also led to the addition of the new system calls (as opposed to just the new mmap() and mlockall() flags). This patch set has found its way into the -mm tree, which is a good sign that it's likely to head toward the mainline sometime in the relatively near future.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
The value of specs
The value of design specifications ("specs") for open-source projects is something of an open question. Some projects, with the Linux kernel perhaps being the most prominent, eschew specs in favor of code. Other projects, such as various OpenStack sub-projects, have a fairly heavyweight process that requires specs for most proposed features. In a recent openstack-dev discussion, the value of requiring specs for the Nova compute component was called into question.
Nova manages the compute resources for an OpenStack cloud. Those resources are in the form of different kinds of virtual machines (VMs) from hypervisors such as KVM, Xen, VMware, and Hyper-V or from container technology like LXC or Docker.
That discussion started with a June 24 post from Nikola Đipanov that was rather negative about the whole spec process:
Đipanov cited a few examples, encountered just that week, where he saw the spec process go wrong. He noted that a big part of the problem is that Nova is so large and tightly coupled that a heavyweight process has been put in place to, effectively, slow or stop changes from being made. It is the tight coupling that needs to be addressed, he argued, but the process itself is preventing that from happening.
But Daniel Berrange and others did not see things quite that way, with Berrange pointing out that the situation was far worse before the Nova project adopted specs:
New OpenStack features always require a blueprint in Launchpad, but many components, including Nova, have adopted a requirement that most features need specs based on a project-specific template. Blueprints are typically a much simpler statement of the problem to be solved, while specs require a great deal more detail, including design information, use cases, impacts, and more. In addition, new features are only approved for a single six-month development cycle; if they spill over into the next cycle, they must be re-reviewed and approved again.
While specs have made things much better, there are still a number of
problems with the process, Berrange said. It is too rigid and
bureaucratic, and too many features are being pushed into the spec process
that could simply be handled with just a blueprint. Also, tying the
spec review and approval schedule to that of the overall development cycle
is counterproductive: "We should be willing to accept and
review specs at any point in any cycle, and once approved they should
remain valid for a prolonged period of time - not require us to go
through re-review every new dev cycle as again that's just creating
extra burden.
" In addition, as Đipanov also noted, there is only a
subset
of the
core team (which is "already faaaar too small
", Berrange said)
that can approve specs, which creates further bottlenecks.
Others strongly agreed that specs have made things better, but some questioned whether the Gerrit code-review tool was the best mechanism for reviewing specs. One of the reasons for requiring specs (and placing them into Git repositories so they could be reviewed via Gerrit) was the inability to comment on blueprints in Launchpad. But code-review tools foster a line-by-line approach, which is not optimal for reviewing specs, as Technical Committee chair Thierry Carrez noted:
Part of the problem is that the spec template is overkill for many features, Carrez said. It would be better to start small and build more into a spec as it gets reviewed:
You *can* do this with Gerrit: discourage detail review + encourage idea review, and start small and develop the document in future patchsets as-needed. It's just not really encouraging that behavior for the job, and the overhead for simple features still means we can't track smallish features with it. As we introduce new tools we might switch the "feature approval" process to something else. In the mean time, my suggestion would be to use smaller templates, start small and go into details only if needed, and discourage nitpicking -1s.
Đipanov agreed with Carrez's ideas, and suggested that investigating other tools might be in order. On the other hand, Kyle Mestery noted that the Neutron networking component had recently switched from a heavyweight spec-based process to one that uses "request for enhancement" (RFE) bugs instead. The reasons behind the switch, as outlined in a blog post from Mestery, sound rather similar to the complaints heard in the Nova thread. So far, that switch is working out well, Mestery said.
The RFE process was also championed by Adam
Young. He strongly agreed with Đipanov that Gerrit was not the proper tool
for the job and suggested that keeping the documentation with the code (and
keeping
them both in sync) would avoid "bike shedding about Database
schemas
". But Berrange said that
hearkened back to the days before specs for Nova, which "really didn't work at all - code reviews are too late in the workflow
to start discussions around the design, as people are already invested
in dev work at that point and get very upset when you then tell them
to throw away their work
".
But Young is fairly adamant that the spec process is holding back progress in the code:
On the other hand, the spec process is not necessarily the real bottleneck, as James Bottomley pointed out; review bandwidth will not magically increase simply by removing specs from the process. He is concerned that precious reviewer time may be wasted on things that should already have been accepted (because they are an obvious bug fix, say) or rejected (for bogus code). Reducing the number of reviews required to get to a resolution is the way to stretch review resources.
There is another, possibly overlooked, advantage to the spec process that
Tim Bell raised: it allows operators and
other users without Python knowledge to "give input on the overall approach being taken
". If commenting is left
until code review time, it leaves out those who aren't able to read the
code—and who may have important thoughts based on running OpenStack in
production. Đipanov acknowledged that, but is still concerned
about the weight of the process for many of the features proposed for Nova.
In a summary post, Đipanov outlined the positives and negatives with regard to the Nova process that had emerged from the discussion. The strident "specs don't work" attitude from his initial post is replaced with a more even-handed view. The post also makes some concrete suggestions for moving forward.
Full-blown specs should not be required from the outset, he suggested. Instead, a simpler blueprint that is mirrored into the repository could be used and a spec should only be created if multiple core team members (or a larger number of contributors) request one (by making a negative vote on the blueprint). In addition, feature approval should not necessarily expire when a release is made—expiration should strictly be for the specific features that require it. Lastly, new tools should be considered that would facilitate a more nimble process, perhaps along the lines of what Carrez described.
At some level this is a struggle between those of a more "agile" mindset and those who are more process-oriented. It seems that there is broad agreement that improvements are needed to the current Nova development process, but where and how those changes come is not yet clear. The OpenStack project, though, has multiple components, each with its own process, that can be studied to see what works and what doesn't—and why. Beyond that, sub-projects like Nova can also look at the wider free-software world for ideas. A bit of observation and iteration is likely all that is required to find some useful improvements to the Nova development process.
Brief items
Distribution quotes of the week
openSUSE Leap 42.x
We felt that Leap, with reference to motion, i.e. how the distribution moves forward, provides a nice contrast to Tumbleweed. It also represents that we are taking a leap to get there.
Happy 2nd Epoch CoreOS Linux
CoreOS celebrates its second birthday with an alpha release. "Two years ago we started this journey with a vision of improving the consistency, deployment speed and security of server infrastructure. In this time we have kicked off a rethinking of how server OSes are designed and used."
Distribution News
Debian GNU/Linux
Debian to switch back to ffmpeg
After nearly a year of consideration, the Debian project has decided to switch back to the ffmpeg multimedia library at the expense of its fork libav. See this wiki page for a summary of the current reasoning behind the switch.
Preparing for GCC 5/libstdc++6 (and GCC 6)
Matthias Klose notes that GCC 5 will soon be the default compiler in Debian sid. "Compared to earlier version bumps, the switch to GCC 5 is a bit more complicated because libstdc++6 sees a few ABI incompatibilities, partially depending on the C++ standard version used for the builds. For some C++11 language requirements, changes on some core C++ classes are needed, resulting in an ABI change."
Ubuntu family
Ubuntu 14.10 (Utopic Unicorn) reaches End of Life
Ubuntu has announced that version 14.10 (Utopic Unicorn) will reach its end of support on July 23. The supported upgrade path is via Ubuntu 15.04.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 617 (July 6)
- Ubuntu Weekly Newsletter, Issue 424 (July 5)
Page editor: Rebecca Sobol
Development
Self-hosting projects with Gogs
In May, we noted the problems that GIMP and other free-software projects have encountered of late with the SourceForge project-hosting service. While there are plenty of alternative hosting providers to choose from, some developers will likely always prefer to self-host their projects—precisely because an outside service provider can make just such an abrupt or surprising about-face. Gogs is one option for those taking the self-hosting approach: it provides a web-based front-end to a GitHub-like hosting service. Gogs offers quite a few features, but its choice of GitHub-like qualities may not be to everyone's tastes.
When it comes to providing a globally visible home for a software project, one can certainly make do with gitweb or cgit running on a web server. They provide a quick overview of the latest changes in the codebase, a way to browse through a repository, project history, statistics, and more. But the reason that heavyweight hosting services like GitHub have taken off is that they integrate support for many of the common tasks that accompany managing a project but which are not tackled directly by Git itself.
These features include core processes like issue tracking, code review, and release management. In addition, though, project hosting sites add a social dimension: the ability to form teams of developers, to link between repositories, and to subscribe to updates from a particular repository or user account.
![Gogs dashboard](https://static.lwn.net/images/2015/07-gogs-dashboard-sm.png)
Gogs is a Git-hosting package that intentionally clones the look and feel of GitHub, but can be installed and run on a personal server. Since early 2014, it has been developed on GitHub itself—at some point, presumably, it may well be self-hosted, but for now the benefits of GitHub's large user base are key. The software is written in Go (the name of the project stemming from "GO Git Service"), and it can use MySQL, PostgreSQL, or SQLite as a database backend. Installable packages are available (in addition to source); with the latest Go runtime installed (version 1.4.2), one can get up and running with Gogs in a matter of minutes.
The most recent release is version 0.6.1 from March 26. Historically, the project has released a major (i.e., non-bugfix) release every six months or so, although it has not published a long-term roadmap detailing where development is headed. As it stands now, Gogs supports multiple user accounts, "team" organizations, and most of the core feature set for managing a Git repository as one would at GitHub. That includes issue tracking (complete with labeling and user assignment), milestones, releases, branch management, and even mirroring or one-click forking of existing repositories. On the social side, users can watch and "star" (i.e., favorite) repositories and follow an automatically generated feed of project or user activity.
That leaves several GitHub features unaddressed, though. Creating pull requests is not yet supported, nor is sharing code snippets (as in GitHub's gists). There are also no advanced project-hosting features like wikis or GitHub Pages, and Gogs lacks GitHub's pervasive statistics tools, which automatically generate charts and graphs from user and repository activity. Most of these features have an open feature request on the Gogs issue tracker, which underscores the project's apparent interest in closely following GitHub's lead.
I installed the 0.6.1 release from source on my desktop machine. There is little to say about the installation process, except that one must be running the latest and greatest Go runtime, which generally means using packages directly from golang.org rather than what is provided by the distribution package manager. Gogs offers a dead-simple setup and configuration story for new users. There is only the single gogs binary to worry about (plus the database associated with it).
![Viewing a commit in Gogs](https://static.lwn.net/images/2015/07-gogs-commit-sm.png)
For running in a production environment, the project has a generic systemd service file available separately. Starting with the 0.6.0 release in March, the main configuration parameters are embedded into the binary, rather than being kept in a separate file. However, it is still possible to override parameters with a customization file.
In my tests, Gogs runs admirably fast, even when importing or cloning a remote Git repository from elsewhere. Naturally, one should expect performance to slow down when dealing with a large number of simultaneous users and repositories, but that may not be a significant problem in reality. Gogs has historically been aimed at people who do not want to host their code on GitHub (prioritizing small installations, Raspberry Pi support, and so on), and has only recently begun tackling enterprise features. So it arguably will not be used by large organizations at all. Large projects or groups looking to build a self-hosted GitHub clone have other options already (GitLab, for example, which emphasizes its large-organization support through features like clustering and scalability).
On the other hand, that distinction makes some of the features prioritized by the Gogs team puzzling. Gogs supports a wealth of social-networking features: users can add multiple email addresses, social media accounts, and personal information to their user account. But surely these features are less important to the average self-hosting developer than gists or the ability to add a project wiki—support for those two features is the subject of multiple feature-requests (and discussions) on the issue tracker. Git itself has built-in support for identifying users by name and email address (data which is automatically picked up by Gogs); in light of that, the choice to build additional "social" features first can be frustrating.
It is hard to know when new features of the functional variety might be expected, since the Gogs team has not formally established a roadmap. But it is possible that outside developers could help fill in some of the gaps. Some of the more interesting features available to projects on GitHub are actually third-party services like the Travis continuous integration system. Gogs supports web hooks for triggering external services; perhaps hooking a self-hosting tool like Gogs up to another system (say, Gerrit) is a better option than trying to re-implement GitHub's code-review features.
But that train of thought can quickly get off track. There is a subset of the free-software community that cares about self-hosting services because they are perceived to be more resilient against privacy invasion and against "friendly" service providers suddenly changing their tune. To really satisfy those goals, however, one would need a project-hosting tool that was completely decentralized and federated.
Gogs is not that tool; if a dozen projects started self-hosting their code on Gogs installations, a user would need to create a dozen separate accounts to file issues or otherwise interact with those projects. It is lamentable, perhaps, but the overhead that requires is part of what makes centralized services look so appealing by contrast.
That issue aside, Gogs does provide a fast and simple way to put a Git-based project repository online and give it a more appealing "front door" than simple web front ends like cgit or gitweb. That will, no doubt, be a valuable service to a great many independent developers or small teams. It has plenty of room to grow; time will tell whether it manages to find its own identity or simply provides an easy alternative to GitHub.
Brief items
Quotes of the week
Firefox 39 released
Firefox 39 has been released for both desktop and mobile systems. The new features include a social sharing tool for the Firefox Hello video chat subsystem. It is designed to make it easier to share Firefox Hello chat invitations over third-party social networks. In addition, Firefox's existing phishing-and-malware detection tool has been extended to cover downloads, support has been added for Unicode 8.0's multi-ethnic emoji characters, and there is improved support for the Accessible Rich Internet Applications (ARIA) standard.
ownCloud 8.1 released
The ownCloud 8.1 release is out. "This release marks significant under the hood improvements, such as increasing scalability and performance of syncing and file operations while making ownCloud a better platform for developers to build upon. Security enhancements, integrated documentation links, more control in the admin panel over external storage, LDAP and encryption make ownCloud more secure and easier to use." See the release notes for details.
sendmail 8.15.2
Version 8.15.2 of the sendmail mail agent is out. Changes include a number of IPv6-related fixes, some security improvements, and more.
Newsletters and articles
Development newsletters from the last week
- What's cooking in git.git (July 1)
- What's cooking in git.git (July 7)
- LLVM Weekly (July 6)
- OCaml Weekly News (July 7)
- OpenStack Community Weekly Newsletter (July 3)
- Perl Weekly (July 6)
- PostgreSQL Weekly News (July 6)
- Python Weekly (July 2)
- Ruby Weekly (July 2)
- This Week in Rust (July 6)
- Tor Weekly News (July 2)
- Wikimedia Tech News (July 6)
Page editor: Nathan Willis
Announcements
Brief items
Microsoft Now OpenBSD Foundation Gold Contributor
The OpenBSD Foundation has announced that Microsoft has made a significant financial donation to the Foundation. "This donation is in recognition of the role of the Foundation in supporting the OpenSSH project. This donation makes Microsoft the first Gold level contributor in the OpenBSD Foundation's 2015 fundraising campaign."
Articles of interest
Free Software Supporter - Issue 87, July 2015
This edition of the Free Software Foundation newsletter covers international trade agreements, introducing Adam Leibson, Historical Permission Notice and Disclaimer added to license list, Email Self-Defense guide, MediaGoblin 0.8.0, introducing Stephen Mahood, meet the DRM drones, and much more.
FSFE Newsletter - July 2015
The July issue of the Free Software Foundation Europe newsletter looks at "FSFE pokes the European Commission on its transparency commitment", "TiSA: intransparent treaty might prevent digital sovereignty", and several other topics.
Calls for Presentations
PyCon Ireland 2015 Call for Proposals
PyCon Ireland will take place October 24-25 in Dublin. The deadline for talk proposals is July 31.
PyCon HK 2015: Call For Proposals and Sponsorship
PyCon Hong Kong will take place November 7-8 in Hong Kong. The call for proposals closes August 16. Early-bird tickets are on sale and a call for sponsors is also open.
linux.conf.au CFP extended
The organizers of linux.conf.au 2016 (Geelong, February 1-5) have, as is traditional, extended the deadline for proposals to speak at the event. The new deadline is August 2. "However, we think there are more stories out there that deserve attention. Originally scheduled to be closed today, the papers committee has agreed to extend the deadline, just to give everyone a little extra time to have their voice heard."
CFP Deadlines: July 9, 2015 to September 7, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
July 15 | October 8-October 9 | CloudStack Collaboration Conference Europe | Dublin, Ireland |
July 15 | October 27-October 30 | OpenStack Summit | Tokyo, Japan |
July 15 | November 6-November 8 | Jesień Linuksowa 2015 | Hucisko, Poland |
July 17 | October 2-October 3 | Ohio LinuxFest 2015 | Columbus, OH, USA |
July 19 | September 25-September 27 | PyTexas 2015 | College Station, TX, USA |
July 24 | September 22-September 23 | Lustre Administrator and Developer Workshop 2015 | Paris, France |
July 31 | October 26-October 28 | Kernel Summit | Seoul, South Korea |
July 31 | October 24-October 25 | PyCon Ireland 2015 | Dublin, Ireland |
July 31 | November 3-November 5 | EclipseCon Europe 2015 | Ludwigsburg, Germany |
August 2 | February 1-February 5 | linux.conf.au | Geelong, Australia |
August 2 | October 21-October 22 | Real Time Linux Workshop | Graz, Austria |
August 2 | August 22 | FOSSCON 2015 | Philadelphia, PA, USA |
August 7 | October 27-October 30 | PostgreSQL Conference Europe 2015 | Vienna, Austria |
August 9 | October 8-October 9 | GStreamer Conference 2015 | Dublin, Ireland |
August 10 | September 2-September 6 | End Summer Camp | Forte Bazzera (VE), Italy |
August 10 | October 26 | Korea Linux Forum | Seoul, South Korea |
August 14 | November 7-November 9 | PyCon Canada 2015 | Toronto, Canada |
August 16 | November 7-November 8 | PyCON HK 2015 | Hong Kong, Hong Kong |
August 17 | November 19-November 21 | FOSSETCON 2015 | Orlando, Florida, USA |
August 19 | September 16-September 18 | X.org Developer Conference 2015 | Toronto, Canada |
August 24 | October 19-October 23 | Tcl/Tk Conference | Manassas, VA, USA |
August 31 | November 21-November 22 | PyCon Spain 2015 | Valencia, Spain |
August 31 | October 19-October 22 | Perl Dancer Conference 2015 | Vienna, Austria |
August 31 | November 5-November 7 | systemd.conf 2015 | Berlin, Germany |
August 31 | October 9 | Innovation in the Cloud Conference | San Antonio, TX, USA |
August 31 | November 10-November 11 | Open Compliance Summit | Yokohama, Japan |
September 1 | October 1-October 2 | PyConZA 2015 | Johannesburg, South Africa |
September 6 | October 10 | Programistok | Białystok, Poland |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
EuroPython 2015 Keynote: Holger Krekel
Holger Krekel is a confirmed keynote speaker at EuroPython. His talk will be on July 22, day 3 of the seven-day conference. "In this talk, Holger will discuss the recent rise of immutable state concepts in languages and network protocols."
Events: July 9, 2015 to September 7, 2015
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 4-July 10 | Rencontres Mondiales du Logiciel Libre | Beauvais, France |
July 6-July 12 | SciPy 2015 | Austin, TX, USA |
July 7-July 10 | Gophercon | Denver, CO, USA |
July 15-July 19 | Wikimania Conference | Mexico City, Mexico |
July 18-July 19 | NetSurf Developer Weekend | Manchester, UK |
July 20-July 24 | O'Reilly Open Source Convention | Portland, OR, USA |
July 20-July 26 | EuroPython 2015 | Bilbao, Spain |
July 25-July 31 | Akademy 2015 | A Coruña, Spain |
July 27-July 31 | OpenDaylight Summit | Santa Clara, CA, USA |
July 30-July 31 | Tizen Developer Summit | Bengaluru, India |
July 31-August 4 | PyCon Australia 2015 | Brisbane, Australia |
August 7-August 9 | GUADEC | Gothenburg, Sweden |
August 7-August 9 | GNU Tools Cauldron 2015 | Prague, Czech Republic |
August 8-August 14 | DebCamp15 | Heidelberg, Germany |
August 12-August 15 | Flock | Rochester, New York, USA |
August 13-August 17 | Chaos Communication Camp 2015 | Mildenberg (Berlin), Germany |
August 15-August 22 | DebConf15 | Heidelberg, Germany |
August 15-August 16 | I2PCon | Toronto, Canada |
August 15-August 16 | Conference for Open Source Coders, Users, and Promoters | Taipei, Taiwan |
August 16-August 23 | LinuxBierWanderung | Wiltz, Luxembourg |
August 17-August 19 | LinuxCon North America | Seattle, WA, USA |
August 19-August 21 | Linux Plumbers Conference | Seattle, WA, USA |
August 19-August 21 | KVM Forum 2015 | Seattle, WA, USA |
August 20-August 21 | Linux Security Summit 2015 | Seattle, WA, USA |
August 20 | Tracing Summit | Seattle, WA, USA |
August 20-August 21 | MesosCon | Seattle, WA, USA |
August 21 | Unikernel Users Summit at Texas Linux Fest | San Marcos, TX, USA |
August 21 | Golang UK Conference | London, UK |
August 21-August 22 | Texas Linux Fest | San Marcos, TX, USA |
August 22-August 23 | Free and Open Source Software Conference | Sankt Augustin, Germany |
August 22 | FOSSCON 2015 | Philadelphia, PA, USA |
August 28-September 3 | ownCloud Contributor Conference | Berlin, Germany |
August 29 | EmacsConf 2015 | San Francisco, CA, USA |
September 2-September 6 | End Summer Camp | Forte Bazzera (VE), Italy |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol