By Jake Edge
July 10, 2013
As with most things in software development, there are trade-offs inherent in
the length of the release cycle for a project. Shorter cycles mean more
frequent releases, with features getting into the hands of users more
quickly. But they also mean that whatever overhead there is in creating a
release is replicated more times. Finding a balance between the two can be
difficult and projects have tried various lengths over the years.
Currently, the KDE project is discussing a proposal to
move from six months between releases down to three. It has sparked a
fairly thoughtful discussion of some of the pros, cons, and logistics of
making that kind of switch.
Àlex Fiestas announced the proposal on the
kde-core-devel mailing list, looking for feedback prior to a "Birds of a
Feather" (BoF) session he has scheduled for the upcoming Akademy conference. In a nutshell,
the idea is to shrink the current release cycle down to three months, with
two months for feature merging and one for release preparation and
testing. Instead of multiple freezes for different elements (features,
dependencies, messages, artwork, documentation, ...) of the KDE
Software Collection (SC), all freezes would happen at the same time:
roughly a month before release. A look at the schedule
for the in-progress 4.11 release will show a rather more complicated set of
freezes, for example.
Several advantages are listed in the proposal. With more frequent
releases, it will be easier for distributions to pick up the most recent
code. Right now, a distribution that happens to have a release at an
inconvenient time in the KDE SC release cycle will have to ship a nearly
six-month-old major release. The proposal would cut that down. In
addition, a three-month cycle will necessarily have fewer changes than a
longer cycle would, which means fewer bugs and easier testing, at least in
theory.
The other benefit listed is actually more of a prerequisite for making the
change: "Master will be always in a releasable state". One of
the objections raised to the plan was that
it would reduce the amount of time available for feature development. It
would seem that some KDE projects do their feature development on the
master branch, which then needs to stabilize prior to a release. As Thomas
Lübking (and others) pointed out, though, the
solution is to do feature development on a topic branch, rather than the
master.
There are reasons that some projects do feature work on the master, however. For
one thing, it is generally easier to get users to test new features when
they are on the master branch, rather than some fast-moving development
branch that may be broken on a fairly regular basis. Handling that problem
is simply a matter of creating an integration branch for users to test, as
Martin Gräßlin noted. In addition, Aurélien
Gâteau explained why a topic-branch-based workflow
generally works
better.
Part of the
difficulty may be that some KDE projects are still coming up to speed with
Git, which KDE has been migrating to over the last few years. Subversion's
painful branch management may have led projects into using the master (or
"trunk") for development over the years. The final shutdown of the
Subversion servers has not yet occurred, but it is scheduled
for January 1, so projects either have switched or will soon.
More frequent releases might result in an undesirable dilution of the
impact of those releases, Nuno Pinheiro said. Sven Brauch expanded on that, noting, with a bit of
hyperbole, that the frequent
Firefox releases (every six weeks) had made each one less visible: "I think
attention in the media to their releases has declined a lot -- nobody
cares any more that a new version of firefox was released since it
happens every three days." It is, he said, something to consider,
though it shouldn't be an overriding criterion.
The impact on distributions was also discussed. Kubuntu developer Philip
Muskovac was concerned about long-term
support for KDE SC releases, especially with regard to the upcoming 14.04
LTS for the Ubuntu family. He noted that, depending on how things go
(presumably with Mir), whichever KDE release goes into the LTS "might very well be our last release based on
KDE4/Qt4/X11". It will need to be supported for three years, and three-month cycles mean fewer minor releases, all of which may add up
to a problem for Kubuntu. He suggested creating better tools to allow
distributions to find "stable" fixes in newer releases—something that
Fiestas seemed
amenable to providing.
Kubuntu developer Scott Kitterman was more
positive about the impact of a three-month KDE cycle on the
distribution. He, too, is concerned about having fewer minor releases
available ("we ship all the point releases to our end
users and appreciate the added stability they provide"), but thinks
that's probably the biggest hurdle for Kubuntu. If a solution can be found
for that problem, he thought the distribution could handle the change,
though he clearly noted
that was his opinion only.
On behalf of the
openSUSE KDE packagers, Luca Beltrame posted concerns over the amount of "extra" work that
would be required to keep up with packaging KDE SC every three months.
There is also a support burden when trying to handle multiple different
major versions, he said. Fiestas asked Beltrame
(and distribution packagers in general) what KDE could do to make it easier
to package up the project. He noted that figuring out the dependencies for
each new release is a pain point mentioned by multiple distributions: "Can't we coordinate on that so
everybody life is easier?" Fiestas's approach seems to be one of trying
to identify—and remove—the obstacles in the way of the proposal.
In a lengthy message, Aaron Seigo touched
on a number of the topics in the thread. He noted that there is nothing
particularly special about three months, and other intervals should be
considered (he mentioned two or four months). He also pointed out that the
marketing and visual design cycles need not coincide with those of the
software development. The project could, for example, do a visual refresh
yearly, while doing twice-yearly public relations pushes to highlight the
latest KDE software features. Other arrangements are possible too, of course.
Seigo did note something that was quite apparent in the thread: there were few, if any,
flames and essentially no bikeshedding. A potentially contentious topic
has, at least so
far, been handled by "thoughtful input". Whether or not the
proposal is adopted, "the discussion has been very valuable
already", he said. More of that will undoubtedly take place at
Akademy in
Bilbao, Spain, later in July, and it will be interesting to see where it leads.
Comments (9 posted)
By Nathan Willis
July 10, 2013
As has been widely reported already, Google discontinued Reader,
its RSS and Atom feed-reading tool, at the beginning of July. In the
weeks preceding the shutdown, scores of replacement services popped up
hoping to attract disgruntled Reader refugees. But most of them
focused squarely on the news-delivery features of Reader; a closer
look illustrates several additional lessons about the drawbacks of web
services—beyond the simple question of where one's data is
stored.
Takeout, again?
First, Google had advertised that users would be able to
extract their account information from Reader ahead of the
shutdown. But the reality is that the available data migration tools
are often not all that they are cracked up to be, particularly when
they are offered by the service provider. Reader had always allowed
users to export their list of feed subscriptions in Outline Processor Markup
Language (OPML) format, of course. But access to the rest of an
account's Reader data required visiting Google Takeout, the
company's extract-and-download service (which is run by a team within
Google called the Data
Liberation Front). Takeout allowed users to extract additional
data like the lists of starred and shared items, notes attached to
feed items, and follower/following information.
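The OPML subscription list, at least, is easy to work with once exported; it
is just XML. As a minimal sketch (the file name is a placeholder, and the
xmlUrl attribute is the usual OPML feed-list convention rather than anything
Reader-specific), pulling the feed URLs out of an export in Python might look
like this:

    import xml.etree.ElementTree as ET

    def feed_urls(opml_path):
        """Return the feed URLs listed in an OPML subscription export."""
        urls = []
        # Subscriptions are conventionally <outline> elements carrying an
        # xmlUrl attribute; nested outlines represent folders or tags.
        for outline in ET.parse(opml_path).iter("outline"):
            url = outline.get("xmlUrl")
            if url:
                urls.append(url)
        return urls

    if __name__ == "__main__":
        # "subscriptions.xml" is an assumed name for the exported file.
        for url in feed_urls("subscriptions.xml"):
            print(url)

A list like that can be fed to essentially any other feed reader, which is
why the subscription list is the least of the migration worries.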
However, Takeout does not preserve the historical contents of
subscribed feeds, which is one of the more valuable aspects of reading
news items in a single location: that cached history is what enables
full-text and title search of old entries. There are
copyright issues that could understandably make Google shy away from
offering downloads of other sites' content—although it could be
argued that the company was already retaining that content and
offering access to it in a variety of products, from Reader's cache to
the "cached" items in Google Search. In any event, in the weeks
preceding the Reader
shutdown, several tools sprang up to retrieve the cached item store,
from the open source Reader Is Dead (RID)
to the commercial
(and Mac-only) Cloudpull.
Both Cloudpull and RID utilized the unpublished Reader
API to fetch and locally store an account's entire feed history. By
sheer coincidence, I stumbled across the existence of RID a
few days before the shutdown deadline, and used it to successfully
pull down several years' worth of feed items on June 30. The archive
consumes about 30 GB of space (uncompressed), although about half of
that is wasted on high-traffic feeds without any historic value,
such as local weather updates and Craigslist categories.
For the rest, however, the backup is akin to a local Wayback Machine. Initially
the RID project was working on its own web application
called reader_browser to access and search these archives;
that program is still under development with limited functionality at
present, but in the first week of July the project rolled out a stop-gap
solution called zombie_reader as well.
zombie_reader starts a local web server on port 8074, and
presents a clone of the old Reader interface using the cached archive
as storage. The legality of the clone may be questionable, since it
employs a modified copy of the Reader JavaScript and CSS. But there
is little long-term value in developing it further anyway, since
outside of search and browsing, few of the old application's features
make sense for an archive tool. Developer Mihai Parparita is
continuing to work on reader_browser and on an accompanying
command-line tool.
The silo
Of course, maintaining a standalone archive of old news items puts
an additional burden on the user; at some point the news is too old to
be of sufficient value. A better long-term solution would
be to merge the extracted data into a replacement feed reader. That
illustrates another difficulty with migrating away from an application
service provider—importing the extracted data elsewhere is
problematic, if it is possible at all.
Copying in an OPML subscription
list is no problem, of course, but other web-based feed-reader
services will understandably not support a full history import (much
less one 30GB in size). Self-hosted free software tools like ownCloud News and Tiny Tiny RSS are an option, although
the official reception from
Tiny Tiny RSS to such ideas has been less than enthusiastic. The Tiny
Tiny RSS feature-request forum even lists
asking for Google Reader features as a "bannable offense."
Outside contributors may still manage to build a
working import tool for RID archives (there is one effort
underway on the Tiny Tiny RSS forum). Regardless, the main factor
that makes RID just a short-term fix is the fact that only those
users who made an archive before Reader closed can use it. Once
Google deactivated Reader, it was no longer possible to extract any
more cached account data. That left quite a few confused users who
did not complete their exports before the July 1 shutdown, and it puts
a hard upper limit on the number of RID users and testers.
The reason archive export no longer works, quite simply, is that
Google switched off the Reader API with the application itself. That
is an understandable move, perhaps. But there is still another
shutdown looming: even the ability to export basic
information (i.e., OPML subscriptions) will vanish on July
15—which is a perplexingly short deadline, considering that
users can still snag their old Google Buzz and Google
Notebook data through official channels, several years after those
services were shuttered. So despite the efforts of the Data
Liberation Front, it seems, the company can still be arbitrarily
unhelpful when it comes to specific services.
Why it still matters
The moral of the Reader shutdown (and resulting headaches) is that
it is often impossible to predict which portions of your data are the
valuable ones until you actually attempt to migrate away to a new
service provider. Certainly Google Reader had a great many users who
did not care about the ability to search through old feed item
archives. But some did, and the limitations of the service's export
functionality only brought that need to light when they tried to move
elsewhere.
For the future, the obvious lesson is that one should not wait
until a service is deactivated to attempt migration. It is easy to
lapse into complacency and think that leaving Gmail will be simple if
and when the day comes. But, as is the case with restoring from
backups, Murphy's Law is liable to intervene in one form or another,
and it is better to discover how in advance. There are certainly
other widely-used Google services that exhibit the same problematic
symptoms as Reader, starting with not allowing access to full data.
Many of these services are for personal use only, but others are
important from a business standpoint.
The most prominent example is probably Google Analytics, which is
used for site traffic analysis by millions of domains. Analytics
allows users to download summary reports, but not the raw numbers
behind them. On the plus side, there are options
for migrating the data into the open source program Piwik. However, without the original
data there are limits to the amount and types of analysis that can be
performed on the summary information alone. Most other Google
products allow some form of export, but the options are substantially
better when there is an established third-party format available, such
as iCalendar. For services without clear analogues in other
applications or providers—say, Google+ posts or AdWords accounts—generic
formats like HTML are the norm, which may or may not be of immediate
use outside of the service.
The Data Liberation Front is an admirable endeavor; no doubt,
without it, the task of moving from one service provider to another
would be substantially more difficult for many Google products. And
the Reader shutdown is precisely the kind of major disruption that the
advocates of self-hosted and federated network services (such as the
Autonomo.us project) have warned
free software fans about for years. But the specifics are instructive
in this case as well: perhaps few Reader users recognized that the
loss of their feed history would matter to them in time to
export everything with RID, and perhaps more than a few are
still unaware that Google Takeout will drop its Reader export
functionality completely on July 15.
Ultimately, the question of how to maintain
software freedom with web services divides people into several camps.
Some argue that users should never use proprietary web services in
the first place, but always maintain full control themselves. Others
say that access to the data and the ability to delete one's account is
all that really matters. The Autonomo.us project, for example, argues in its Franklin
Street Statement that "users should control their
private data" and that public data should be available under
free and open terms. One could argue that Reader met both of those
requirements, though. Consequently, if it signifies nothing else, Reader's
shutdown illustrates that however admirable data-portability conditions may
be, they are still complex, and there remains considerable latitude in how
they are interpreted.
Comments (7 posted)
July 10, 2013
This article was contributed by Martin Michlmayr
EuroPython 2013
The EuroPython 2013 conference in Florence,
Italy opened with a keynote by Van Lindberg about the next twenty years
of Python. Lindberg, a lawyer with an engineering background, is the
chairman of the Python Software Foundation
(PSF) and the author of the book Intellectual Property and Open
Source (reviewed by LWN in 2008). His keynote looked at the
challenges facing the Python community and the work underway to ensure that
Python will remain an attractive programming language and have a healthy
community for the next twenty years (and beyond).
The design philosophy of Python
Lindberg began his keynote with a retrospective of the last twenty years of
Python. He described the origins of Python as a skunkworks
project, which led Guido van Rossum, the creator of Python, to a number of
interesting design choices. One is that Van Rossum borrowed ideas from
elsewhere, such as ALGOL 68 and C. Another design approach was to make
things as simple as possible. This involved taking the same concepts and
reusing them over and over again. Python also follows the Unix philosophy
of doing one thing well, he said. Finally, perfection is the enemy
of the good, as "good enough" is often just that. Cutting corners is
allowed, since you can always go back and improve things later. Lindberg
summarized that Van Rossum "got a lot right in the early days".
Lindberg noted that Van Rossum also succeeded in creating a community
around Python. Lindberg identified four factors that were crucial for the
success
of Python. First, Python was an excellent language. This was a necessary
basis because "otherwise there's nothing to gather and rally around".
Second, Van Rossum chose an open source license even before the term "open
source" was invented. Third, Van Rossum encouraged a sense of humor,
naming the language after the Monty
Python crew.
Finally, Python had a sense of values.
Those values, in particular, are what set Python apart from many other
programming languages. Lindberg asked the audience whether they knew about
"import this", an Easter egg in Python that displays the Zen of Python, the
guiding principles behind the language. Unlike Perl, which proudly proclaims
that there's more than one way to do it, Python encourages a certain
programming style. This is reflected in the Zen of Python, which says that
there should be one — and preferably only one — obvious way to do it.
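For those who have not tried it, the Easter egg really is just an import;
running it in an interactive session prints the full list of aphorisms
(abridged here):

    >>> import this
    The Zen of Python, by Tim Peters

    Beautiful is better than ugly.
    Explicit is better than implicit.
    Simple is better than complex.
    ...
    There should be one-- and preferably only one --obvious way to do it.
    ...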
Challenges for the Python community
Lindberg emphasized that Python is a remarkable story of success. There
are hundreds of thousands, maybe even millions, of people using Python as part of their jobs. Python is widely deployed — it
has become the de facto standard in the movie and animation industry, is
overtaking Perl in bioinformatics, and is the implementation language of
two of the leading cloud platforms. Python is also a significant player in
education, "finally replacing Java as the primary teaching language for a
lot of universities", he said.
Despite the success, Python is facing what Lindberg described as "market
share challenges". JavaScript, which used to be stricken by buggy,
browser-only, and inconsistent implementations, has become a fairly big
competitor in the desktop and server spaces, and particularly in mobile.
Lua is increasingly used as an embeddable extension language. Lindberg
sees Go as another contender. What makes Go attractive is its concurrency
and ability to create easily deployable binaries that you can just drop on
a system and run. "Frankly, deployment is a challenge for us", admitted
Lindberg, as are mobile and other areas with highly constrained space
requirements. Lindberg also mentioned the statistical and graphic
abilities of R as a potential
competitor.
Asking "why do I care?", he explained that it's important to keep growing
— otherwise Python will end up where Smalltalk and Tcl are today. He
rhetorically asked the audience when the last time was that anyone did
anything interesting in Tcl. Noting that these are fantastic languages,
Lindberg argued that "they have died because they have not grown". It's
not just the language, but also the community around it, that can die. He
observed that in addition to technical challenges facing Python, there are
also challenges with scaling the Python community that need to be
addressed. Lindberg believes that ten or twenty years ago it was enough to
focus on the programmer, whereas these days you have to form a culture around
programming.
There is something special about the Python community, according to
Lindberg. He quoted the mission of the Python Software
Foundation, which is to "promote,
protect, and advance the Python programming language, and to support and
facilitate the growth of a diverse and international community of Python
programmers", observing that "these are important words". Lindberg
argued that the current community is getting older and that actions have to
be taken that will create the Python community twenty years from now: "if
we don't build and grow the community, it will go away".
Big changes coming
Lindberg emphasized three areas that the Python Software Foundation is focusing
on to grow the Python community, now and in the future. One is the Code
of Conduct the PSF adopted in
April. The Zen of Python has been important in defining Python, but its
focus is on code. The Code of Conduct, on the other
hand, captures what the community itself should be like — it should
consist of members from all around the world with a diverse set of skills.
He said that a member of the Python community is open, considerate, and respectful:
members are open to collaboration, to constructive criticism, and to
fostering an environment in which everyone can participate; they are
considerate of their peers; and they are respectful of others, their
skills, and their efforts. The Code of Conduct condenses what is great
about the Python community. "It's about being the best people that we can
be and being the best community that we can be", Lindberg said. Alluding
to Python's reputation as the language with batteries included, he
summarized that "Python is the language with community included".
The second focus for the PSF is education. As we're all getting older, we
have to think about where the next generation is coming from, Lindberg said.
He told the story of Sam Berger, an eleven-year-old schoolboy from South
Africa, who
attended
PyCon
and participated in professional level tutorials and classes. This is an
illustration of where the next generation of Python leaders is coming
from. In order to encourage that next generation, the
PSF is supporting initiatives to promote young coders, such as making a
curriculum
to teach kids Python available online. Lindberg is also
very supportive of the Raspberry Pi. He
reminisced about the 1980s, when computers booted into BASIC. The default
way to interact with the computer was through programming. If you wanted
to do something else, you had to make an explicit decision. This led to
an entire generation that understood that computers are tools — tools
that won't break if you play around with them.
Finally, the PSF itself is adapting to better serve the needs of the Python
community. It is working on a new web site (a preview of which can be
found at preview.python.org). The design goal
of the new site is to make it easy for the community to get involved. It
is also putting a lot of thought into representing the community, and there
will be efforts to address various needs, such as learning Python
or teaching Python. Lindberg also lamented that the PSF is not broad and
inclusive enough. Membership in the PSF currently requires a nomination
from an existing member, but Lindberg believes that every member of the
Python community should be a member of the PSF. In April, the PSF voted to
completely redo its membership program and to open up membership to anyone.
Answering a question from the audience, Lindberg clarified that basic
membership will be available to anyone who signs up. Further rights, such
as voting privileges, will be given to those members who have demonstrated
a commitment to the Python community, such as by contributing code,
documentation, or test cases — or by organizing events.
Lindberg closed by saying that the PSF is "changing to be your home". It
is fundamentally saying that "we need each of you" and that "this is all
about you". This is the most significant change the Python community has
seen since the formation of the PSF, according to Lindberg, and it's about
building the next twenty years of Python.
Comments (54 posted)
Page editor: Jonathan Corbet
Security
By Jake Edge
July 10, 2013
A bug in Android that Google has known about since February is finally
coming to light. It affects the integrity of Android packages, allowing an
attacker to create a malicious version of an app that still passes the
cryptographic signature check. While it doesn't affect any
packages in
Google's Play store, it certainly could have affected packages that came
from elsewhere—"sideloaded" apps. In fact, since the flaw goes back to
Android 1.6 ("Donut") and hasn't been fixed for the vast majority of
Android firmware versions, it still affects hundreds of millions of devices.
The actual bug is fairly trivial; a four-line
patch is enough to fix it. The problem stems from the fact that the
Android ZIP
file decoder will allow archives to contain more than one file with the
same name. That might not be a big deal, except that there is an
inconsistency in which of the two identically named files in the
archive is used. It turns out that the Android package verification code
uses the last file in the archive, while the app uses the first.
That situation means that an attacker can take an existing,
cryptographically signed .apk file (in Android's application
package file format—essentially just a ZIP file with defined contents)
and massage it into one that will run malicious
code, while still passing the signature checks. By placing a malicious file
with the same name as a signed file in the archive before the
existing files, an attacker gets their fondest wish: an app that passes the
security checks but compromises the device.
From a device's perspective, the compromised package looks for all the
world like a valid app signed by the app vendor—but it isn't. It contains
code that comes from elsewhere but will be run by the device when the app
is loaded. So the user gets an app that is rather different than what they
expect based on the signature. Bouncing Cows has now become a bouncing
Trojan horse.
The problem was discovered
by Bluebox Security and reported to Google in February. It was first
"announced" in a tweet
in March that just referred to the Android bug number (8219321). More
information about the flaw came in an early July blog
post by Bluebox CTO Jeff Forristal. He called it a "master key"
vulnerability, a name that may have been meant to misdirect others away from
the actual flaw. But he also evidently wanted to increase the
anticipation for his Black Hat USA
talk in August.
It would seem that there was enough information in that post and elsewhere
for the CyanogenMod team to
figure out the problem.
A patch landed
in the CyanogenMod tree on July 7 that disabled the multiple filename
"feature". A bit more information appears in the CYAN-1602 bug
report. Interestingly, the patch comes from Google and was committed
into some internal Google tree on February 18. That may suggest the
existence of a
back channel between Google and CyanogenMod, which would be a good—if,
perhaps, surprising—development.
It doesn't require a very sophisticated program to perform the attack, and a
simple
proof of
concept was posted on July 8. It uses Python's ZIP file library, as
most ZIP tools will not allow duplicate file names. A more sophisticated,
Java-based tool is also available.
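That proof of concept is not reproduced here, but the underlying trick — an
archive carrying two entries with the same name — is easy to demonstrate with
Python's zipfile module. The following sketch only shows that such archives
can be built and what they look like; it does not produce a signed (or
installable) package, and the file names and contents are invented for
illustration:

    import zipfile

    # Write two entries with the same name. zipfile emits a "Duplicate name"
    # warning but stores both entries; most interactive ZIP tools refuse to.
    with zipfile.ZipFile("demo.apk", "w") as z:
        z.writestr("classes.dex", b"attacker-controlled bytes")  # first entry
        z.writestr("classes.dex", b"originally signed bytes")    # second entry

    with zipfile.ZipFile("demo.apk") as z:
        print(z.namelist())   # ['classes.dex', 'classes.dex']

In the attack described above, the entry the signature check reads and the
entry the runtime actually loads are different ones, which is the whole
problem.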
The biggest danger is from sideloaded system apps, which have more
privileges than regular Android apps. System apps can effectively take
over the phone completely to use any of its services or devices as well as
to access any installed app and its data. Depending on the phone, however,
the system apps may be stored in a read-only part of the system, so changing
them would require a full Android update.
But there is plenty of damage that a
regular app can do, depending on the permissions required when it was
installed. Attackers would seem likely to choose to compromise apps with numerous
permissions when (ab)using this vulnerability.
In some sense, though, the vulnerability doesn't give the apps any more
privileges than they already have. A malicious app (either system or
regular) could already perform any of the actions that a compromised app
could perform. The difference is that apps in the Play store, or those
installed by phone vendors, are not expected to join botnets, send
contact information to remote sites, activate the microphone or GPS without
user approval, or any other "tricks" that an attacker might want to do.
By
giving permission to apps (either explicitly when installing them or
implicitly by buying a phone with system apps installed), the user is
allowing various types of activities. Apps that violate the expectations
will presumably find themselves blocked or removed from the Play store,
while misbehaving system apps will, at least, negatively impact the phone
vendor or carrier's reputation. Sideloading apps, especially system apps,
comes with inherent risks that are largely absent when the apps are obtained
through "normal" channels.
In an interview
at the CIO web site, Forristal said that Google blocks Play store apps with
duplicate
filenames, and Google indicated that none had been found when that blocking
began. So the problem exists only for those users who sideload apps from
other sites. That may be a minority of Android users, but it still seems
like the kind of bug that should have been fixed long ago.
As the CyanogenMod patch indicates, however, Google did fix this in
its trees shortly after the report. So far, though, only the Galaxy S4 has
been reported
to have been updated. The Android Open Source Project (AOSP) code
has not yet been updated with a fix, either. It's a little puzzling why a
trivial fix, seemingly with few, if any, downsides, has not seen more
proliferation through the Android ecosystem.
In light of Google's
disclosure policy, it would seem that the company should have been more
forthcoming about this bug. Given that Bluebox is trying to make a splash
with the vulnerability at Black Hat, perhaps there was a request—or
requirement—to withhold the details, or even existence, of the flaw until
after the
presentation. If so, it is a sad statement about the state of
security vulnerability research and disclosure today.
Cryptographic signing is, of course, aimed at preventing precisely this kind
of attack. This vulnerability serves as a reminder that it
isn't just the cryptographic code that needs vetting, but that all of the
steps in the signing and packaging process are important pieces of the
puzzle too. In this case, an obscure bug in decoding a decades-old format
led to a complete circumvention of the security meant to be provided by the
signatures—it makes
one wonder what other weird little corner cases lurk out there waiting to be
discovered, or exploited.
Comments (6 posted)
Brief items
The idea that copyright owners might convince a judge, or, worse, a jury
that because they found a copy of an e-book on the Pirate Bay originally
sold to me they can then hold me responsible or civilly liable is almost
certainly wrong, as a matter of law. At the very least, it’s a long shot
and a stupid legal bet. After all, it’s not illegal to lose your
computer. It’s not illegal to have it stolen or hacked. It’s not illegal to
throw away your computer or your hard drive. In many places, it’s not
illegal to give away your e-books, or to loan them. In some places, it’s
not illegal to sell your e-books.
—
Cory Doctorow on yet another e-book DRM scheme
Based on recent disclosures, we know that the NSA has decided to store encrypted communication for later analysis, and I think it’s safe to say that other countries follow suit. So it’s likely there are stored Cryptocat communications floating around in various spy agency archives. These agencies may have already found this issue and used it to view messages, or now that it’s public - they can do so easily.
This is where an issue like this can be so devastating, if those encrypted messages have been saved anywhere - any users engaged in activity that their local government doesn’t care for are now at risk.
Personally, I wouldn’t trust Cryptocat until it’s had a true code audit (the pen-test they had last year clearly doesn’t count), and the crypto systems reviewed by a true cryptographer. If a mistake like this was allowed in, and overlooked for so long, I’ve no doubt that other weaknesses exist.
—
Adam Caudill is ... unimpressed ... with Cryptocat
Destroying cameras? And mice? Over malware? Are you serious?
Worse, the EDA [Economic Development Administration] continued destroying components until it could no longer afford to destroy them. In fact, the agency intended to continue destroying gear just as soon as it got more funds approved to do so. Uhh... okay!
And no, it does not end there. It turns out the malware infection was absolutely routine. All the EDA had to do was isolate the affected components, remove the malware, reconnect the hardware and move on. NOAA, which received a notice at the same time as EDA, completed this operation in one month.
—
Mario Aguilar is ... unimpressed ... by a US government malware prevention scheme
Comments (none posted)
New vulnerabilities
glpi: three largely unspecified vulnerabilities
Comments (none posted)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-1059, CVE-2013-2234
Created: July 8, 2013
Updated: August 20, 2013
Description: From the Red Hat bugzilla [1, 2]:

Linux kernel built with the IPSec key_socket support(CONFIG_NET_KEY=m) is
vulnerable to an information leakage flaw. It occurs while using key_socket's
notify interface. A user/program able to access the PF_KEY key_sockets could
use this flaw to leak kernel memory bytes. (CVE-2013-2234)

Linux kernel built with the Ceph core library(CONFIG_CEPH_LIB) support is
vulnerable to a NULL pointer dereference flaw. It could occur while handling
auth_reply messages from a CEPH client. A remote user/program could use this
flaw to crash the system, resulting in denial of service. (CVE-2013-1059)

Alerts:
Comments (none posted)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-2232, CVE-2013-2237
Created: July 8, 2013
Updated: July 18, 2013
Description: From the CVE entries:

The ip6_sk_dst_check function in net/ipv6/ip6_output.c in the Linux kernel
before 3.10 allows local users to cause a denial of service (system crash) by
using an AF_INET6 socket for a connection to an IPv4 interface.
(CVE-2013-2232)

The key_notify_policy_flush function in net/key/af_key.c in the Linux kernel
before 3.9 does not initialize a certain structure member, which allows local
users to obtain sensitive information from kernel heap memory by reading a
broadcast message from the notify_policy interface of an IPSec key_socket.
(CVE-2013-2237)

Alerts:
Comments (none posted)
kernel: denial of service
Package(s): kernel
CVE #(s):
Created: July 8, 2013
Updated: July 10, 2013
Description: From the SUSE advisory:

The SUSE Linux Enterprise 11 Service Pack 2 kernel was respun with the 3.0.80
update to fix a severe compatibility problem with kernel module packages
(KMPs) like e.g. drbd. An incompatible ABI change could lead to those modules
not correctly working or crashing on loading and is fixed by this update.

Alerts:
Comments (none posted)
nagios: information disclosure
Package(s): nagios
CVE #(s): CVE-2013-2214
Created: July 8, 2013
Updated: July 10, 2013
Description: From the SUSE bugzilla entry:

It was reported that Nagios 3.4.4 at least, and possibly earlier versions,
would allow users with access to Nagios to obtain full access to the
servicegroup overview, even if they are not authorized to view all of the
systems (not configured for this ability in the authorized_for_*
configuration option). This includes the servicegroup overview, summary, and
grid.

Provided the user has access to view some services, they will be able to see
all services (including those they should not see). Note that the user in
question must have access to some services and must have access to Nagios to
begin with.

Alerts:
Comments (none posted)
python-bugzilla: missing certificate verification
Package(s): python-bugzilla
CVE #(s): CVE-2013-2191
Created: July 8, 2013
Updated: July 10, 2013
Description: From the SUSE bugzilla entry:

It was found that python-bugzilla, a Python library for interacting with
Bugzilla instances over XML-RPC functionality, did not perform X.509
certificate verification when using secured SSL connection. A
man-in-the-middle (MiTM) attacker could use this flaw to spoof Bugzilla
server via an arbitrary certificate.

Alerts:
Comments (none posted)
ReviewBoard: cross-site scripting
Package(s): ReviewBoard
CVE #(s): CVE-2013-2209
Created: July 8, 2013
Updated: July 10, 2013
Description: From the Red Hat bugzilla:

A persistent / stored cross-site scripting (XSS) flaw was found in the way
reviews dropdown of Review Board, a web-based code review tool, performed
sanitization of certain user information (full name). A remote attacker could
provide a specially-crafted URL that, when visited would lead to arbitrary
HTML or web script execution in the context of Review Board user's session.

See the Review Board announcement for additional information.

Alerts:
Comments (none posted)
ssmtp: world-readable password file
Package(s): ssmtp
CVE #(s):
Created: July 4, 2013
Updated: July 10, 2013
Description: From the Red Hat bugzilla entry:

In order to have ssmtp working for every user on the machine, the file
/etc/ssmtp/ssmtp.conf must be readable by every user (others must at least
have the read right to this file). If an authentication smtp server is used
(as gmail for example), the login and password appears in clear text in
ssmtp.conf. This is obviously a security problem.

Alerts:
Comments (none posted)
xorg-x11-server: denial of service
Package(s): xorg-x11-server
CVE #(s):
Created: July 5, 2013
Updated: July 10, 2013
Description: From the openSUSE bug report:

If a client sends a request larger than maxBigRequestSize, the server is
supposed to ignore it. Before commit cf88363d, the server would simply
disconnect the client. After that commit, it attempts to gracefully ignore
the request by remembering how long the client specified the request to be,
and ignoring that many bytes. However, if a client sends a BigReq header with
a large size and disconnects before actually sending the rest of the
specified request, the server will reuse the ConnectionInput buffer without
resetting the ignoreBytes field. This makes the server ignore new X clients'
requests.

Alerts:
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The 3.11 merge window remains open; see the separate article, below,
for details on what has been merged in the last week.
Stable updates: 3.8.13.4 was
released on July 3,
3.6.11.6 on July 9, and
3.5.7.16 on July 5.
Comments (none posted)
When experienced developers tell you that you are mistaken, you
need to make an effort to understand what the mistake was so you
can learn from it and not make the same mistake again. If you make
the same mistakes again, maintainers will get annoyed and ignore
you (or worse), which is not a good situation to be in when you
want to get your patches merged.
—
Arnd Bergmann
Scalability is not an afterthought anymore - new filesystem and
kernel features need to be designed from the ground up with this in
mind. We're living in a world where even phones have 4 CPU
cores....
—
Dave Chinner
Copy and paste is a convenient thing, right? It just should have a
pop up window assigned which asks at the second instance of copying
the same thing whether you really thought about it.
—
Thomas Gleixner
Comments (none posted)
Kernel development news
By Jonathan Corbet
July 10, 2013
As of this writing, Linus has pulled 8,275 non-merge changesets into the
mainline repository for the 3.11 development cycle. Once again, a lot of
the changes are internal improvements and cleanups that will not be
directly visible to users of the kernel. But there has still been quite a
bit of interesting work merged since
last
week's summary.
Some of the more noteworthy user-visible changes include:
- There is a new "soft dirty" mechanism that can be employed by user
space to track the pages written to by a process. It is intended for
use by the checkpoint/restart-in-user-space code, but other uses may be
possible; see Documentation/vm/soft-dirty.txt for details (a brief
illustrative sketch appears after this list).
- The Smack security module now works with the IPv6 network protocol.
- The ICMP socket mechanism has gained
support for ping over IPv6.
- The ptrace() system call has two new operations
(PTRACE_GETSIGMASK and PTRACE_SETSIGMASK) to
retrieve and set the blocked-signal mask.
- 64-Bit PowerPC machines can now make use of the transparent huge pages
facility.
- The kernel NFS client implementation now supports version 4.2 of the
NFS protocol. Also supported on the client side is labeled NFS,
allowing mandatory access control to be used with NFSv4 filesystems.
- The kernel has new support for LZ4 compression, both in the
cryptographic API and for compression of the kernel binary itself.
- Dynamic power management support for most AMD Radeon graphics chips
(the r600 series and all that came thereafter)
has been merged. It is a huge amount of code and is still considered
experimental, so it is disabled by default for now; booting with the
radeon.dpm=1 command-line option will turn this feature on
for those who would like to help debug it.
- The low-latency network polling
patches have been merged after a
last-minute snag.
- The Open vSwitch subsystem now supports tunneling with the generic
routing encapsulation (GRE) protocol.
- New hardware support includes:
- Graphics:
Renesas R-Car display units and
AMD Radeon HD 8000 "Sea Islands" graphics processors.
- Input:
Huion 580 tablets,
ELO USB 4000/4500 touchscreens,
OLPC XO-1.75 and XO-4 keyboards and touchpads, and
Cypress TrueTouch Gen4 touchscreens.
- Miscellaneous:
Toumaz Xenif TZ1090 pin controllers,
Intel Baytrail GPIO pin controllers,
Freescale Vybrid VF610 pin controllers,
Maxim MAX77693 voltage/current regulators,
TI Adaptive Body Bias on-chip LDO regulators,
NXP PCF2127/29 real-time clocks,
SiRF SOC real-time clocks,
Global Mixed-mode Technology Inc G762 and G763 fan speed PWM
controller chips,
Wondermedia WM8xxx SoC I2C controllers,
Kontron COM I2C controllers,
Kontron COM watchdog timers,
NXP PCA9685 LED controllers,
Renesas TPU PWM controllers, and
Broadcom Kona SDHCI controllers.
- Networking:
Allwinner A10 EMAC Ethernet interfaces,
Marvell SD8897 wireless chipsets,
ST-Ericsson CW1100 and CW1200 WLAN chipsets,
Qualcomm Atheros 802.11ac QCA98xx wireless interfaces, and
Broadcom BCM6345 SoC Ethernet adapters.
There is also a new hardware simulator for near-field
communications (NFC) driver development.
- Sound:
Realtek ALC5640 codecs,
Analog Devices SSM2516 codecs, and
M2Tech hiFace USB-SPDIF interfaces.
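Returning to the soft-dirty item at the top of the list: the interface
described in Documentation/vm/soft-dirty.txt is small enough to exercise from
a script. The following is only an illustrative sketch, not part of the
kernel patch; it assumes the documented interface (writing "4" to
/proc/PID/clear_refs clears the bits, and bit 55 of the corresponding
/proc/PID/pagemap entry reports soft-dirty), plus appropriate permissions to
read those files:

    import os
    import struct

    PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
    SOFT_DIRTY_BIT = 1 << 55  # per Documentation/vm/soft-dirty.txt

    def clear_soft_dirty(pid):
        # Writing "4" resets the soft-dirty bits for the whole process.
        with open("/proc/%d/clear_refs" % pid, "w") as f:
            f.write("4")

    def is_soft_dirty(pid, vaddr):
        # pagemap holds one native-endian 64-bit entry per virtual page.
        with open("/proc/%d/pagemap" % pid, "rb") as f:
            f.seek((vaddr // PAGE_SIZE) * 8)
            entry, = struct.unpack("=Q", f.read(8))
        return bool(entry & SOFT_DIRTY_BIT)

Clear the bits, let the monitored process run for a while, then check the
addresses of interest; any page reported as soft-dirty has been written to
since the last clear.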
Changes visible to kernel developers include:
- Two new device callbacks — offline() and online() —
have been added at the bus level. offline() will be called
when a device is about to be hot-unplugged; it should verify that the
device can, indeed, be unplugged, but not power the device down yet.
Should the unplug be aborted, online() will be called to put
the device back online. The purpose behind these calls is to ensure
that hot removal can be performed before committing to the action.
- The checkpatch utility has a new, experimental --fix option
that will attempt to automatically repair a range of simple formatting
problems.
The merge window should remain open for the better part of another week.
Next week's Kernel Page will include a summary of the final changes pulled
into the mainline for the 3.11 development cycle.
Comments (11 posted)
By Jonathan Corbet
July 10, 2013
Dave Miller's networking git tree is a busy place; it typically feeds over
1,000 changesets into the mainline each development cycle. Linus clearly
sees the networking subsystem as being well managed, though, and there are
rarely difficulties when Dave puts in his pull requests. So it was
surprising to see Linus reject Dave's request for the big 3.11 pull. In
the end, it came down to the
low-latency
Ethernet device polling patches, which had to go through some urgent
repairs while the rest of the networking pull request waited.
The point of this patch set is to enable low-latency data reception by
applications that are willing to busy wait (in the kernel) if data is not
available when a read() or poll() operation is performed
on a socket. Busy waiting is normally avoided in the kernel, but, if
latency matters more than anything else, some users will be willing to
accept the cost of spinning in the kernel if it allows them to avoid
the cost of context switches when the data arrives. The hope is that
providing this
functionality in the kernel will lessen the incentive for certain types of
users to install user-space networking stacks.
Since this patch set was covered here in May, it has seen a few changes.
As was predicted, a setsockopt() option (SO_LL) was added
so that the polling behavior could be adjusted on a per-socket basis;
previously, all sockets in the system would use busy waiting if the feature
was enabled in the kernel. Another flag (POLL_LL) was added for
the poll() system call; once again, it causes busy waiting to
happen even if the kernel is otherwise configured not to use it. The
runtime kernel configuration itself was split into two sysctl knobs:
low_latency_read to set the polling time for read()
operations, and low_latency_poll for poll() and
select(). Setting either knob to zero (the default) disables busy
waiting for the associated operation.
When the time came to push the networking changes for 3.11, Dave put the
low-latency patches at the top of his list of new features. Linus was not impressed, though. He had a number of
complaints, ranging from naming and documentation through to various
implementation issues and the fact that changes had been made to the core
poll() code without going through the usual channels. He later retracted some of his complaints, but still
objected to a number of things. For example, he called out code like:
if (ll_flag && can_ll && can_poll_ll(ll_start, ll_time))
saying that it "should have made anybody sane go 'WTF?' and wonder
about bad drugs." More seriously, he strongly disliked the
"low-latency" name, saying that it obscured the real effect of the patch.
That name, he said, should be changed:
The "ll" stands for "low latency", but that makes it sound all
good. Make it describe what it actually does: "busy loop", and
write it out. So that people understand what the actual downsides
are. We're not a marketing group.
So, for example, he was not going to accept POLL_LL in the
user-space interface; he requested POLL_BUSY_LOOP instead.
Beyond that, Linus disliked how the core polling code worked, saying that
it was more complicated than it needed to be. He made a number of
suggestions for improving the implementation. Importantly, he wanted to be
sure that polling would not happen if the need_resched flag is set
in the current structure. That flag indicates that a
higher-priority process is waiting to run on the CPU; when it is set, the
current process needs to get out of the way as quickly as possible.
Clearly, performing a busy wait for network data would not be the right
thing to do in such a situation. Linus did not say that the proposed patch
violated that rule, but it was not sufficiently clear to him that things
would work as they needed to.
In response to these perceived shortcomings, Linus refused the entire patch
set, putting just
over 1,200 changes on hold. He didn't reject the low-latency work
altogether, though:
End result: I think the code is salvageable and people who want
this kind of busy-looping can have it. But I really don't want to
merge it as-is. I think it was badly done, I think it was badly
documented, and I think somebody over-sold the feature by
emphasizing the upsides and not the problems.
As one might imagine, that put a bit of pressure on Eliezer Tamir, the
author of the patches in question. The merge window is only two weeks
long, so the requested changes needed to be made in a hurry. Eliezer was
up to the challenge, though, producing the requested changes in short
order. On July 9, Dave posted a new pull
request with the updated code; Linus pulled the networking tree the
same day, though not before posting a
complaint about some unrelated issues.
In this case, the last-minute review clearly improved the quality of the
implementation; in particular, the user-visible option to poll()
is now more representative of what it really does
(SO_LL remains unchanged, but it will become SO_BUSY_WAIT
before 3.11 is released). The cost, of course,
was undoubtedly a fair amount of adrenaline on Eliezer's part as he
imagined Dave busy waiting for the fixes. Better
review earlier in the process might have allowed some of these issues to be
found and fixed in a more relaxed manner. But review bandwidth is, as
is the case in most projects, the most severely limited resource of all.
Comments (5 posted)
By Jonathan Corbet
July 10, 2013
The
full dynamic tick feature that made its
debut in the 3.10 kernel can be good for users who want their applications
to have full use of one or more CPUs without interference from the kernel.
By getting the clock tick out of the way,
this feature minimizes kernel overhead and the potential for latency problems.
Unfortunately, full dynamic tick operation also has the potential to increase
power consumption. Work is underway to fix that problem, but it turns out
to require a bit of information that is surprisingly hard to get: is the
system fully idle or not?
The kernel has had the ability to turn off the periodic clock interrupt on
idle processors for many years. Each processor, when it goes idle, will
simply stop its timer tick; when all processors are idle, the system will
naturally have the timer tick disabled systemwide. Fully dynamic tick —
where the timer tick can be disabled on non-idle CPUs — adds an interesting
complication, though. While most processors can (when the conditions are
right) run without the clock tick, one processor must continue to keep the
tick enabled so that it can perform a number of necessary system
timekeeping operations. Clearly, this "timekeeping CPU" should be able to
disable its tick and go idle if nothing else is running in the system, but,
in current kernels, there is no way for that CPU to detect this situation.
A naive solution to this problem will come easily to mind: maintain a
global counter tracking the number of idle CPUs. Whenever a processor goes
idle, it increments the counter; when the processor becomes busy again, it
decrements the counter. When the number of idle CPUs matches the number of
CPUs in the system, the kernel will know that no work is being done and the
timekeeping CPU can take a break.
The problem, of course, is that cache contention for that global counter
would kill performance on larger systems. Transitions to and from idle are
common under most workloads, so the cache line containing the counter would
bounce frequently across the system. That would defeat some of the point
of the dynamic tick feature; it seems likely that many users would prefer
the current power-inefficient mode to a "solution" that carried such a
heavy cost.
So something smarter needs to be done. That's the cue for an entry by Paul
McKenney, whose seven-part full-system idle
patch set may well be the solution to this problem.
As one might expect, the solution involves the maintenance of a per-CPU
array of idle states. Each CPU can update its status in the array without
contending with the other CPUs.
But, once again, the naive solution is inadequate.
With a per-CPU array, determining whether the system is fully idle requires
iterating through the entire array to examine the state of each CPU. So,
while maintaining the state becomes cheap, answering the "is the system
idle?" question becomes expensive if the number of CPUs is large. Given
that the timekeeping code is
likely to want to ask that question frequently (at each timer tick, at
least), an expensive implementation is not indicated; something else must
be done.
Paul's approach is to combine the better parts of both naive solutions. A
single global variable is created to represent the system's idle state and
make that
state easy to query quickly. That variable is updated from a scan over the
individual CPU idle states, but only under specific conditions that
minimize cross-CPU contention. The result should be the best of both
worlds, at the cost of delayed detection of the full-system idle state and
the addition of some tricky code.
The actual scan of the per-CPU idle flags is not done in the scheduler or
timekeeping code, as one might expect. Instead (as others might expect),
Paul put it into the read-copy-update (RCU) subsystem. That may seem like
a strange place, but it makes a certain sense: RCU is already tracking the
state of the system's CPUs, looking for "grace periods" during which
unused RCU-protected data structures can be reclaimed. Tracking whether
each CPU is fully idle is a relatively small change to the RCU code. As an
added benefit, it is easy for RCU to avoid scanning over the CPUs when
things are busy, so the overhead of maintaining the global full-idle state
vanishes when the system has other things to do.
The actual idleness of the system is tracked in a global variable called
full_sysidle_state. Updating this variable too often would bring
back the cache-line contention problem, though, so the code takes a more
roundabout path. Whenever the system is perceived to be idle, the code
keeps track of when the last processor went idle. Only after a delay will
the global idle state be changed. That delay drops to zero for "small"
machines (those with no more than eight processors), it increases linearly
as the number of processors goes up. So, on a very large system, all
processors must be idle for quite some time before
full_sysidle_state will change to reflect that state of affairs.
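The logic is easier to see stripped of the RCU machinery. What follows is a
toy model in Python, not the kernel code: cheap per-CPU flags, a delayed flip
of a single global state variable, and a made-up per-CPU delay constant that
merely stands in for the linear scaling described above:

    import time

    class SysidleModel:
        """Toy model of delayed full-system-idle detection."""
        def __init__(self, ncpus, per_cpu_delay=0.01):
            self.idle = [False] * ncpus     # per-CPU state, cheap to update
            self.all_idle_since = None      # when the last CPU went idle
            self.full_sysidle = False       # rarely-written global state
            # No delay on "small" machines, linear growth beyond that;
            # the per-CPU delay constant is invented for illustration.
            self.delay = 0 if ncpus <= 8 else per_cpu_delay * (ncpus - 8)

        def set_idle(self, cpu, is_idle):
            self.idle[cpu] = is_idle
            if not is_idle:
                # Any activity immediately invalidates the global state.
                self.all_idle_since = None
                self.full_sysidle = False
            elif all(self.idle) and self.all_idle_since is None:
                self.all_idle_since = time.monotonic()

        def timekeeper_check(self):
            # Called by the timekeeping CPU: flip the global state only
            # after every CPU has stayed idle for the required delay.
            if (self.all_idle_since is not None and
                    time.monotonic() - self.all_idle_since >= self.delay):
                self.full_sysidle = True
            return self.full_sysidle

In the real implementation the scan over per-CPU state is done by RCU and the
transitions are more carefully ordered, but the shape of the trade-off — cheap
updates and a delayed, rarely written global flag — is the same.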
The result is that detection of full-system idle will be delayed on larger
systems, possibly by a significant fraction of a second. So the timer tick
will run a little longer than it strictly needs to. That is a cost
associated with
Paul's approach, as is the fact that his patch set adds some 500 lines of
core kernel code for what is, in the end, the maintenance of a single
integer value. But that, it seems, is the price that must be paid for
scalability in a world where systems have large numbers of CPUs.
Comments (11 posted)
Patches and updates
Kernel trees
Build system
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Architecture-specific
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
July 10, 2013
Oracle recently released
an update to the Berkeley DB
(BDB) library, and near the end of that announcement was a line noting
a change in licensing, moving from the project's historical Sleepycat
License to the GNU Affero GPLv3 (AGPL). Some in the free software
community smell trouble with the move, suspicious that Oracle is
attempting to either marginalize BDB or to squeeze money out of
commercial BDB users. On a more practical level, the license change
has raised questions for Debian, which utilizes BDB for important
components like the Apt
packaging system. Any upstream license change can be a hassle,
but this one has the potential to cause considerably more waves
downstream.
BDB originated in the official BSD from the University of
California, Berkeley, and its simple architecture has made it popular
for quite a few free software projects that need database
functionality, but are not interested in SQL. Sleepycat Software was
formed to develop and extend BDB in response to a request from
Netscape, which was interested in developing a BDB-based LDAP server.
The Sleepycat License, subsequently selected for the code, is similar to
the GPL in that it requires corresponding source distribution (or a
source code offer) to accompany binary distributions of the software.
It is distinct from the BSD license, qualifies as both open source
and free software, and is compatible
with the GPL. Nevertheless, Sleepycat
Software also made BDB available under proprietary terms, primarily to
customers who did not want to release the source code to their own
commercial products.
Oracle acquired Sleepycat Software in 2006 and, with it, the
copyright to BDB. Since 2006, Oracle has continued to offer BDB under
both the Sleepycat and proprietary licenses; that only changed with the
BDB 6.0 release on June 10, which moved to a choice between the AGPLv3
and a proprietary license. The primary difference is that AGPL's
copyleft provision is considerably stronger, because it triggers
a source code requirement whenever the licensed software is accessed
"remotely through a computer network." Ondřej Surý drew attention to
the move on the debian-devel list in July, worried that, under the AGPL,
BDB is headed "to oblivion" as other projects drop
it in favor of other lightweight databases.
Consequently, Surý said, Debian has a decision to make about its
future relationship with BDB. The project could stick with the last
Sleepycat-licensed release (version 5.3), at least for the time being,
but perhaps indefinitely. It could also remove BDB entirely, perhaps
replacing it with another database library, or writing a BDB wrapper
around some other library. Or it could simply package up the new BDB
6.0 release and relicense the downstream software that relies on it.
He suggested looking at Kyoto Cabinet as one
possible replacement. Finally, he noted that "the most
prominent users of Berkeley DB are moving away from the library
anyway, so this might not be a big deal after all." Those
users include the Cyrus IMAP server, OpenLDAP, and Subversion.
Upstream/downstream
One might well ask why a "decision" needs to be made at all. The
previous release of BDB (version 5.3) is still available, and many
projects that have been using it for years may see no compelling reason
to upgrade to 6.0 anyway. Plus, as
several on the list, such as Paul Tagliamonte, pointed
out, AGPL clearly meets the Debian Free Software Guidelines
(DFSG); there is no reason why it has to be removed from the archive.
On the other hand, the AGPL has its critics—for instance,
Bernhard R. Link contended that its requirement for
source distribution for remote network access prevents people from
running the software if (for example) they have modified it to include
passwords or other confidential data.
Link's interpretation of the AGPL is, to say the least, exceedingly
broad. The
Free Software Foundation (FSF) provides
some guidelines about what could accurately be described as
"interacting with [the software] remotely through a computer
network." Nevertheless, Link is correct to point out that
there are commercial users who avoid the AGPL because they want to
keep their own source code secret. Ben Hutchings pointed
out that the license change may be a business tactic on Oracle's
part to squeeze additional proprietary licenses out of existing BDB
users, since AGPL widens the scope of activities that trigger the source requirement.
Russ Allbery also commented that AGPL was not
written for libraries and, as a result, has terms that are difficult to interpret for non-web applications.
I think this one is all on Oracle. They're
using a license that was never intended for a basic infrastructure
library, quite possibly in an attempt to make it obnoxious and
excessively onerous to use the open source version, or to create a
situation where nearly all users of their library are violating some
technical term of the license (or at least are close enough that a
lawsuit wouldn't be immediately thrown out) and therefore can be
shaken down for cash if Oracle feels like it.
Regardless of Oracle's specific plans for the future, the most
fundamental issue for Debian is the fact that Apt uses BDB. In
particular, apt-get is licensed under GPLv2+, so in order to link
against an AGPLv3 library, it would effectively need to be relicensed
to AGPLv3. That could be a complicated process, and there are a lot of
downstream tools built on top of Apt that would be affected.
Surý also compiled a list of the other Debian packages that depend on
BDB; while there are several that are well-known (for example,
Bitcoin, Dovecot, and evolution-data-server), Apt is maintained by the
Debian community itself.
Bradley Kuhn chimed in to suggest
that there are in fact three separate issues at hand, which could be
unintentionally conflated if not addressed separately. First, there
is the question of whether the AGPL is an acceptable license for
packages Debian includes. Second, there is the question of whether
including a core library like BDB under the AGPL has ripple effects
that are undesirable from Debian's perspective, particularly with
regard to combining BDB with other programs. Third is whether
Oracle's "history of abusive copyleft enforcement (by refusing
to allow full compliance as an adequate remedy and demanding the
purchase of proprietary licenses by license violators)" makes
the new BDB too dangerous for Debian and downstream projects. The
first question is a clear yes, he said, while the second is a
policy decision for Debian as a project. But the third is more
difficult.
In particular, he said, the fact that Oracle is now the sole
copyright holder (at least on all changes since the acquisition) gives it
considerable power should it bring a license-violation suit to court. In litigation, the
interpretation of the license becomes the issue (presumably in an AGPL
case, the "interacting with [the software] remotely through a
computer network" clause in particular). To that end, Kuhn
suggested the unorthodox solution of forking the new, 6.0-era BDB code
and continuing to improve it without Oracle's help. Such a new
project could also aid the targets of Oracle AGPL-violation lawsuits,
offering a valid alternative interpretation of the license.
Surý replied that there is a fourth issue: whether Debian
should unilaterally relicense all the BDB-dependent packages (at
least, all of those packages that can be relicensed),
rather than letting the upstreams worry about that themselves. To
that point, Kuhn clarified what is a frequently-misunderstood issue:
when Debian releases its distribution, the combined work that it
constitutes can be relicensed without relicensing any of the original
upstream packages. In other words, if Debian builds and links BIND 9
(which is under the BSD-like ISC license) against the AGPL'ed BDB 6.0,
the resulting BIND 9 binaries would be under the AGPL, but that new
license only flows downstream to subsequent redistributions of Debian.
Then again, Kuhn continued, in the long term such relicensing
issues are out of Debian's hands—the upstream projects
themselves are all impacted by the BDB license change, and will all
have to make their own decisions about how to move forward.
Similarly, Hutchings noted that if an upstream has
already made the decision to remain GPLv2-only, then that project has
already taken a stance with regard to relicensing; those projects are
unlikely to relicense their code now.
Apt questions
Still, as Surý commented in his reply to Kuhn, from a practical standpoint Debian only
wants to include one version of BDB in its archive, so it must choose
which. And since Apt is so central to Debian, that may very well
determine the direction that the distribution takes.
David Kalnischkies pointed out how little of the Apt family
is actually dependent on BDB; a single component uses BDB to cache the
contents of .deb packages. Essentially none of the Debian project
infrastructure still relies on BDB either, he said, outside of
"'just' 'some' derivatives." So migrating to a new
database would be a plausible option. On the other hand, he said,
"it's more likely that hell freezes over than that we track down
every contributor since 1997 to ask for an agreement for a license
change." Relicensing Apt itself would also force downstream
projects built on top of Apt to consider their own license change.
It would seem, then, that migrating Apt to a different database
library would be the preferable option. Dan Shearer considered BDB
features, saying that "there are
many KVP (key value pair) stores not many are transactional, allow
multi-version concurrency control and support multi-threaded and
multi-process access." He recommended taking a look at
OpenLDAP's Lightning Memory-Mapped
Database (which is alternatively referred to
as LMDB or MDB), but said the outstanding question was whether MDB's
primary maintainer, Symas, was "likely to do what Oracle just
did to BDB." Symas's Howard Chu responded to
say that MDB was under the control of OpenLDAP, not Symas itself, and
that the project required no copyright assignment from contributors.
Consequently, the company would not be in the position to relicense
MDB itself if it wanted to, and it has no interest in doing so anyway.
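As a rough illustration of the kind of transactional key-value API under
discussion, here is a short sketch using the third-party Python lmdb
binding; the database path, keys, and values are invented for the example,
and nothing here reflects how Apt actually uses its cache:

    import lmdb  # third-party binding for OpenLDAP's LMDB

    # Open (or create) an environment holding the key-value data.
    env = lmdb.open("/tmp/example-lmdb", map_size=10 * 1024 * 1024)

    # Writes happen inside a transaction that commits when the block exits.
    with env.begin(write=True) as txn:
        txn.put(b"package:apt", b"0.9.9")

    # Readers see a consistent snapshot (MVCC), even with writers active.
    with env.begin() as txn:
        print(txn.get(b"package:apt"))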
Of course, Chu's point about holding copyright is relevant as
Debian considers moving to a different database. But Michael Banck
pointed out that Oracle has not been the only
contributor to BDB over the years. Banck initially suggested
that the mix of code from inside and outside Oracle might not be
relicensable. Upon further inspection, it does seem like the
combination can be relicensed, but Kuhn suggested
taking nothing for granted. "It's probably a good idea for
someone to carefully verify these facts. The community shouldn't
assume Oracle got all this right."
After all, Oracle has a reputation in some parts of the community for
pressuring customers into buying proprietary licenses;
Kuhn at least has spoken about that
issue in the past. Certainly those licenses do not come cheap, either. The company's June 2013 price
list [PDF] ranges from US $900 to $13,800 for proprietary BDB
licenses—per processor. Regardless of the price tag
consideration, however, the simple act of swapping one license for
another is a major disruption. All of the open source projects that,
like Debian, rely on BDB are now faced with a practical dilemma
(regardless of what project members may feel about Oracle, the AGPL, or
copyleft). Surý summed up the situation quite succinctly when noting
that he does not object to the AGPL: "The evil thing is the
relicensing at the point where people depend on you, and not the
license itself."
Comments (4 posted)
Brief items
A GPLv3 only Debian
distribution is, in my opinion, about as useful as lobotomy performed
with a bazooka.
--
David Weinehall
Comments (none posted)
SUSE has
announced
the release of SUSE Linux Enterprise 11 Service Pack 3. "
Service Pack 3 gives customers more scale-up and scale-out options to run their mission-critical workloads with support for new hardware, features and enhancements. Service Pack 3 also includes all of the patches and updates released since the introduction of SUSE Linux Enterprise 11 in 2009. As a result, it is the most secure foundation available for migrating workloads from UNIX and other operating systems and running them reliably and cost effectively."
Comments (none posted)
Distribution News
Debian GNU/Linux
Debian Project Leader Lucas Nussbaum has posted his June activity report.
Topics include the state of the NEW queue, call for help: Debian Auditors,
new members in the trademark team; logo registration, delegations updates,
Debian now a member of the Network Time Foundation, follow-up on
debian-multimedia.org, and more.
Full Story (comments: none)
Fedora
Eric Christensen has a report on the Fedora Security Special Interest Group
(SIG). "
The Fedora Security SIG is coming back with a new mission and new momentum. Previously the Security SIG concentrated on security responses to vulnerabilities and answered questions from the Fedora community. While this service isn't going away we will be adding two new functions: secure coding education and code audit services."
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Mark Shuttleworth
defends the
development of the Mir display server on his blog. "
Of course,
there is competition out there, which we think is healthy. I believe Mir
will be able to evolve faster than the competition, in part because of the
key differences and choices made now. For example, rather than a rigid
protocol that can only be extended, Mir provides an API. The implementation
of that API can evolve over time for better performance, while it’s
difficult to do the same if you are speaking a fixed protocol."
Comments (65 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
July 10, 2013
A fourth
draft of the in-development HTTP 2.0 protocol was published
by the Internet Engineering Task Force (IETF) on July 8. The update
to HTTP was first published in draft form in late 2012, but this
revision is "marked for implementation," with
interoperability tests slated to begin in August. HTTP 2.0 is largely
a remapping of existing HTTP semantics that is designed to improve
throughput of client-server connections. Consequently, it should not
significantly affect the design or implementation of web
applications. Browsers and web servers, however, can use HTTP 2.0 to
replace many of the tricks they are already employing to decrease HTTP
latency.
HTTP 2.0 is adapted largely from earlier work done on SPDY, an experimental
extension to HTTP cooked up at Google, but supported by a number of
other prominent software vendors (e.g., Mozilla). The newly-published
IETF draft splits the changes up into three categories: HTTP Frames,
HTTP Multiplexing, and a new "server push" feature. HTTP Frames are
the message format for HTTP 2.0; they are designed to serialize HTTP
requests, responses, and headers in a manner suited to faster
processing. HTTP Multiplexing interleaves multiple requests and
responses over a single HTTP connection. The server push feature allows servers to pre-emptively initiate the transfer of a resource to
clients. The intent of the push functionality is that the server would
know that a request for the resource is likely to come soon (such as
another resource on the same page), so starting the transfer ahead of
time makes better use of the available bandwidth and reduces perceived
latency.
"Perceived latency" is an oft-recurring phrase in the new
protocol; most if not all of the changes are tailored to allow servers
to optimize delivery of web content to clients. Some of these changes
mimic techniques already found in other protocols (such as TCP), which
is perhaps what has prompted criticism of HTTP 2.0 from several
bloggers and social media commenters. After all, TCP already has
multiplexing and flow control, so it seems fair to ask whether
implementing them in HTTP as well really adds anything. The presumed
answer, of course, is that the web server is in the position to do
application-level flow control and make quality-of-service (QoS)
decisions while the underlying TCP stack is not. How effectively that
works out in practice will presumably be seen once interoperability
testing begins in earnest.
Frames and headers
HTTP 2.0 connections are initiated in the same way as they are in
HTTP 1.x. The first difference comes in the enclosure of requests and
responses in frames over the HTTP connection. Frames begin with an
eight-byte binary header that lists the length of the frame payload,
the frame type, any necessary flags, and a 31-bit HTTP stream
identifier. A "stream" in HTTP 2.0 is one of the independent
sequences that is multiplexed over the HTTP connection; in general
each resource (HTML page, image, etc.) could get a separate stream,
with the frames from all of the streams interleaved over the same HTTP
connection. A stream can also be kept open and used to move
subsequent requests and resources. However, the stream IDs are not
reused over the connection; once a stream is closed, a new ID is
created for the next stream. The IDs also reflect who
initiated the stream: even numbers for server-initiated streams, odd
for client-initiated.
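To make the framing concrete, here is a small Python sketch of packing and
unpacking such a header. It assumes the draft-era layout of a 16-bit payload
length, an 8-bit type, an 8-bit flags byte, and a reserved bit followed by
the 31-bit stream identifier; those field widths come from the draft rather
than from anything stated above, so treat them as an assumption:

    import struct

    # Assumed draft-era 8-byte frame header: 16-bit payload length, 8-bit
    # type, 8-bit flags, then a reserved bit plus a 31-bit stream identifier.
    HEADER = struct.Struct("!HBBI")     # network byte order, 8 bytes total

    def pack_frame(frame_type, flags, stream_id, payload):
        header = HEADER.pack(len(payload), frame_type, flags,
                             stream_id & 0x7FFFFFFF)
        return header + payload

    def unpack_frame_header(data):
        length, frame_type, flags, raw = HEADER.unpack_from(data)
        return length, frame_type, flags, raw & 0x7FFFFFFF  # drop reserved bit

    # Stream IDs are never reused; clients use odd IDs, servers even ones.
    def next_stream_id(last_id, client=True):
        candidate = last_id + 1
        if (candidate % 2 == 1) != client:
            candidate += 1
        return candidate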
The adoption of a binary header is certainly a difference from HTTP
1.x, even if the requests and responses that HTTP 2.0 frames encode
are unchanged. But the differences are designed to decrease apparent
latency. Marking the frame size at the beginning allows the
connection endpoints to better manage multiplexing the connection. In
addition, HTTP HEADERS frames can be compressed to reduce the
payload size—a feature that was previously available only for
HTTP resource data.
The frame types defined include the aforementioned
HEADERS, the DATA frames carrying page resources,
SETTINGS frames for exchanging connection parameters (such as
the number of allowable streams per connection), RST_STREAM
frames to terminate a stream, PING frames for measuring the
round-trip time of the connection, WINDOW_UPDATE frames for
implementing flow control, PRIORITY frames for marking a
specific stream as important, and the new server push frames.
Server push uses two frame types; PUSH_PROMISE is a
server-to-client frame that designates a stream ID that the server
wants to push to the client. An uninterested client can reject the
suggestion by sending a RST_STREAM frame, or accept the
DATA frames that the server sends subsequently. For dealing
with an overly pushy server, the other new frame type,
GOAWAY, is provided. Clients can send this frame to a server
to tell it to stop initiating new streams altogether. The nightmare
scenario of mobile web users paying by the kilobyte, of course, is
chatty servers pushing unwanted streams; the HTTP 2.0 specification
mandates that a server must stop initiating streams when it
is told to shut up via a GOAWAY frame.
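In rough pseudo-Python, the client-side choices described above look
something like the following; the Frame object, the send() callback, and the
want_push() predicate are hypothetical stand-ins rather than part of any
real HTTP 2.0 library:

    from collections import namedtuple

    # Hypothetical frame object; this illustrates the push-handling choices
    # above and is not a real HTTP 2.0 implementation.
    Frame = namedtuple("Frame", "type stream_id promised_stream_id payload")

    def handle_frame(frame, accepted, send, want_push=lambda frame: True):
        """accepted maps stream IDs to the payload chunks received so far."""
        if frame.type == "PUSH_PROMISE":
            if want_push(frame):
                accepted[frame.promised_stream_id] = []    # take the pushed data
            else:
                send(("RST_STREAM", frame.promised_stream_id))  # decline the push
        elif frame.type == "DATA" and frame.stream_id in accepted:
            accepted[frame.stream_id].append(frame.payload)

    def stop_all_pushes(send):
        send(("GOAWAY", None))   # no more server-initiated streams, please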
Multiplexing
Intelligently managing the stream IDs on a connection is the
essence of HTTP 2.0's multiplexing. Both the client and the server
can create new streams, and can use their streams to send resource
requests or to return data. By multiplexing several streams over one
HTTP 2.0 connection, the idea is that several requests and responses
can share the bandwidth, without setting up and tearing down a
separate connection for each request/response pair. To a degree, bandwidth sharing is already
available in HTTP 1.1's pipelining feature, but it has serious
limitations.
For example, HTTP 1.1 requires that servers answer requests in the
order that they arrive. In practice, browsers often open up multiple
HTTP connections to a single server, which consumes additional memory on
both sides, and gives the server multiple requests to handle at once.
Naturally, the web application has other mechanisms available to
decide that these connections are part of a single conversation, but
by shuffling them into a single HTTP connection, the server has less
guesswork to do and can do a better job of optimizing the connection.
Furthermore, it can do this without maintaining a large collection of
open ports and without spinning up separate processes to handle each connection.
QoS features are enabled by HTTP 2.0's WINDOW_UPDATE and
PRIORITY frames. The client can increase the amount of data
in flight by widening the flow-control window with
WINDOW_UPDATE. Or, rather than telling the server to send
more, the client can decrease the window to throttle back the
connection. The server, in turn, can designate a stream ID with a
PRIORITY frame, which allows it to send the most important
resources first, even if the requests arrived in a different order.
It is also noteworthy that the flow control options offered in HTTP
2.0 are defined to be hop-to-hop, rather than being between the server
and browser endpoints. This enables intermediate nodes like web
proxies to influence the throughput of the connection (presumably for
the better).
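The window mechanism is credit-based: a sender may only have a limited
number of bytes outstanding, and WINDOW_UPDATE frames replenish that
allowance. Here is a minimal sketch of the sender-side bookkeeping, with the
initial window size assumed purely for illustration:

    class SendWindow:
        """Credit-based flow control as seen from the data sender's side."""

        def __init__(self, initial=65535):   # initial size assumed for illustration
            self.available = initial

        def on_window_update(self, increment):
            # A WINDOW_UPDATE frame widens the window, allowing more data in flight.
            self.available += increment

        def send_some(self, payload):
            # Send only what fits in the current window; the remainder must
            # wait until the peer grants more credit.
            chunk, rest = payload[:self.available], payload[self.available:]
            self.available -= len(chunk)
            return chunk, rest

A receiver that withholds WINDOW_UPDATE frames thus throttles the sender
without any out-of-band signaling.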
Last, but not least, the HTTP 2.0 draft mandates one change to
HTTPS support. Supporting endpoints are required to implement the
Application Layer Protocol Negotiation Extension to TLS (TLSALPN),
which is itself an in-progress
IETF draft specification. TLSALPN is a mechanism for TLS clients and
servers to quickly agree on an application-level protocol; mandating
its presence means that HTTP 2.0–capable endpoints can agree on
the use of HTTP 2.0 and begin taking advantage of its throughput from
the very start of the conversation.
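For a sense of what that negotiation looks like from application code, here
is a hedged sketch using the ALPN support that later appeared in Python's
ssl module; the "h2" and "http/1.1" tokens are placeholders for whatever
protocol identifiers the endpoints advertise, and example.org stands in for
a real server:

    import socket
    import ssl

    # Sketch only: ALPN settles the application protocol during the TLS
    # handshake itself, before any application data is exchanged.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])   # preferred first, fallback second

    with socket.create_connection(("example.org", 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname="example.org") as tls:
            # The protocol the server picked, or None if it offered no ALPN.
            print(tls.selected_alpn_protocol())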
Apart from these new features, HTTP 2.0 introduces no changes to
the established semantics of HTTP: the request methods, status codes,
and headers from HTTP 1.1 remain the same. The stated goal is
to provide backward compatibility; the hope is that server and
browser implementations
can provide an HTTP 2.0 encapsulation layer and begin offering
throughput improvements without changing web pages or applications
themselves.
Both the Google Chromium and Mozilla Firefox teams have expressed
their support for the protocol, as have several of the proprietary
browser vendors. At this point, the big questions still to be
answered are things like how HTTP 2.0 will affect WebSockets, which also
implements multiplexing and QoS, but is usually considered part of
HTML5.
Microsoft had pushed for a different approach to HTTP
2.0—reusing the framing, multiplexing, and encoding semantics of WebSockets—but at this stage in the standardization process, the draft
resembles SPDY considerably more. There will no doubt continue to be
other means for clients and servers to reduce the perceived latency of
the web (for instance, by shuttling binary data over WebRTC's data
channels), but as the fundamental layer of web connections, HTTP
has more places to make an impact—not just browsers and servers,
but proxies, gateways, and all of the other nodes along the route.
Comments (13 posted)
Brief items
The assumption that most code has been debugged completely and that therefore the default behavior should be to turn off the checks and run as fast as possible is a bad joke.
—
Philip
Guenther, as
cited by wahern.
Pro-tip: "Automatic conflict resolution" is actually pronounced "snake oil"
—
Miguel de Icaza
Comments (none posted)
The
LXDE desktop environment project has a
preview of its new Qt-based development branch (including a screen shot). It uses Qt 4, but there are plans to move to Qt 5.1 eventually.
"
To be honest, migrating to Qt will cause mild elevation of memory usage compared to the old Gtk+ 2 version. Don't jump to the conclusion too soon. Migrating to gtk+ 3 also causes similar increase of resource usage.
Since gtk+ 2 is no longer supported by its developer and is now being deprecated, porting to Qt is not a bad idea at the moment."
Comments (26 posted)
Version 2.7.0 of the Ganeti VM cluster–management utility has been released. This version incorporates quite a few changes, such as automatic IP address allocation, support for arbitrary external storage, and opportunistic locking to speed up the creation of new instances. As the release notes explain: "If the ``opportunistic_locking`` parameter is set the opcode will try to acquire as many locks as possible, but will not wait for any locks held by other opcodes. If not enough resources can be found to allocate the instance, the temporary error code :pyeval:`errors.ECODE_TEMP_NORES` is returned. The operation can be retried thereafter, with or without opportunistic locking."
Full Story (comments: none)
The creator of the
photographer.io
photo-sharing site has
announced
that the source for the site is now available under the MIT license.
"
On a more selfish note; I’m really excited to have something I feel
is worth open sourcing. I’ve spent 2 months working nearly every evening to
get the site to a stable point and I’m generally pleased with where it is
(other than the dire test suite)."
Comments (none posted)
Version 3.7.0 of the GNU Radio software-defined radio suite is out.
"
This is a major new release of GNU Radio, culminating a year and a
half of side-by-side development with the new features added to the GNU
Radio 3.6 API. With the significant restructuring that was occurring as
part of the 3.7 development, at times this seemed like rebuilding a race
car engine while still driving it."
Full Story (comments: none)
A "marked for implementation" draft of the HTTP 2.0 protocol specification
has been
released.
"
The HTTP/2.0 encapsulation enables more efficient use of network
resources and reduced perception of latency by allowing header field
compression and multiple concurrent messages on the same connection. It
also introduces unsolicited push of representations from servers to
clients."
Comments (43 posted)
The KDE project is contemplating
a proposal to
move to three-month releases. The
announcement of the proposal says:
"
Basically the idea is to cut testing time and compensate it by
keeping master always in a 'releaseable' state, now that two major
components are frozen it looks like it is a good time to get used to
it." The resulting discussion looks to go on for a while; see also
this thread where the openSUSE project
registers its disapproval of the idea.
Comments (31 posted)
Version 4.3 of the Xen paravirtualization system is out. New features
include ARM support (both 32- and 64-bit), use of upstream QEMU, NUMA
affinity in the scheduler, and more; see
the Xen 4.3
feature list and
the release
notes for details.
Full Story (comments: 4)
Version 2.0 of the GNU Health hospital information system is available. This version adds a number of new features, starting with an OS-independent installer, and including compatibility with Tryton 2.8. New modules add support for working with World Health Organization essential medicines, American Society of Anesthesiologists physical status classifications, and in-home health care.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
The H
introduces a new Scheme-based programming language for GPUs. "
Scheme serves as the basis because it has history at Indiana University, where some previous experience of building Scheme-based compilers has been gathered. [Erik] Holk has also gained GPU programming experience working with the Rust programming language; unlike Harlan, however, this language works much closer to the hardware. Holk reveals that the name Harlan comes from a mishearing of fried chicken icon Colonel Sanders' first name, Harland, and this association is also why all the file extensions for Harlan programs are .kfc."
Comments (57 posted)
KDE.News has an
interview with Malcom Moore, who is the network manager for Westcliff High School for Girls Academy (WHSG) in the southeast of England. WHSG switched the students to Linux desktops (openSUSE 12.2 and KDE Plasma 4.10) a year ago, and Moore talks about the motivations for the switch, the transition, the reactions to Linux and Plasma, hurdles that were overcome, and so on. By the sound of it, the conversion from Windows has been quite successful, though not without its challenges. "
The Senior Leadership Team grilled me in two long meetings which was fun! Once you actually take a step back from the misconception that computers = Windows and actually seriously think about it, the pros clearly outweigh the cons. The world is changing very quickly. There is a survey that reports in 2000, 97% of computing devices had Windows installed, but now with tablets and phones, etc., Windows is only on 20% of computing devices, and in the world of big iron, Linux reigns supreme. We specialize in science and engineering and want our students to go on to do great things like start the next Google or collapse the universe at CERN. In those environments, they will certainly need to know Linux."
Comments (87 posted)
Page editor: Nathan Willis
Announcements
Brief items
The Daily Durham site is
reporting
that Fedora developer Seth Vidal has been killed in a hit-and-run
accident. Seth was perhaps best known as the creator of the "Yum" package
management system; he will be much missed in the community.
Update: see also this note from
Fedora project leader Robyn Bergeron. "To say he will be missed is
an understatement. He has been a colleague, a team member, a source of
wisdom and advice, and above all, a friend and inspiration to countless
people in the Fedora community over the past decade. His seminal and
invaluable work in Fedora and free software will live on for years to come,
and the legacy of his spirit will stay with the community, and with many of
us individually, forever."
Comments (24 posted)
Articles of interest
The Free Software Foundation Europe newsletter for July covers GPL
enforcement, the European Commission and vendor lock-in, surveillance, free
software in Turkey, and several other topics.
Full Story (comments: none)
Calls for Presentations
For those of you who haven't gotten around to putting in your linux.conf.au
speaking proposals yet: you now have a little more time. "
The Call
for Proposals (CFP) has now been open for four weeks, and the quality of
submissions so far has been fantastic. Originally scheduled to be closed
this week, the papers committee has agreed to extend the deadline by two
weeks, as there's been some requests for extension from potential speakers,
and we want to make sure that everyone [has] a chance to have their proposal
considered!" The new deadline is July 20.
Full Story (comments: none)
Puppet Camp will take place November 28 in Munich, Germany. This
announcement states that the call for papers deadline is October 15, but
the web site says October 1. It's probably best to get your proposals in
sooner rather than later.
Full Story (comments: none)
FLOSS UK 'DEVOPS' will take place March 18-20, 2014 in Brighton, UK. The
call for papers closes November 15, 2013.
Full Story (comments: none)
CFP Deadlines: July 11, 2013 to September 9, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| July 15 | August 16–August 18 | PyTexas 2013 | College Station, TX, USA |
| July 15 | October 22–October 24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| July 19 | October 23–October 25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| July 20 | January 6–January 10 | linux.conf.au | Perth, Australia |
| July 21 | October 21–October 23 | KVM Forum | Edinburgh, UK |
| July 21 | October 21–October 23 | LinuxCon Europe 2013 | Edinburgh, UK |
| July 21 | October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| July 22 | September 19–September 20 | Open Source Software for Business | Prato, Italy |
| July 25 | October 22–October 23 | GStreamer Conference | Edinburgh, UK |
| July 28 | October 17–October 20 | PyCon PL | Szczyrk, Poland |
| July 29 | October 28–October 31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| July 29 | October 29–November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| July 31 | November 5–November 8 | OpenStack Summit | Hong Kong, Hong Kong |
| July 31 | October 24–October 25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| August 7 | September 12–September 14 | SmartDevCon | Katowice, Poland |
| August 15 | August 22–August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 18 | October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| August 19 | September 20–September 22 | PyCon UK 2013 | Coventry, UK |
| August 21 | October 23 | TracingSummit2013 | Edinburgh, UK |
| August 22 | September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| August 30 | October 24–October 25 | Xen Project Developer Summit | Edinburgh, UK |
| August 31 | October 26–October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| August 31 | September 24–September 25 | Kernel Recipes 2013 | Paris, France |
| September 1 | November 18–November 21 | 2013 Linux Symposium | Ottawa, Canada |
| September 6 | October 4–October 5 | Open Source Developers Conference France | Paris, France |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
Events: July 11, 2013 to September 9, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| July 6–July 11 | Libre Software Meeting | Brussels, Belgium |
| July 8–July 12 | Linaro Connect Europe 2013 | Dublin, Ireland |
| July 12 | PGDay UK 2013 | near Milton Keynes, England, UK |
| July 12–July 14 | 5th Encuentro Centroamerica de Software Libre | San Ignacio, Cayo, Belize |
| July 12–July 14 | GNU Tools Cauldron 2013 | Mountain View, CA, USA |
| July 13–July 19 | Akademy 2013 | Bilbao, Spain |
| July 15–July 16 | QtCS 2013 | Bilbao, Spain |
| July 18–July 22 | openSUSE Conference 2013 | Thessaloniki, Greece |
| July 22–July 26 | OSCON 2013 | Portland, OR, USA |
| July 27 | OpenShift Origin Community Day | Mountain View, CA, USA |
| July 27–July 28 | PyOhio 2013 | Columbus, OH, USA |
| July 31–August 4 | OHM2013: Observe Hack Make | Geestmerambacht, the Netherlands |
| August 1–August 8 | GUADEC 2013 | Brno, Czech Republic |
| August 3–August 4 | COSCUP 2013 | Taipei, Taiwan |
| August 6–August 8 | Military Open Source Summit | Charleston, SC, USA |
| August 7–August 11 | Wikimania | Hong Kong, China |
| August 9–August 11 | XDA:DevCon 2013 | Miami, FL, USA |
| August 9–August 12 | Flock - Fedora Contributor Conference | Charleston, SC, USA |
| August 9–August 13 | PyCon Canada | Toronto, Canada |
| August 11–August 18 | DebConf13 | Vaumarcus, Switzerland |
| August 12–August 14 | YAPC::Europe 2013 “Future Perl” | Kiev, Ukraine |
| August 16–August 18 | PyTexas 2013 | College Station, TX, USA |
| August 22–August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 23–August 24 | Barcamp GR | Grand Rapids, MI, USA |
| August 24–August 25 | Free and Open Source Software Conference | St.Augustin, Germany |
| August 30–September 1 | Pycon India 2013 | Bangalore, India |
| September 3–September 5 | GanetiCon | Athens, Greece |
| September 6–September 8 | State Of The Map 2013 | Birmingham, UK |
| September 6–September 8 | Kiwi PyCon 2013 | Auckland, New Zealand |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol