GUADEC is the largest regular meeting of GNOME developers, so it
always marks the rollout of a variety of new additions to the
platform. At the 2013 event in Brno, Czech Republic, the new work
includes a significantly revamped geolocation framework, a
library-based approach to email, and predictive text input.
Geolocation
Zeeshan Ali spoke about GNOME's geo-awareness, which is undergoing
a rewrite. Geo-awareness consists of four major pieces, he said. The
first is geolocation, or the "where am I?" question. The second is
the opposite; the user wants to find a different location: a
particular address, a nearby restaurant or gas station, or other
points of interest. The third issue is routing, finding the best way
to get between locations. Finally, there is the user interface topic:
locations, points of interest, and routes all need to be presented to
the user on a map.
GNOME has had the GeoClue library for several years, Ali continued,
but it has always been difficult to use: its APIs were complicated,
and it exposed too many details to applications and users. For
example, it provided separate APIs for acquiring location information
from each of the different data sources (GPS, IP address mapping,
etc.). Consequently, Ali and Bastien Nocera have rewritten the
library as GeoClue2 with the explicit goal of simplicity.
GeoClue2 can determine location from four different sources:
coordinates from GPS devices (the most accurate), the location of
nearby WiFi access points (accurate to a few hundred meters), the
location of 3G cellular towers (accurate only to a few kilometers),
and IP addresses (accurate only down to the city level).
GeoClue2 also offers better privacy controls; the previous version
of the library would provide the current location to any application;
with GeoClue2, GNOME will require the user to confirm location
requests from each application. There are, of course, other privacy
issues involved in geo-awareness. For example, Ali mentioned that
Google had stirred up controversy when it mapped the SSIDs of WiFi
access points. GeoClue2 will not use the Google WiFi database because
doing so requires Google's permission. Instead, it plans to use an
open WiFi database, but the privacy concerns are not entirely clear-cut
with other services either.
GNOME's place-finding functionality is implemented in a library
called geocode-glib, written by Nocera. It provides both geocoding
(that is, taking a feature like a city name or street address and
transforming it into latitude/longitude coordinates) and
reverse-geocoding (which does the opposite, taking coordinates and
identifying the closest street address). This library used Yahoo's
Places API in the
past, but has since been migrated to the open data
service Nominatim,
which is based on Open Street Map (OSM) maps, and
is also more complete than Places.
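The two directions can be made concrete against Nominatim's public HTTP API. The sketch below only builds request URLs; the endpoint paths and the q, lat, lon, and format parameters belong to Nominatim's web interface, not to geocode-glib itself, and the coordinates are illustrative:

```python
from urllib.parse import urlencode

NOMINATIM = "https://nominatim.openstreetmap.org"

def geocode_url(query):
    """Forward geocoding: free-form place name -> search request URL."""
    return NOMINATIM + "/search?" + urlencode({"q": query, "format": "json"})

def reverse_geocode_url(lat, lon):
    """Reverse geocoding: coordinates -> closest-address request URL."""
    return NOMINATIM + "/reverse?" + urlencode(
        {"lat": lat, "lon": lon, "format": "json"})

print(geocode_url("Brno, Czech Republic"))
print(reverse_geocode_url(49.1951, 16.6068))
```

geocode-glib wraps this kind of request behind a GObject API, so applications never deal with URLs directly.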
GNOME already has a solid mapping UI component called libchamplain,
which renders OSM maps by default. There is also a new mapping
application slated for release with GNOME 3.10, called GNOME Maps.
The routing question is not solved; there is currently a Google
Summer of Code student working on developing a routing system. It is
a tricky problem because routing can involve several distinct modes of
transportation: walking, cycling, driving, and public transport. The
initial plan was to use Open Source Routing Machine
(OSRM), but the public web service OSRM provides is car-only. In
addition, running a separate OSRM instance (an option which would
allow GNOME to find routes using OSM's bike path data, for example) is very demanding: the
project recommends 128GB of RAM for the server.
Other options were discussed at a BoF session on August 7,
including the GraphHopper
service, which is less demanding on the server side. GraphHopper may
also provide an easier solution to the public transport problem, which
is tricky in its own right. The de facto solution for publishing
public transportation schedules is the Google Transit Feed
Specification (GTFS), which is widely used, but there are still a
variety of countries and cities that offer their own transit
information APIs. Ultimately, a plugin-based approach may make the
most sense (although such "plugins" would not be user-installed components).
The BoF also touched on where users would want to integrate
geo-awareness in the GNOME desktop. There are certainly a lot of
possibilities, from linking addresses in contact entries to the Maps
application, to automatically recognizing and linking address-like
text in emails or chats, to allowing the user to see the location of
geotagged photos. Here again, there are privacy concerns to be worked
out, as well as how best to present the potentially bewildering array
of matches to a geocoding search.
Email as a desktop service
Srinivasa Ragavan spoke about his ongoing work splitting the
Evolution email client up into a shared library and a separate
front-end. His goal is to create a desktop "mail service" that
operates over D-Bus; this would allow for multiple front-end client
options as well as better integration with other GNOME desktop
components.
Stand-alone email clients like Evolution are a bit of a dated
concept, Ragavan said. Mobile device platforms have made users
accustomed to simple "send via email" actions that can be performed
from other applications, but Evolution is not capable of offering such
functionality. Instead, the Evolution application must be fully
started up, complete with checking for new mail from the IMAP server.
In addition, the popularity of GMail as the IMAP provider behind the
scenes causes other problems. GMail essentially offers one giant
INBOX folder, which requires considerable time and memory to load.
Ragavan's plan is to split Evolution's account-authentication,
message-retrieval, and mail-sending functionality into a separate
library called Evolution Mail
Factory (e-mail-factory). GNOME would store a user's IMAP
credentials and start a session daemon to log in to configured email
accounts automatically when the user logs into the desktop.
E-mail-factory would download INBOX messages locally, using the
notification system to alert the user without requiring the Evolution
front-end to start up.
Splitting out e-mail-factory would have other benefits as well, he
said, such as downloading messages during idle periods, and enabling
search of message contents and attachments from other applications.
Desktop integration would allow the lock screen to display new-email
notifications (which is not currently possible), or allow the
new-message-notification pop-up to include an inline reply function.
It would also allow other applications to send email without starting
up the Evolution GUI client. He mentioned LibreOffice and the
Nautilus file manager in particular as applications whose current
"send via email" functionality is painfully awkward.
Progress toward this goal is slow, but moving forward. Ragavan has
split e-mail-factory out in his own Evolution builds, dating back to
the 3.6 series. He is currently working on the D-Bus email API, which
he described as "bare-bones" at present, and he has developed a test
suite. Still to come is porting the Evolution front-end to
e-mail-factory, and the final API for client applications to search
and fetch messages.
Predictive text input
Anish Patil and Mike Fabian spoke about their work adding
predictive input for the Intelligent Input Bus (IBus). IBus is the
Input Method (IM) framework used by GNOME; it speeds up text entry
for writing systems that do not match up with the physical keyboard.
Logographic languages with more symbols than fit onto a keyboard and
languages that employ frequent accents can both require considerably
more keystrokes than users find convenient; Patil observed that it is
not uncommon in some languages to require nine keystrokes to type five
characters. Enabling the IM to accurately predict text input can cut
down considerably on the keystrokes required, which in turn makes
users happier.
Patil then provided a bit of background on how predictive text
input works. Predictions can be based on simple statistics, or on
more complex probabilistic models. The simple statistics approach
would include counting the relative occurrences of letters in a
particular language, or the odds that one word could follow
another according to the rules of the language. More complex
approaches include Markov models, where the probability of a word
occurring next is determined by studying the previous history of the
actual text typed. Markov models can be trained on a set of
training texts, and offer probabilities based on unigrams (single
words), bigrams (pairs of words), or even longer sequences.
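The bigram case can be illustrated with a toy predictor (a hypothetical sketch, not the actual IBus Typing Booster code): training only requires counting which words follow which.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """First-order Markov model over words: candidates for the next
    word are ranked by how often they followed the current word in
    the training text."""

    def __init__(self):
        self.following = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def predict(self, word, n=3):
        """Up to n candidate next words, most frequent first."""
        return [w for w, _ in self.following[word.lower()].most_common(n)]

p = BigramPredictor()
p.train("the cat sat on the mat and the cat ran")
print(p.predict("the"))  # "cat" followed "the" twice, "mat" once
```

A production system would smooth these counts and fall back to unigram frequencies for unseen words; the ranking idea is the same.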
The first implementation the two demonstrated was IBus Typing
Booster, which uses a Markov model to predict words, and can be
used with any IM supported by IBus. As the user types words, the
booster pops up autocompletion suggestions which the user can choose
from. They showed it using an onscreen software keyboard, which would
be helpful for mobile and touch-screen interfaces, but of course it
works with hardware keyboards as well. IBus Typing Booster is
limited, however, in that it can only be used by IBus, which
is far from the only IM framework available to GNOME users. It also
relies on the hunspell dictionaries for its word lists, which vary in
quality depending on the language.
Fabian then described the replacement tool in the works; the two
are porting IBus Typing Booster over to a new library called libyokan, which will
support other IM frameworks. It will handle all of the key events;
applications will only have to subscribe to the prediction events and
can present them to the user however they choose. Although the team
is making progress, the speakers did say they are in need of help
testing the new framework, improving the hunspell dictionaries, and
assembling a quality corpus of free training texts.
GUADEC is always a showcase for new GNOME work and experimentation;
one can never know for sure which bits will see the light of day in an
upcoming stable release and which will be radically reworked while still in
development. The new geo-awareness features are on track to become a
major new feature, while the near-term future of the Evolution
re-factoring effort and predictive text input are not as clear.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]
Packaging applications for Linux is a topic that can expand to fill the
available discussion time—there are security issues, shared
library concerns, privacy implications, and worries about upgrades,
among other subjects. At GUADEC 2013 in Brno, Czech
Republic, the GNOME project discussed the possibility of supporting
the installation of fully sandboxed "apps" like those found on mobile
phone platforms. Such apps would never replace normal application
packages provided in RPM or Debian files, but supporting them would impact
the GNOME platform in quite a few places.
Lennart Poettering introduced the concept in a session on the first
day of the event, and it was revisited later in the week at a
birds-of-a-feather (BoF) session. The goal, Poettering explained, was
to ensure that GNOME (or any other desktop Linux system) could support
apps downloaded from untrusted sources on the Internet without
compromising security. Such apps would be end-user programs, not
system services like Apache, and they would by definition not include
programs already provided by the distributions, like GIMP or Firefox.
User apps downloaded from the Internet differ from distribution-provided applications in several important ways, he said. Obviously,
the fact that the source is untrusted means that they should be
isolated from the system, from private user data, and from other
applications as much as possible. But they may still need a stable
API or ABI in order to provide useful functionality—such as that of
GNOME Shell extensions. The same user-level app may also be
installed separately by multiple users, which is not what most
distribution packages are designed for, and if they follow the "app
store model" they will likely be distributed as single-file bundles.
The trust issues can largely be dealt with by technical means,
Poettering said, while there are policy decisions to be discussed for
other issues, such as what APIs the system should provide. But
regardless of the specifics, he said, GNOME should offer support for
apps, since it is in a position to do so in a free, community-driven,
and vendor-agnostic manner.
9½ feats
Getting to that point breaks down into nine simple steps, he said
(although, thanks to the presence of a "step 2.5," the total list
ran a bit longer than most people would probably consider to be
"nine"). Poettering argued in favor of implementing much of the
security sandboxing at the lowest level possible, using kernel tools,
on the grounds that the lower the level, the greater the security, and
the less developers and users
would have to think about it. User-space application isolation cannot
be done in a simple way, he said.
The first step is to get kdbus, the in-kernel implementation of
D-Bus, completed, he said. This will serve as a secure conduit for
passing messages in and out of each app sandbox, allowing kernel-level
enforcement of message isolation. The systemd side of kdbus is "very
near working," he said, although it needs porting to the latest D-Bus API.
But it should be in a presentable form by the end of the year.
The second step is to implement app sandboxes, built on top of
namespaces, seccomp, control groups, and capabilities. These are all
very generic kernel tools, he observed, which makes them flexible.
Each app should get its own control group, he suggested, which will
enable several other useful features, such as the ability to freeze
background apps or boost the foreground app. Several mobile Linux
platforms have implemented this freeze/boost feature, including Android and Maemo.
Step 2.5 flows from this sandbox definition; sandboxed apps will
need to be well-contained, including where they can store bundled
libraries and where they can place temporary data. This will dictate
the specification of an app-centric analogue to the Filesystem
Hierarchy Standard (FHS) to define where apps can store what, he
said. It may not need to be done within the FHS itself, he added, but
it is not a GNOME-specific subject, either—getting it right is a
matter of getting the right people together to hash it out.
Step three is a feature that Poettering called "portals": an
inter-process communication (IPC) infrastructure defining what apps
can request, and how the system responds, across the sandbox
boundary. Portals are akin
to Android's Intents,
he said: they would be an interactive security scheme that doubles as an integration
technology. Portals would be run over kdbus. In an example scenario, one app could request a photo from the
system's webcam; the system could respond by telling the app what
other apps had registered to provide the webcam photo-access feature
(if there is more than one), and when the photo is snapped, kdbus
would return the image to the app that made the original request.
This means the app can be isolated from the photo-taking function (and
the hardware access it requires), which is better for security and
means there is less for the app author to implement.
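The broker pattern behind portals can be sketched in plain Python. Everything here is invented for illustration (the class, the portal name, the message shape); the real mechanism would run over kdbus, with user prompting and policy enforcement in the broker.

```python
class PortalBroker:
    """Trusted intermediary between sandboxed apps and privileged
    providers: the requesting app never touches the hardware."""

    def __init__(self):
        self.providers = {}  # portal name -> handler callable

    def register(self, portal, handler):
        self.providers[portal] = handler

    def request(self, app_id, portal, **args):
        if portal not in self.providers:
            raise PermissionError(f"{app_id}: no provider for {portal!r}")
        # A real broker would prompt the user / consult policy here.
        return self.providers[portal](**args)

def camera_provider(resolution="640x480"):
    # Stands in for the application registered to take webcam photos.
    return {"resolution": resolution, "image": b"\x89PNG..."}

broker = PortalBroker()
broker.register("org.example.Camera.TakePhoto", camera_provider)
photo = broker.request("sandboxed-app", "org.example.Camera.TakePhoto",
                       resolution="1280x720")
print(photo["resolution"])  # 1280x720
```

The key property is that the sandboxed app sees only the returned data, never the device, which matches the security and code-reuse argument above.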
Step four involves working out the best way to package apps as compressed
filesystems. A requirement of the "app" model, he said, is that apps
need to be deliverable to users as a single file, for ease of use and
portability. A similar approach is used by Mac OS X, he said,
where .app bundles appear to be a single file. On Linux, he said,
they would probably involve compressing multiple
filesystem partitions—which would then be mounted to the system with a
loopback filesystem. There would need to be separate partitions for
each architecture supported (32-bit and 64-bit x86, plus ARM), as well
as one for common files.
Step five would involve extending GLib and related GNOME libraries
to support the sandboxed apps. In particular, the desktop needs to be
able to recognize when a new app is uncompressed and mounted to the system, and treat
its executables like a system-wide program, making it available in the
application launcher, search framework, and so on. Step six would involve deploying a
sandbox-aware display manager, presumably Wayland. Among other
things, the display manager would need to support cut-and-paste and
drag-and-drop between sandboxed apps and the system. Step seven would
involve defining a configuration scheme in dconf for sandboxed apps.
Step eight is defining a set of "profiles" for apps: target sets of
libraries and APIs that app developers can test against in order to
release their code. Poettering described several possible profiles,
such as a generic Linux Standard Base (LSB) profile, a profile for
games (which tend not to receive many updates and thus need long-term
API stability), and a GNOME profile, which would offer better
integration with desktop services than a generic profile like the LSB
option. The final step is defining an app store model, and supporting what are
hopefully multiple, independent app stores from various vendors.
Open questions
Time ran short in the session, in part because Poettering took
several questions during the early steps of the nine-point plan (in
fact, the final few steps in the plan had to be explained in rapid
succession and in less detail). But there was one pointed question
raised at the very end of the talk: how would an "app" model for GNOME
or other Linux desktop environments handle the desire of app
developers to bundle shared libraries into their downloads?
Poettering admitted that this is a problem that needs solving, but he
contended that the approach should be "support bundled libraries, but
deal with the fact that they suck." And, he added, they suck
primarily because of the security problems that they cause. Thus, they must be
addressed with security tools, and providing a solid sandbox is the
correct approach.
There are plenty of other challenges ahead for GNOME (or any other
Linux platform) interested in offering an "app store"-like experience
on par with those touted by Android and iOS. Due to the time
constraint, the later steps in Poettering's nine-step plan barely got
any attention at all, and some of them are quite complex—such as
defining the "profiles" to be offered for app authors, and then
maintaining the profiles' advertised APIs in a stable fashion over
multiple release cycles. The "portals" concept (which has been under
discussion for some time already)
and the packaging of compressed app images (an idea that is being
investigated by several independent free software projects already)
spawned quite a few questions during the talk, questions to which the
answer often involved some variation of "we still need to discuss
this."
The discussion of exactly what interfaces would be covered by
the "portals" began at the August 7 App Sandboxing BoF session, but
there are plenty of questions remaining. The big one is the simplicity and
scope of the portal definitions themselves (e.g., how many options
should be accessible in the "print this document" portal), but there
are others as well, such as how the permissions system would allow
users to choose or restrict an app's access to a portal, and whether
system limitations like the maximum message size of D-Bus will prove
to be a roadblock for certain portals.
Attendees pointed out several times in the "hallway track" at
GUADEC that the project is currently only at step two of Poettering's
nine-and-a-half steps. It will, no doubt, be quite a long time
before any "app store" for GNOME reaches users. But it is also clear
from the chatter at the conference that most people recognize the need
to pursue such a goal. For some, the target is much simpler (such as
providing a way to run Android apps on the Linux desktop, or
supporting packaged HTML5 web-apps), but sitting still and not
exploring the next generation of application delivery systems is
simply not a viable option for GNOME—nor for any other Linux
environment that sets out to attract end users.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]
By Jonathan Corbet
August 7, 2013
Tor is a project intended to make
anonymous network browsing globally available. By encrypting connections
and routing them through a random set of intermediary machines, Tor hopes
to hide the identity and location of users from anybody who might be
attempting to spy on their activities. One can only imagine that recent
revelations about the scope of governmental data collection will have
increased the level of interest in tools like Tor. So the recent news of a
compromise of the Tor system with the potential to identify users is
certain to have worried a number of people; it also raises some interesting
questions about how projects like Tor should deal with security issues.
What happened
The Tor hidden
service protocol allows services to be offered anonymously through the
Tor network. Many of these services, it seems, are concentrated on servers
hosted by
a company called Freedom Hosting. They vary from well-known services like
Tor Mail to, evidently, a wide range of
services that most of us would rather not know about at all. The alleged
nature of
some of those services was recently emphasized when Eric Eoin Marques, the
alleged owner of Freedom Hosting, was arrested
on child pornography charges.
About the same time, users of various hidden services started reporting
that those services were sending a malicious JavaScript program to their
browsers. This program exploited a vulnerability in the Firefox browser to
gather information about the identity of the user and send it off to an IP
address that, it has been claimed,
is currently assigned to the US National Security Agency (though some backpedaling
is happening with regard to that claim). The exploit does
not appear to have been used for any other purpose, but what was done is
enough: Tor users hit by this code may have lost the anonymity that they
were using Tor to obtain in the first place.
Who are those users? The hostile code was designed specifically for users
running the Tor Browser
Bundle (TBB) on Windows systems. TBB is based on the Firefox Extended
Support Release with a number of security and
anonymity features added on
top. Anybody using Tor in a different configuration — or who was using a
current version of TBB — will not be vulnerable to this particular
attack. Linux users, perhaps on a system like Tails, were not targeted, but there is
probably no inherent reason why an exploit for Linux systems would not have
worked.
The specific vulnerability exploited by this attack is MFSA-2013-53,
otherwise known as CVE-2013-1690.
This vulnerability was patched in Firefox ESR 17.0.7, released on
June 25, 2013. The Tor project incorporated this update and released
new TBB versions one day later. So anybody who updated their TBB
installation in the time between June 26 and when the exploit was
launched will never have been vulnerable. Those who didn't get around to
updating found themselves in a rather less comfortable position.
What should Tor change?
Needless to say, Tor users have been shaken by this series of events and
would very much like to avoid seeing a repeat in the future. So there has
been a fair amount of discussion regarding TBB and how the Tor project
responds to vulnerabilities. But it is not at all clear that massive
improvements are possible.
One possibility is to reduce the attack surface of the browser by disabling
JavaScript; taking away the ability to execute code in the browser's
address space would make a lot of attacks impractical. TBB does ship with
the invaluable NoScript extension, but,
as any NoScript user quickly discovers, turning off JavaScript breaks a lot
of web sites. One does not know anger until one discovers, at the end of
filling in a long web form, that the "submit" button runs a JavaScript
snippet (usually for some relatively useless purpose) and the form cannot
be submitted. So, in the interest of having TBB actually be usable, recent
versions of TBB ship with NoScript configured to allow JavaScript on all
sites.
There is a
project in the works to equip TBB with a "security slider" that would
allow users to select the balance between security and usability. But
that feature is not yet ready for release; in the meantime, TBB users may
want to consider enabling NoScript on their own. But, as the Tor project
pointed out in its
August 5 advisory, disabling JavaScript is far from a complete
solution:
And finally, be aware that many other vectors remain for
vulnerabilities in Firefox. JavaScript is one big vector for
attack, but many other big vectors exist, like css, svg, xml, the
renderer, etc.
There has been a certain amount of complaining that the Tor project
silently fixed the vulnerability in June when it should have been making
sure that users knew about the scale of the problem. Some users have gone
so far as to state that TBB is a forked
version of Firefox and, as such, it should be issuing its own security
advisories.
The problem with this idea, of course, is that the Tor project is not
really in a position to understand all of the many fixes applied by the Firefox
developers. Even then, it is not always clear at the outset — even to
Firefox developers — that a
specific bug is exploitable. As Tor developer Jacob Appelbaum put it, the project just does not have the
resources to duplicate the advisories that Mozilla is already issuing when
it releases a browser update:
We're understaffed, so we tend to pick the few things we might
accomplish and writing such advisory emails is weird unless there
is an exceptional event. Firefox bugs and corresponding updates are
not exceptional events.
Experience quickly shows that security advisories are also far from being
exceptional events; more advisories would not necessarily convince that
many more users to upgrade their TBB installations. TBB already checks to
see if it is out of date and informs the user if an upgrade is available;
there is talk of making that notification stronger, especially as the time
since the update was released increases. Automatic updates were also
discussed, but there seems to be little interest in taking that path; there
seems to be some fear that the update mechanism itself could be targeted by
attackers.
In the end, there are a couple of straightforward conclusions that can be
drawn, starting with the fact that there may not be a whole lot the Tor
project can do to avoid a repeat of this type of attack. The simple truth
seems to be that we, as a community, have neither the resources nor the
skills to properly defend ourselves against attackers who have the
resources of national governments behind them. Software as complex as a
browser is always going to have vulnerabilities in it, even if its
developers are not constantly adding new features. Providing software of
that complexity that is sufficiently secure that people can depend on it
even when their lives are at stake is one of the great challenges of our
time; so far, we have not found the answer.
Page editor: Jonathan Corbet
Security
By Jake Edge
August 7, 2013
An attack against encrypted web traffic (i.e. HTTPS) that can reveal
sensitive
information to observers was presented at the
Black Hat
security conference. The vulnerability is not any kind of
actual decryption of HTTPS traffic, but can nevertheless determine whether
certain data is present in the page source. That
data might include email addresses, security tokens, account numbers, or
other potentially sensitive items.
The attack uses a modification of the CRIME
(compression ratio info-leak made easy) technique, but instead of targeting
browser
cookies, the new attack focuses on the pages served from the web server
side. Dubbed BREACH
(browser reconnaissance and exfiltration via adaptive compression of
hypertext—security researchers are nothing if not inventive with names),
the attack was demonstrated
on August 1. Both CRIME and BREACH require that the session use
compression, but CRIME needs it at the
Transport Layer Security (TLS, formerly Secure Sockets
Layer, SSL) level, while
BREACH only requires the much more common HTTP compression. In both cases,
because the data is
compressed, just comparing
message sizes can reveal important information.
In order to perform the attack, multiple probes need to be sent from a
victim's browser to the web site of interest. That requires that the
victim get infected with some kind
of browser-based malware that can perform the probes. The usual mechanisms
(e.g. email, a compromised web site, or man-in-the-middle) could be used to
install the probe. A
wireless access point and router would be one obvious place to house this
kind of attack as it has the man-in-the-middle position to see the
responses along with the ability to
insert malware into any unencrypted web page visited.
The probes are used as part of an "oracle" attack.
An oracle attack is one where the attacker can send multiple different
requests to the vulnerable software and observe the responses. It is, in
some ways, related to the "chosen plaintext" attack against a cryptography
algorithm. When trying to break a code, arranging for the "enemy" to
encrypt your message in their code can provide a wealth of details about
the algorithm. With computers, it is often the case that
an almost unlimited number of probes can be made and the results analyzed. The
only limit is typically time or bandwidth.
BREACH can only be used against sites that reflect the user input from
requests in their responses. That allows the site to, in effect, become an
oracle. Because the HTTP compression will replace
repeated strings with shorter constructs (as that is the goal of the
compression), a probe response with a (server-reflected) string that
duplicates one
that is already present in the page will elicit a shorter response than a
probe for an unrelated string. Finding that a portion of the
string is present allows the probing
tool to add an additional digit or character to the string, running
through all the possibilities and checking for a match.
For data that has a fixed or nearly fixed format (e.g. email
addresses, account numbers, cross-site request forgery tokens), each probe
can try a variant (e.g. "@gmail.com" or "Account number: 1") and compare
the length of the reply to that of one without the probe. Shorter responses
correlate to correct guesses, because the duplicated string gets compressed
out of the response. Correspondingly, longer responses are for incorrect
guesses. It is
reported that 30 seconds is enough time to send enough probes to
essentially brute force
email addresses and other sensitive information.
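The length side channel itself is easy to reproduce with zlib. In this hedged sketch (the page layout and token are invented), a probe matching a longer prefix of the secret produces a shorter compressed response, because the repeated string is replaced by a back-reference while a wrong guess costs extra literal bytes:

```python
import zlib

SECRET = "csrftoken=6f8a02d1"

def response_length(probe):
    """Length of a compressed page that reflects the attacker's
    probe alongside the secret, as HTTP compression would see it."""
    page = f"<html>{SECRET} ... search results for: {probe}</html>"
    return len(zlib.compress(page.encode()))

right = response_length("csrftoken=6f8a")  # matches a 14-byte prefix
wrong = response_length("csrftoken=xqwz")  # matches only 10 bytes
print(right, wrong)  # the matching probe compresses shorter
```

Repeating this comparison one character at a time is exactly the brute-force loop described above.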
Unlike CRIME, which can be avoided by disabling TLS
compression, BREACH will be more difficult to deal with. The researchers
behind BREACH list a number of mitigations, starting with
disabling HTTP compression. While that is a complete fix for the
problem, it is impractical for most web servers because of the
additional bandwidth it would require; it would also increase page
load times.
Perhaps the most practical solution is to rework applications so that user
input is not reflected onto pages with sensitive information. That way,
probing will not be effective, but it does mean a potentially substantial
amount of work on the web application. Other possibilities like
randomizing or masking the sensitive data will also require application rework.
At the web server level, one could potentially add a random amount of data
to responses
(to obscure the length) or rate-limit requests, but both of those are
problematic from a performance perspective.
Over the years, various attacks against HTTPS have been found.
That is to be expected, really, since cryptographic systems always get
weaker over time. There's nothing to indicate that HTTPS is fatally
flawed, though this side-channel attack is fairly potent. With governments
actively collecting traffic—and using malware—it's not much of a
stretch to see the two being combined. Governments don't much like
encryption or anonymity, and flaws like BREACH will unfortunately be available to
help thwart both, now and in the future.
Comments (8 posted)
Brief items
The "My Satis" Android application has a hard-coded Bluetooth PIN of "0000"
[...]
As such, any person using the "My Satis" application can control any Satis
toilet. An attacker could simply download the "My Satis" application and
use it to cause the toilet to repeatedly flush, raising the water usage and
therefore utility cost to its owner.
Attackers could cause the unit to unexpectedly open/close the lid, activate
bidet or air-dry functions, causing discomfort or distress to user.
—
Trustwave advisory
— Android-controlled toilets, what could possibly go wrong?
Ellison's Law: For every keystroke or click required to use a crypto
feature, the userbase declines by half.
—
Garrett
LeSage (quoting Stef Walter from GUADEC)
Even the electronic civil lib contingent is lying to themselves. They're sore and indignant now, mostly because they weren't consulted — but if the NSA released PRISM as a 99-cent Google Android app, they'd be all over it. Because they are electronic first, and civil as a very distant second.
They'd be utterly thrilled to have the NSA's vast technical power at their own command. They'd never piously set that technical capacity aside, just because of some elderly declaration of universal human rights from 1947. If the NSA released their heaps of prying spycode as open-source code, Silicon Valley would be all over that, instantly. They'd put a kid-friendly graphic front-end on it. They'd port it right into the cloud.
—
Bruce Sterling
One day, we saw that Bruce Sterling was coming into town for a book reading, and we thought: here's our chance. Like good Nineties digital activists, we'd all read our Hacker Crackdown, and knew he might be a friend in getting some rip-roaring coverage in the heart of the beast. After horribly hijacking him from what looked a nice literary meal, we took him to heroin-chic dive bar in Soho, told him our problems, and begged him to help.
Forget defending crypto, he said. It's doomed. You're screwed.
No, the really interesting stuff, he said, is in postmodern literary theory.
—
Danny O'Brien
Comments (9 posted)
UCLA has a
report on "software obfuscation" research by computer science professor Amit Sahai. Essentially, code can be encrypted in such a way that it still operates correctly but cannot be reverse engineered. "
According to Sahai, previously developed techniques for obfuscation presented only a "speed bump," forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an "iron wall," making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that take hundreds of years to work out on today's computers — a game-change in the field of cryptography.
The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property by preventing the theft of new algorithms and by hiding the vulnerability a software patch is designed to repair when the patch is distributed."
Comments (48 posted)
Ars Technica is one of many sites with
coverage
of the Firefox exploit that was used to attack the anonymity of Tor
users. "
The attack code exploited a memory-management vulnerability,
forcing Firefox to send a unique identifier to a third-party server using a
public IP address that can be linked back to the person's ISP. The exploit
contained several hallmarks of professional malware development, including
'heap spraying' techniques to bypass Windows security protections and the
loading of executable code that prompted compromised machines to send the
identifying information to a server located in Virginia, according to an
analysis by researcher Vlad Tsyrklevich."
Comments (16 posted)
Wired is
reporting that the
Open Source Digital Voting (OSDV) Foundation has finally gotten approval for its non-profit status from the US Internal Revenue Service after applying for it in February 2007. "
Then the revolution stalled. The Open Source Digital Voting Foundation spent the next four years in a kind of government-induced limbo as the Internal Revenue Service delayed processing of its application for nonprofit status. That delay cost the operation an untold amount of grant and donation dollars, and though the project has produced some software, it still hasn't begun work on important things like ballot-counting and tabulation devices and accessible voting machines." OSDV runs the
Trust the Vote project and seeks to create open source voting machine solutions.
Comments (99 posted)
New vulnerabilities
bluetile: command injection
| Package(s): | bluetile |
| CVE #(s): | CVE-2013-1436 |
| Created: | August 6, 2013 |
| Updated: | August 7, 2013 |
| Description: |
From the OSS security mailing list:
A remote command injection vulnerability was reported in xmonad-contrib.
The vulnerability is in the XMonad.Hooks.DynamicLog module.
As we know, web browsers usually set the window title to the current tab. A
malicious user, then, can craft a special title in order to inject commands
in the current bar. |
| Alerts: |
|
Comments (none posted)
chromium-browser: multiple vulnerabilities
| Package(s): | chromium-browser |
| CVE #(s): | CVE-2013-2881, CVE-2013-2882, CVE-2013-2883, CVE-2013-2884, CVE-2013-2885, CVE-2013-2886 |
| Created: | August 5, 2013 |
| Updated: | September 4, 2013 |
| Description: |
From the CVE entries:
Google Chrome before 28.0.1500.95 does not properly handle frames, which allows remote attackers to bypass the Same Origin Policy via a crafted web site. (CVE-2013-2881)
Google V8, as used in Google Chrome before 28.0.1500.95, allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that leverage "type confusion." (CVE-2013-2882)
Use-after-free vulnerability in Google Chrome before 28.0.1500.95 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to deleting the registration of a MutationObserver object. (CVE-2013-2883)
Use-after-free vulnerability in the DOM implementation in Google Chrome before 28.0.1500.95 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to improper tracking of which document owns an Attr object. (CVE-2013-2884)
Use-after-free vulnerability in Google Chrome before 28.0.1500.95 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to not properly considering focus during the processing of JavaScript events in the presence of a multiple-fields input type. (CVE-2013-2885)
Multiple unspecified vulnerabilities in Google Chrome before 28.0.1500.95 allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2013-2886) |
| Alerts: |
|
Comments (none posted)
evolution-data-server: encrypt email to unintended recipient
| Package(s): | evolution-data-server |
| CVE #(s): | CVE-2013-4166 |
| Created: | August 1, 2013 |
| Updated: | August 12, 2013 |
| Description: |
From the Ubuntu advisory:
Yves-Alexis Perez discovered that Evolution Data Server did not properly
select GPG recipients. Under certain circumstances, this could result in
Evolution encrypting email to an unintended recipient. |
| Alerts: |
|
Comments (none posted)
gksu-polkit: privilege escalation
| Package(s): | gksu-polkit |
| CVE #(s): | CVE-2013-4161 |
| Created: | August 5, 2013 |
| Updated: | August 7, 2013 |
| Description: |
From the Red Hat bugzilla:
It was found that the patch to correct CVE-2012-5617 (bug #883162) was improperly applied, so the vulnerability described by CVE-2012-5617 was never really fixed. |
| Alerts: |
|
Comments (none posted)
heat-jeos: improper handling of passwords
| Package(s): | heat-jeos |
| CVE #(s): | CVE-2013-2069 |
| Created: | August 6, 2013 |
| Updated: | September 30, 2013 |
| Description: |
From the Red Hat bugzilla:
It was discovered that when used to create images, livecd-tools gave the root user an empty password rather than leaving the password locked in situations where no 'rootpw' directive was used or when the 'rootpw --lock' directive was used within the Kickstart file, which could allow local users to gain access to the root account. |
| Alerts: |
|
Comments (none posted)
httpd: disrespects dirty flag
| Package(s): | httpd apache |
| CVE #(s): | CVE-2013-2249 |
| Created: | August 6, 2013 |
| Updated: | August 12, 2013 |
| Description: |
From the CVE entry:
mod_session_dbd.c in the mod_session_dbd module in the Apache HTTP Server before 2.4.5 proceeds with save operations for a session without considering the dirty flag and the requirement for a new session ID, which has unspecified impact and remote attack vectors. |
| Alerts: |
|
Comments (none posted)
mozilla: multiple vulnerabilities
| Package(s): | firefox, thunderbird, seamonkey |
| CVE #(s): | CVE-2013-1701, CVE-2013-1709, CVE-2013-1710, CVE-2013-1713, CVE-2013-1714, CVE-2013-1717 |
| Created: | August 7, 2013 |
| Updated: | August 30, 2013 |
| Description: |
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-1701)
Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 do not properly handle the interaction between FRAME elements and history, which allows remote attackers to conduct cross-site scripting (XSS) attacks via vectors involving spoofing a relative location in a previously visited document. (CVE-2013-1709)
The crypto.generateCRMFRequest function in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 allows remote attackers to execute arbitrary JavaScript code or conduct cross-site scripting (XSS) attacks via vectors related to Certificate Request Message Format (CRMF) request generation. (CVE-2013-1710)
Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 use an incorrect URI within unspecified comparisons during enforcement of the Same Origin Policy, which allows remote attackers to conduct cross-site scripting (XSS) attacks or install arbitrary add-ons via a crafted web site. (CVE-2013-1713)
The Web Workers implementation in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 does not properly restrict XMLHttpRequest calls, which allows remote attackers to bypass the Same Origin Policy and conduct cross-site scripting (XSS) attacks via unspecified vectors. (CVE-2013-1714)
Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, Thunderbird ESR 17.x before 17.0.8, and SeaMonkey before 2.20 do not properly restrict local-filesystem access by Java applets, which allows user-assisted remote attackers to read arbitrary files by leveraging a download to a fixed pathname or other predictable pathname. (CVE-2013-1717) |
| Alerts: |
|
Comments (none posted)
mozilla: multiple vulnerabilities
| Package(s): | firefox, seamonkey |
| CVE #(s): | CVE-2013-1702, CVE-2013-1704, CVE-2013-1705, CVE-2013-1708, CVE-2013-1711 |
| Created: | August 7, 2013 |
| Updated: | August 19, 2013 |
| Description: |
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 23.0 and SeaMonkey before 2.20 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-1702)
Use-after-free vulnerability in the nsINode::GetParentNode function in Mozilla Firefox before 23.0 and SeaMonkey before 2.20 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption and application crash) via vectors involving a DOM modification at the time of a SetBody mutation event. (CVE-2013-1704)
Heap-based buffer underflow in the cryptojs_interpret_key_gen_type function in Mozilla Firefox before 23.0 and SeaMonkey before 2.20 allows remote attackers to execute arbitrary code or cause a denial of service (application crash) via a crafted Certificate Request Message Format (CRMF) request. (CVE-2013-1705)
Mozilla Firefox before 23.0 and SeaMonkey before 2.20 allow remote attackers to cause a denial of service (application crash) via a crafted WAV file that is not properly handled by the nsCString::CharAt function. (CVE-2013-1708)
The XrayWrapper implementation in Mozilla Firefox before 23.0 and SeaMonkey before 2.20 does not properly address the possibility of an XBL scope bypass resulting from non-native arguments in XBL function calls, which makes it easier for remote attackers to conduct cross-site scripting (XSS) attacks by leveraging access to an unprivileged object. (CVE-2013-1711) |
| Alerts: |
|
Comments (none posted)
otrs2: sql injection
| Package(s): | otrs2 |
| CVE #(s): | CVE-2013-4717, CVE-2013-2625 |
| Created: | August 5, 2013 |
| Updated: | August 13, 2013 |
| Description: |
From the Debian advisory:
It was discovered that otrs2, the Open Ticket Request System, does not
properly sanitize user-supplied data that is used on SQL queries. An
attacker with a valid agent login could exploit this issue to craft SQL
queries by injecting arbitrary SQL code through manipulated URLs. |
| Alerts: |
|
Comments (none posted)
perl-Proc-ProcessTable: symlink attack
| Package(s): | perl-Proc-ProcessTable |
| CVE #(s): | CVE-2011-4363 |
| Created: | August 5, 2013 |
| Updated: | August 23, 2013 |
| Description: |
From the CVE entry:
ProcessTable.pm in the Proc::ProcessTable module 0.45 for Perl, when TTY information caching is enabled, allows local users to overwrite arbitrary files via a symlink attack on /tmp/TTYDEVS. |
| Alerts: |
|
Comments (none posted)
samba: denial of service
| Package(s): | samba |
| CVE #(s): | CVE-2013-4124 |
| Created: | August 6, 2013 |
| Updated: | September 25, 2013 |
| Description: |
From the CVE entry:
Integer overflow in the read_nttrans_ea_list function in nttrans.c in smbd in Samba 3.x before 3.5.22, 3.6.x before 3.6.17, and 4.x before 4.0.8 allows remote attackers to cause a denial of service (memory consumption) via a malformed packet. |
| Alerts: |
|
Comments (none posted)
subversion: denial of service
| Package(s): | subversion |
| CVE #(s): | CVE-2013-4131 |
| Created: | August 1, 2013 |
| Updated: | August 12, 2013 |
| Description: |
From the Subversion advisory:
Subversion's mod_dav_svn Apache HTTPD server module will trigger an assertion
on some requests made against a revision root. This can lead to a DoS.
If assertions are disabled it will trigger a read overflow which may cause a
SEGFAULT (or equivalent) or undefined behavior.
Commit access is required to exploit this. |
| Alerts: |
|
Comments (none posted)
WebCalendar: multiple vulnerabilities
| Package(s): | WebCalendar |
CVE #(s): | |
| Created: | August 5, 2013 |
| Updated: | August 7, 2013 |
| Description: |
From the WebCalendar bug report:
Version 1.2.7 (22 Jan 2013)
- Security fix: Do not show the reason for a failed login (i.e. "no such user")
- Security fix: Escape HTML characters in category name.
- Security fix: Check all passed in fields (either via HTML form or via URL parameter) for certain malicious tags (script, embed, etc.) and generate fatal error if found.
|
| Alerts: |
|
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.11-rc4,
released on August 4. "
I had hoped things would start calming down, but rc4 is pretty much
exactly the same size as rc3 was. That said, the patches seem a bit
more spread out, and less interesting - which is a good thing. Boring
is good."
All told, 339 non-merge changesets were pulled into the mainline for -rc4.
They are mostly fixes, but there is also a mysterious set of ARM security
fixes (starting
here)
that showed up without prior discussion.
Stable updates:
3.10.5,
3.4.56,
3.2.50, and
3.0.89 were all released on August 4.
Also worth noting: Greg Kroah-Hartman has announced
that 3.10 will be the next long-term supported kernel. "I’m picking
this kernel after spending a lot of time talking about kernel releases, and
product releases and development schedules from a large range of companies
and development groups. I couldn’t please everyone, but I think that the
3.10 kernel fits the largest common set of groups that rely on the longterm
kernel releases."
Comments (none posted)
Well, lguest is particularly expendable. It's the red shirt of the
virtualization away team.
—
Rusty Russell
Don't be afraid of writing too much text - trust me, I've never
seen a changelog which was too long!
—
Andrew Morton
Comments (1 posted)
By Jonathan Corbet
August 7, 2013
There has long been a desire for an
flink() system call in the
kernel. It would take a file descriptor and a file name as arguments
and cause the name to be a new hard link to the file behind the
descriptor. There have been concerns about security, though, that have
kept this call out of the kernel; some see it as a way for a process to
make a file name for a file descriptor that came from outside — via
exec(), for example. That process may not
have had a reachable path to the affected file before, so the creation of a
new name could be seen as bypassing an existing security policy.
The problem with this reasoning, as noted by Andy Lutomirski in a
patch merged for 3.11-rc5, is that this functionality is already
available by way of the linkat() system call. All it takes is
having the /proc filesystem mounted — and a system without
/proc is quite rare. But the incantation needed to make a link in
this way is a bit arduous:
linkat(AT_FDCWD, "/proc/self/fd/N", destdirfd, newname, AT_SYMLINK_FOLLOW);
where "N" is the number of the relevant file descriptor.
It would be a lot nicer, he said, to just allow the use of the
AT_EMPTY_PATH option, which causes the link to be made to the file
behind the original file descriptor:
linkat(fd, "", destdirfd, newname, AT_EMPTY_PATH);
In current kernels, though, that option is restricted to processes with the
CAP_DAC_READ_SEARCH capability out of the same security concerns
as described above. But, as Andy pointed out, the restriction makes no
sense given that the desired functionality is available anyway. So his
patch removes the check, making the second variant available to all users.
This functionality is expected to be useful with files opened with the
O_TMPFILE option, but other uses can be imagined as well. It will
be generally available in the 3.11 kernel.
Comments (17 posted)
Kernel development news
By Jonathan Corbet
August 6, 2013
Traffic on the kernel mailing lists often seems to follow a particular
theme. At the moment, one of those themes is memory management. What
follows is an overview of several memory-management patches currently in
circulation, hopefully giving an idea of what the developers in that area
are up to.
MADV_WILLWRITE
Normally, developers expect that a write to file-backed memory will execute
quickly. That data must eventually find its way back to persistent
storage, but the kernel usually handles that in the background while the
application continues running. Andy Lutomirski has discovered that things
don't always work that way, though. In particular, if the memory is backed
by a file that has never been written (even if it has been extended to the
requisite size with fallocate()), the first write to each page of that
memory can be quite slow, due to the filesystem's need to allocate on-disk
blocks, mark the block as being initialized, and otherwise get ready to
accept the data. If (as is the case with
Andy's application) there is a need to write multiple gigabytes of data,
the slowdown can be considerable.
One way to work around this problem is to write throwaway data to that memory
before getting into the time-sensitive part of the application, essentially
forcing the kernel to prepare the backing store. That approach works, but
at the cost of writing large amounts of useless data to disk; it might be
nice to have something a bit more elegant than that.
Andy's answer is to add a new operation,
MADV_WILLWRITE, to the madvise() system call. Within the
kernel, that call is passed to a new vm_operations_struct
operation:
long (*willwrite)(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
In the current implementation, only the ext4 filesystem provides support
for this operation; it responds by reserving blocks so that the upcoming
write can complete quickly. Andy notes that there is a lot more that could
be done
to fully prepare for an upcoming write, including performing the
copy-on-write needed for private mappings, actually allocating pages of
memory, and so on. For the time being, though, the patch is intended as a
proof of concept and a request for comments.
Controlling transparent huge pages
The transparent huge pages feature uses
huge pages whenever possible, and without user-space awareness, in order to
improve memory access performance. Most of the time the result is faster
execution, but there are some workloads that can perform worse when
transparent huge pages are enabled. The feature can be turned off
globally, but what about situations where some applications benefit while
others do not?
Alex Thorlton's answer is to provide an
option to disable transparent huge pages on a per-process basis. It takes
the form of a new operation (PR_SET_THP_DISABLED) to the
prctl() system call. This operation sets a flag in the
task_struct structure; setting that flag causes the memory
management system to avoid using huge pages for the associated process.
And that allows the creation of mixed workloads, where some processes use
transparent huge pages and others do not.
Transparent huge page cache
Since their inception, transparent huge pages have only worked with
anonymous memory; there is no support for file-backed (page cache) pages.
For some time now, Kirill A. Shutemov has been working on a transparent huge page cache implementation to
fix that problem. The latest version, a 23-patch set, shows how complex
the problem is.
In this version, Kirill's patch has a number of limitations. Unlike the
anonymous page implementation, the transparent huge page cache code is
unable to create huge pages by coalescing small pages. It also, crucially,
is unable to create huge pages in response to page faults, so it does not
currently work well with files mapped into a process's address space; that
problem is slated to be fixed in a future patch set. The current
implementation only works with the ramfs filesystem — not, perhaps, the
filesystem that users were clamoring for most loudly. But the ramfs implementation is a good proof of
concept; it also shows that, with the appropriate infrastructure in place,
the amount of filesystem-specific code needed to support huge pages in the
page cache is relatively small.
One thing that is still missing is a good set of benchmark results showing
that the transparent huge page cache speeds things up. Since this is
primarily a performance-oriented patch set, such results are important.
The mmap() implementation is also important, but the patch set is
already a large chunk of code in its current form.
Reliable out-of-memory handling
As was described in this June 2013 article,
the kernel's out-of-memory (OOM) killer has some inherent
reliability problems. A process may have called deeply into the kernel by
the time it
encounters an OOM condition; when that happens, it is put on hold while
the kernel tries to make some memory available. That process may be
holding no end of locks, possibly including locks needed to enable a
process hit by
the OOM killer to exit and release its memory; that means that deadlocks
are relatively likely once the system goes into an OOM state.
Johannes Weiner has posted a set of patches
aimed at improving this situation. Following a bunch of cleanup work,
these patches make two fundamental changes to how OOM conditions are
handled in the kernel. The first of those is perhaps the most visible: it
causes the kernel to avoid calling the OOM killer altogether for most
memory allocation failures. In particular, if the allocation is being made
in response to a system call, the kernel will just cause the system call to
fail with an ENOMEM error rather than trying to find a process to
kill. That may cause system call failures to happen more often and in
different contexts than they used to. But, naturally, that will not be a
problem since all user-space code diligently checks the return status of
every system call and responds with well-tested error-handling code when
things go wrong.
The other change happens more deeply within the kernel. When a process
incurs a page fault, the kernel really only has two choices: it must either
provide a valid page at the faulting address or kill the process in
question. So the OOM killer will still be invoked in response to memory
shortages encountered when trying to handle a page fault. But the code has
been reworked somewhat; rather than wait for the OOM killer deep within the
page fault handling code, the kernel drops back out and releases all locks
first. Once the OOM killer has done its thing, the page fault is restarted
from the beginning. This approach should ensure reliable page fault
handling while avoiding the locking problems that plague the OOM killer
now.
Logging drop_caches
Writing to the magic sysctl file /proc/sys/vm/drop_caches will
cause the kernel to forget about all clean objects in the page, dentry, and
inode caches. That is not normally something one would want to do; those
caches are maintained to improve the performance of the system. But
clearing the caches can be useful
for memory management testing and for the production of reproducible
filesystem benchmarks. Thus, drop_caches exists primarily as a
debugging and testing tool.
It seems, though, that some system administrators have put writes to
drop_caches into various scripts over the years in the belief that
it somehow helps performance. Instead, they often end up creating
performance problems that would not otherwise be there. Michal Hocko, it
seems, has gotten a little tired of tracking down this kind of problem, so
he has revived an old patch from Dave
Hansen that causes a message to be logged whenever drop_caches
is used. He said:
I am bringing the patch up again because this has proved being
really helpful when chasing strange performance issues which
(surprise surprise) turn out to be related to artificially dropped
caches done because the admin thinks this would help... So mostly
those who support machines which are not in their hands would
benefit from such a change.
As always, the simplest patches cause the most discussion. In this case, a
number of developers expressed concern that administrators would not
welcome the additional log noise, especially if they are using
drop_caches frequently. But Dave expressed a hope that at least some of the
affected users would get in contact with the kernel developers and explain
why they feel the need to use drop_caches frequently. If it is
being used to paper over memory management bugs, the thinking goes, it
would be better to fix those bugs directly.
In the end, if this patch is merged, it is likely to include an option (the
value written to drop_caches is already a bitmask) to suppress the
log message. That led to another discussion on exactly which bit should be
used, or whether the drop_caches interface should be augmented to
understand keywords instead. As of this writing, the simple
printk() statement still has not been added; perhaps more
discussion is required.
Comments (20 posted)
By Jonathan Corbet
August 7, 2013
Kernel development, like development in most free software projects, is
built around the concept of peer review. All patches should be reviewed by
at least one other developer; that, it is hoped, will catch bugs before
they are merged and lead to a higher-quality end result. While a lot of
code review does take place in the kernel project, it is also clearly the
case that a certain amount of code goes in without ever having been looked
at by anybody other than the original developer. A couple of recent
episodes bear a closer look; they show why the community values code review
and the hazards of skipping it.
O_TMPFILE
The O_TMPFILE option to the open() system call was pulled
into the mainline during the 3.11 merge window; prior to that pull, it had
not been posted in any public location. There is no doubt that it provides
a useful feature; it allows an application to open a file in a given
filesystem with no visible name. In one stroke, it does away with a whole
range of temporary file vulnerabilities, most of which are based on
guessing which name will be used. O_TMPFILE can also be used with
the linkat() system call to create a file and make it visible in
the filesystem, with the right permissions, in a single atomic step. There
can be no doubt that application developers will want to make good use of
this functionality once it becomes widely available.
That said, O_TMPFILE has been going through a bit of a rough
start. It did not take long for Linus to express concerns about the new API; in short, there
was no way for applications to determine that they were running on a system
where O_TMPFILE was not supported. A couple of patches
later, those issues had been addressed. Since then, a couple of bugs have
been found in the implementation; one, fixed
by Zheng Liu, would oops the kernel. Another, reported by Andy Lutomirski, corrupts the
underlying filesystem through the creation
of a bogus inode. Finally, few filesystems actually support this
new option at this point, so it is not something that developers can count
on having available, even on Linux systems.
Meanwhile, Christoph Hellwig has questioned the
API chosen for this feature:
Why is the useful tmpfile functionality multiplexed over open when
it has very different semantics from a normal open?
In addition to the flag problems already discussed to death it also
just leads to splattering of the code in the implementation [...]
Christoph suggests that it would have been better to create a new
tmpfile() system call rather than adding this feature to
open(). In the end, he has said,
O_TMPFILE needs some more time:
Given all the problems and very limited fs support I'd much prefer
disabling O_TMPFILE for this release. That'd give it the needed
exposure it was missing by being merged without any previous public
review.
Neither Al Viro (the author of this feature) nor Linus has responded to
Christoph's suggestions, leading one to believe that the current plan is to
go ahead with the current implementation. Once the O_TMPFILE ABI
is exposed in the 3.11 release, it will need to be supported indefinitely.
It certainly is supportable in its current form, but it may well have come
out better with a bit more discussion prior to merging.
Secret security fixes
Russell King's pre-3.11-rc4 pull request does not appear to have been
sent to any public list. Based on the
merge commit in the mainline, what Russell said about this request was:
I've thought long and hard about what to say for this pull request,
and I really can't work out anything sane to say to summarise much
of these commits. The problem is, for most of these are, yet
again, lots of small bits scattered around the place without any
real overall theme to them.
Evidently, the fact that eight out of the 22 commits in that request were
security fixes does not constitute a "real overall theme." The patches
seem like worthwhile hardening for the ARM architecture, evidently written in response to disclosures
made at the recently concluded Black Hat USA 2013 event. While
most of the patches carry an Acked-by from Nicolas Pitre, none of them saw
any kind of public review before heading into the mainline.
It was not long before Olof Johansson encountered a number of problems with the
changes, which left several systems unable to boot. LWN reader
kalvdans pointed out a different obvious bug
in the code. Olof
suggested that, perhaps, the patches might have benefited from some time in
the linux-next repository, but Russell responded:
Tell me how I can put this stuff into -next _and_ keep it secret
because it's security related. The two things are totally
incompatible with each other. Sorry.
In this case, it is far from clear that much was gained by taking these
patches out of the normal review process. The list of distributors rushing
to deploy these fixes to users prior to their public disclosure is likely
to be quite short and, in any case, the cure, as merged for 3.11-rc4,
was worse than the disease. As of this writing, neither bug has been fixed
in the mainline, though patches exist for both.
That said, one can certainly imagine scenarios where it might make sense to
develop and merge a fix outside of public view. If a security
vulnerability is known to be widely exploitable, one wants to get the fix
as widely distributed as possible before the attackers are able to develop
their exploits. In many cases, though, the vulnerabilities are not readily
exploitable, or, as is the case for the bulk of deployed ARM systems, there
is no way to quickly distribute an update in any case. In numerous other
cases, the vulnerability in question has been known to the attacker
community for a long time before it comes to the attention of a kernel
developer.
For all of those cases, chances are high that the practice of developing
fixes in secret does more harm than good. As has been seen here, such
fixes can introduce bugs of their own; sometimes, those new bugs can be new
security problems as well. In other situations, as in the
O_TMPFILE case, unreviewed code also runs the risk of introducing
suboptimal APIs that must then be maintained for many years. The code
review practices we have developed over the years exist for a reason;
bypassing those practices introduces a whole new set of risks to the kernel
development process. The 3.11 development cycle has demonstrated just how
real those risks can be.
Comments (5 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Architecture-specific
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jake Edge
August 7, 2013
For some time now, openSUSE has been searching for its identity. It is not
alone in that, as Fedora and others have also wrestled with many of the
same questions. Some of the recent discussions were largely inspired by SUSE VP
of Engineering
Ralf Flaxa's keynote
[YouTube] at the
recently concluded openSUSE
conference—though many of the ideas have been floating around for
years. Since
then, discussion has been ongoing regarding the future and
plans for the
distribution and its offshoots: Factory, Tumbleweed, and
Evergreen.
One of the problems for openSUSE right now is that the work that goes
into Tumbleweed, a rolling release, is not necessarily aligned with what is
going on in Factory, which is where the development for the next version
happens (much like Fedora's Rawhide). Packages may be updated in
Tumbleweed (or available from the openSUSE Build Service), but not be
the same as what is being developed for Factory, and thus for the next
openSUSE. Getting those efforts in better alignment would be one step in
the right direction.
There is also the belief that openSUSE is not well-targeted and tries to be
too many things to too many different kinds of users. For example, the
current eight-month
release cycle is too frequent for some, but too slow for others.
Similarly, the 18-month support cycle for each release is either too short
or far longer than many users are interested in. As openSUSE community
manager Jos Poortvliet put
it on his blog: "openSUSE is well known for doing everything a bit - making everybody a little happy, but nobody REALLY happy."
Flaxa offered more help for openSUSE from the SUSE Linux Enterprise (SLE)
team, but also noted that he was not there to mandate anything. OpenSUSE
has been working on transparent community governance for some time, and
SUSE wants to see that continue, he said. The conclusion of his keynote
was built
around suggestions from SUSE, not mandates or even requests. Many of the
suggestions
parallel the problems that Poortvliet and others have been bringing up
(e.g. quality, life cycle, release integration, etc.).
Flaxa said that SUSE is pleased with openSUSE and plans to continue its
investment in it. But, he said, there is a gap between the users served by
openSUSE and those served by SLE. One suggestion he had was to try to
bridge that gap, such that there was a smoother path, not just for users
moving to SLE, but also for SLE users to be able to use openSUSE more
effectively. SUSE is also interested in seeing a more community-oriented
openSUSE as its governance broadens.
The roots of some of these issues go back at least three years to the openSUSE strategy discussions of 2010. While
a strategy
was determined in 2011, it didn't really change things that much. In
many ways, openSUSE is running in third place behind Ubuntu and Fedora.
Users who want user-friendliness tend to turn to Ubuntu, while those
looking for the cutting edge (or nearly so) look to Fedora.
But even if a particular focus can be settled on in terms of the kinds of
users the distribution will target, there are some technical barriers to
overcome. Some of those barriers and possible solutions were discussed back in June 2012 once it became
clear that the 12.2 release would have to be delayed.
More recently, openSUSE board member Robert Schweikert opened a discussion on "Fiddling with
our development model" on the opensuse-factory mailing list. He clearly
noted that the ideas were his, "NOT board intervention",
however. Schweikert sees problems with the current "push
model" where a new component like GCC can enter Factory and break
lots of other packages. There are only a few people from the release team
working to resolve those kinds of problems, so Factory (or the staging
portion of Factory) can be broken for a long time.
That problem led Schweikert to propose breaking the distribution up into
"components", each of which has a clearly defined set of dependencies.
Multiple versions of a component might be supported at any given time, and
each component's team would decide when to switch to a new version of
a dependency. That way, a change to a dependency could not
immediately break the packages that depend on it, as there would be
no forced upgrade (at least until the distribution's release time neared).
A number of posters disagreed with Schweikert's approach, but
his post did spark a discussion that the distribution needs to have, at
least according to Poortvliet. An alternative that has
evidently been suggested by Stephan "Coolo" Kulow is to separate the
packages in Factory into "rings". Poortvliet mentions the idea in his blog
post above, as well as in an earlier
one.
Basically, the core of the OS (the kernel and the minimum needed to
boot) would be ring 0, while build tools and utilities would be in ring 1,
ring 2 would have desktops and their development frameworks, and, finally,
user applications would live in ring 3. The rings could be supported for
different lengths of time, and could be updated separately. So one could
keep the core and desktop stable, but update the applications frequently,
for example.
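The layering rule behind the rings idea can be sketched in a few lines: a package in ring N may depend only on packages in ring N or lower, which is what lets each ring move at its own pace. The package placements below are purely illustrative, not Kulow's actual proposal:

```python
# Hypothetical sketch of the "rings" layering rule; the package
# assignments here are invented for illustration.
RINGS = {
    0: {"kernel", "systemd", "glibc"},   # minimal bootable core
    1: {"gcc", "rpm", "osc"},            # build tools and utilities
    2: {"gtk3", "kde-frameworks"},       # desktops and their frameworks
    3: {"firefox", "libreoffice"},       # user applications
}

def ring_of(pkg):
    return next(r for r, pkgs in RINGS.items() if pkg in pkgs)

def dependency_allowed(pkg, dep):
    # a package may only depend on its own ring or an inner one,
    # so updating an outer ring can never break an inner one
    return ring_of(dep) <= ring_of(pkg)
```

Under such a rule, the application ring could be refreshed frequently while the core stayed frozen, since nothing in ring 0 can depend on anything above it.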
So far, the rings idea hasn't really been discussed on the mailing list,
but based on the comments on Poortvliet's posts, it may be more popular
than Schweikert's idea. In the final analysis, though, the question is
much bigger than just a technical one. Poortvliet put it this way:
The issue with all the above scenarios is that while we can technically do
them (some are harder than others, of course) the choice doesn't just
depend on what we can do but also on what we should do. That makes the
discussion a lot more interesting. We have to fix some things in our
process, adjust to reality, that is clear. But do we want to shift
direction, too? What is our real goal?
That, in some ways, takes the distribution full circle, back to the
2010/2011 strategy discussions and vote. It is not an easy problem to
solve. There are lots of people involved, each with their own personal
idea of what the distribution is "for". As we have seen, other
distributions have struggled with the same thing. Even Ubuntu, which seems
to have the clearest grasp on who its audience is, recently went through a long discussion of switching to a rolling
release model. In all of those cases, the distributions in
question have
been struggling to better serve their users—openSUSE is following that same
path.
Comments (6 posted)
Brief items
This may be obvious to the average otaku, but not so much to
$debian_user who is trying to choose between the many Twitter clients we
have.
--
Ben Hutchings
Comments (none posted)
Jean-Baptiste Quéru, the developer behind the
Android Open Source Project, has
announced
that he is leaving that project. "
There's no point being the
maintainer of an Operating System that can't boot to the home screen on its
flagship device for lack of GPU support, especially when I'm getting the
blame for something that I don't have authority to fix myself and that I
had anticipated and escalated more than 6 months ahead."
According
to Android and Me, the new Nexus 7 tablet was the straw that broke
the camel's back.
Many thanks to JBQ for the work he has done for AOSP!
Comments (28 posted)
Distribution News
openSUSE
YaST, the openSUSE configuration tool, has been
converted
from YCP to Ruby. "
we are proud to announce that we just reached the main goal of the YCP Killer project: we did the final conversion of YaST codebase from YCP to Ruby and integrated the result into Factory (which means YaST in Ruby will be part of openSUSE 13.1 M4). At the same time, YaST version was officially increased to 3.0.0."
Comments (none posted)
Newsletters and articles of interest
Comments (none posted)
LinuxInsider
reviews
UberStudent. "
UberStudent, developed by education specialist Stephen Ewen, is a Linux distro that delivers tools for learning task completion and academic success, targeting advanced secondary and higher education students.
However, it goes beyond that. I have been using UberStudent because its tools set and features array are neatly packaged in well-designed menus. While my student learning days are long gone, some of these specialty applications are very useful for note taking, task planning, and project organization for the non-school things I do."
Comments (none posted)
Page editor: Rebecca Sobol
Development
For years, developers have lived comfortably with the assumption
that screen resolutions were in the predictable confines of a certain
pixel density—around 75 to 100 pixels-per-inch (ppi). As everyone
who sees the latest Chromebook models from Google knows, however, such
is no longer the case: the newest displays are well over 200 ppi. There have been partial attempts
to add high-resolution display support to GNOME, but with little
success. But now there appears to be a workable solution, as Alex
Larsson demonstrated at GUADEC 2013.
The latest Chromebook (the "Pixel") boasts a 239 ppi screen, Larsson
said, at which resolution GNOME's user interface elements are
unreadably tiny. In theory, the display resolution is configurable
via the Xft.DPI setting, which has long been hard-coded to 96
in GNOME. But simply changing it to 239 does not work; as he showed
the audience, the result only scales the size of displayed text.
Although that improves legibility, UI elements like buttons and
scrollbars are still ridiculously tiny. In addition, text labels and
icons no longer line up, the default heights of menu bars and buttons
are no longer correct, and many other UI assumptions are broken.
Perhaps the system could be modified to scale everything according
to this DPI setting, he said. But despite the fact that the idea
seems intuitive, he continued, simply scaling all of the UI elements is not the
solution—scaling lines by a non-integer multiple gives the user
fuzzy lines that should be sharp and blurry icons that arguably look
worse than the unscaled originals. Vector elements need to be drawn so
that they align to the pixel grid of the display, and a separate fix
needs to be available for those elements still using PNG or other
raster graphics formats.
There are a lot of places in GNOME's user interface that are
implicitly sized according to raster images and pixel-specific measurements: icons,
cursors, window borders, padding, the minimum sizes of GTK+
widgets—even the cursor speed is defined in terms of pixels.
The list of places where the code would need to change is lengthy, and
changing it holds the possibility for a lot of unexpected breakage.
But then again, he continued, scaling everything to the exact same
physical size is not strictly required; users already cope well with
the variations in size caused by the different resolutions of laptop
displays and external monitors; no one complains that a button is 6mm
high on one and 8mm high on the other. All that really matters is
that the system scales elements to approximately similar size on the
Pixel display.
Abstraction to the rescue
The notion that the correct solution only needs to approximate the
difference in resolution turned out to be one of the key insights.
Larsson's eventual answer was to treat the existing "pixel" sizes
already in use as an abstract, rather than a physical measurement, so
that high-resolution displays appear low-resolution to the top levels
of the software stack, and to set a scaling factor that multiplies the
actual pixel count rendered on high-resolution displays. For almost everything above the drawing layer, the current definition
of pixel would suffice; the pixel size itself could be scaled only
when rendered by
the lower-level libraries like Cairo and GDK, with very few side
effects. Moreover, by always scaling abstract pixels to monitor
pixels by integer factors, the pixel grid would automatically be
preserved, meaning vector images would remain sharp—and the math
would be considerably simpler, too.
He then implemented the abstract pixel scaling plan in Cairo, GDK,
and GTK+. Normal monitors are unaffected, as their scaling factor is
1. "HiDPI" monitors like the Pixel use a scaling factor of 2, which
results in a usable desktop interface, despite the on-screen elements
not quite being the same physical dimensions.
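The integer-scaling rule at the heart of the design can be illustrated with a small sketch; the 192 ppi threshold and the helper names here are assumptions for illustration, not GDK's actual logic:

```python
# Illustrative sketch of integer abstract-pixel scaling; the
# threshold and function names are invented, not the GDK code.
def pick_scale_factor(ppi):
    # normal displays get a factor of 1; HiDPI panels like the
    # ~239 ppi Chromebook Pixel get 2
    return 2 if ppi >= 192 else 1

def to_device_pixels(abstract_px, scale):
    # integer multiplication keeps vector drawing aligned to the
    # pixel grid, so lines stay sharp instead of going fuzzy
    return abstract_px * scale
```

Because the factor is always an integer, an element that is 24 abstract pixels tall simply becomes 48 device pixels on a HiDPI screen, with no fractional positions to blur.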
In Cairo, the high-resolution scale factor is applied when the
Cairo surface is rendered; applications can access the scaling factor
for the display with cairo_surface_get_device_scale(), but
normally GTK+ hides it completely. Similarly, in GDK, the sizes and
positions of windows, screens, and monitors are still reported in
abstract pixels; gdk_window_get_scale_factor() will report
the scaling factor, but it is usually unnecessary for applications to
know it. Wayland compositors will scale client buffers as necessary,
and will allow clients to provide double-scale buffers to cope with
the occasional window that spans both a high- and low-resolution
display. X support is less flexible; all displays must be the same
scale, which is reported via the GDK_SCALE environment
variable, but, Larsson said, Wayland is the protocol of the future, so
the development there is more important.
There are new functions like
gdk_window_create_similar_surface() to transparently create
an offscreen surface for the purpose of double-buffering, and
GtkIconTheme has been patched to support specifying both
normal-resolution and high-resolution versions of icons, but by and
large the scaling function is invisible to application code. There
are also hooks in place for use when displaying images and other
situations where scaling up window content is inappropriate. The
scaling functionality is due to land in Cairo 1.13, and Wayland
support has been added in version 1.2 of the Wayland protocol
definition. The GTK+ and GDK changes are currently available in
Larsson's wip/window-scales branch.
Larsson added that GTK+ on Mac OS X also supports the
scaling factor, using the operating system's Quartz library. Windows
support, however, remains on the to-do list. For the time being,
there are very few displays on the market that require a scaling
factor other than 1. Larsson described the Chromebook Pixel as the
primary driving factor, but Apple Retina displays are also supported,
and there are a handful of high-density netbook displays that qualify
as high-resolution as well. For the present, 1 and 2 remain the only
scaling factors in deployment. There is no telling if or when a
scaling factor of 3 or 4 will be required by some future display, but
if one is, the GNOME stack will be prepared well in advance.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]
Comments (19 posted)
Brief items
Anytime installing your package pulls in libbonobo, I kill your kitten. Thank you. That is all.
—
Joanmarie
Diggs
Attempt to preemptively defuse snarkers: Cinnamon/MATE/forks of Gnome are a good sign. Don't think so? Study more ecology.
—
Federico Mena-Quintero
Comments (none posted)
Version 2.7 of the Calligra office suite has been
released. There are lots
of new features, including a reworked toolbox in a number of applications,
epub3 support in Author, better task scheduling in Plan, and a long list of
improvements to the Krita paint application.
Comments (1 posted)
Version 2.1 of PyPy, the Python interpreter written in Python, has
been released (we looked at version 2.0 in May). It is the first version with official support for ARM processors (work that was supported by the
Raspberry Pi Foundation): "
PyPy is a very compliant Python interpreter, almost a drop-in replacement for
CPython 2.7. It's fast (http://speed.pypy.org)
due to its integrated tracing JIT compiler.
This release supports x86 machines running Linux 32/64, Mac OS X 64 or Windows
32. This release also supports ARM machines running Linux 32bit - anything with
ARMv6 (like the Raspberry Pi) or ARMv7 (like the Beagleboard,
Chromebook, Cubieboard, etc.) that supports VFPv3 should work. Both
hard-float armhf/gnueabihf and soft-float armel/gnueabi builds are
provided." In addition, the first
beta release of PyPy3 2.1, which is a Python 3, rather than 2.7, interpreter, is also available.
Full Story (comments: none)
Version 3.0 of the bison parser generator is out with a lot of new
features. "
An executive summary would include: (i) deep
overhaul/improvements of the diagnostics, (ii) more versatile means to
describe semantic value types (including the ability to store genuine C++
objects in C++ parsers), (iii) push-parser interface extended to Java, and
(iv) parse-time semantic predicates for GLR parsers."
Full Story (comments: 20)
Mozilla has released Firefox 23, for desktop systems and Android devices. The banner feature in this version is mixed content blocking, which provides users with a per-page option to block HTTP resources in HTTPS pages. Other changes include a revamped about:memory user interface and the addition of "social sharing" functionality. Last but certainly not least, this release removes support for the blink tag.
Full Story (comments: 11)
Version 2.6.0 of the bzr version control system has been released. This update is a bugfix release of the 2.5.x series, but it is marked as the beginning of a long-term stable release series that will be supported by Canonical.
Full Story (comments: none)
Version 1.3.0 of matplotlib has been released. This version includes several new features, such as event plots, triangular grid interpolation, and "xkcd-style sketch plotting."
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
On his blog, Julien Danjou
examines static, class, and abstract methods in Python and gives examples of how they might be used. "
Doing code reviews is a great way to discover things that people might struggle to comprehend. While proof-reading OpenStack patches recently, I spotted that people were not using correctly the various decorators Python provides for methods. So here's my attempt at providing me a link to send them to in my next code reviews. :-)"
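As a quick illustration of the three method decorators the article discusses (this generic example is mine, not Danjou's):

```python
import abc

class Pizza:
    def __init__(self, ingredients):
        self.ingredients = ingredients

    @staticmethod
    def mix(a, b):
        # no access to the instance or the class; just a
        # function namespaced inside Pizza
        return a + b

    @classmethod
    def margherita(cls):
        # receives the class itself, so it works as an
        # alternate constructor that subclasses inherit
        return cls(["mozzarella", "tomatoes"])

class BasePizza(abc.ABC):
    @abc.abstractmethod
    def get_radius(self):
        # subclasses must override this before they can
        # be instantiated
        ...
```

The common review mistake Danjou describes is using a plain method (or a static method) where one of the other forms expresses the intent better.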
Comments (7 posted)
Page editor: Nathan Willis
Announcements
Articles of interest
The Free Software Foundation's monthly newsletter for July is out. Topics
include the FSF and other groups joining the EFF to sue the NSA, fundraising for
Replicant, an updated DRM-free Guide, "Windows 8: PRISM Edition," new interns,
Netflix, an interview with Shiv Shankar Dayal of Kunjika, and more.
Full Story (comments: none)
The August edition of the Free Software Foundation Europe newsletter covers
a move by proprietary software companies to stifle competition, election
software in Estonia and Norway, how the NSA leaks are motivating Free Software
activists, a call to stop surveillance, and several other topics.
Full Story (comments: none)
The Free Software Foundation Europe has sent an
open
letter to Estonia's National Electoral Committee (NEC) regarding the
country's Internet voting system. "
Estonia has used Internet voting
for general elections since 2005. Unfortunately, the system's technology
remains proprietary. Local activists have recently managed to convince the
NEC to release source code for some of the software under a non-free
licence, but this licence does not permit distribution of derivative works
or commercial use. These arbitrary restrictions on software developed with
public funds hinder security research."
Full Story (comments: none)
Calls for Presentations
PyConZA will take place October 3-4 in Cape Town, South Africa. The call
for speakers is open until September 1. "
The presentation slots will
be 30 minutes long, with an additional 10 minutes for discussion at the
end. Shared sessions are also possible. The presentations will be in English."
Full Story (comments: none)
CFP Deadlines: August 8, 2013 to October 7, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| August 15 | August 22–August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 18 | October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| August 19 | September 20–September 22 | PyCon UK 2013 | Coventry, UK |
| August 21 | October 23 | TracingSummit2013 | Edinburgh, UK |
| August 22 | September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| August 30 | October 24–October 25 | Xen Project Developer Summit | Edinburgh, UK |
| August 31 | October 26–October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| August 31 | September 24–September 25 | Kernel Recipes 2013 | Paris, France |
| September 1 | November 18–November 21 | 2013 Linux Symposium | Ottawa, Canada |
| September 6 | October 4–October 5 | Open Source Developers Conference France | Paris, France |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15–November 16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3–October 4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22–November 24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9–April 17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1–February 2 | FOSDEM 2014 | Brussels, Belgium |
| October 1 | November 28 | Puppet Camp | Munich, Germany |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
Ohio LinuxFest has announced that Fedora Project Leader Robyn Bergeron will
be a keynote speaker at the 2013 event, to be held September 13-15 in
Columbus, Ohio.
Full Story (comments: none)
Linux Plumbers Conference (LPC) will take place September 18-20 in New
Orleans, Louisiana. There will be an
opening
plenary session after which attendees are invited to attend the
LinuxCon keynotes. LPC is co-located with LinuxCon NA and the
refereed
track talks will be shared by the two events.
Comments (none posted)
Registration is open for the 2013 Tcl/Tk Conference, to be held September
23-27 in New Orleans, Louisiana.
Full Story (comments: none)
The LLVM Developers' Meeting will be held November 6-7 in San Francisco,
CA. "
This is a 1.5 day conference that serves as a forum for both developers and users of LLVM (and related projects) to meet, learn LLVM internals, learn how LLVM is used, and to exchange ideas about furthering development of LLVM and its potential applications."
Full Story (comments: none)
Events: August 8, 2013 to October 7, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| August 1–August 8 | GUADEC 2013 | Brno, Czech Republic |
| August 6–August 8 | Military Open Source Summit | Charleston, SC, USA |
| August 7–August 11 | Wikimania | Hong Kong, China |
| August 9–August 11 | XDA:DevCon 2013 | Miami, FL, USA |
| August 9–August 12 | Flock - Fedora Contributor Conference | Charleston, SC, USA |
| August 9–August 13 | PyCon Canada | Toronto, Canada |
| August 11–August 18 | DebConf13 | Vaumarcus, Switzerland |
| August 12–August 14 | YAPC::Europe 2013 “Future Perl” | Kiev, Ukraine |
| August 16–August 18 | PyTexas 2013 | College Station, TX, USA |
| August 22–August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 23–August 24 | Barcamp GR | Grand Rapids, MI, USA |
| August 24–August 25 | Free and Open Source Software Conference | St. Augustin, Germany |
| August 30–September 1 | Pycon India 2013 | Bangalore, India |
| September 3–September 5 | GanetiCon | Athens, Greece |
| September 6–September 8 | State Of The Map 2013 | Birmingham, UK |
| September 6–September 8 | Kiwi PyCon 2013 | Auckland, New Zealand |
| September 10–September 11 | Malaysia Open Source Conference 2013 | Kuala Lumpur, Malaysia |
| September 12–September 14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16–September 18 | CloudOpen | New Orleans, LA, USA |
| September 16–September 18 | LinuxCon North America | New Orleans, LA, USA |
| September 18–September 20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19–September 20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19–September 20 | Open Source Software for Business | Prato, Italy |
| September 19–September 20 | Linux Security Summit | New Orleans, LA, USA |
| September 20–September 22 | PyCon UK 2013 | Coventry, UK |
| September 23–September 25 | X Developer's Conference | Portland, OR, USA |
| September 23–September 27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–September 25 | Kernel Recipes 2013 | Paris, France |
| September 24–September 26 | OpenNebula Conf | Berlin, Germany |
| September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–September 29 | EuroBSDcon | St Julian's area, Malta |
| September 27–September 29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–October 4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–October 5 | Open Source Developers Conference France | Paris, France |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol