By Nathan Willis
June 26, 2013
Back in February, Mozilla implemented
a new cookie policy for Firefox, designed to better protect users'
privacy—particularly with respect to third-party user tracking.
That policy was to automatically reject cookies that originate from
domains that the user had not visited (thus catching advertisers and
other nefarious entities trying to sneak a cookie into place). The
Electronic Frontier Foundation (EFF) lauded
the move as "standing up for its users in spite of powerful
interests," but also observed that cross-site tracking cookies
were low-hanging fruit compared to all of the other methods used to
track user behavior.
But as it turns out, nipping third-party cookies in the bud is not
as simple as Mozilla had hoped. The policy change was released in
Firefox nightly builds, but Mozilla's Brendan Eich pointed
out in May that the domain-based approach caused too many false
positives and too many false negatives. In other words,
organizations that deliver valid and desirable cookies from a
different domain name—such as a content distribution network (CDN)—were being unfairly blocked by the policy, while sites that a user
might visit only once—perhaps even accidentally—could set
tracking cookies indefinitely. Consequently, the patch was held back
from the Firefox beta channel. Eich commented that the project was
looking for a more granular solution, and would post updates within
six weeks about how it would proceed.
Six weeks later, Eich has come back with a description
of the plan. Seeing how the naïve has-the-site-been-visited policy
produced false positives and false negatives, he said, Mozilla
concluded that an exception-management system was required. But the
system could not rely solely on the user to manage
exceptions—Eich noted that Apple's Safari browser has a similar
visited-based blocking system that advises users to switch off the
blocking functionality the first time they encounter a false
positive. This leaves a blacklist/whitelist approach as the
"only credible alternative," with a centralized service
to moderate the lists' contents.
Perhaps coincidentally, on June 19 (the same day Eich posted his
blog entry), Stanford University's Center for Internet and Society (CIS)
unveiled just such a centralized cookie listing system, the Cookie Clearinghouse (CCH). Mozilla
has committed to using the CCH for its Firefox cookie-blocking policy,
and will be working with the CCH Advisory Board (a group that also
includes a representative from Opera, and is evidently still sending
out invitations to other prospective members). Eich likens the CCH exception
mechanism to Firefox's anti-phishing
features, which also use a whitelist and blacklist maintained
remotely and periodically downloaded by the browser.
Codex cookius
As the CCH site describes it, the system will publish blacklists
(or "block lists") and whitelists (a.k.a. "allow lists") based on
"objective, predictable criteria"—although it is
still in the process of developing those criteria.
There are already four presumptions made about how browsers will
use the lists. The first two more-or-less describe the naïve
"visited-based"
approach already tried by Safari and Firefox: if a user has visited a
site, allow the site to set cookies; do not set cookies originating
from sites the user has not visited. Presumption three is that if a
site attempts to save a Digital Advertising Alliance "opt out cookie," that
opt-out cookie (but not others) will be set. Presumption four is
that cookies should be set when the user consents to them. The site
also notes that CCH is contemplating adding a fifth presumption to
address sites honoring the Do Not Track (DNT) preference.
Obviously these presumptions are pretty simple on their own; the
real work will be in deciding how to handle the myriad sites that fall
into the false-match scenarios already encountered in the wild. To
that end, CCH reports that it is in the initial phase of drawing up
acceptance and blocking criteria, which will be driven by the Advisory
Board members. The project will then hammer out a file format for the
lists, and develop a process that sites and users can use to challenge
and counter-challenge a listing. Subsequently, the lists will be
published and CCH will oversee them and handle the challenge process.
The site's FAQ page
says the project hopes to complete drawing up the plans by Fall 2013
(presumably Stanford's Fall, that is).
The details, then, are pretty scarce at the moment. The Stanford
Law School blog posted
a news item with a bit more detail, noting that the idea for the CCH
grew out of the CIS team's previous experience working on DNT.
Indeed, that initial cookie-blocking patch to Firefox, currently
disabled, was written by a CIS student affiliate.
Still, this is an initiative to watch. Unlike DNT, which places
the onus for cooperation solely on the site owners (who clearly have
reasons to ignore the preference), CCH enables the browser vendor to
make the decision about setting the cookie whether the site likes it
or not. The EFF is right to point out that cookies are not required
to track users between sites, but there is no debating the fact that
user-tracking via cookies is widespread. The EFF Panopticlick illustrates
other means of uniquely identifying a particular browser, but it is
not clear that anyone (much less advertisers in particular) uses similar tactics.
To play devil's advocate, one might argue that widespread adoption
of CCH would actually push advertisers and other user-tracking vendors
to adopt more surreptitious means of surveillance, arms-race style.
The counter-argument is that ad-blocking software—which is also
purported to undermine online advertising—has been widely
available for years, yet online advertising has shown little sign of
disappearing. Then again, if Firefox or other browsers adopt CCH
blacklists by default, they could instantly nix cookies on millions of
machines—causing a bigger impact than user-installed ad
blockers.
The other challenges in making CCH a plausible reality include the
overhead of maintaining the lists themselves. The anti-phishing
blacklists (as well as whitelists like the browser's root
Certificate Authority store) certainly change, but the sheer
number and variety of legitimate sites that employ user-tracking
surely dwarfs the set of malware sites. Anti-spam blacklists, which
might be a more apt comparison in terms of scale, have a decidedly
mixed record.
It is also possible that the "input" sought from interested parties
will complicate the system needlessly. For example, the third
presumption of CCH centers around the DAA opt-out cookie, but there
are many other advertising groups and data-collection alliances out
there—a quick search for "opt out cookie" will turn up an
interesting sample—all of whom, no doubt, will have their own
opinion about the merits of their own opt-out system. And that does
not even begin to consider the potential complexities of the planned
challenge system—including the possibility of lawsuits from
blacklisted parties who feel their bottom line has been unjustly harmed.
Privacy-conscious web users will no doubt benefit from any policy
that restricts cookie setting, at least in the short term. But the
arms-race analogy is apt; every entity with something to gain by
tracking users through the web browser is already looking for the next
way to do so more effectively. Browser vendors can indeed stomp out
irritating behavior (pop-up ads spring to mind), but they cannot
prevent site owners from trying something different tomorrow.
Comments (15 posted)
By Nathan Willis
June 26, 2013
The WebRTC standard is much hyped for its ability to
facilitate browser-to-browser media connections—such as placing video
calls without the need to install a separate client application.
Relying solely on the browser is certainly one of WebRTC's strengths,
but another is the fact that the media stream is delivered directly
from one client to the other, without making a pit-stop at the
server. But direct connections are available for arbitrary binary data
streams, too, through WebRTC's RTCDataChannel. Now the first applications to take advantage of this
feature are beginning to appear, such as the Sharefest peer-to-peer
file transfer tool.
Changing the channel
WebRTC data channels are an extension to the original WebRTC
specification, which
focused first on enabling two-way audio/video content delivery in
real-time. The general WebRTC framework remains the same; sessions
are set up using the WebRTC session layer, with ICE
and STUN used to
connect the endpoints across NAT. Likewise, the RTCPeerConnection
is used as a session control channel, taking care of signaling,
negotiating options, and other such miscellany. The original
justification for RTCDataChannel was that multimedia applications also
needed a conduit to simultaneously exchange ancillary information,
anything from text chat messages to real-time status updates in a
multiplayer game.
But data and real-time media are not quite the same, so WebRTC
defines a separate channel for the non-multimedia content. While
audio/video streams are delivered over Real-time Transport Protocol
(RTP), an RTCDataChannel is delivered over Stream Control Transmission Protocol (SCTP), tunneled over UDP using Datagram TLS (DTLS). Using SCTP enables TCP-like reliability on top of UDP.
After all, if one video chat frame in an RTP conversation arrives too
late to be displayed, it is essentially worthless; arbitrary
binary data, however, might require retransmission or error handling.
That said, there is no reason that a web application must implement
a media stream in order to use an RTCDataChannel. The API is
deliberately similar to Web Sockets, with the obvious difference being
that the remote end of an RTCDataChannel is another browser, rather
than a server. In addition, RTCDataChannels use DTLS to encrypt the
channel by default, and SCTP provides a mechanism for congestion control.
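For readers who have used Web Sockets, a minimal sketch of the browser-side API may make the similarity concrete. The TypeScript snippet below is only an illustration: the STUN server URL and channel label are placeholders, the signaling step that exchanges the offer, answer, and ICE candidates between the two browsers is omitted entirely, and the vendor prefixes that browsers of this era still require are left out.
// A minimal sketch, not production code: open a data channel between peers.
// Signaling is assumed to happen elsewhere (for example over a WebSocket).
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });
// Ask SCTP for an ordered, reliable channel; the label is arbitrary.
const channel = pc.createDataChannel("demo", { ordered: true });
channel.binaryType = "arraybuffer";
channel.onopen = () => {
  // Once the DTLS/SCTP handshake completes, send() behaves much like a Web Socket.
  channel.send("hello, peer");
};
channel.onmessage = (event: MessageEvent) => {
  // event.data is a string or an ArrayBuffer, depending on what the peer sent.
  console.log("received", event.data);
};
// The remote browser receives the channel through the "datachannel" event
// rather than calling createDataChannel() itself.
pc.ondatachannel = (event: RTCDataChannelEvent) => {
  event.channel.onmessage = (e) => console.log("remote side got", e.data);
};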
Obviously there are other ways to send binary data from one browser
to another, but the WebRTC API should be more efficient, since the
clients can choose whatever (hopefully compact) format they wish, and
use low-overhead UDP to deliver it. At a WebRTC meetup in March,
developer Eric Zhang estimated
[SlideShare] that encoding data in
Base64 for exchange through JSON used about 37% more bandwidth than
sending straight binary data over RTCDataChannels.
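The arithmetic behind a figure like that is straightforward: Base64 turns every three bytes into four characters before JSON quoting and field names add their own overhead. The following is a rough, hypothetical illustration of that expansion, not Zhang's benchmark:
// Back-of-the-envelope comparison of raw binary vs. Base64-in-JSON sizes.
const payload = new Uint8Array(30000);          // 30 KB of arbitrary bytes
for (let i = 0; i < payload.length; i++) {
  payload[i] = i % 256;
}
// Over an RTCDataChannel, the ArrayBuffer is sent as-is:
const binaryBytes = payload.byteLength;         // 30000
// Through JSON, the bytes must first become text:
let ascii = "";
for (const byte of payload) {
  ascii += String.fromCharCode(byte);
}
const jsonBytes = JSON.stringify({ data: btoa(ascii) }).length;
console.log(binaryBytes, jsonBytes, (jsonBytes / binaryBytes).toFixed(2));
// Base64 alone adds roughly 33%; JSON framing pushes the total higher.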
Browser support for RTCDataChannels is just now rolling out.
Chrome and Chromium support the feature as of version 26 (released in
late March); Firefox currently requires a nightly build.
Sharefest
Sharefest is a project started by a video streaming company called
Peer5. A blog post introducing
the tool says the impetus was two coworkers who needed to share a
large file while in a coffee shop, but Dropbox took ages to transfer
the file to the remote server then back again. With RTCDataChannels, however, the transfer would be considerably faster since it
could take place entirely on the local wireless LAN. A few months
later, Sharefest was unveiled on GitHub.
The Sharefest server presents an auto-generated short-URL for each
file shared by the "uploading" (so to speak) party. When a client
browser visits the URL, the server
connects it to the uploader's browser and the uploader's browser
transfers the file over an RTCDataChannel. If multiple clients connect
from different locations, the server attempts to pair nearby clients
to each other to maximize the overall throughput; thus for a
persistent sharing session Sharefest does offer bandwidth savings for
the sharer.
Currently the size limit for shared files is
around 1GB, though there is an issue open to
expand this limit. A demo server is running at sharefest.me (continuing the
near-ubiquitous domination of the .me domain for web-app
demos). The server side is written in Node.js, and the entire work is
available under the Apache 2.0 license.
Naturally, only the recent versions of Chrome/Chromium and Firefox
are supported, although the code detects the browser and supplies a
friendly message illustrating the problem for unsupported browsers.
This is particularly important because the RTCDataChannel API is not yet
set in stone, and there are several pieces of WebRTC that are
implemented differently in the two browsers—not differences significant enough to break compatibility, mostly just varying default configuration settings.
Sharefest is certainly simple to use; it does require that the uploading
browser keep the Sharefest page open, but it allows straightforward
"ephemeral" file sharing in a (largely) one-to-many fashion. From the
user's perspective, probably the closest analogy would be to Eduardo
Lima's FileTea, which
we covered in 2011. But FileTea is
entirely one-to-one; Sharefest does its best to maximize efficiency
by dividing each file into "slices" that the downloading peers exchange
with each other.
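A hypothetical sketch of that slicing idea on the uploader's side might look like the following; this is not Sharefest's actual code, and the 64KB chunk size, function name, and end-of-transfer message are invented for illustration.
// Hypothetical: send a File in slices over an already-open RTCDataChannel.
const CHUNK_SIZE = 64 * 1024;
async function sendFileInSlices(file: File, channel: RTCDataChannel) {
  channel.binaryType = "arraybuffer";
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    // Blob.slice() produces one chunk; arrayBuffer() reads it into memory.
    const slice = file.slice(offset, offset + CHUNK_SIZE);
    channel.send(await slice.arrayBuffer());
    // A real implementation would watch channel.bufferedAmount and pause
    // rather than queueing an entire large file at once.
  }
  // Let the receiver know the transfer is complete.
  channel.send(JSON.stringify({ done: true, name: file.name, size: file.size }));
}
The downloading peers would reassemble the slices in order and, in Sharefest's case, could also forward slices they already hold to other downloaders.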
This is useful, of course, but to many people the term
"peer-to-peer" effectively means BitTorrent, which is a considerably
more complex distribution model than Sharefest supports (supporting,
for example, super-seeding and distributed hash trackers). One could
implement the existing BitTorrent protocol on top of RTCDataChannels, perhaps, and pick up some nice features like DTLS encryption
along the way, but so far no one seems to have taken that challenge
and run with it.
But there are others pursuing RTCDataChannels as a file
distribution framework. Chris Ball, for example, has written a serverless
WebRTC file-sharing tool, although it still requires sharing the
STUN-derived public IP address of the uploader's machine, presumably
off-channel in an email or instant message. The PeerJS library is a project that may
evolve in the longer term to implement more pieces of the file-sharing
puzzle.
In the meantime, for those curious to test out the new API, Mozilla
and Google have publicized several other RTCDataChannel examples
involving games. But as Zhang noted in his presentation, direct
client-to-client data transfer is likely to have more far-reaching
effects than that: people are likely to write libraries for file I/O,
database drivers, persistent memory, etc. The potential is there for
many assumptions about the web's client-server nature to be broken for
good.
Comments (25 posted)
By Nathan Willis
June 26, 2013
"Open government" is a relatively recent addition to the "open"
family; it encompasses a range of specific
topics—everything from transparency issues to open access to
government data sets. Some—although arguably not all—of
these topics are a natural fit with the open source software movement,
and in the past few years open source software has taken on a more
prominent role within open government. The non-profit Knight Foundation just
awarded a sizable set of grants to open government efforts, a number
of which are also fledgling open source software projects.
Like a lot of large charitable foundations, the Knight Foundation
has a set of topic areas in which it regularly issues grant money.
The open government grants just announced were chosen through the
foundation's "Knight
News Challenge," which deals specifically with news and media
projects, and runs several themed rounds of awards each year. The Open
Government round accepted entries in February and March, and the winning
projects were announced
on June 24.
Of the eight grant awardees, three are open source projects
already, with a fourth stating its intention to release its source
code. OpenCounter
is a web-based application that cities can deploy as a self-service
site for citizens starting up small businesses. The application walks
users through the process of registering and applying for the various
permits, filling out forms, and handling other red tape that can be both
time-consuming and confusing. The live version currently online is the site
developed specifically for the city of Santa Cruz, California as part
of a fellowship with the non-profit Code For America, but the source is available on
GitHub. GitMachines
is a project to build virtual machine images that conform to the
often-rigorous requirements of US government certification and
compliance policies (e.g., security certification), requirements which many off-the-shelf VMs evidently do not meet. The
project lead is a founder of Sunlight Labs (another open government software-development non-profit), and some of the source is already
online.
Plan
in a Box is not yet available online, because the proposal is an
effort to take several of the existing free software
tools developed by OpenPlans and
merge them into a turn-key solution suitable for smaller city and
regional governments without large IT staffs. OpenPlans's existing
projects address things like
crowdsourced mapping (of, for example, broken streetlights),
transportation planning, and community project management. Several of
OpenPlans's lead developers are Code For America alumni. Procure.io
is an acquisitions-and-procurement platform also developed by a Sunlight Labs veteran. At the moment, the Procure.io URL redirects to a commercial service, although a "community version" is available on GitHub
and the project's grant proposal says the result will be open source.
There are also two "open data" projects among the grant winners,
although neither seems interested in releasing source code. The first
is Civic
Insight, a web-based tool to track urban issues like abandoned
houses. It originated as the "BlightStatus" site for New
Orleans, and was written during a Code For America
fellowship—although now its creators have started a for-profit company.
The Oyez Project at the
Chicago-Kent College of Law currently maintains a lossless,
high-bit-rate archive (in .wav format) of all of the US Supreme Court's audio
recordings; its grant
is supposed to fund expanding the archive to federal appellate courts
and state supreme courts.
The other grant winners include Open
Gov for the Rest of Us, which is a civic campaign to increase
local government engagement in low-income neighborhoods, and Outline.com,
a for-profit venture to develop a policy-modeling framework.
Outline.com is being funded as a start-up venture through a different
Knight Foundation fund. In addition to those eight top-tier winners,
several other minor grants were announced, some of which are also open
source software, like the Open Source Digital Voting Foundation's TrustTheVote.
Free money; who can resist?
Looking at the winners, there are several interesting patterns to
note. The first is that some of the projects sharply demarcate the
differences between open source and open data. There is obviously a
great deal of overlap in the communities that value these two
principles, but they are not inherently the same, either. One can
build a valuable public resource on top of an open data set (in a
sense, "consuming" the open data, like BlightStatus), but if
the software to access it remains non-free, it is fair to question its
ultimate value. On the flip side, of course, an open source
application that relies on proprietary data sets might also be
susceptible to criticism that it is not quite free enough. In either
case, the closed component can impose restrictions that limit its
usefulness; worse yet, if the closed component were to disappear,
users would be out of luck.
Perhaps more fundamentally, it is also interesting to see just how
much the grants are dominated by former members of Sunlight Labs and
Code For America. Both groups are non-profits, with a special
emphasis on open source in government and civic software development.
But neither has a direct connection to the Knight Foundation; they
simply have built up a sustainable process for applying for and winning
grant money. And perhaps there is a lesson there for other, existing
free software projects; considering how difficult crowdfunding and
donation-soliciting can be in practice, maybe more large projects
ought to put serious effort into pursuing focused grants from
non-profit organizations.
To be sure, the Knight News Challenge for Open
Government is a niche, and might be well outside of the scope of many
open source projects—but there are certainly some existing open
source projects that would fit. It is also not the only avenue for
private grant awards. In fact, the Knight Foundation is just ramping
up its next
round of the Knight News Challenge grant contest; this one focuses on
healthcare
and medicine.
Comments (none posted)
Page editor: Nathan Willis
Security
By Jake Edge
June 26, 2013
Ensuring that the binary installed on a system corresponds to the source code for a project can be a tricky task. For those who build their own
packages (e.g. Gentoo users), it is in principle a lot easier, but most
Linux users probably delegate that job to their distribution's build
system. That does turn the distribution into a single point of failure,
however—any
compromise of its infrastructure could lead to the release of malicious
packages. Recognizing when that kind of compromise has happened, so that
alarms can be sounded, is not particularly easy to do, though it may become
fairly important over the coming years.
So how do security-conscious users determine that the published source code
(from the project or the distribution) is being reliably built into the
binaries that get installed on their systems? And how do regular users
feel confident that they are not getting binaries compromised by an
attacker (or government)? It is a difficult problem to
solve, but it is also important to do so.
Last week, we quoted Mike Perry in our "quotes of the week", but he had
more to say in that liberationtech
mailing list post: "I don't believe that software development
models based on single party trust can actually be secure against
serious adversaries anymore, given the current trends in computer
security and 'cyberwar'". His argument is that the playing field
has changed; we are no longer just defending against attackers with limited
budgets who are mostly interested in monetary rewards from their efforts.
He continued:
This means that software development has to evolve beyond the simple
models of "Trust my gpg-signed apt archive from my trusted build
machine", or even projects like Debian going to end up distributing
state-sponsored malware in short order.
There are plenty of barriers in the way.
Even building from source will not result in a bit-for-bit
copy of the binary in question—things like the link time stamp or build
path strings within the binary
may be different. But differences in the toolchains used, the contents
of the system's or dependencies' include files, or other differences that
don't affect the
operation of the program may also lead to differences in the resulting
binary.
The only reliable way to reproduce a binary is to build it in the
exact same environment (including time stamps) that it was
originally built in. For binaries
shipped by distributions, that may be difficult to do as all of the
build environment information needed may not (yet?) be available. But, two
people can pull
the code from a repository, build it in the same environment with the same
build scripts, and each bit in the resulting binaries should be the same.
That is the idea behind Gitian, which
uses virtualization or containers to create identical build environments in multiple locations. Using Gitian, two (or more) entities can build from
source and compare the hashes of the binaries, which should be the same.
In addition, those people, projects, or organizations can sign the hash
with their GPG key, providing a "web of trust" for a specific binary.
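The comparison itself is conceptually trivial; something like the following hypothetical Node.js sketch captures the idea. The file names are invented, and Gitian actually records hashes in signed manifest files rather than checking them ad hoc like this.
// Hypothetical sketch: does my build match the hash another builder published?
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";
function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}
const mine = sha256("./result/project-1.0-linux.tar.gz");
const published = readFileSync("./other-builder/sha256sum.txt", "utf8").trim();
if (mine === published.split(/\s+/)[0]) {
  console.log("builds match bit-for-bit");
} else {
  console.error("hash mismatch: the source or build environment differs");
  process.exit(1);
}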
Gitian is being used by projects like Bitcoin and the Tor browser (work on the latter was done by
Perry), both of which are particularly
security-sensitive. In both cases, Gitian is used to set up an
Ubuntu container or virtual machine (VM) to build the binaries (for Linux,
anyway), so support for other distributions (or even different versions of
Ubuntu)
would require a different setup.
That points to a potential problem with Gitian: scalability. For a few
different sensitive projects, creating the scripts and information needed
to build the containers or VMs of interest may not be a huge hurdle. But
for a distribution to set things up so that all of its packages can
be independently verified may well be. In addition, there is a question of
finding people to actually build all the packages so that the hashes can be
compared. Each time a distribution updates its toolchain or the package
dependencies,
those changes would need to be reflected in the Gitian (or similar system) configuration, and packages would need to be built by both the distribution and at least one (more is better, of course) other "trusted" entity before consumers (users) could fully trust the binaries. Given the
number of packages, Linux distributions, and toolchain versions, it would
result in
a combinatorial explosion of builds required.
Beyond that, though, there is still an element of trust inherent in that
verification method. The compiler and other parts of the toolchain are
being trusted to produce correct code for the source in question. The
kernel's KVM and container implementation is also being trusted. To
subvert a binary using the compiler would require a "Trusting Trust" type
of attack, while some kind of nefarious (and undetected) code in the kernel
could potentially subvert the binary.
The diversity of the underlying
kernels (i.e. the host kernel for the container or VM) may help alleviate
most of the concern with that problem—though it can never really eliminate
it. Going deeper, the hardware itself could be malicious. That may sound
overly paranoid (and may in fact be), but when lives are possibly on the
line, as with the Tor browser, it's important to at least think about the
possibility. Money, too, causes paranoia levels to rise, thus Bitcoin's
interest in verification.
In addition to those worries, there is yet another: source code auditing.
Even if the compiler is reproducibly creating "correct" binaries from the
input source code, vigilance is needed to ensure that nothing untoward
slips into the source. In something as complex as a browser, even
run-of-the-mill bugs could be dangerous to the user, but some kind of
targeted malicious code injection would be far worse. In the free software
world, we tend to be sanguine about the dangers of malicious source code
because it is all "out in the open". But if people aren't actually
scrutinizing the source code, malicious code can sneak in, and we have seen
some instances of that in the past. Source availability is no panacea for
avoiding either intentional or accidental security holes.
By and large, the distributions do an excellent job of safeguarding their
repositories and build systems, but there have been lapses there as well
along the way. For now, trusting the distributions or building from source
and trusting the compiler—or some intermediate compiler—is all that's
available. For certain packages, that may change somewhat using Gitian or
similar schemes.
The compiler issue may be alleviated with David A. Wheeler's Diverse Double-Compiling
technique, provided there is an already-trusted compiler at
hand.
As Wheeler has said, one can write a new
compiler to use in the diverse double-compilation, though that compiler
needs to be compiled itself, of course. It may be hard to imagine a
"Trusting Trust" style attack on a completely unknown compiler, but it
isn't impossible to imagine.
As mentioned above, source-binary verification is a hard problem.
Something has to be trusted: the
hardware, the kernel, the compiler, Gitian, or, perhaps, the distribution.
It's a
little hard to see how Gitian could be applied to entire distribution
repositories, so looking into other techniques may prove fruitful. Simply
recording the entire build environment, including versions of all the tools
and dependencies, would make it easier to verify the correspondence of
source and binaries, but even simple tools (e.g. tar as reported
by Jos van den Oever) may have unexplained differences. For now, focusing
the effort
on the most security-critical projects may be the best we can do.
Comments (20 posted)
Brief items
In the long run, I suspect they will result in more deeply buried and impenetrable surveillance empires -- both in the U.S. and around the world -- and a determined sense by their proponents that in the future, the relative transparency we had this time around would be banished forever.
In the short run, we may see some small victories -- like Web firms being permitted by the government to more effectively defend themselves against false accusations, and perhaps a bit more transparency related to the court actions that enable and (at least in theory) monitor these programs.
But beyond that, while hope springs eternal, logic suggests that prospects for the masters of surveillance around the world have not been significantly dimmed, and in fact may have actually obtained a longer-term boost.
—
Lauren Weinstein
Conspiracy theorists may be unsurprised that:
- Microsoft's support for PFS is conspicuous by its absence across Internet Explorer, IIS, and some of its own web sites. Apple's support for PFS in Safari is only slightly better.
- Russia, long-time target of US spies, is the home of the developer of nginx, the web server which uses PFS most often.
- Almost all of the websites run by companies involved in the PRISM programme do not use PFS.
—
Netcraft
looks into
perfect forward
secrecy (PFS)
All of this mapping of vulnerabilities and keeping them secret for offensive use makes the Internet less secure, and these pretargeted, ready-to-unleash cyberweapons are destabilizing forces on international relationships. Rooting around other countries' networks, analyzing vulnerabilities, creating back doors, and leaving logic bombs could easily be construed as acts of war. And all it takes is one overachieving national leader for this all to tumble into actual war.
It's time to stop the madness. Yes, our military needs to invest in cyberwar capabilities, but we also need international rules of cyberwar, more transparency from our own government on what we are and are not doing, international cooperation between governments, and viable cyberweapons treaties. Yes, these are difficult. Yes, it's a long, slow process. Yes, there won't be international consensus, certainly not in the beginning. But even with all of those problems, it's a better path to go down than the one we're on now.
—
Bruce
Schneier
Just as important was what the Japanese government and people did not
do. They didn't panic. They didn't make sweeping changes to their way of
life. They didn't implement a vast system of domestic surveillance. They
didn't suspend basic civil rights. They didn't begin to capture, torture,
and kill without due process. They didn't, in other words, allow themselves
to be terrorized. Instead, they addressed the threat. They investigated and
arrested the cult's leadership. They tried them in civilian courts and
earned convictions through due process. They buried their dead. They
mourned. And they moved on. In every sense, it was a rational, adult,
mature response to a terrible terrorist act, one that remained largely in
keeping with liberal democratic ideals.
—
Freddie
on the Japanese reaction to the Aum Shinrikyo terrorism (from the
L'Hôte blog)
Comments (9 posted)
At his blog, Jos van den Oever
looks into recreating binaries from their published source code to verify that the executable contains what it says it does.
"
A license that promises access to the source code is one thing, but an interesting question is: is the published source code the same source code that was used to create the executable? The straightforward way to find this out is to compile the code and check that the result is the same. Unfortunately, the result of compiling the source code depends on many things besides the source code and build scripts such as which compiler was used. No free software license requires that this information is made available and so it would seem that it is a challenge to confirm if the given source code corresponds to the executable."
Comments (59 posted)
New vulnerabilities
curl: heap corruption
Package(s): curl
CVE #(s): CVE-2013-2174
Created: June 24, 2013
Updated: July 23, 2013
Description:
From the cURL advisory:
libcurl is vulnerable to a case of bad checking of the input data which may
lead to heap corruption.
The function curl_easy_unescape() decodes URL encoded strings to raw binary data. URL encoded octets are represented with %HH combinations where HH is a two-digit hexadecimal number. The decoded string is written to an allocated memory area that the function returns to the caller.
The function takes a source string and a length parameter, and if the length provided is 0 the function will instead use strlen() to figure out how much data to parse.
The "%HH" parser wrongly only considered the case where a zero byte would
terminate the input. If a length-limited buffer was passed in which ended
with a '%' character which was followed by two hexadecimal digits outside of the buffer libcurl was allowed to parse alas without a terminating zero, libcurl would still parse that sequence as well. The counter for remaining data to handle would then be decreased too much and wrap to become a very large integer and the copying would go on too long and the destination buffer that is allocated on the heap would get overwritten.
We consider it unlikely that programs allow user-provided strings unfiltered
into this function. Also, only the not zero-terminated input string use case
is affected by this flaw. Exploiting this flaw for gain is probably possible
for specific circumstances but we consider the general risk for this to be
low.
The curl command line tool is not affected by this problem as it doesn't use
this function.
Comments (none posted)
haproxy: denial of service
Package(s): haproxy
CVE #(s): CVE-2013-2175
Created: June 20, 2013
Updated: September 5, 2013
Description:
From the Debian advisory:
CVE-2013-2175:
Denial of service in parsing HTTP headers.
Comments (none posted)
java-1.7.0-openjdk: multiple vulnerabilities
Package(s): java-1.7.0-openjdk
CVE #(s): CVE-2013-1500
CVE-2013-1571
CVE-2013-2407
CVE-2013-2412
CVE-2013-2443
CVE-2013-2444
CVE-2013-2445
CVE-2013-2446
CVE-2013-2447
CVE-2013-2448
CVE-2013-2449
CVE-2013-2450
CVE-2013-2452
CVE-2013-2453
CVE-2013-2454
CVE-2013-2455
CVE-2013-2456
CVE-2013-2457
CVE-2013-2458
CVE-2013-2459
CVE-2013-2460
CVE-2013-2461
CVE-2013-2463
CVE-2013-2465
CVE-2013-2469
CVE-2013-2470
CVE-2013-2471
CVE-2013-2472
CVE-2013-2473
Created: June 20, 2013
Updated: July 31, 2013
Description:
From the Red Hat advisory:
Multiple flaws were discovered in the ImagingLib and the image attribute,
channel, layout and raster processing in the 2D component. An untrusted
Java application or applet could possibly use these flaws to trigger Java
Virtual Machine memory corruption. (CVE-2013-2470, CVE-2013-2471,
CVE-2013-2472, CVE-2013-2473, CVE-2013-2463, CVE-2013-2465, CVE-2013-2469)
Integer overflow flaws were found in the way AWT processed certain input.
An attacker could use these flaws to execute arbitrary code with the
privileges of the user running an untrusted Java applet or application.
(CVE-2013-2459)
Multiple improper permission check issues were discovered in the Sound,
JDBC, Libraries, JMX, and Serviceability components in OpenJDK. An
untrusted Java application or applet could use these flaws to bypass Java
sandbox restrictions. (CVE-2013-2448, CVE-2013-2454, CVE-2013-2458,
CVE-2013-2457, CVE-2013-2453, CVE-2013-2460)
Multiple flaws in the Serialization, Networking, Libraries and CORBA
components can be exploited by an untrusted Java application or applet to
gain access to potentially sensitive information. (CVE-2013-2456,
CVE-2013-2447, CVE-2013-2455, CVE-2013-2452, CVE-2013-2443, CVE-2013-2446)
It was discovered that the Hotspot component did not properly handle
out-of-memory errors. An untrusted Java application or applet could
possibly use these flaws to terminate the Java Virtual Machine.
(CVE-2013-2445)
It was discovered that the AWT component did not properly manage certain
resources and that the ObjectStreamClass of the Serialization component
did not properly handle circular references. An untrusted Java application
or applet could possibly use these flaws to cause a denial of service.
(CVE-2013-2444, CVE-2013-2450)
It was discovered that the Libraries component contained certain errors
related to XML security and the class loader. A remote attacker could
possibly exploit these flaws to bypass intended security mechanisms or
disclose potentially sensitive information and cause a denial of service.
(CVE-2013-2407, CVE-2013-2461)
It was discovered that JConsole did not properly inform the user when
establishing an SSL connection failed. An attacker could exploit this flaw
to gain access to potentially sensitive information. (CVE-2013-2412)
It was discovered that GnomeFileTypeDetector did not check for read
permissions when accessing files. An untrusted Java application or applet
could possibly use this flaw to disclose potentially sensitive information.
(CVE-2013-2449)
It was found that documentation generated by Javadoc was vulnerable to a
frame injection attack. If such documentation was accessible over a
network, and a remote attacker could trick a user into visiting a
specially-crafted URL, it would lead to arbitrary web content being
displayed next to the documentation. This could be used to perform a
phishing attack by providing frame content that spoofed a login form on
the site hosting the vulnerable documentation. (CVE-2013-1571)
It was discovered that the 2D component created shared memory segments with
insecure permissions. A local attacker could use this flaw to read or write
to the shared memory segment. (CVE-2013-1500)
Comments (none posted)
java-1.7.0-oracle: multiple unspecified vulnerabilities
Package(s): java-1.7.0-oracle
CVE #(s): CVE-2013-2400
CVE-2013-2437
CVE-2013-2442
CVE-2013-2451
CVE-2013-2462
CVE-2013-2464
CVE-2013-2466
CVE-2013-2468
CVE-2013-3744
Created: June 20, 2013
Updated: July 31, 2013
Description:
From the Red Hat advisory:
975757 - CVE-2013-2464 Oracle JDK: unspecified vulnerability fixed in 7u25 (2D)
975761 - CVE-2013-2468 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975764 - CVE-2013-2466 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975769 - CVE-2013-2462 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975770 - CVE-2013-2442 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975773 - CVE-2013-2437 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975774 - CVE-2013-2400 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
975775 - CVE-2013-3744 Oracle JDK: unspecified vulnerability fixed in 7u25 (Deployment)
Comments (none posted)
kfreebsd-9: privilege escalation
Package(s): kfreebsd-9
CVE #(s): CVE-2013-2171
Created: June 26, 2013
Updated: June 26, 2013
Description:
From the Debian advisory:
Konstantin Belousov and Alan Cox discovered that insufficient permission
checks in the memory management of the FreeBSD kernel could lead to
privilege escalation.
Comments (none posted)
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2013-1682
CVE-2013-1684
CVE-2013-1685
CVE-2013-1686
CVE-2013-1687
CVE-2013-1690
CVE-2013-1692
CVE-2013-1693
CVE-2013-1694
CVE-2013-1697
Created: June 26, 2013
Updated: July 23, 2013
Description:
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-1682)
Use-after-free vulnerability in the mozilla::dom::HTMLMediaElement::LookupMediaElementURITable function in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via a crafted web site. (CVE-2013-1684)
Use-after-free vulnerability in the nsIDocument::GetRootElement function in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via a crafted web site. (CVE-2013-1685)
Use-after-free vulnerability in the mozilla::ResetDir function in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via unspecified vectors. (CVE-2013-1686)
The System Only Wrapper (SOW) and Chrome Object Wrapper (COW) implementations in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 do not properly restrict XBL user-defined functions, which allows remote attackers to execute arbitrary JavaScript code with chrome privileges, or conduct cross-site scripting (XSS) attacks, via a crafted web site. (CVE-2013-1687)
Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 do not properly handle onreadystatechange events in conjunction with page reloading, which allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted web site that triggers an attempt to execute data at an unmapped memory location. (CVE-2013-1690)
Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 do not prevent the inclusion of body data in an XMLHttpRequest HEAD request, which makes it easier for remote attackers to conduct cross-site request forgery (CSRF) attacks via a crafted web site. (CVE-2013-1692)
The SVG filter implementation in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 allows remote attackers to read pixel values, and possibly bypass the Same Origin Policy and read text from a different domain, by observing timing differences in execution of filter code. (CVE-2013-1693)
The PreserveWrapper implementation in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 does not properly handle the lack of a wrapper, which allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code by leveraging unintended clearing of the wrapper cache's preserved-wrapper flag. (CVE-2013-1694)
The XrayWrapper implementation in Mozilla Firefox before 22.0, Firefox ESR 17.x before 17.0.7, Thunderbird before 17.0.7, and Thunderbird ESR 17.x before 17.0.7 does not properly restrict use of DefaultValue for method calls, which allows remote attackers to execute arbitrary JavaScript code with chrome privileges via a crafted web site that triggers use of a user-defined (1) toString or (2) valueOf method. (CVE-2013-1697)
Comments (none posted)
mozilla: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2013-1683
CVE-2013-1688
CVE-2013-1695
CVE-2013-1696
CVE-2013-1698
CVE-2013-1699
Created: June 26, 2013
Updated: July 3, 2013
Description:
From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 22.0 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2013-1683)
The Profiler implementation in Mozilla Firefox before 22.0 parses untrusted data during UI rendering, which allows user-assisted remote attackers to execute arbitrary JavaScript code via a crafted web site. (CVE-2013-1688)
Mozilla Firefox before 22.0 does not properly implement certain DocShell inheritance behavior for the sandbox attribute of an IFRAME element, which allows remote attackers to bypass intended access restrictions via a FRAME element within an IFRAME element. (CVE-2013-1695)
Mozilla Firefox before 22.0 does not properly enforce the X-Frame-Options protection mechanism, which allows remote attackers to conduct clickjacking attacks via a crafted web site that uses the HTTP server push feature with multipart responses. (CVE-2013-1696)
The getUserMedia permission implementation in Mozilla Firefox before 22.0 references the URL of a top-level document instead of the URL of a specific page, which makes it easier for remote attackers to trick users into permitting camera or microphone access via a crafted web site that uses IFRAME elements. (CVE-2013-1698)
The Internationalized Domain Name (IDN) display algorithm in Mozilla Firefox before 22.0 does not properly handle the .com, .name, and .net top-level domains, which allows remote attackers to spoof the address bar via unspecified homograph characters. (CVE-2013-1699)
Comments (none posted)
nagios: should be built with PIE flags
Package(s): nagios
CVE #(s): (none)
Created: June 25, 2013
Updated: June 26, 2013
Description:
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package has suid binaries...".
However, currently nagios is not being built with PIE flags. This is a
clear violation of the packaging guidelines.
Comments (none posted)
otrs2: privilege escalation
Package(s): otrs2
CVE #(s): CVE-2013-4088
Created: June 20, 2013
Updated: June 26, 2013
Description:
From the Debian advisory:
It was discovered that users with a valid agent login could use
crafted URLs to bypass access control restrictions and read tickets to
which they should not have access.
Comments (none posted)
python-swift: code execution
Package(s): swift
CVE #(s): CVE-2013-2161
Created: June 20, 2013
Updated: August 13, 2013
Description:
From the Ubuntu advisory:
Alex Gaynor discovered that Swift did not safely generate XML. An
attacker could potentially craft an account name to generate arbitrary XML
responses to trigger vulnerabilities in software parsing Swift's XML.
(CVE-2013-2161)
Comments (none posted)
xen: multiple vulnerabilities
Package(s): xen
CVE #(s): CVE-2013-2194
CVE-2013-2195
CVE-2013-2196
Created: June 24, 2013
Updated: June 26, 2013
Description:
From the Xen advisory:
The ELF parser used by the Xen tools to read domains' kernels and
construct domains has multiple integer overflows, pointer dereferences
based on calculations from unchecked input values, and other problems.
A malicious PV domain administrator who can specify their own kernel
can escalate their privilege to that of the domain construction tools
(i.e., normally, to control of the host).
Additionally a malicious HVM domain administrator who is able to
supply their own firmware ("hvmloader") can do likewise; however we
think this would be very unusual and it is unlikely that such
configurations exist in production systems.
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
3.10-rc7 is the current development kernel. It was released on June 22 and is likely the last -rc
for 3.10. "rc7 contains a fairly mixed collection of fixes all over, with (as
usual) drivers and some arch updates being most noticeable. This time
we had media updates in particular.
But there's also core fixes for things like fallout from the common
cpu-idle routines and some timer accounting for the new full-NOHZ
modes etc. So we have stuff all over, most of it happily fairly small."
Stable updates: The 3.9.7,
3.4.50, and
3.0.83 stable kernels were released on June
20.
The 3.9.8, 3.4.51, and 3.0.84 stable kernels are under review and
should be expected on June 27.
Comments (none posted)
All your cgroups are belong to us!
—
Lennart
Poettering
So I settled in and read the whole spec. It was fun reading (side note, it seems that BIOS engineers think Windows kernel developers are lower on the evolutionary scale than they are, and for all I know, they might be right...), and I'll summarize the whole super-secret, NDA-restricted specification, when it comes to how an operating system is supposed to deal with Thunderbolt, shhh, don't tell anyone that I'm doing this:
Thunderbolt is PCI Express hotplug, the BIOS handles all the hard work.
—
Greg Kroah-Hartman
Tracing should be treated as a drug. Just like not allowing smoking
in public places, only punish the users, not those that want to
breathe fresh air.
—
Steven Rostedt
Comments (4 posted)
Here's
an
interesting post from Arjan van de Ven on how power management works in
contemporary Intel processors. In short, it's complicated. "
The key
thing here is that Core A gets a very variable behavior, independent of
what it asked for, due to what Core B is doing. Or in other words, the
forward predictive value of a P state selection on a logical CPU is rather
limited."
Comments (3 posted)
Kernel development news
By Jonathan Corbet
June 26, 2013
The 3.10 kernel development cycle is nearing its completion; as of this
writing, the
3.10-rc7 prepatch is out and
the kernel appears to be
stabilizing as expected. As predicted, 3.10 has turned out to be the
busiest development cycle ever, with almost 13,500 non-merge changesets
pulled into the mainline repository (so far). What follows is LWN's
traditional look at where those changes came from.
3.9 set a record of its own, with 1,388 developers contributing changes.
So far, with a mere 1,374 contributors, 3.10 falls short of that record,
but that situation clearly could change before the final release is
made. The size of our development community, it seems, continues to
increase.
The most active 3.10 developers were:
| Most active 3.10 developers |
| By changesets |
| H Hartley Sweeten | 392 | 2.9% |
| Jingoo Han | 299 | 2.2% |
| Hans Verkuil | 293 | 2.2% |
| Alex Elder | 268 | 2.0% |
| Al Viro | 205 | 1.5% |
| Felipe Balbi | 202 | 1.5% |
| Sachin Kamat | 192 | 1.4% |
| Laurent Pinchart | 174 | 1.3% |
| Johan Hovold | 159 | 1.2% |
| Mauro Carvalho Chehab | 158 | 1.2% |
| Wei Yongjun | 139 | 1.0% |
| Arnd Bergmann | 138 | 1.0% |
| Eduardo Valentin | 138 | 1.0% |
| Axel Lin | 112 | 0.8% |
| Lee Jones | 111 | 0.8% |
| Lars-Peter Clausen | 99 | 0.7% |
| Kuninori Morimoto | 98 | 0.7% |
| Tejun Heo | 97 | 0.7% |
| Mark Brown | 97 | 0.7% |
| Johannes Berg | 96 | 0.7% |
| By changed lines |
| Joe Perches | 34561 | 4.5% |
| Hans Verkuil | 18739 | 2.4% |
| Kent Overstreet | 18690 | 2.4% |
| Larry Finger | 17222 | 2.2% |
| Greg Kroah-Hartman | 16610 | 2.2% |
| Shawn Guo | 12879 | 1.7% |
| Dave Chinner | 12838 | 1.7% |
| Paul Zimmerman | 12637 | 1.6% |
| H Hartley Sweeten | 12518 | 1.6% |
| Al Viro | 11116 | 1.4% |
| Andrey Smirnov | 11107 | 1.4% |
| Mauro Carvalho Chehab | 9726 | 1.3% |
| Laurent Pinchart | 9258 | 1.2% |
| Jussi Kivilinna | 8960 | 1.2% |
| Lee Jones | 8598 | 1.1% |
| Sylwester Nawrocki | 8305 | 1.1% |
| Artem Bityutskiy | 8094 | 1.0% |
| Dave Airlie | 7546 | 1.0% |
| Guenter Roeck | 7510 | 1.0% |
| Sanjay Lal | 7428 | 1.0% |
H. Hartley Sweeten's position at the top of the list seems like a permanent
aspect of these reports as he continues his work on the endless task of
cleaning up the Comedi drivers in the staging tree. Jingoo Han contributed
a long list of driver cleanup patches, moving the code toward the use of
standard helper functions and the "managed" resource allocation API. Hans
Verkuil improved a number of video acquisition drivers as part of his
new(ish) role as the maintainer of the Video4Linux subsystem. Alex
Elder's work is focused on the Ceph filesystem and associated "RADOS" block
device, and Al Viro implemented a large number of core kernel improvements
and API changes. Together, these five developers accounted for nearly 11%
of all the changes going into the kernel.
In the "lines changed" column, Joe Perches topped the list with a set of
patches effecting whitespace cleanups, printk() format changes,
checkpatch.pl tweaks, and more. Kent Overstreet added the bcache block caching subsystem and a number of
asynchronous I/O improvements. Larry Finger's 17 patches added new
features and device support to the rtlwifi driver, and Greg Kroah-Hartman
removed the Android "CCG" USB gadget driver from the staging tree.
Just over 200 employers are known to have supported work on the 3.10
kernel. The most active of these were:
| Most active 3.10 employers |
| By changesets |
| (None) | 1495 | 11.1% |
| Red Hat | 1269 | 9.4% |
| Intel | 912 | 6.8% |
| Linaro | 877 | 6.5% |
| Texas Instruments | 765 | 5.7% |
| (Unknown) | 746 | 5.5% |
| Samsung | 615 | 4.6% |
| IBM | 402 | 3.0% |
| Vision Engraving Systems | 392 | 2.9% |
| Google | 350 | 2.6% |
| SUSE | 332 | 2.5% |
| Renesas Electronics | 331 | 2.5% |
| Cisco | 300 | 2.2% |
| Inktank Storage | 277 | 2.1% |
| Broadcom | 182 | 1.3% |
| NVidia | 180 | 1.3% |
| Freescale | 175 | 1.3% |
| Oracle | 175 | 1.3% |
| Trend Micro | 139 | 1.0% |
| Fujitsu | 138 | 1.0% |
| By lines changed |
| (None) | 118326 | 15.3% |
| Red Hat | 88080 | 11.4% |
| Linaro | 64697 | 8.4% |
| Intel | 50641 | 6.6% |
| Google | 33342 | 4.3% |
| Cisco | 24109 | 3.1% |
| (Unknown) | 24033 | 3.1% |
| Samsung | 20893 | 2.7% |
| Texas Instruments | 20289 | 2.6% |
| NVidia | 18470 | 2.4% |
| Linux Foundation | 16759 | 2.2% |
| Renesas Electronics | 15777 | 2.0% |
| IBM | 14385 | 1.9% |
| QLogic | 14165 | 1.8% |
| Synopsys | 13698 | 1.8% |
| Vision Engraving Systems | 13111 | 1.7% |
| Broadcom | 12770 | 1.7% |
| Synapse Product Development | 11107 | 1.4% |
| OpenSource AB | 9584 | 1.2% |
| SUSE | 9479 | 1.2% |
With 3.10, Red Hat regained its usual place as the company with the most
contributions, though even Red Hat, once again, falls short of the
contributions from volunteers. Contributions from the mobile and embedded community continue their impressive growth; Linaro, in particular, continues to grow, with 42 developers contributing code under
its name to 3.10.
In summary, the kernel's busiest development cycle ever shows the
continuation of a number of patterns that have been observed for a while:
increasing participation from the mobile and embedded worlds, more
developers, and more companies. There was a slight uptick in volunteer
contributors this time around, but it is not at all clear that the
long-term decline in that area has been interrupted. As a whole,
the kernel development machine continues to operate in its familiar,
predictable, and productive manner.
Comments (1 posted)
By Jake Edge
June 26, 2013
The Ftrace kernel tracing facility has long had the ability to turn on or off tracing based on hitting
a certain function (the trigger) during the trace. For 3.10, several new trigger commands were added that allow even more control over the tracing process, but they can still only be triggered by function entry.
A recent patch set from Tom Zanussi would
extend
the Ftrace trigger feature to apply more widely so that the triggers could be
specified for any tracepoint. Triggers allow the user to reduce the amount
of tracing information gathered, so that narrowing in on the behavior of
interest is easier.
In 3.10, patches by
Steven Rostedt add several trigger actions to Ftrace
that will allow triggers to create a
snapshot of the tracing buffers, add a stack trace to the buffers, or
to disable and enable other trace events when the trigger is hit. Any of
those (and the long-available traceon and traceoff trigger
actions) can be associated with a particular function call in the kernel,
but not with an arbitrary trace event. The latter is what Zanussi's
patches would allow.
Tracepoints can be placed by kernel developers in locations of interest
for debugging or other kinds of monitoring; when an active tracepoint is
hit while
tracing is active, it will generate a trace event that gets stored in the
tracing buffers.
Some tracepoints have been
added permanently to the kernel, but others can be placed ad hoc to
provide extra information, particularly when tracking down a bug.
In contrast, the function
entry events are placed automatically at build time when Ftrace is enabled
in the
kernel. In both cases, the tracepoints are disabled until a user enables them
through the debugfs interface.
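For example, an individual event can be switched on or off by writing to
its enable file under the tracing debugfs directory (the
sched_switch event is used here purely as an illustration):
# from the tracing debugfs directory, typically /sys/kernel/debug/tracing
echo 1 > events/sched/sched_switch/enable   # start emitting sched_switch events
echo 0 > events/sched/sched_switch/enable   # turn the event back off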
Disabling or enabling a tracepoint is a fairly intrusive process that involves
modifying the code at the tracepoint site. That, in turn, requires the use
of stop_machine(), which will stop execution on all other CPUs in
the system so that the modified code properly propagates to the entire
system. But a trigger can fire on any function call (or, with Zanussi's
patches, any tracepoint), so there needs to be a way to enable and disable
tracepoints that doesn't
require taking any locks or sleeping. That constraint led Rostedt to add a
way to "soft
disable" trace events.
Tracepoints are simply marked as disabled using a flag,
rather than modifying the code itself, which means that the function handling the trace event
will be called but will
immediately return. Any tracepoints that might be
enabled by a trigger are initialized at the time the trigger is set, but
marked as soft disabled until the triggering event is hit.
All of the trigger capabilities are aimed at narrowing in on a particular
behavior (typically a bug) of interest. The pre-3.10 triggers only allowed
using the "big hammer" of disabling and enabling tracing itself, while
these new trigger actions provide more fine-grained control. For example,
using the "snapshot" trigger action will create a snapshot of the existing
tracing buffer in a separate file while continuing the trace. Rostedt
shows an example in his update
to Documentation/trace/ftrace.txt:
# in the tracing debugfs directory, typically /sys/kernel/debug/tracing
echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
That command will trigger a snapshot, once (that's what the ":1" does),
when the native_flush_tlb_others() function is called.
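The trace itself keeps running after the snapshot is taken; assuming the
usual debugfs layout, the captured buffer can be read back from the
snapshot file in the same directory:
# still in /sys/kernel/debug/tracing
cat snapshot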
Or to enable a specific event:
echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > set_ftrace_filter
That will enable the sched_switch event in the sched subsystem when the
try_to_wake_up() function is called. The event will be enabled the first
two times that try_to_wake_up() is called; presumably some other trigger
would disable the event in between.
Zanussi's patches (which come with good documentation and examples, both in
the introductory patch and an update to
Documentation/trace/events.txt) apply those same triggers to
the events interface. Thus the kmem:kmalloc event can be enabled
when the read() system call is entered as follows:
# again from /sys/kernel/debug/tracing
echo 'enable_event:kmem:kmalloc:1' > events/syscalls/sys_enter_read/trigger
That will also cause the event to be soft disabled until the trigger is
hit, which can be verified by:
cat events/kmem/kmalloc/enable
That will show a "0*" as output, which means that the event is soft disabled.
But there is more that can be done using triggers. The trace events filter
interface allows tests to
be performed to filter the tracing output,
and Zanussi's patches extend that to the trigger feature. So adding a
stack backtrace to the tracing buffer for the first five
kmem:kmalloc events where the bytes requested value is 512 or more
could
be done this way:
echo 'stacktrace:5 if bytes_req >= 512' > events/kmem/kmalloc/trigger
One can check on the status of the trigger using:
cat events/kmem/kmalloc/trigger
which would show:
stacktrace:count=2 if bytes_req >= 512
if three of the five stack backtraces had already been emitted into the
tracing buffer (the count shown is the number remaining).
The triggers and tests can be mixed and matched in various ways, but
there are some restrictions. There can only be one
stacktrace or snapshot trigger per event and each event can only have one
traceon and traceoff trigger. A triggering event can have multiple
enable_event and
disable_event triggers, but each must refer to a different event to be enabled
or disabled. So sys_enter_read could enable two different events, set with
two separate commands echoed into its trigger file, as in the sketch below.
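The pairing below is hypothetical, built from events already used in this
article; append redirection is used so that the shell does not truncate the
trigger file between the two writes:
# again from /sys/kernel/debug/tracing; each write adds one trigger
echo 'enable_event:kmem:kmalloc:1' >> events/syscalls/sys_enter_read/trigger
echo 'enable_event:sched:sched_switch:1' >> events/syscalls/sys_enter_read/trigger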
The Ftrace triggers will be released soon with 3.10. Zanussi's patches are
still in the review stage. Given that the 3.11 merge window will likely
open soon, triggers for all events will probably have to wait for 3.12.
Comments (1 posted)
By Jonathan Corbet
June 26, 2013
The number of latency-sensitive applications running on Linux seems to be
increasing, with the result that more latency-related changes are finding
their way into the kernel. Recently LWN looked at the
Ethernet device polling patch set, which
implements
polling to provide minimal latency to critical networking tasks. But what
happens if you want the lowest possible latency for block I/O requests
instead? Matthew Wilcox's
block driver
polling patch is an attempt to answer that question.
As Matthew put it, there are users who are
willing to go to great lengths to lower the latencies they experience with
block I/O requests:
The problem is that some of the people who are looking at those
technologies are crazy. They want to "bypass the kernel" and "do
user space I/O" because "the kernel is too slow". This patch is
part of an effort to show them how crazy they are.
The patch works by adding a new driver callback to struct
backing_dev_info:
int (*io_poll)(struct backing_dev_info *bdi);
This function, if present, should poll the given device for completed I/O
operations. If any are found, they should be signaled to the block layer;
the return value is the number of operations found (or a negative error
code).
Within the block layer, the io_poll() function will be called
whenever a process is about to sleep waiting for an outstanding operation.
By placing the poll calls there, Matthew hopes to avoid going into polling when there
is other work to be done; it allows, for example, the submission of
multiple operations without invoking the poll loop. But, once a process
actually needs the result of a submitted operation, it begins polling
rather than sleeping.
Polling continues until one of a number of conditions comes about. One of
those, of course, is that an operation that the current process is waiting
for completes. In the absence of a completed operation, the process will
continue polling until it receives a signal or the scheduler
indicates that it would like to switch to a different process. So, in
other words, polling will stop if a higher-priority process becomes
runnable or if the current process exhausts its time slice. Thus, while
the polling happens in the kernel, it is limited by the relevant process's
available CPU time.
Linus didn't like this approach, saying
that the polling still wastes CPU time even if there is no higher-priority
process currently contending for the CPU. That said, he's not necessarily
opposed to polling; he just does not want it to happen if there might be other
runnable processes. So, he suggested, the polling should be moved to the
idle thread. Then polling would only happen when the CPU was about to go
completely idle, guaranteeing that it would not get in the way of any other
process that had work to do.
But Linus might actually lose in this case. Block maintainer Jens Axboe responded that an idle-thread solution would
not work. "If you need to take the context
switch, then you've negated pretty much all of the gain of the polled
approach." Also he noted that the
current patch does the polling in (almost) the right place, just where the
necessary information is available. So Jens appears to be disposed toward
merging something that looks like the current patch; at that point, Linus
will likely accept it.
But Jens did ask for a bit more smarts when it comes to deciding when the
polling should be done; in the current patch, it happens unconditionally
for any device that provides an io_poll() function. A better
approach, he said, would be to provide a way for specific processes to opt
in to the polling, since, even on latency-sensitive systems, polling will
not be needed by all processes. Those processes that do not need extremely
low latency should not have to give up some of their allotted CPU time for
I/O polling.
So the patch will certainly see some work before it is ready for merging.
But the benefits are real: in a test run by Matthew on an NVMe device, I/O
latencies dropped from about 8.0µs to about 5.5µs — a significant
reduction. The benefit will only become more pronounced as the speed of
solid-state storage devices increases; as the time required for an I/O
operation approaches 1µs, an extra 2.5µs of overhead will come to dominate
the picture. Latency-sensitive users will seek to eliminate that overhead
somehow; addressing it in the kernel is a good way to ensure that all users
are able to take advantage of this work.
Comments (2 posted)
Patches and updates
Kernel trees
- Sebastian Andrzej Siewior: 3.8.13-rt12. (June 21, 2013)
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Page editor: Jake Edge
Distributions
By Jake Edge
June 26, 2013
It is something of a recurring question in the Linux world: where should users
file bug reports, with the distribution or the upstream project? There are
pros and cons to each side, but there are some subtleties. Generally, the
first bug report will come to the distribution, but how it should be
handled from there, and who is in the best position to work with the
upstream project on a bug, is sometimes contentious.
The topic came up again in a thread on the fedora-devel mailing list started by Florian Weimer, who asked about the
preferred place to file bugs, enhancement requests in particular. Requests
for enhancement are something of a special case, really, because they
typically represent a personal preference that may or may not be shared by
others—the package maintainer, for example. As Rex Dieter put it, it should be up to the maintainer's
discretion:
Obviously, if
it's something they are interested in or is important, it likely is in their
best interest to help move feature requests upstream. If not, well, then
that burden is probably best left to you (or some other interested party).
That dispensed with the feature request question for the most part, and the
thread moved on to the larger question of bug reports and the
responsibilities of a package maintainer. Jóhann B. Guðmundsson took a hard line on what maintainers should do:
Yes it's the distribution maintainers responsibility to forward that
request upstream if that is the case followed by notifying the reporter
they have done so with a link to the upstream report in the relevant bug
report in bugzilla.redhat.com.
But others were not so absolute. Many of Fedora's packagers are
volunteers, so it is difficult to expect them to devote their time to
working with upstream projects on bugs they may not even be able to
reproduce. In those cases, it makes more sense to refer the reporter to
the upstream bug tracker so that they can work with the project on solving
the bug. Jeffrey Ollie outlined some of
those reasons packagers may not be able, or sometimes willing, to be the
go-between for the bug reporter and the project. He noted that maintaining
packages is not part of his regular job. In addition, many of the bugs
reported are not ones he can track down himself, but, when he can, he does
try to look into the problem: "For most bug
reports, I'm willing to take a little bit of time and see if there's a
new release I've missed or if the bug has been already identified
upstream and there's a patch that can be applied."
Guðmundsson replied
with an unbending view: "Then you should not be maintaining that
component". Others saw things differently, finding a bit more room
for divergent opinions. Richard Shaw noted
that a programming background isn't required to be a packager for Fedora,
while others took umbrage at the tone. Ollie said that there is no one right answer, as it
depends on a number of factors:
The tl;dr summary is that there shouldn't be a single
standard for what we expect of packagers, especially in the context of
what to expect when bugs are filed against their packages on Red Hat's
bugzilla. My view is that expectations are going to depend on the
criticality of the package to Fedora (kernel, installer, default
desktop stack) and the packager (are they being paid to maintain the
package in Fedora or are they volunteering their time).
Package maintainers are not necessarily intimately familiar with the code
for the project—or projects—they maintain. Building the code and pulling it into an RPM file is all
that is really required. Having maintainers that are knowledgeable
about the code is certainly desirable, but may not be all that realistic.
Given that, it probably makes sense not to have a rigid policy. Eric Smith put it this way:
For some of the packages I maintain, I am able to do some
bug fixing myself, and forward the patches upstream. For other
packages, I have done the packaging because I use the software, but I
am not at all knowledgeable about the innards, and get lost quickly
trying to fix any bugs. I think it's reasonable in those cases to
advise that the user report the bugs upstream.
But, as Ian Pilcher pointed out, bugs filed
upstream by a package maintainer may carry more weight than a user's bug
would. It is not a malicious act by the project, he said, more of a
"social triage" to try to separate the wheat from the chaff. "In many cases, the Fedora packager has built up a level
of credibility with the development community that could get a bug
report the attention that it deserves."
The Fedora package
maintainer responsibilities document contains a lot of suggestions for
how packagers should work with upstreams, but doesn't actually
require all
that much. For example, it is recommended that packagers "forward
severe bugs upstream when possible for assistance". It also suggests
that bugs reported be dealt with in a timely manner, possibly with the
assistance of co-maintainers or others more familiar with the code
(including the upstream project).
Bug reports vary widely in their quality, and those that come by way of
distributions are often not directly applicable upstream.
Distributions tend to lag project releases,
so the version that the distribution user is reporting a bug against may
well be one or more releases behind the project's version. Many projects
aren't terribly
interested in bugs in older versions, and would rather see whether the bug
exists in the latest release. Other projects are only interested in bugs
that are reproducible in the latest code in their source code repository.
In either of those cases it may be difficult for a user (or even a packager) to
determine whether the bug is still present,
which just throws up one more barrier to bug reporting.
Systems like Ubuntu's Launchpad, where upstream bugs can be linked to the
distribution bug report, may help, but they don't solve some of the underlying
problems inherent in bug reporting. Bug handling is not primarily a technical
problem, though some technical measures may help. Getting better bug
reports from users, with reproducible test cases and all the required
information, is, to a large extent, a social problem.
Back in early 2011, we looked at a similar
discussion in the Debian community, which touched on earlier Fedora
struggles with the problem as well. Handling bugs is tricky for any number
of reasons, and it is hard to imagine that there is "one true policy" out
there that would alleviate most or all of the problems. Proper bug
handling just depends on too many variables to make any kind of rational,
hard-and-fast rules about it. Discussions like these can only help in
devising guidelines and best practices, though, and we will undoubtedly see
them pop up again.
Comments (23 posted)
Brief items
The Xen4CentOS project has been announced with the aim of helping CentOS 5 Xen users migrate to CentOS 6 while also updating to Xen 4. Since RHEL 6 did not ship with Xen (it switched to KVM), CentOS and Xen users have not had an upgrade path, but that has changed with this announcement. The project is a collaboration between CentOS, Xen, Citrix, GoDaddy, and Rackspace; it intends to maintain the Xen hypervisor and its tools for CentOS 6. More information can be found in the
release notes,
quick start guide, and an
article at Linux.com.
Full Story (comments: 12)
Distribution News
Fedora
The Fedora election results have been announced. Kevin Fenzi, Bill
Nottingham, Tomáš Mráz, Matthew Miller, and Peter Jones have been elected
to FESCo. Jiri Eischmann, Christoph Wickert, Luis Enrique Bazán De León,
and Robert Mayr have been elected to FAmSCo. Matthew Garrett, Josh Boyer,
and Eric Christensen have been elected to the Advisory Board.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
The H
looks at the
CyanogenMod 10.1.0
release. It is the first in six months for the alternative firmware for Android devices. 10.1.0 is the second release based on Android 4.2 ("Jelly Bean"). "
With the next few releases, which are planned to arrive monthly (hence their labelling as "M-series"), the developers plan to build on the AOSP Jelly Bean foundations and deliver new features of their own making, such as the new privacy mode that has already been merged into the code base and will be tested in upcoming nightly releases."
Comments (9 posted)
LinuxInsider
reviews
SolydXK. "
SolydXK surpasses the typical new Linux distro in a number of ways. It is polished and fast, despite its otherwise young and lightweight nature. The Xfce and KDE choices provide solid computing power and convenience without being clones of other distros.
I particularly like its emphasis on no-nonsense computing without bogging down users in mundane setup and tinkering. I constantly look for Linux distros that do not try to reinvent the wheel. SolydXK will not discourage newcomers but will not turn off seasoned Linux users either."
Comments (none posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
June 26, 2013
Not many developers actually prefer reviewing patches; it is
time-consuming and is often not as much fun as writing code. Nevertheless, it is an essential part of software development, so
open source projects must find some way to motivate volunteers to
undertake the task. PostgreSQL recently tried out a rather unusual
approach: sending out a public list of all of those patch submitters who had
not taken on a patch to review—which project policy mandates.
The move ruffled a few feathers, but on the whole, more people seemed
supportive of the idea than seemed offended by it.
At the heart of the issue is the project's practice of holding CommitFests—periodic pauses in PostgreSQL development
during which the focus changes from new development work to patch
review and merging. There have been many such CommitFests in
recent years; four per calendar year. The current
one started on June 16, targeting the upcoming 9.4 release. Josh
Berkus, the current CommitFest Manager (CFM), emailed the pgsql-hackers list to announce it,
and included a reminder about the project's policy that everyone who
submits a patch is expected to pitch in and review at least one patch
by someone else.
Berkus followed up on the subject on June 24,
sending a list of the patch submitters who had not yet claimed a patch
to review. The list was sizable, he said—at least as a
proportion of the overall participants:
The two lists below, idle submitters and committers, constitutes 26
people. This *outnumbers* the list of contributors who are busy
reviewing patches -- some of them four or more patches. If each of
these people took just *one* patch to review, it would almost entirely
clear the list of patches which do not have a reviewer. If these folks
continue not to do reviews, this commitfest will drag on for at least 2
weeks past its closure date.
In addition, he noted, the patch-review policy (adopted for the
PostgreSQL 9.2 development cycle) indicates that not helping with
patch review would "affect the attention your patches
received in the future." He further encouraged developers who
submit patches for their employer to let their supervisors know that
participating in the review process is considered part of contributing.
Naming names?
The tone of Berkus's email was certainly conciliatory, but there
were nevertheless a few people on the mailing list who seemed taken
aback by the act of publicly listing the names of the non-reviewers.
Dimitri Fontaine said "I very much don't like
that idea of publishing a list of names," and that he would
prefer sending out individual notices. Andreas Karlsson concurred, arguing that private
reminders are more effective. Berkus replied that private emails had been sent
a week earlier, but only a few of the recipients had picked up a patch
in the interim. And, as he noted
later, although the "submit one, review one" policy had been in place
for previous CommitFests, it had simply never been enforced in the past.
Several of these early critics referred to the process as "naming
and shaming," but Joshua Drake disagreed
with that characterization:
I believe you are taking the wrong perspective on this. This is
not a shame wall. It is a transparent reminder of the policy and those
who have not assisted in reviewing a patch but have submitted a patch
themselves.
Drake quipped that participants should "leave the ego at the
door," but Fontaine replied that ego was not the
issue at all. Emailing the list of non-reviewers to everyone, he said,
would discourage contributors from participating. Instead, he said, the
project ought to do something positive to reward the people who
do review patches. Maciej Gajewski did not object to publicizing the list of
names, but argued that the policy
itself needed to be more prominently communicated, so that "newbies like myself (who wouldn't even dare reviewing patches
submitted be seasoned hackers) are not surprised by seeing own name on
a shame wall."
Let's review
Whether or not sending the names to the mailing list constitutes
"shaming" is, of course, a matter of opinion. Mark Kirkwood pointed out
that the subject line of Berkus's email ("[9.4 CF 1] The
Commitfest Slacker List") may have triggered that interpretation: "Lol - Josh's choice of title has made it a small shame wall (maybe
only knee high)." Nevertheless, the content of Berkus's email
simply reminded recipients of the review policy in pretty straightforward
language. One might argue that working in the open dictates public
scrutiny of all sorts of potentially embarrassing incidents; for
instance, is it "shaming" to have a patch rejected?
Furthermore, the only reason that it is important to remind list
participants of the CommitFest review policy (and perhaps the key reason to
have the policy in the first place) is that patch reviewers
are in short supply relative to submitters, and the project is trying
to rectify the imbalance.
As Kirkwood's email
pointed out, reviewing patches is a fundamentally more difficult task
than submitting them:
I've submitted a few patches in a few
different areas over the years - however if I grab a patch on the queue
that is not in exactly one of the areas I know about, I'll struggle to
do a good quality review.
Now some might say "any review is better than no review"... I don't
think so - one of my patches a while was reviewed by someone who didn't
really know the context that well and made the whole process grind to a
standstill until a more experienced reviewer took over.
But, Tom Lane countered, a major
reason for the existing CommitFest process is that, by reviewing
patches, a submitter learns more about the code base. It is a community
effort, he said, "and each of us can stand to improve our
knowledge of what is fundamentally a complex system."
Less-experienced reviewers are no problem, he said, and it is fair to
expect reviewers to ask questions or request better comments and
explanations if they find something unclear.
David Rowley agreed, adding that the CommitFest
offers a chance for reviewers to work side-by-side with patch
submitters, confident that the submitters will have the time to reply (and
perhaps iterate on the patch), since new development is on pause.
"Even if a novice reviewer can only help a tiny bit, it might
still make the difference between that patch making it to a suitable
state in time or it getting bounced to the next commitfest or even the
next release." Rowley said that he had started out as a novice
in the PostgreSQL 8.4 cycle, and learned a great deal by reviewing
patches and by being perfectly clear about what he did and did not
understand. Other new reviewers, he said, would do well to be up
front about the limits of their experience as well.
Almost every free software project grapples with the fundamental
problem of cultivating new users and developers into experienced
contributors. PostgreSQL's patch-review policy for CommitFests is
clearly part of its program to do such developer cultivation. To that
end, it may not matter so much how the project notifies patch
submitters that they are also on the hook for a patch review, just as
long as the submitter participates. PostgreSQL gets better-reviewed
code, and the patch reviewer learns something in the process.
Seeing one's name on a list of people who still owe a review might be
surprising to some, but judging by the responses on the mailing list,
at least a handful of those listed got on the ball and signed up for a
patch to review. Consequently, it is hard to argue that the list of
names did not have a positive effect on the project. Furthermore, if
there were any hurt feelings, Berkus subsequently took steps to mend
them, asking the list for suggestions
on how to give public kudos to patch reviewers. What is still up in
the air is whether such kudos will take the form of public thanks,
physical gifts, or something else altogether.
Comments (3 posted)
Brief items
It’s easy for a misbehaving program to write garbage, but not to worry! we’re absolutely certain that garbage is consistently replicated across the cluster. Yeah, well done there.
—
Andrew Cowie.
Wha? Is there anyone in the world who needs to sync their feeds between two different phones?
—
Jamie
Zawinski, lamenting the feature set of Google Reader replacements.
Comments (7 posted)
Daala is a new video codec from Xiph.org; the project has posted
a detailed
explanation of how it works. "
Daala tries for a larger leap
forward— by first leaping sideways— to a new codec design and numerous
novel coding techniques. In addition to the technical freedom of starting
fresh, this new design consciously avoids most of the patent thicket
surrounding mainstream block-DCT-based codecs. At its very core, for
example, Daala is based on lapped transforms, not the traditional
DCT."
Comments (36 posted)
Version 5.5.0 of the
PHP language is out. New features include generator and coroutine support,
a new
finally keyword, a better password hashing API, a new opcode
caching mechanism, and more.
Comments (none posted)
Lennart Poettering posted a message to the
systemd-devel mailing list that outlines some upcoming changes. It refers to
an earlier message that began describing changes coming in the kernel's
control group (cgroup) implementation and their effects on systemd. Those
cgroup changes were discussed in February 2012, and
cgroup maintainer Tejun Heo decided to
eventually drop
multiple cgroup hierarchies in March 2012. The basic idea is that the
single hierarchy will also have a single manager on the system and all
changes to the cgroups will need to go through that manager. For systems
that use it, systemd will become the manager of cgroups, so other tools
that want to make changes will no longer follow the PaxControlGroups
recommendations and will instead make D-Bus requests to systemd.
Non-systemd systems will need to come up with their own manager and
management interface once the cgroup changes are made in the kernel (for
which the timeline is unclear). "In the long run there's only going to be a single kernel cgroup
hierarchy, the per-controller hierarchies will go away. The single
hierarchy will allow controllers to be individually enabled for each
cgroup. The net effect is that the hierarchies the controllers see are
not orthogonal anymore, they are always subtrees of the full single
hierarchy."
Comments (114 posted)
Version 1.0.0 of the osm-gps-map library has been released. The library is a general-purpose mapping widget supporting OpenStreetMap and other online services, as well as GPS tracks. This is the first stable release in more than a year, and adds support for GTK+ 3.0.
Full Story (comments: none)
Mozilla has released Firefox 22. Among the changes in this version are WebRTC support being enabled by default, major performance improvements for asm.js code, playback-rate adjustment for HTML5 audio and video, the addition of support for HTML5 <data> and <time> elements, social networking management, and, best of all, a built-in font inspector.
Full Story (comments: none)
The Document Foundation has announced LibreOffice 4.0.4, which is expected to be the final minor release before the announcement of LibreOffice 4.1.0, currently slated for late July. A number of improvements to interoperability with proprietary office suites land in 4.0.4, in addition to the usual host of bug- and regression-fixes.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
On his blog, Vincent Sanders
peers into the process of rendering an image in a web browser. While the task is conceptually simple, there is actually a fair amount of work to be done to get from the bits over the wire to the image on the screen. "
In addition the image may well require tiling (for background gradients) and quite complex transforms (including rotation) thanks to CSS 3. Add in that javascript can alter the css style and hence the transform and you can imagine quite how complex the renderers can become."
Comments (none posted)
Opensource.com has an
interview with
Tux Paint creator Bill Kendrick. Tux Paint is a drawing program targeted at children, but is used by adults, particularly teachers. "
I didn't design Tux Paint to be a tool that's only used for teaching art, I designed it to be a tool like a piece of paper and pencil. Some schools use it to create narrated storybook videos. Other schools use it for counting and math. Ironically, these days, after being exposed to things like Mr. Pencil Saves Doodleburg on my son's Leapster Explorer (which, by the way, runs Linux!), I'm beginning to think Tux Paint needs some additional features to help teach concepts about art and color. Or at least a nicely done set of curricula."
Comments (10 posted)
Wired notes a potentially major complication for the Bitcoin Foundation: the California Department of Financial Institutions has decided that the foundation is in "the business of money transmission," which would subject it to regulatory scrutiny, and has sent it a cease-and-desist letter. For its part, the Bitcoin Foundation does not seem to feel that the letter is a source of trouble, but evidently other US states are similarly taking a close look at how Bitcoin operates.
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
The Free Software Foundation Europe
announces
that Harald Welte has won a GPL infringement case against FANTEC in
Germany. "
The court decided that FANTEC acted negligently: they would
have had to ensure to distribute the software under the conditions of the
GPLv2. The court made explicit that it is insufficient for FANTEC to rely
on the assurance of license compliance of their suppliers. FANTEC itself is
required to ascertain that no rights of third parties are violated."
Comments (2 posted)
The Document Foundation has announced that King Abdulaziz City for Science
and Technology (KACST) of Saudi Arabia has joined the Advisory Board. "
KACST sponsors the National Program for Free and Open Source Software Technologies (Motah), which has been contributing to LibreOffice for almost one year, to enhance the Arabic language and the RTL (right-to-left) support, and to develop new features."
Full Story (comments: none)
The Linux Foundation has announced that dotCloud, LSI Corporation and
Nefedia are joining the organization. "
The demand for fast paced
technology innovation creates pressure for shorter time to market and
shrinks technology lifecycles. Process and tool efficiencies are required
to address these needs and help grow businesses faster. Today’s new Linux
Foundation members are supporting this pace of innovation through
collaborative development in three IT markets—cloud computing, storage and
networking, and IP Multimedia System (IMS)."
Full Story (comments: none)
The Free Software Foundation reports that Richard Stallman has been
inducted into the 2013 Internet Hall of Fame. "
He joins other
visionaries like Aaron Swartz, Jimmy Wales, and John Perry Barlow, who have
all made significant contributions to the advancement of the global
Internet. The Internet Hall of Fame inducted Stallman for his contributions
as creator of the GNU Project, main author of the GNU General Public
License, and his philosophical contributions as founder of the free
software movement. Richard has been named an Innovator, a category which
recognizes and celebrates individuals who made outstanding technological,
commercial, or policy advances and helped to expand the Internet's reach."
Full Story (comments: 26)
New Books
O'Reilly Media has released "Functional JavaScript" by Michael Fogus.
Full Story (comments: none)
Calls for Presentations
Linux.conf.au 2014 will be held in
Perth, Western Australia, January 6-10, 2014. The
call for proposals closes July
6, 2013.
Full Story (comments: none)
There will be a Tracing Summit on October 23, co-located with
LinuxCon Europe in Edinburgh, UK. "
Our objective is to
gather people involved in development and users of tracing tools and
trace analysis tools to allow discussing the latest developments, allow
users to express their needs, and generally ensure that developers and
end users understand each other." The proposal deadline is August 21.
Full Story (comments: none)
The FLOSS UK 'DEVOPS' Spring 2014 Workshop & Conference will take place
March 18-20, 2014, in Brighton, UK. The call for papers closes November
15, 2013.
Full Story (comments: none)
Upcoming Events
PGDay UK will take place July 12, 2013, near Milton Keynes, UK.
"
PGDay UK is the main event for the local PostgreSQL community, with
speakers from various companies and spread across the UK. We've got many of
the key developers in town, so please come to listen and to talk to
developers and users."
Full Story (comments: none)
Events: June 27, 2013 to August 26, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| June 26-June 28 | USENIX Annual Technical Conference | San Jose, CA, USA |
| June 27-June 30 | Linux Vacation / Eastern Europe 2013 | Grodno, Belarus |
| June 29-July 3 | Workshop on Essential Abstractions in GCC, 2013 | Bombay, India |
| July 1-July 5 | Workshop on Dynamic Languages and Applications | Montpellier, France |
| July 1-July 7 | EuroPython 2013 | Florence, Italy |
| July 2-July 4 | OSSConf 2013 | Žilina, Slovakia |
| July 3-July 6 | FISL 14 | Porto Alegre, Brazil |
| July 5-July 7 | PyCon Australia 2013 | Hobart, Tasmania |
| July 6-July 11 | Libre Software Meeting | Brussels, Belgium |
| July 8-July 12 | Linaro Connect Europe 2013 | Dublin, Ireland |
| July 12 | PGDay UK 2013 | near Milton Keynes, England, UK |
| July 12-July 14 | 5th Encuentro Centroamerica de Software Libre | San Ignacio, Cayo, Belize |
| July 12-July 14 | GNU Tools Cauldron 2013 | Mountain View, CA, USA |
| July 13-July 19 | Akademy 2013 | Bilbao, Spain |
| July 15-July 16 | QtCS 2013 | Bilbao, Spain |
| July 18-July 22 | openSUSE Conference 2013 | Thessaloniki, Greece |
| July 22-July 26 | OSCON 2013 | Portland, OR, USA |
| July 27 | OpenShift Origin Community Day | Mountain View, CA, USA |
| July 27-July 28 | PyOhio 2013 | Columbus, OH, USA |
| July 31-August 4 | OHM2013: Observe Hack Make | Geestmerambacht, the Netherlands |
| August 1-August 8 | GUADEC 2013 | Brno, Czech Republic |
| August 3-August 4 | COSCUP 2013 | Taipei, Taiwan |
| August 6-August 8 | Military Open Source Summit | Charleston, SC, USA |
| August 7-August 11 | Wikimania | Hong Kong, China |
| August 9-August 11 | XDA:DevCon 2013 | Miami, FL, USA |
| August 9-August 12 | Flock - Fedora Contributor Conference | Charleston, SC, USA |
| August 9-August 13 | PyCon Canada | Toronto, Canada |
| August 11-August 18 | DebConf13 | Vaumarcus, Switzerland |
| August 12-August 14 | YAPC::Europe 2013 “Future Perl” | Kiev, Ukraine |
| August 16-August 18 | PyTexas 2013 | College Station, TX, USA |
| August 22-August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 23-August 24 | Barcamp GR | Grand Rapids, MI, USA |
| August 24-August 25 | Free and Open Source Software Conference | St. Augustin, Germany |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol