By Jonathan Corbet
September 4, 2013
There was a period where it appeared that the smartphone industry would be
dominated by closed products and non-free software. Android has done a lot
to change that situation; it is now possible to own a hackable device that
runs mostly free software. But it would be nice to have some viable
alternatives, preferably even more free and more Linux-like. Among the
many would-be
contenders for the title of leading alternative, Firefox OS offers a
special appeal. It is, after all, a Linux-based system built by an
organization that has a history of looking out for the interests of its
users. So when the opportunity came along to try out Firefox OS on real
hardware, your editor did not hesitate for long.
The ZTE Open
The device in question is the ZTE Open, a Firefox OS
handset that can be had for a mere $80. That is a low price for a
smartphone, but it is consistent with Mozilla's apparent strategy of
targeting the cheaper end of the market. Cheap is nice, but, as one might
expect, some severe compromises had to be made to arrive at that price.
The phone uses an oldish Qualcomm MSM7225A processor with only 256MB of
memory. The camera offers a two-megapixel sensor, which is low by
contemporary standards. Internal storage is minimal, but the phone comes
with a 4GB MicroSD card.
Visually, the device is smaller than many current devices. It is also
bright orange; it looks a lot like a Nexus One that has been outfitted for
hunting season. The 480x320 HVGA screen is decidedly low-resolution by
current standards. As one might expect, the device is often slow to
respond, especially when switching between applications. Perhaps most
annoying, though, is that the touchscreen itself is often unresponsive.
Using the Firefox OS on-screen keyboard can be a slow and painful
experience.
The Firefox OS interface has not changed a great deal since it was last
reviewed here at the end of last
year. The annoying three-step process (hit the power button, swipe upward,
tap the "unlock" icon) to unlock the screen is still
necessary. Swiping toward the left on the home screen yields a list of
installed applications, while swiping to the right yields a list of
installable application categories. Strangely, many of the categories are
not initially visible on that
screen. Instead, one must hit the "more" button to see the full list of
categories; only
thereafter is it possible to see which applications can be found therein.
There is a reasonably long list of available applications, but relatively
few that would be familiar to iOS or Android users.
Application installation is a matter of holding a finger down on the
relevant icon. Since applications are all web-based, though, there is no
real need to install them unless one wants to run one offline or have the
icon in a handy
place. There is a
permissions model for applications, but that is all
hidden from the user; for the most part, users are supposed to rely on the
maintainers of the application "marketplace" to ensure that malicious
applications are not made available. The one exception is for location
data; the system will ask the user before allowing an application to access
the user's current location.
There is a basic email client that, unfortunately, could not be tested,
since it refuses to deal with mail servers that have self-signed
certificates. The web browser is Firefox, of course; it works as
expected. There is a basic mapping tool (using "HERE") that can generate driving
directions; there is no turn-by-turn navigation available, though. As an
added "benefit," the maps include location-based advertisements. Weather
information is available through an Accuweather app; there is also a basic
calendaring tool. The contact manager can import data from Facebook, but
not from other sources (Google, for example).
At the interface level, one of the most striking decisions is the complete
absence of a "back" button. The result is that one often seems to end up
in some application-specific dead end, with no recourse other than to hit
the "home" button and drop out entirely. Getting rid of "back" may make
application development easier, but the result seems to be less friendly
for the user.
The home button will, if held down, produce a scrollable screen showing the
currently
running applications. The user can then switch to one of those
applications; there is an option to close running applications as well.
This screen is supposed to show a thumbnail with the current screen
contents of each app, but those thumbnails are often blank for some
reason.
All told, the ZTE Open is reminiscent in many ways of the first Android
phones. It is slow, somewhat buggy, and the functionality is not up to
what the market leaders provide. Whether Firefox OS will yet turn out
to be a
disruptive technology like Android was remains to be seen.
Under the hood
One does not need to look too hard at Firefox OS to realize that its
developers have taken advantage of a lot of free infrastructure from
Android. The kernel on the ZTE Open is an Android-derived 3.0.8 kernel,
ancient by current standards, with wakelocks and all. Services like binder are running.
The Android USB debugging protocol is supported, so tools like adb
and fastboot can be used in the usual manner (though there is an update that should be applied for
fastboot use). Much of the
graphics subsystem is built on the Android "gralloc" API as well. All
told, Firefox OS has benefited strongly from the availability of the
Android code as a base to build on.
There appears to be no available terminal emulator application for Firefox
OS. But one can, naturally, get a shell on the device by plugging it into
a USB port and running adb shell. The shell environment is
based on BusyBox and is rudimentary — but not worse than what one
encounters on an Android device. It is also an unprivileged
shell; there does not appear to be any way to gain root access short of
exploiting a vulnerability — or installing a new version of the operating
system.
In the limited time available your editor was unable to succeed in the
latter task — replacing the operating system. There is extensive
documentation on how this should be done on the Mozilla web site, and
it is a simple matter of patience to download the 12GB "source" tree
("source" being in quotes because it includes things like a binary
cross compiler, video files, and more). The actual build process requires
that the phone be
connected so that a number of binary files can be copied off of it; these
(proprietary) files are needed to build a replacement image.
Thereafter the build fails (in the same manner on Ubuntu, Debian, and
Fedora boxes) after a long list of warnings. Somewhat discouraging.
Perhaps this particular problem is a temporary setback resulting from the
state of the source tree when this build was attempted. But it's clear
that, like building Android, making a new Firefox OS image is not a
task for the faint of heart. Should this system take off, future users are
far more likely to exercise their freedoms once a CyanogenMod-like project
comes along to take care of a lot of the details.
Conclusion
But will Firefox OS take off? It is hard to see the system, as
demonstrated by the ZTE Open, displacing Android anytime soon. It is
too slow, too rough-edged, and lacking too many third-party applications.
Most people with access to a recent Android-based handset are likely to
stick with that rather than shift over to Firefox OS.
But the world is full of people without access to such a handset. Mozilla
seems to be making a play for the attention of many of those people by
going after the low end of the market. After all, $80 will not buy a
particularly satisfying Android device either; it is hard to imagine
Android running on hardware like the ZTE Open in any kind of pleasing way.
Perhaps Firefox OS will find a place running on low-end devices; by
the time the system matures (and it does appear to be developing quickly),
there might just be an established user base for it.
Working with this device reminded your editor of a scene from Charlie
Stross's classic Accelerando:
Amber clutches the phone like a lifesaver: It's a cheap disposable
cereal-packet item, and the cardboard is already softening in her
sweaty grip.
If we can envision an era where cardboard telephones can be obtained
from a box of cereal, it is not much of a stretch to think about those
phones running a relatively undemanding system like Firefox OS.
Meanwhile, though, Firefox OS hopes for a place on the plastic devices that
we use now. Anybody wanting to experiment with the system can build it for
a number of current devices, including most recent "Nexus" phones. If
enough developers do that and start taking the system in interesting
directions, if more applications appear, and if people actually buy
Firefox OS devices, it may well develop to a point where it is a realistic
competitor to the more established mobile operating systems. Another free
Linux-based mobile system would be a good thing, so one can only wish Mozilla
luck as it pursues that goal.
Comments (87 posted)
By Jake Edge
September 5, 2013
The Linux Foundation (LF) is well-known for its support of Linux
(obviously), but over the last few years it has also taken on the role of a
shepherd for
other open source efforts. One of the earliest examples was
the now moribund Nokia-Intel collaboration on the MeeGo mobile operating
system. Others, such as Yocto, OpenMAMA, OpenDaylight, Tizen, and Xen all
seem to be chugging along at some level, while CodeAurora Forum and
FOSSBazaar seem rather quiet. All of those projects have a fairly
strong connection to Linux and open source software; the same cannot be
said for the most recent LF collaborative project: OpenBEL.
The "BEL" in the project's name is for Biological Expression Language,
which gives an idea how far from open source software the LF has ventured.
BEL is a data format used by life sciences researchers to
encapsulate their research findings in a form that can be used by
programs. It can represent the relationships between various biological
entities as discovered in experiments. Importantly, BEL can store these
relationships in context, where that might include the experimental
regimen, research cited, and other important characteristics of those
experiments.
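To give a flavor of the language, a BEL statement might look something
like p(HGNC:TNF) increases bp(GO:"inflammatory response"), which would
assert that the abundance of the protein TNF increases the named
biological process; this example is constructed for illustration rather
than taken from the OpenBEL documentation.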
In some ways, BEL is similar to the ideas behind the semantic web and other
structured knowledge efforts. It is meant to structure information in a
way that allows computers to "reason" about it, potentially finding
correlations or other kinds of relationships that are present in the data, but
difficult for a human to uncover.
The OpenBEL web site notes a
number of areas where BEL-encoded information could be useful:
"network visualization of neural brain function; understanding of
complex inter-related disease biology; comparison of human diseases with
various animal models; deep investigation of drug efficacy and toxicity; as
well as development of innovative therapeutics and diagnostics for
personalized healthcare".
The OpenBEL project came about in April 2012, but BEL itself was opened up by
Selventa (formerly known as Genstruct)
in 2011. Beyond the BEL
language, Selventa also open sourced several tools to visualize and curate BEL
"knowledge graphs". At the end of August, the LF announced
that OpenBEL had become one of its collaborative projects.
Essentially, the LF is using its knowledge of open source to assist the
project in "converting" to an open source mindset. That mindset includes
governance and infrastructure for projects that are targeted at
collaboration between competitors, which are things that the LF has a fair
amount of experience in. But biological research (and "knowledge
engineering") are far afield from open source operating systems and other
fairly closely related projects. Branching out to a project like OpenBEL is
an effort to take the ideas of open source and apply them more widely.
It evinces a similar goal to that of Red Hat's opensource.com.
The OpenBEL members page only
lists Selventa and Foundation Medicine
currently, but that may well be part of why the project is now with the
LF. By providing a neutral ground for collaboration, the LF may seem less
threatening for Selventa competitors that might be interested in the
project. In addition, the LF has a fair amount of credibility in bringing
competitors together to work on something in all of their
interests—successfully—while
they still compete in lots of other areas. One look at the LF members page
will show rival silicon and hardware vendors, distributions, car companies,
software vendors, and so on.
There are some clear advantages for the OpenBEL project, but
it's also worth asking what the LF gets from taking on the project.
Certainly the ability to further push the "open source paradigm" in new
directions has some attraction. One would guess there is some money
involved as well. Beyond that, though, the LF has set itself up to be able
to quickly and easily serve these new communities. Knowledge about
open source and its governance and management, along with the
infrastructure to support this kind of collaboration, is all present in
the organization.
Cranking out another collaborative project may almost be "old hat" at this
point.
That's probably overstating things a bit, but it is clear that the LF is
presenting itself as the "collaboration organization". There is, perhaps, a
risk that it will spread itself too thinly, but there don't seem to be any
real indications of that, at least yet. Some of the crop of collaborative
projects have
either
withered away or may be on their way to doing so, which might pose a kind of
credibility issue at some point. It may behoove the organization to be a
little pickier in the future. But, overall, OpenBEL is an interesting
foray for an organization that has previously stuck pretty close to home.
It will be worth watching to see where things go from here.
Comments (none posted)
By Nathan Willis
September 5, 2013
Email is not only one of the killer Internet applications, but it
is also central to the way the free software community functions.
Thus, the shift in recent years toward proprietary webmail clients
poses a serious obstacle to people who value software
freedom—not to mention people with all-too-real concerns about
the privacy of their communications. A small team of developers in
Iceland is working to improve the situation with the Mailpile project. In a short
amount of time Mailpile has attracted a considerable following and a
successful crowdfunding campaign, although trouble is looming
that could delay the project's ability to collect those donated funds.
The concept
Mailpile is the brainchild of Bjarni Einarsson, Smári McCarthy, and
Brennan Novak. The trio launched the project on August 3 at the
Observe. Hack. Make. (OHM) conference held near Amsterdam. As Einarsson's slides
[PDF] put it, the chief technical goals of the project are to make
decentralization easy, make migration painless, make email encryption
understandable, and to make a mail client that offers better spam
filtering than that offered by the big email providers. Mailpile is
designed to be "personal web-mail," meaning that it can
be run anywhere from a remote server to a local machine. The
interface will be an HTML, CSS, and JavaScript application that runs in the
browser, while the back-end code will be written in Python. Despite
the browser-based interface, Mailpile will be a mail user agent
only, and users must rely on other software for mail transfer and mail
delivery. The license chosen is the Affero GPLv3.
Collectively, the ability to host one's email anywhere and the ability
to migrate it from one location to another protect the user from
vendor lock-in. Self-hosting also preserves the user's privacy by
eliminating data-mining by the email provider and ads in the client
application. Naturally, hosting one's email on a remote server
introduces security risks, which is why the team is also intent on
building OpenPGP encryption support into the client.
Making email encryption easy-to-use is a tall order. McCarthy, the
security lead on the team, described
Mailpile's encryption workflow as a "core part of its construction" as
opposed to "tacked on with a plugin," but there are precious few
details about how this will be accomplished. The project's GitHub
repository has a discussion
thread on the topic that includes some interface mock-ups,
although they deal primarily with how options are presented to the
user. While there is definitely room for improvement on that front,
the core concepts of public-key encryption may prove harder to explain
than they are to show in a UI.
There is more detail on the project's blog about the other
architectural decisions. One interesting facet of the design is that
the message storage system is built around searching, not IMAP's
traditional notion of folders. Instead, the user will be able to set
up "filters" that constitute
stored searches, so that a filter like from:example.com will
take the place of an Example Co. folder. There will also be
tags that can be applied to filter output, making it possible to
construct other message-sorting schemes. The application will come
with a set of
"sensible" default tags and filters (like "Inbox" and "New"), and
perhaps will include filters for well-known senders like Facebook and
Twitter, too.
Einarsson justifies this search-driven approach by noting that
"email used to be big" but now it is small—small enough in fact
that an account's email metadata can fit entirely into RAM. The
current estimate is that Mailpile's index consumes 250 bytes per
message, including the overhead added by Python; by his calculation,
that lets the index for even a sizable mail archive fit easily on a
modern system with several gigabytes of RAM.
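To put that estimate in perspective: an archive of 100,000 messages
would need roughly 25MB of index (100,000 × 250 bytes), and even four
million messages would consume only about 1GB, well within the memory
of current hardware.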
Mailpile
will support several storage backends, including mbox, maildir,
gmvault, and IMAP. Regardless of the source of the email, Mailpile
will build a single, unified search index that is stored in a special
subfolder of the user's home directory. For security purposes, the
index keys can be one-way hashed, and all user settings can be GPG
encrypted.
Despite the (some would say) lofty goals of Mailpile, at this stage
the project is intent on writing a considerable proportion of the code
from scratch—including the search engine—in standard
Python. The reason is that not relying on external dependencies will
make the product easier to package. The goal is to produce a tool
that can be run on Linux, Mac OS X, and Windows.
The code is available on
GitHub, and as of press time the web interface is only beginning to
take shape; a terminal-mode interface offers access to
more features (such as tagging and filtering). IMAP and POP3 support
has not yet been implemented, nor have spam detection or decryption of
GPG-encrypted messages, but the Mailpile
CLI can encrypt the local mail storage and settings with
gpg-agent.
Capital ideas
Shortly after announcing the project at OHM, the Mailpile team
launched a crowdsourced fundraising campaign
at Indiegogo. The target amount is US $100,000, which Mailpile
reached well ahead of the scheduled September 10 deadline. The launch
of the campaign attracted considerable attention in the popular press,
which surely contributed to the rapid meeting of the fundraising
target.
As of today, the pledged total stands at $139,798 and
counting, but the project encountered a surprise obstacle on August
31. Novak posted a blog
entry on September 5 explaining that PayPal (one of several
payment methods accepted by Indiegogo) had canceled the debit card
associated with the project's account, and informed him that a block
had been placed on the account to prevent transferring funds out.
After an inquiry to PayPal, a clearer picture emerged:
After 4 phone calls, the last of which I spoke to a supervisor, the
understanding I have come to is, unless Mailpile provides PayPal with
a detailed budgetary breakdown of how we plan to use the donations
from our crowd funding campaign they will not release the block on my
account for 1 year until we have shipped a 1.0 version of our
product.
The Mailpile team felt that this request was out of PayPal's
jurisdiction, and, moreover, out of line with Indiegogo's policies on
the same subject. Indiegogo's policy, he said, is to transfer
"all funds to successful campaigns within 15 days of their
conclusion. If IndieGoGo can do it, so can PayPal."
Indiegogo is an official PayPal "partner,"
which does make it surprising that the two companies would be
significantly out of sync. However, Mailpile's Indiegogo campaign is
of the "flexible funding" variety, meaning
primarily that the funds would be released to Mailpile even if the
target amount was not met. But Indiegogo's disbursement
policy indicates that flexible funding projects have donations
from PayPal users transferred immediately to the project's PayPal
account, so the "within 15 days of conclusion" rule does not apply to
any donations made through PayPal itself. In a separate post
on the subject, Einarsson estimated that these funds added up to
$45,000.
Einarsson also said that the project has asked its legal
representative, the Software Freedom Law Center (SFLC), to help
resolve the situation, but that in the meantime it has disabled PayPal
as a funding option. Intriguingly, his post also said that PayPal's
rationale for cutting off access to the funds was to guard against
"chargebacks," which is when a buyer attempts to
retroactively reverse a transaction through his or her credit card
company.
PayPal allows chargebacks when a purchased item is never delivered
or is significantly different from what was described. It is not entirely
clear that the chargeback issue is identical to concern over a
budgetary breakdown, but that would explain quite a bit. After all,
so far Mailpile has not delivered the software that it describes in
its campaign material—it is a brand-new project that has set
some lofty goals by anyone's standards.
In addition, the campaign site is quite vague on how the funds will
be spent, especially those funds that exceed the target amount. In a
post
about "stretch goals," the team lists options like "raise our
salaries" and "set money aside for a 'rainy day' or
unexpected events"—which may not sound reassuring to
those in the banking industry.
Late on September 5, Einarsson posted a brief update to his post
about the PayPal trouble, saying only that the account had been
unfrozen. No word yet on whether this means that the payment
processor is backing down on its demand to see specifics about how the
donated funds will be spent—nor is there any guarantee that
another freeze will not be placed on the account without advance warning.
Nevertheless, the project has met its fundraising goal and is close
to meeting it even without the PayPal donations, so users will get to
see what the Mailpile project can produce. The campaign promises the
first milestone in January 2014. Finding trouble-free fundraising for
free software development may take noticeably longer, though.
Comments (8 posted)
Page editor: Jonathan Corbet
Security
By Jake Edge
September 5, 2013
While encrypted communication over the internet is certainly nothing new, recent
events have highlighted some good reasons to use it. But protocols using
encryption atop TCP or UDP are generally easily identified, such that they
can be blocked by governments or ISPs. A new protocol, called Dust [PDF], sets out to provide
"blocking resistance", so that commonly used techniques, like blocking
based on deep packet
inspection (DPI), will be difficult to apply. The overall
goal for Dust is to resist censorship in the form of internet blocking.
There are a number of different projects that provide some form of
censorship resistance, including document publishing services such as Publius, Tangler, and Mnemosyne
[PDF]. But in order to retrieve documents, users must be able to
connect to the service, which is easy to thwart via IP address blocking.
So it makes sense to combine anonymous document storage with "hidden
services" from an
anonymizing proxy like Tor. But those connections are
still vulnerable to DPI-based blocking based on the contents of the
packets. What is needed, then, is a way to
avoid the DPI filters while connecting to the anonymizing proxy. To that end, "the ideal communication protocol is therefore one which is unobservable, meaning
that a packet or sequence of packets is
indistinguishable from a random packet or
random sequence of packets", according to Dust developer Brandon
Wiley. Creating that is essentially the design goal for the protocol.
There have been other efforts to create encrypted, censorship-resistant
protocols. Wiley's paper mentions several, including Message Stream
Encryption (MSE) for BitTorrent, Obfuscated TCP, and
Tcpcrypt (which we looked at in 2010). MSE and Tcpcrypt have flaws,
such as static strings in the handshake or predictable packet sizes,
that make them easy to detect—and filter. Obfuscated TCP has
several variants that communicate keys in different ways (e.g. TCP options,
HTTP headers, DNS records) all of which can be detected by current DPI
filtering.
Key exchange is the most difficult piece of any encryption puzzle. To
some extent, Dust punts on that by requiring an "out of band" invitation to
be received by a client before it can connect to the server. The
invitation contains the IP address, port, and public key for the
server, along with a random, single-use invitation ID, all of which is
encrypted using an invitation-specific password that must also be given
to the client. The server uses the invitation ID to determine which
invitation (and thus which password) is being used when the client
introduces itself with the invitation.
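As a purely illustrative sketch, the contents of an invitation could be
represented in C along these lines; the field names and sizes are
invented for illustration, not taken from the actual Dust wire format:

    #include <stdint.h>

    /* Hypothetical layout of a Dust invitation; the real format may
       differ in sizes and ordering. */
    struct dust_invite {
        uint8_t  server_ip[4];       /* IPv4 address of the server */
        uint16_t server_port;
        uint8_t  server_pubkey[32];  /* server's public key */
        uint8_t  invite_id[32];      /* random, single-use identifier */
    };
    /* All of the above is encrypted with the invitation-specific
       password, which the client must receive as well. */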
The actual invitation is of no use without the password, so it could be
sent via any channel. Because of the encryption, the invitation is
"indistinguishable from random
bytes". Wiley is focused on automated DPI, so he seems a little
cavalier about transmitting the password:
It can then be safely
transmitted, along with the password, over an out-of-band channel such as
email [or]
instant messaging. It will not be susceptible to the attacks which block email
communication containing IP addresses
because only the password is transmitted
unencrypted. If the invitation channel is under observation by the attacker, and only in
the case that the attacker is specifically attempting to filter Dust packets, then the
password should be sent by another channel that, while it can still be observed by the
attacker, should be uncorrelated with the invitation channel.
With an invitation and password in hand, a client can connect to the server
by sending an introduction (or intro) packet to the server. The intro
packet is prepended with the invitation ID (which is random). The rest of
the packet is encrypted with the password and contains the client's public
key. When the server receives a packet from an unknown host, it assumes
that the first 32 bytes are the ID and tries to look up the password based
on that. It then decrypts the rest of the packet and stores the IP
address, port, and public key.
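A sketch of the server side of that exchange might look as follows; the
helper functions here are hypothetical placeholders for whatever lookup
and decryption routines an implementation provides, not part of Dust
itself:

    #include <stddef.h>
    #include <stdint.h>

    #define DUST_ID_LEN 32

    /* Hypothetical helpers, assumed to exist elsewhere. */
    const uint8_t *lookup_invite_password(const uint8_t *id);
    int decrypt_with_password(const uint8_t *password, const uint8_t *in,
                              size_t len, uint8_t pubkey_out[32]);

    /* Handle an intro packet from an unknown host, following the
       description above. */
    int handle_intro(const uint8_t *pkt, size_t len,
                     uint8_t pubkey_out[32])
    {
        const uint8_t *password;

        if (len <= DUST_ID_LEN)
            return -1;

        /* The first 32 bytes are taken to be the invitation ID... */
        password = lookup_invite_password(pkt);
        if (!password)
            return -1;    /* no matching invitation: drop the packet */

        /* ...and the remainder is decrypted with the corresponding
           password to recover the client's public key. */
        return decrypt_with_password(password, pkt + DUST_ID_LEN,
                                     len - DUST_ID_LEN, pubkey_out);
    }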
At that point, the handshake is complete. Both server and client can
compute shared session keys using each other's public key and the password
so that they can exchange encrypted messages from then on. That is done
using the data packet, which is the third packet type (invite and intro are
the other two).
There are several other features of the Dust packet format that bear
mention. To start with, packets can be chained within a single TCP or UDP
packet. Since the client has the server's public key from the invite, it
can send both an intro and a data packet in a single TCP packet. That
may be everything the client wants to say; beyond being a useful
optimization, chaining also protects against the inter-packet timing
analysis that could otherwise be used to detect Dust.
The packets are protected with a message
authentication code (MAC), which is calculated using a password-based key derivation
function (PBKDF) and a random
initialization vector (IV) transmitted with each Dust packet. Both the MAC
and IV are sent in the clear; since the IV is a random per-packet value and
the MAC is calculated from it, both are effectively random to an observer.
In the encrypted portion of the packet, timestamps are included to protect against replay attacks, and a random
number of random padding bytes is added to each packet so that the packet
length
is unpredictable. As might be obvious, good random number generation is an
important part of a Dust implementation.
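Putting those pieces together, the observable framing of a Dust packet
might be sketched like this; once again the names and sizes are
illustrative, not the actual format:

    #include <stdint.h>

    /* Illustrative framing only. */
    struct dust_packet_header {
        uint8_t iv[16];   /* random per-packet IV, sent in the clear */
        uint8_t mac[32];  /* keyed via a PBKDF over the key and IV,
                             so it too looks random to an observer */
    };
    /* The encrypted body that follows carries a timestamp (to defeat
       replays), the payload, and a random number of random padding
       bytes, making the total length unpredictable. */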
All of those techniques should make Dust resistant to protocol
fingerprinting using DPI. The packets look like random data of random
length, which could be almost anything: streaming audio/video, some kind of
file transfer, etc. Of course, the connection just immediately
starts up in that mode, which might be considered suspicious in and of
itself. But the existing blocking typically centers around blacklists of
protocols that DPI can detect. Dust will not easily fall prey to that kind
of filtering.
A bigger worry is whitelist-oriented filtering. If the DPI filters will
only allow recognized protocols through, Dust will clearly fail the test.
Whitelists can be circumvented using steganography
(i.e. by hiding the real message inside a packet of one of the "legal"
protocols), but that has its own set of problems. Steganographic techniques
may lead to packets that can be more easily fingerprinted and blocked.
Whitelists will also be difficult for ISPs or governments to enforce, just
from a social point of view.
Code for Dust (in Haskell) can
be found at GitHub. More information can be found in the README
files there in addition to Wiley's paper.
Overall, Dust is an intriguing idea. It is meant to serve as an underlying
protocol for something like Tor (which, in turn, may underlie secure and
anonymous document distribution). While it is well-tuned to avoid today's DPI
(and other) attacks, one wonders whether a connection that consists of
nothing but random gibberish from its first byte will be enough to set
off tomorrow's filters. Of course, an
internet where all of the data was encrypted would potentially obviate the
need for something like Dust. In the meantime,
at least, Dust seems worth a look.
Comments (3 posted)
Brief items
In seeking a balance that puts liberty first, my administration will unwind
the surveillance apparatus to a substantial degree. Some surveillance is
necessary, to be sure. But we will have clear rules and boundaries, and we
will punish those in government who go beyond them. As we have seen
repeatedly in recent years, without genuine accountability, rules and laws
mean nothing.
—
Dan
Gillmor hopes for a 2016 US presidential candidate with a focus on privacy
Right now the upper practical limit on brute force is somewhere under 80
bits. However, using that as a guide gives us some indication as to how
good an attack has to be to break any of the modern algorithms. These days,
encryption algorithms have, at a minimum, 128-bit keys. That means any NSA
cryptanalytic breakthrough has to reduce the effective key length by at
least 48 bits in order to be practical.
There's more, though. That DES attack requires an impractical 70 terabytes
of known plaintext encrypted with the key we're trying to break. Other
mathematical attacks require similar amounts of data. In order to be
effective in decrypting actual operational traffic, the NSA needs an attack
that can be executed with the known plaintext in a common MS-Word header:
much, much less.
—
Bruce
Schneier is skeptical of claims of NSA decryption superpowers
Most internet users would like to be anonymous online at least occasionally, but many think it is not possible to be completely anonymous online. New findings in a national survey show:
- 86% of internet users have taken steps online to remove or mask their digital footprints—ranging from clearing cookies to encrypting their email, from avoiding using their name to using virtual networks that mask their internet protocol (IP) address.
- 55% of internet users have taken steps to avoid observation by specific people, organizations, or the government
Still, 59% of internet users do not believe it is possible to be completely anonymous online, while 37% of them believe it is possible.
—
Anonymity,
Privacy, and Security Online, a survey by Pew Internet
Comments (11 posted)
New vulnerabilities
389-ds-base: denial of service
| Package(s): | 389-ds-base |
| CVE #(s): | CVE-2013-4283 |
| Created: | August 29, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat advisory:
It was discovered that the 389 Directory Server did not properly handle the
receipt of certain MOD operations with a bogus Distinguished Name (DN). A
remote, unauthenticated attacker could use this flaw to cause the 389
Directory Server to crash. (CVE-2013-4283) |
| Alerts: |
|
Comments (none posted)
ansible: predictable filenames
| Package(s): | ansible |
| CVE #(s): | CVE-2013-4260, CVE-2013-4259 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the ansible advisory:
We are releasing Ansible version v1.2.3 to address two CVEs that have been reported regarding the core Ansible package. Both of these involve potential local exploits on systems where access to the Ansible control machine is being shared between multiple users. These require updating Ansible on control machines and do not require any changes on managed (controlled) machines.
The first could allow a malicious local user to place a symlink at a predictable location to make Ansible connect to a different remote system than expected when using ControlPersist. If that target system were itself compromised, and you are not verifying SSH host keys and are also using SSH passwords (rather than keys) for authentication, this exploit could result in obtaining a user’s password information. It could also result in the target system receiving sensitive configuration data it was not supposed to receive. This does not affect kernels that have fs.protected_symlinks=1/fs.protected_hardlinks=1 set in sysctl, or on systems that are using SELinux with strict/MLS policies. Enterprise Linux 5/6 do not support these sysctl options, however the default on these platforms is actually paramiko (because ControlPersist is not yet available on them), so this would only be a problem if the “ssh” connection is explicitly selected on the command line with “-c ssh” or configured in the configuration file (or environment). Fedora 19, for instance, has these protections on by default and Ubuntu has been shipping these protections on for some time as well.
The second allows using a predictable location of the retry file from a failed playbook in /var/tmp to clobber a file on the local filesystem using a link. |
| Alerts: |
|
Comments (none posted)
asterisk: multiple vulnerabilities
| Package(s): | asterisk |
| CVE #(s): | CVE-2013-5641, CVE-2013-5642 |
| Created: | August 30, 2013 |
| Updated: | September 16, 2013 |
| Description: |
From the Mandriva advisory:
A remotely exploitable crash vulnerability exists in the SIP channel driver if an ACK with SDP is received after the channel has been terminated. The handling code incorrectly assumes that the channel will always be present (CVE-2013-5641).
A remotely exploitable crash vulnerability exists in the SIP channel driver if an invalid SDP is sent in a SIP request that defines media descriptions before connection information. The handling code incorrectly attempts to reference the socket address information even though that information has not yet been set (CVE-2013-5642). |
| Alerts: |
|
Comments (none posted)
cacti: multiple vulnerabilities
| Package(s): | cacti |
| CVE #(s): | CVE-2013-5588, CVE-2013-5589 |
| Created: | September 3, 2013 |
| Updated: | September 10, 2013 |
| Description: |
From the CVE entries:
Multiple cross-site scripting (XSS) vulnerabilities in Cacti 0.8.8b and earlier allow remote attackers to inject arbitrary web script or HTML via (1) the step parameter to install/index.php or (2) the id parameter to cacti/host.php. (CVE-2013-5588)
SQL injection vulnerability in cacti/host.php in Cacti 0.8.8b and earlier allows remote attackers to execute arbitrary SQL commands via the id parameter. (CVE-2013-5589) |
| Alerts: |
|
Comments (none posted)
cyrus-sasl: denial of service
| Package(s): | cyrus-sasl |
| CVE #(s): | CVE-2013-4122 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Gentoo advisory:
In the GNU C Library (glibc) from version 2.17 onwards, the crypt()
function call can return NULL when the salt violates specifications or
the system is in FIPS-140 mode and a DES or MD5 hashed password is
passed. When Cyrus-SASL's authentication mechanisms call crypt(), a
NULL may be returned.
A remote attacker could trigger this vulnerability to cause a Denial of
Service condition. |
| Alerts: |
|
Comments (none posted)
drupal7-entity: Entity API - access bypass
| Package(s): | drupal7-entity |
| CVE #(s): | CVE-2013-4273 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Drupal bug report:
The Entity API module extends the entity API of Drupal core in order to provide a unified way to deal with entities and their properties.
The module doesn't sufficiently enforce node access restrictions when checking for a user's access to view a comment associated with a particular node. The vulnerability is mitigated by the fact that it only applies to a user's access to view a comment in a situation where access should be restricted with entity access.
The Entity API also does not properly restrict access when displaying selected entities using the Views field or area plugins, allowing users to view entities that they do not have access to. The vulnerability is mitigated by the fact that entities are only improperly exposed when a View has been configured to display them in a field, header or footer of a View. |
| Alerts: |
|
Comments (none posted)
drupal7-theme-zen: cross-site scripting
| Package(s): | drupal7-theme-zen |
| CVE #(s): | CVE-2013-4275 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the drupal bug report:
Zen doesn't sufficiently escape the breadcrumb separator field, allowing a possible XSS exploit.
This vulnerability is mitigated by the fact that an attacker must have a role with the permission "administer themes". |
| Alerts: |
|
Comments (none posted)
exactimage: denial of service
| Package(s): | exactimage |
| CVE #(s): | CVE-2013-1438 |
| Created: | September 3, 2013 |
| Updated: | September 11, 2013 |
| Description: |
From the Debian advisory:
Several denial-of-service vulnerabilities were discovered in the dcraw
code base, a program for processing raw format images from digital
cameras. This update corrects them in the copy that is embedded in
the exactimage package. |
| Alerts: |
|
Comments (none posted)
Foreman: multiple vulnerabilities
| Package(s): | Foreman |
| CVE #(s): | CVE-2013-4180, CVE-2013-4182 |
| Created: | September 4, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat advisory:
A flaw was found in the API where insufficient privilege checks were
conducted by the hosts controller, allowing any user with API access to
control any host. (CVE-2013-4182)
A denial of service flaw was found in Foreman in the way user input was
converted to a symbol. An authenticated user could create inputs that would
lead to excessive memory consumption. (CVE-2013-4180) |
| Alerts: |
|
Comments (none posted)
imagemagick: code execution
| Package(s): | imagemagick |
| CVE #(s): | CVE-2013-4298 |
| Created: | September 4, 2013 |
| Updated: | September 10, 2013 |
| Description: |
From the Debian advisory:
Anton Kortunov reported a heap corruption in ImageMagick, a program
collection and library for converting and manipulating image files.
Crafted GIF files could cause ImageMagick to crash, potentially
leading to arbitrary code execution. |
| Alerts: |
|
Comments (none posted)
kde: code execution
| Package(s): | kde |
| CVE #(s): | CVE-2013-2127 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the CVE entry:
Buffer overflow in the exposure correction code in LibRaw before 0.15.1 allows context-dependent attackers to cause a denial of service (crash) and possibly execute arbitrary code via unspecified vectors. |
| Alerts: |
|
Comments (none posted)
kernel: two vulnerabilities
| Package(s): | kernel |
| CVE #(s): | CVE-2013-4162, CVE-2013-4163 |
| Created: | August 29, 2013 |
| Updated: | September 26, 2013 |
| Description: |
From the Debian advisory:
CVE-2013-4162:
Hannes Frederic Sowa reported an issue in the IPv6 networking subsystem.
Local users can cause a denial of service (system crash).
CVE-2013-4163:
Dave Jones reported an issue in the IPv6 networking subsystem. Local
users can cause a denial of service (system crash).
|
| Alerts: |
|
Comments (none posted)
libdigidoc: file overwrite
| Package(s): | libdigidoc |
| CVE #(s): | CVE-2013-5648 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the CVE entry:
Absolute path traversal vulnerability in the handleStartDataFile function in DigiDocSAXParser.c in libdigidoc 3.6.0.0, as used in ID-software before 3.7.2 and other products, allows remote attackers to overwrite arbitrary files via a filename beginning with / (slash) or \ (backslash) in a DDOC file. |
| Alerts: |
|
Comments (none posted)
libmodplug: two code execution vulnerabilities
| Package(s): | libmodplug |
| CVE #(s): | CVE-2013-4233, CVE-2013-4234 |
| Created: | September 5, 2013 |
| Updated: | September 16, 2013 |
| Description: |
From the Red Hat bugzilla entry:
It was reported [1],[2] that libmodplug suffers from two flaws when parsing ABC files:
1) An error within the "abc_MIDI_drum()" function (src/load_abc.cpp) can be exploited to cause a buffer overflow via a specially crafted ABC file.
2) An integer overflow within the "abc_set_parts()" function (src/load_abc.cpp) can be exploited to corrupt heap memory via a specially crafted ABC file.
Successful exploitation of the vulnerabilities may allow execution of arbitrary code.
|
| Alerts: |
|
Comments (none posted)
mysql: multiple unspecified vulnerabilities
| Package(s): | MySQL |
| CVE #(s): | CVE-2013-3794, CVE-2013-3795, CVE-2013-3796, CVE-2013-3798, CVE-2013-3801, CVE-2013-3805, CVE-2013-3806, CVE-2013-3807, CVE-2013-3808, CVE-2013-3810, CVE-2013-3811 |
| Created: | August 30, 2013 |
| Updated: | September 5, 2013 |
| Description: |
CVE-2013-3794: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.5.30 and earlier and 5.6.10 allows remote authenticated users to affect availability via unknown vectors related to Server Partition.
CVE-2013-3795: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote authenticated users to affect availability via unknown vectors related to Data Manipulation Language.
CVE-2013-3796: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote authenticated users to affect availability via unknown vectors related to Server Optimizer.
CVE-2013-3798: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote attackers to affect integrity and availability via unknown vectors related to MemCached.
CVE-2013-3801: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.5.30 and earlier and 5.6.10 allows remote authenticated users to affect availability via unknown vectors related to Server Options.
CVE-2013-3805: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.5.30 and earlier and 5.6.10 allows remote authenticated users to affect availability via unknown vectors related to Prepared Statements.
CVE-2013-3806: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote authenticated users to affect availability via unknown vectors related to InnoDB, a different vulnerability than CVE-2013-3811.
CVE-2013-3807: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote attackers to affect confidentiality and integrity via unknown vectors related to Server Privileges.
CVE-2013-3808: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.1.68 and earlier, 5.5.30 and earlier, and 5.6.10 allows remote authenticated users to affect availability via unknown vectors related to Server Options.
CVE-2013-3810: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote authenticated users to affect availability via unknown vectors related to XA Transactions.
CVE-2013-3811: Unspecified vulnerability in the MySQL Server component in Oracle MySQL 5.6.11 and earlier allows remote authenticated users to affect availability via unknown vectors related to InnoDB, a different vulnerability than CVE-2013-3806. |
| Alerts: |
|
Comments (none posted)
ngircd: denial of service
| Package(s): | ngircd |
| CVE #(s): | CVE-2013-5580 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Mageia advisory:
Denial of service bug (server crash) in ngIRCd before 20.3 which could happen
when the configuration option "NoticeAuth" is enabled (which is NOT the
default) and ngIRCd failed to send the "notice auth" messages to new clients
connecting to the server. |
| Alerts: |
|
Comments (none posted)
openstack-cinder: multiple vulnerabilities
| Package(s): | openstack-cinder |
| CVE #(s): | CVE-2013-4183, CVE-2013-4202 |
| Created: | September 4, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat advisory:
It was found that the fixes for CVE-2013-1664 and CVE-2013-1665, released
via RHSA-2013:0658, did not fully correct the issues in the Extensible
Markup Language (XML) parser used by Cinder. A remote attacker could use
this flaw to send a specially-crafted request to a Cinder API, causing
Cinder to consume an excessive amount of CPU and memory, or possibly crash.
(CVE-2013-4202)
A bug in the Cinder LVM driver prevented LVM snapshots from being securely
deleted in some cases, potentially leading to information disclosure to
other tenants. (CVE-2013-4183) |
| Alerts: |
|
Comments (none posted)
openstack-nova: multiple vulnerabilities
| Package(s): | openstack-nova |
| CVE #(s): | CVE-2013-2256, CVE-2013-4179, CVE-2013-4185, CVE-2013-4261 |
| Created: | September 4, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat advisory:
It was found that the fixes for CVE-2013-1664 and CVE-2013-1665, released
via RHSA-2013:0657, did not fully correct the issues in the Extensible
Markup Language (XML) parser used by Nova. A remote attacker could use
this flaw to send a specially-crafted request to a Nova API, causing
Nova to consume an excessive amount of CPU and memory, or possibly crash.
(CVE-2013-4179)
A denial of service flaw was found in the way Nova handled network source
security group policy updates. An authenticated user could send a large
number of server creation operations, causing nova-network to become
unresponsive. (CVE-2013-4185)
An information disclosure flaw and a resource limit bypass were found in
the way Nova handled virtual hardware templates (flavors). These allowed
tenants to show and boot other tenants' flavors and bypass resource limits
enforced via the os-flavor-access:is_public property. (CVE-2013-2256)
It was discovered that, in some configurations, certain messages in
console-log could cause nova-compute to become unresponsive, resulting in a
denial of service. (CVE-2013-4261) |
| Alerts: |
|
Comments (none posted)
perl-Module-Metadata: code execution
| Package(s): | perl-Module-Metadata |
| CVE #(s): | CVE-2013-1437 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat bug report:
It was reported that the perl Module::Metadata module incorrectly claimed that it would gather metadata about a .pm file without executing unsafe code. However, when Module::Metadata determines the version of a module, it can extract a small amount of code (if present in the $Version variable assignment) and evaluates it, which can lead to the execution of arbitrary code (the same code that module would execute to obtain the value of $Version). |
| Alerts: |
|
Comments (none posted)
php-pear-Auth-OpenID: denial of service
| Package(s): | php-pear-Auth-OpenID |
| CVE #(s): | CVE-2013-4701 |
| Created: | September 3, 2013 |
| Updated: | September 16, 2013 |
| Description: |
From the CVE entry:
Auth/Yadis/XML.php in PHP OpenID Library 2.2.2 and earlier allows remote attackers to read arbitrary files, send HTTP requests to intranet servers, or cause a denial of service (CPU and memory consumption) via XRDS data containing an external entity declaration in conjunction with an entity reference, related to an XML External Entity (XXE) issue. |
| Alerts: |
|
Comments (none posted)
python-virtualenv: code execution
| Package(s): | python-virtualenv |
| CVE #(s): | CVE-2013-1633 |
| Created: | September 5, 2013 |
| Updated: | September 18, 2013 |
| Description: |
From the Red Hat bugzilla entry:
easy_install in setuptools before 0.7 uses HTTP to retrieve packages
from the PyPI repository, and does not perform integrity checks on
package contents, which allows man-in-the-middle attackers to execute
arbitrary code via a crafted response to the default use of the
product. |
| Alerts: |
|
Comments (none posted)
roundcubemail: two cross-site scripting flaws
Comments (none posted)
ruby: switch to https for gem installation
| Package(s): | ruby |
| CVE #(s): | |
| Created: | September 5, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the openSUSE advisory:
The ruby gemrc configured the gem installation source as
http source, allowing man in the middle attacks (if someone
could provide a different address for rubygems.org).
|
| Alerts: |
|
Comments (none posted)
ssmtp: user credentials leak
| Package(s): | ssmtp |
| CVE #(s): | |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the Red Hat bugzilla:
It was reported that ssmtp, an extremely simple MTA to get mail off the system to a mail hub, did not perform x509 certificate validation when initiating a TLS connection to server. A rogue server could use this flaw to conduct man-in-the-middle attack, possibly leading to user credentials leak. |
| Alerts: |
|
Comments (none posted)
strongswan: code execution
| Package(s): | strongswan |
| CVE #(s): | CVE-2013-2054 |
| Created: | September 3, 2013 |
| Updated: | September 5, 2013 |
| Description: |
From the CVE entry:
Buffer overflow in the atodn function in strongSwan 2.0.0 through 4.3.4, when Opportunistic Encryption is enabled and an RSA key is being used, allows remote attackers to cause a denial of service (pluto IKE daemon crash) and possibly execute arbitrary code via crafted DNS TXT records. NOTE: this might be the same vulnerability as CVE-2013-2053 and CVE-2013-2054. |
| Alerts: |
|
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The 3.11 kernel is out,
released on
September 2. Some significant features in this release include the
Lustre distributed filesystem, transparent huge page support for the ARM
architecture, Xen and KVM virtualization for ARM64, the
O_TMPFILE
open flag, dynamic power management in the Radeon graphics driver, the
low-latency Ethernet polling patch set, and
more. See
the KernelNewbies
3.11 page for lots of details.
Stable updates:
3.10.10, 3.4.60, and 3.0.94 were released on August 29.
Comments (none posted)
Please people! When you post ssh addresses, always remember to also
post your user name and password or private key with the pull
request.
—
Linus Torvalds
I fail to see the benefit of just using the hardware random number
generator. We are already mixing in the hardware random number
generator into the /dev/random pool, and so the only thing that
using only the HW source is to make the kernel more vulnerable to
an attack where the NSA leans on a few Intel employee and
forces/bribes them to make a change such that the last step in the
RDRAND's AES whitening step is changed to use a counter plus a AES
key known by the NSA.
—
Ted Ts'o
Comments (none posted)
David Herrmann
describes
some recent graphics driver work, in which control of display modes is
being separated from access to the rendering engine. "
So whenever an
application wants hardware-accelerated rendering, GPGPU access or
offscreen-rendering, it no longer needs to ask a graphics-server (via DRI
or wl_drm) but can instead open any available render node and start using
it. Access-control to render-nodes is done via standard file-system
modes. It’s no longer shared with mode-setting resources and thus can be
provided for less-privileged applications."
Comments (none posted)
Greg Kroah-Hartman has put together
a
step-by-step tutorial on how to build and boot a self-signed kernel on
a UEFI secure boot system. "
The first two options here enable EFI
mode, and tell the kernel to build itself as a EFI binary that can be run
directly from the UEFI bios. This means that no bootloader is involved at
all in the system, the UEFI bios just boots the kernel, no “intermediate”
step needed at all. As much as I love gummiboot, if you trust the kernel
image you are running is 'correct', this is the simplest way to boot a
signed kernel."
Comments (none posted)
Kernel development news
By Jonathan Corbet
September 5, 2013
The 3.12 merge window started right on time on September 3; by the time
this article was written, over 3,500 patches had been pulled into the
mainline. There is, once again, a great deal of internal cleanup work
going on that does not look impressive in a feature list, but the benefits
of that work will be felt well into the future. In particular, some of the
performance work that has been done this time around should speed up Linux
considerably in a number of settings.
User-visible features merged for 3.12 so far include:
- The Lustre filesystem, added in 3.11, is now enabled in the build
system. Quite a bit of cleanup work for Lustre has been merged for
3.12.
- The long-deprecated /proc/acpi/event interface has been
removed. If anybody actually needed this file, they should raise a
fuss during the 3.12 development cycle.
- The pstore mechanism (which stores
crash information in a persistent storage location) is now able to
store compressed data.
- The full-system idle detection patch
set has been pulled. This work enables the kernel to detect when the
entire system is idle and turn off the clock tick, thus improving the
performance when the full dynamic tick feature is used.
- The "paravirtualized ticket spinlocks" mechanism allows for more
efficient locking in virtualized guests. In short, if a spinlock is
unavailable for anything more than a brief period, the lock code will
stop spinning and call into the hypervisor to simply wait until the
lock becomes available again.
- New hardware support includes:
- Audio:
Wolfson Microelectronics WM8997 codecs,
Atmel AT91ASM9x5 boards with WM8904 codecs,
TI PCM1792A and PCM1681 codecs,
Asahi Kasei Microdevices AK4554 audio chips,
Renesas R-Car SoC audio controllers, and
Freescale S/PDIF and SSI AC'97 controllers.
- Block:
ATTO Technology ExpressSAS RAID adapters.
The ATA layer has also gained the ability to take advantage of
newer solid-state drives
that support the queued version of the TRIM command, removing
much of the cost of TRIM operations.
- Hardware monitoring and related:
Dialog Semiconductor DA9063 regulators,
Marvell 88PM800 power regulators,
Freescale PFUZE100 PMIC-based regulators, and
Measurement Specialties HTU21D humidity/temperature sensors.
- Miscellaneous:
Humusoft MF624 DAQ PCI cards,
Xillybus generic FPGA interfaces,
Digi EPCA, Neo and Classic serial ports,
ST ASC serial ports,
Nuvoton NAU7802 analog-to-digital converters,
TI TWL6030 analog-to-digital converters,
TI Palmas series pin controllers,
Avago APDS9300 ambient light sensors, and
Bosch BMA180 triaxial acceleration sensors.
- Networking:
Realtek RTL8188EU wireless interfaces.
- Serial peripheral interface:
Freescale DSPI controllers,
Energy Micro EFM32 SoC-based SPI controllers,
Blackfin v3 SPI controllers, and
TI DRA7xxx QSPI controllers.
- USB:
Faraday FOTG210 OTG controllers and
GCT GDM724x LTE chip-based USB modem devices.
Changes visible to kernel developers include:
- There is a new reference count called a "lockref", defined in
<linux/lockref.h>. It combines a spinlock and a
reference count in a way that allows changes to the reference count to
be made without having to take the lock. See this article for details on how
lockrefs work.
- The S390 architecture has been converted to the generic
interrupt-handling mechanism. Since S390 was the last holdout, this
mechanism
will become mandatory and the associated CONFIG_GENERIC_HARDIRQS
configuration option will go away.
- There is a new mechanism for debugging kobject lifecycle issues; it
works by delaying the calling of the release() function when
the reference count drops to zero. Most of the time,
release() is called while the driver is shutting down the
associated device, but there is no guarantee of that. Turning on
CONFIG_DEBUG_KOBJECT_RELEASE will help find cases where the driver is
not prepared for a delayed release() call.
- The PTR_RET() function has been renamed
PTR_ERR_OR_ZERO();
all internal users have been changed. A short sketch of the helper's
behavior appears below.
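The renamed function is essentially just the following, closely
following its definition in <linux/err.h>:

    static inline int PTR_ERR_OR_ZERO(const void *ptr)
    {
	if (IS_ERR(ptr))		/* ptr encodes an error value */
	    return PTR_ERR(ptr);	/* return the negative error code */
	return 0;			/* valid pointer: success */
    }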
Your editor predicts that the merge window will close on September 15,
just before the start of LinuxCon and the Linux Plumbers Conference.
Comments (2 posted)
By Jonathan Corbet
September 4, 2013
Reference counts are often used to track the lifecycle of data structures
within the kernel. This counting is efficient, but it can lead to a lot of
cache-line bouncing for frequently-accessed objects. The cost of this
bouncing is made even worse if the reference count must be protected by a
spinlock. The 3.12 kernel will include a new locking primitive called a
"lockref" that, by combining the spinlock and the reference count into a
single eight-byte quantity, is able to reduce that cost considerably.
In many cases, reference counts are implemented with atomic_t
variables that can be manipulated without taking any locks. But the
lockless nature of an atomic_t is only useful if the reference count
can be changed independently of any other part of the reference-counted
data structure.
Otherwise, the structure as a whole must be locked first. Consider, for
example, the heavily-used dentry structure, where reference count
changes cannot be made if some other part of the kernel is working with the
structure. For this reason, struct dentry prior to 3.12
contains these fields:
    unsigned int d_count;	/* protected by d_lock */
    spinlock_t d_lock;		/* per dentry lock */
Changing d_count requires acquiring d_lock first. On a
system with a filesystem-intensive workload, contention
on d_lock is a serious performance bottleneck; acquiring the lock
for reference count changes is a significant part of the problem. It would thus
be nice to find a way to avoid that locking overhead, but it is not
possible to use
atomic operations for d_count, since any thread holding
d_lock must not see the value of d_count change.
The "lockref" mechanism added at the beginning of the 3.12 merge window
allows mostly-lockless manipulation of a reference count while still
respecting an associated
lock; it was originally implemented by
Waiman Long, then modified somewhat
by Linus prior to merging. A lockref works by packing the reference count
and the spinlock into a single eight-byte structure that looks like:
    struct lockref {
	union {
	    aligned_u64 lock_count;
	    struct {
		spinlock_t lock;
		unsigned int count;
	    };
	};
    };
Conceptually, the code works by checking to be sure that the lock is not
held, then incrementing (or decrementing) the reference count while
verifying that no other thread takes the lock while the change is
happening. The key to this operation is the magic cmpxchg()
macro:
    u64 cmpxchg(u64 *location, u64 old, u64 new);
This macro maps directly to a machine instruction that will store the
new value into *location, but only if the current value
in *location matches old. In the lockref case, the
location is the lock_count field in the structure, which
holds both the spinlock and the reference count. An increment operation
will check the state of the lock, compute the new reference count, then use
cmpxchg() to atomically store the new value, ensuring that neither
the count nor the lock has changed in the meantime. If things do
change, the code will either try again or fall back to old-fashioned
locking, depending on whether the lock is free or not.
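In code form, the increment fast path looks roughly like the sketch
below; it is simplified from the CMPXCHG_LOOP() macro in lib/lockref.c,
which wraps this logic with a few more details:

    void lockref_get(struct lockref *lockref)
    {
	struct lockref old, new;

	old.lock_count = ACCESS_ONCE(lockref->lock_count);
	/* proceed only while the snapshot shows the lock as free */
	while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
	    new = old;
	    new.count++;	/* the lock bits are left unchanged */
	    /* succeeds only if neither lock nor count changed meanwhile */
	    if (cmpxchg(&lockref->lock_count, old.lock_count,
			new.lock_count) == old.lock_count)
		return;
	    old.lock_count = ACCESS_ONCE(lockref->lock_count);
	}
	/* somebody holds the lock: fall back to old-fashioned locking */
	spin_lock(&lockref->lock);
	lockref->count++;
	spin_unlock(&lockref->lock);
    }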
This trickery allows reference count changes to be made (most of the time)
without actually acquiring the spinlock and, thus, without contributing to
lock contention. The associated performance improvement can be impressive
— a factor of six, for example, with one of
Waiman's benchmarks testing filesystem performance on a large system.
Given that the new lockref code is
only being used in one place (the dentry cache), that is an impressive
return from a relatively small amount of changed code.
At the moment, only 64-bit x86 systems have a full lockref implementation.
It seems likely, though, that other architectures will gain support by the
end of the 3.12 development cycle, and that lockrefs will find uses in
other parts of the kernel in later cycles. Meanwhile, the focus on lock
overhead has led to improvements elsewhere
in the filesystem layer that should make their way in during this merge
window; it has also drawn attention to some other places where the locking
can clearly be improved with a bit more work. So, in summary, we will see
some significant
performance improvements in 3.12, with more to come in the near future.
Comments (10 posted)
September 4, 2013
This article was contributed by John Stultz
As part of the Android + Graphics micro-conference at the
2013 Linux Plumbers
Conference, we'll be discussing
the
ION memory allocator and how its
functionality might be upstreamed to the mainline
kernel. Since time will be limited, I
wanted to create some background documentation to try to provide context
to the issues we will discuss and try to resolve at the micro-conference.
ION overview
The main goal of Android's ION subsystem is to allow for the allocation and
sharing of buffers between hardware devices and user space in order to
enable zero-copy memory sharing between devices.
This sounds simple enough, but in practice it's a difficult problem. On
system-on-chip (SoC) hardware, there are usually many different devices that have direct
memory access (DMA). These devices, however, may have different
capabilities and can view and access memory with different constraints.
For example, some devices may handle scatter-gather lists, while others
may only be able to access physically contiguous pages in memory. Some
devices may have access to all of memory, while others may only access a
smaller portion of memory. Finally, some devices might sit behind an
I/O memory management unit (IOMMU), which may require configuration to give
the device access to
specific pages in memory.
If you have a buffer that you want to share with a device, and the
buffer isn't allocated in memory that the device can access, you have to
use bounce
buffers to copy the contents of that memory over to a location where the
other devices can access
it. This can be expensive and greatly hurt performance. So the ability to
allocate a buffer in a location accessible by all the devices
using the buffer is important.
Thus ION provides an interface that allows for centralized allocation of
different "types" of memory (or "heaps").
In current kernels without ION, if you're trying to share memory
between a DRM graphics device and a video4linux (V4L) camera, you
need to be sure to allocate the memory using the subsystem that manages
the most-constrained device. Thus, if the camera is the most constrained
device, you need to do your allocations via the V4L kernel interfaces,
while if the graphics is the most constrained device, you have to do the
allocations via the Graphics Execution Manager (GEM) interfaces.
ION instead provides one single centralized interface that allows
applications to allocate memory that satisfies the required constraints.
One thing that ION doesn't provide, though, is a method for determining what
type of memory satisfies the constraints of the relevant hardware.
This is instead a problem left to the device-specific user-space
implementations doing the allocation ("Gralloc," in the case of Android).
This hard-coded constraint solving isn't ideal, but there are no
better mainline solutions for allocating buffers with GEM and
V4L; user space just has to know which device is the most constrained. On
mostly static hardware devices, like phones and tablets, this information
is known ahead of time, but this limitation makes ION less suitable for
upstream adoption in its current form.
To share these buffers, ION exports a file descriptor that is linked
to a specific buffer. These file descriptors can then be passed between
applications and to ION-enabled drivers. Initially these were ION-specific
file descriptors, but ION has since been reworked to utilize
dma-buf structures for sharing. One caveat is that, while ION can export
dma-bufs, it won't import dma-bufs exported by other drivers.
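As a concrete illustration, a user-space allocation with ION looks
roughly like the sketch below. It is based on the Android-era
<linux/ion.h> header; field and constant names have varied between ION
revisions, so treat this as a sketch rather than a stable ABI:

    #include <stddef.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/ion.h>		/* Android staging header */

    /* allocate a buffer from the given heaps, return a dma-buf fd */
    int ion_alloc_shared(size_t len, unsigned int heap_mask)
    {
	int ion_fd = open("/dev/ion", O_RDWR);
	struct ion_allocation_data alloc = {
	    .len = len,
	    .align = 4096,
	    .heap_id_mask = heap_mask,	/* caller must know the right heaps */
	    .flags = ION_FLAG_CACHED,	/* let ION do cache maintenance */
	};
	struct ion_fd_data share;

	ioctl(ion_fd, ION_IOC_ALLOC, &alloc);	/* error checks omitted */
	share.handle = alloc.handle;
	ioctl(ion_fd, ION_IOC_SHARE, &share);	/* wrap buffer in a dma-buf */
	return share.fd;	/* pass to other processes or drivers */
    }

Note how the heap mask, which embodies the hardware-specific knowledge
discussed above, must be supplied by the caller.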
ION cache management
Another major role that ION plays as a central buffer allocator and
manager is handling cache maintenance for DMA. Since many devices
maintain their own memory
caches, it's important that, when serializing device and CPU access to shared
memory, those devices and CPUs flush their private caches before letting other devices
access the buffers. A full background on caching is beyond the scope
of this article, so I'll instead point interested folks to this LWN
article to learn more.
ION allows for buffer users to set a flag describing the needed cache
behavior on allocations.
This allows those users to specify whether mappings to the buffer should be
cached (with ION doing the cache maintenance), uncached but
using write-combining (see this article for details),
or uncached and managed explicitly via ION's
synchronization ioctl().
In the case where the buffers are cached and ION performs cache
maintenance, ION further tries to allow for optimizations by delaying
the creation of any mappings at mmap() time. Instead, it provides a fault handler
so pages are mapped in only when they are accessed. This method allows ION to
keep track of the changed pages and only flush pages that were
actually touched.
Also, when ION allocates memory for uncached buffers, it is managing
physical pages which aren't mapped into kernel space yet. Since these
buffers may be used by DMA before they are mapped into kernel space, it is
not correct to flush them at mapping time; that could result in data
corruption.
These buffers thus have to be pre-flushed for DMA when they are allocated,
which leads to another ION optimization: pre-flushed pools of pages. On some
systems, flushing memory for DMA on every small buffer allocation is a
major performance penalty, so ION maintains a page pool from which a large
set of uncached pages can be pre-allocated and flushed all at once;
smaller allocations then simply draw pages from the pool.
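The pool logic is conceptually simple; the sketch below conveys the idea.
It is not ION's actual code, and flush_page_for_dma() here is a
hypothetical stand-in for the ARM-specific flushing routine discussed
below:

    struct page_pool {
	struct mutex lock;
	struct list_head pages;		/* pages already flushed for DMA */
	int count;
    };

    static void pool_refill(struct page_pool *pool, int nr)
    {
	while (nr--) {
	    struct page *page = alloc_page(GFP_KERNEL);

	    if (!page)
		return;
	    flush_page_for_dma(page);	/* hypothetical: flush up front */
	    mutex_lock(&pool->lock);
	    list_add(&page->lru, &pool->pages);
	    pool->count++;
	    mutex_unlock(&pool->lock);
	}
    }

    static struct page *pool_get(struct page_pool *pool)
    {
	struct page *page = NULL;

	mutex_lock(&pool->lock);
	if (pool->count) {		/* fast path: no flushing needed */
	    page = list_first_entry(&pool->pages, struct page, lru);
	    list_del(&page->lru);
	    pool->count--;
	}
	mutex_unlock(&pool->lock);
	return page;
    }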
Unfortunately both of these optimizations are somewhat problematic from
an upstream perspective.
Delayed mapping creation is problematic because the DMA API uses either
scatter-gather
lists or larger contiguous DMA areas; there isn't a generic
interface to flush a single page. Because of this, when ION tries to
flush only the pages that have been touched, it ends up using the
ARM-specific __dma_page_cpu_to_dev() function, as it was too costly to
iterate across the scatter-gather lists to find the faulted page. The
use of this interface makes ION only buildable on 32-bit ARM systems.
The pre-flushed page pools are also problematic: since these pools
of memory are allocated ahead of time, it's not necessarily clear which
device is going to be using them. Normally, when flushing pages for DMA,
one must specify the device which will access the memory next, so in the
case of a device behind an IOMMU, that IOMMU can be set up so the device can
access those pages. ION gets away with this, again, by using the 32-bit
ARM-specific __dma_page_cpu_to_dev() interface, which does not take a
device argument; this further limits ION's ability to function in more
generic environments where IOMMUs are more common.
For Android's uses, this limitation isn't problematic: 32-bit ARM is its
main target and, on Intel systems, there is coherent memory and fewer
device-specific constraints, so ION isn't needed there. Further, for
Android's use cases, IOMMUs can be statically configured to specific heaps
(boot-time reserved carve-out memory, for example) so it's not necessary to
dynamically reconfigure the IOMMUs. But these limitations are problematic for
getting ION upstream. The problem is that without these optimizations
the performance penalty will be too high, so Android is unlikely to
make use of more upstream-friendly approaches that leave out these
optimizations.
Other ION details
Since ION is a centralized allocator, it has to be somewhat flexible in
order to handle all the various
types of hardware. So ION allows
implementations to define their own heaps beyond the common heaps
provided by default. Also, since many devices can have quirky allocation
rules, such as allocating on specific DIMM banks, ION allows some of the
allocation flags to be defined by the heap implementation.
It also provides an ION_IOC_CUSTOM ioctl() multiplexer which
allows ION implementations to implement their own buffer operations,
such as finer-grained cache management or special allocators. However,
the downside to this is that it makes the ION interface actually quite
hardware-specific — in some cases, specific devices require fairly large
changes to the ION core. As a result, user-space applications that use the
ION interface must be customized to use the specific ION implementation for
the hardware they are running on. Again, this isn't really a problem for
embedded devices where kernels and user space are delivered together, so
strict ABI consistency isn't required, but is an issue for merging
upstream.
This hardware- and implementation-specific nature of ION also brings into
question the viability of the centralized allocator approach ION uses.
In order to enable the various features of all the different
hardware, it basically has hardware-specific interfaces, forcing the
writing of
hardware-specific user-space applications. This removes some of the conceptual
benefit of having a centralized allocator rather than using device-specific
allocators. However, the Android developers have reasoned that,
by having ION be a centralized memory manager, they can
reduce the amount of complex code each device driver has to implement
and allow optimizations to be made once in the core, rather than
over and over in various drivers of differing quality.
To summarize the issues around ION:
- It does not provide a method to discover device constraints.
- The interface exposes hardware-specific heap IDs to user space.
- The centralized interface isn't sufficiently generic for all devices, so
it exposes an ioctl() multiplexer for device-specific options.
- ION only imports dma-bufs from itself.
- It doesn't properly use the DMA API, failing to specify a device when
flushing caches for DMA.
- ION only builds on 32-bit ARM systems.
ION compared to current upstream solutions
In some ways GEM is a similar memory allocation and sharing system. It
provides an API for allocating graphics buffers that can be used by an
application to communicate with graphics drivers. Additionally, GEM
provides a way for an application to pass an allocated buffer to another
process. To do this one uses the DRM_IOCTL_GEM_FLINK operation,
which provides a GEM-specific reference that is conceptually similar to a
file descriptor
that can be passed to another process over a socket. One drawback with
this is that these GEM-specific "flink" references are just a global 32-bit
value, and thus can be guessed by applications which otherwise should not
have access to them. Another problem with GEM-allocated buffers is that
they are specific to the device they were allocated for. Thus, while GEM
buffers could be shared between applications, there is no way to share
GEM buffers between different devices.
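For reference, the flink round trip looks like this from user space;
error handling is omitted, and the GEM handle is assumed to come from a
driver-specific allocation ioctl():

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>

    /* exporter: turn a local GEM handle into a global name */
    uint32_t gem_export(int drm_fd, uint32_t handle)
    {
	struct drm_gem_flink flink = { .handle = handle };

	ioctl(drm_fd, DRM_IOCTL_GEM_FLINK, &flink);
	return flink.name;	/* a global 32-bit value; guessable */
    }

    /* importer: open the named object, yielding a local handle */
    uint32_t gem_import(int drm_fd, uint32_t name)
    {
	struct drm_gem_open args = { .name = name };

	ioctl(drm_fd, DRM_IOCTL_GEM_OPEN, &args);
	return args.handle;
    }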
With the advent of hybrid graphics implementations (usually discrete
NVIDIA GPUs combined with integrated Intel GPUs), the need for sharing
buffers between devices arose and dma-bufs and PRIME
(a GEM-specific mechanism for sharing buffers between devices) were created.
For the most part, dma-bufs can be considered to be marshaling structures for
buffers. The dma-buf system doesn't provide any method for allocation,
but provides a generic structure that can be used to share buffers
between a number of different devices and applications. The dma-buf
structures are shared to user space using a file descriptor, which avoids
the potential security issues with GEM flink IDs.
The DRM PRIME infrastructure allows drivers to share GEM buffers via
dma-bufs, which allows for things like having the Nouveau driver be able
to render directly into a buffer that the Intel driver will display to
the screen. In this way, GEM and PRIME together provide, on more
conventional desktop machines, functionality similar to that of ION,
enabling the same kind of dma-buf-based buffer sharing that ION
provides on SoCs. However, PRIME does not handle any
information about what kind of memory the device can access; it just
allows GEM drivers to use dma-buf sharing, assuming that all the devices
sharing the buffer can access it.
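The equivalent PRIME operations replace the global name with a file
descriptor; roughly:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>

    /* export: GEM handle to dma-buf file descriptor */
    int prime_export(int drm_fd, uint32_t handle)
    {
	struct drm_prime_handle args = {
	    .handle = handle,
	    .flags = DRM_CLOEXEC,
	};

	ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args);
	return args.fd;		/* a real fd; unguessable, unlike flink */
    }

    /* import: dma-buf fd to GEM handle, possibly on another device */
    uint32_t prime_import(int drm_fd, int dmabuf_fd)
    {
	struct drm_prime_handle args = { .fd = dmabuf_fd };

	ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &args);
	return args.handle;
    }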
The V4L subsystem, which is used for cameras and video recorders, also has
integrated dma-buf functionality, allowing camera buffers to be
shared with graphics cards and other devices. It provides its own
allocation interfaces but, like GEM, these interfaces only
make sure that the buffer being allocated works with the device that the
driver manages; they are unaware of the constraints of any other
drivers with which the buffer might be shared.
So with the current upstream approach, in order to share buffers between
devices, user space must know which devices will share the buffer
and which device has the most restrictive constraints; it must then allocate
the buffer using the API for that most-constrained device.
Again, much as in the ION case, user space has no
way to determine which device is the most constrained.
The upstream issues can thus be summarized this way:
- There is no existing solution for constraint-solving for sharing buffers
between devices.
- There are different allocation APIs for different devices, so, once users
determine the most-constrained device, they then have to do the allocation
with the matching API for that device.
- The IOMMU and DMA API interfaces do not currently allow for the DMA
optimizations used in ION.
Possible solutions
Previously, when ION has been discussed in the community, a few
potential approaches have been proposed. Here are the
ones I'm aware of.
One idea would be to try to just merge a centralized ION-like allocator
upstream, keeping a similar interface. To address the problematic
constraint discoverability issue, devices would export an opaque heap
cookie via sysfs and/or via an ioctl(), depending on the device's needs
(devices could have different requirements depending on device-specific
configuration). The meaning of the cookie's bits would not be defined to
user space, but the cookies could be ANDed together by an application and passed to the
allocator, much as the heap mask is currently with ION. This provides a
way for user space to do the constraint solving but avoids the problem of
fixing heap types into the ABI; it also allows the kernel to define which
bits mean which heap for a given machine, making the interface
more flexible and extensible. This, however, is a more complicated
interface for user space to use, and many do not like the idea of exposing
the constraint information to user space, even in the form of an opaque cookie.
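To make the idea concrete, user space might do something like the
following; every interface shown is hypothetical, since none of this has
been implemented:

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical interfaces; nothing like this exists today */
    uint64_t read_heap_cookie(const char *sysfs_path);
    int central_alloc(size_t len, uint64_t cookie);

    int alloc_for_devices(size_t len)
    {
	uint64_t cookie = ~0ULL;	/* start fully unconstrained */

	/* AND together the opaque cookies of all sharing devices */
	cookie &= read_heap_cookie("/sys/class/video4linux/video0");
	cookie &= read_heap_cookie("/sys/class/drm/card0");

	/* the kernel maps the surviving bits back to a suitable heap */
	return central_alloc(len, cookie);
    }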
Another possible solution
is to allow dma-buf exporters to not allocate the backing buffers
immediately. This would allow multiple drivers to attach to a dma-buf
before the allocation occurs. Then, when the buffer is first used,
the allocation is done; at that time, the allocator could scan the list of
attached drivers and be able to determine the constraints of the
attached devices and allocate memory accordingly. This would free
user space from having to deal with constraint solving at all.
While this
approach was planned for when dma-bufs were originally designed, much-needed
infrastructure is still missing and no drivers yet use this
solution. The Android developers have raised the concern that this sort
of delayed allocation could cause non-deterministic latency in
application hot paths; without an implementation, though, this concern has
not yet been quantified. Another downside is that delayed
allocation isn't required of all dma-buf exporters, so it would only
work with drivers that actually implement the feature.
Since not every driver one might want to
share a buffer with would support delayed allocation, applications would have to
somehow detect the functionality and take care to allocate shared memory
using a dma-buf exporter that does support delayed
allocation. This approach also requires the exporter driver
allocators to each handle this constraint solving individually (though
common helper functions may be something that could be provided).
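In kernel terms, such an exporter might defer its work to the first
mapping operation, along these lines. This is only a sketch:
merge_constraints(), allocate_matching(), and map_for_device() stand for
exactly the infrastructure that does not yet exist:

    static struct sg_table *deferred_map(struct dma_buf_attachment *attach,
					 enum dma_data_direction dir)
    {
	struct deferred_buf *buf = attach->dmabuf->priv;

	if (!buf->allocated) {
	    struct dma_buf_attachment *a;

	    /* every interested device has attached by now; merge
	       their constraints, then allocate accordingly */
	    list_for_each_entry(a, &attach->dmabuf->attachments, node)
		merge_constraints(buf, a->dev);		/* hypothetical */
	    allocate_matching(buf);			/* hypothetical */
	    buf->allocated = true;
	}
	return map_for_device(buf, attach->dev, dir);	/* hypothetical */
    }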
Another possible approach could be to prototype the dma-buf late-allocation
constraint solving using a generic dma-buf exporter. This in
some ways would be ION-like in that it would be a centralized exporter,
but would not expose heap IDs to user space. Then the buffer would be
attached to the various hardware drivers, and, on the first use, the
exporter would determine the attached constraints and allocate the
buffer. This would provide a testing ground for the delayed-allocation
approach above while having some conceptual parallels to ION. The
downside to this approach would be that the centralized interface would
likely not be able to address the more intricate hardware-specific
allocation flags that might be needed.
Finally, none of these proposals address the non-generic caching
optimizations ION uses, so those issues will have to be discussed further.
Conference Discussion
I suspect at the Linux Plumbers Android + Graphics mini-conference, we
won't find a magic or easy solution on how to get Android's ION
functionality upstream. But I hope that, by bringing together key developers from
both the Android team and the upstream kernel community to discuss
their needs and constraints and to listen to each other, we
can get a sense of which subproblems have to be addressed
and what direction forward we might take. To this end, I've
created a few questions for folks to think about and discuss, so we can
hopefully come up with answers during the discussion:
- Current upstream dma-buf sharing uses, such as PRIME, seem focused on
x86 use cases (such as two devices sharing buffers). Will these interfaces
really scale to ARM-style use cases (many devices sharing buffers) in a
generic fashion? Non-centralized allocation requires exporters to
manage more logic and understand device constraints, so there is a risk
that this approach will eventually become unmaintainable.
- ION's centralized allocation style is problematic in many cases, but
also provides significant performance gains. Is this too major of an
impasse or is there a way forward?
- What other potential solutions haven't yet been considered?
- If a centralized dma-buf allocation API is the way forward, what would
be the best approach (i.e., heap cookies vs. post-attach allocation)?
- Is there any way to implement some of the caching optimizations ION
uses in a way that is also more generically applicable, possibly by
extending the IOMMU and DMA APIs?
- Given Android's needs, what next steps could be done to converge on a
solution? How can we test to see if attach-time solving will be usable
for Android developers? What would it miss that ION still provides?
- How do Android developers plan to deal with IOMMUs and non-32-bit ARM
architecture issues?
Credits
Thanks to Laurent Pinchart, Jesse Barker, Benjamin Gaignard and Dave
Hansen for reviewing and providing feedback on early drafts of this
document, and many thanks to Jon Corbet for his
careful editing.
Comments (none posted)
Patches and updates
Kernel trees
- Sebastian Andrzej Siewior: 3.10.10-rt7. (August 31, 2013)
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
September 5, 2013
Ubuntu's Personal Package Archive (PPA) feature is a
popular way for developers—and, in some cases, project
teams—to publish binary packages without risking too much
upheaval to the end user's local package set. Ubuntu has
offered the service since 2007, and while other distributions have
rolled out similar services of one form or another over the years,
Fedora is now pursuing a complete "PPA-like" offering
of its own. That effort is the COPR project, which has
existed in limbo for several years. But COPR recently acquired a new
maintainer, who is asking hard questions about how much of the
service Fedora should really be building from scratch.
According to the Fedora wiki,
COPR stands for "Cool Other Package Repo," though there is little
reference to that acronym expansion anywhere else (perhaps
understandably, for those who remember the origins
of KDE). In any case, the COPR project was initiated in 2010 with an
eye toward offering a PPA-like service for Fedora developers, although
it appears that no substantial development began until 2012.
Building packages and the case for PPAs
Of critical interest was duplicating the ease-of-use of the Ubuntu
PPA system. The Launchpad PPA service automatically builds programs
for both the 32- and 64-bit x86 architectures, with any number of Ubuntu
releases available as the target environment, and serves up the
resulting packages as an Apt repository. Users can add the PPA
repository with a variety of Apt tools, then install, update, and
remove the PPA-provided packages just like those from the distribution
itself. Just as importantly,
developers only need to upload source packages to the
system—although, of course, they are responsible for hunting
down and fixing any build errors.
On the Red Hat/Fedora side, it has long been possible to set up a
private RPM repository. Over the years many such repositories were
established and proved popular for acquiring packages outside
of—or newer than—the official Fedora releases. Dag
Wieers, for example, ran his eponymous "DAG" repository for many years
using a homebrew build system, and later followed it up with the RepoForge project. But both required
packagers to oversee the build process manually.
More recently, the Fedora
People Repositories project offered Fedora packagers server
space to publish unofficial RPM packages, although again, manual
effort was required to set up the build environment and to publish
the resulting packages. The wiki also states that the repositories
"should only be used for packages that are intended for end-user
non-transient use," a use case that excludes putting packages out
for public review.
At the same time, Fedora has developed the Koji build system to
automatically build RPM packages for the distribution's core
components. It would seem, then, that Koji is a natural fit to take
the place of the manual package-building process usually required of
those wishing to publish a personal RPM repository.
But there is another build system that has seen wider usage than
Koji, the Open Build
Service (OBS). Originally designed by openSUSE, OBS can be used
to build packages, package sets, or even entire distribution releases,
targeting a wide variety of distributions and architectures.
COPR returns
As mentioned earlier, the COPR concept has been around since 2010,
but it was not until late 2012 that Slavek Kabrda and Seth Vidal began
coding on it in earnest. They produced a working prototype (although
it has not been open to public testing). But due to the tragic loss of Vidal in July and Kabrda's
commitments to other projects, the future of COPR seemed, at the very
least, uncertain.
But the project soon gained the attention of Miroslav Suchý, who
has since undertaken a reassessment of the community's specific
requirements of a PPA-like service, and has started asking hard
questions about the architecture best suited to implement them.
In Suchý's first blog
post on the subject, he enumerated the higher-level use cases for
the service. The existing COPR prototype takes a single source RPM as input,
builds the package on a virtual machine, and imports it into a Yum
repository. In addition to that basic functionality, Suchý said, COPR
needs to be usable for projects built on top of Fedora (such as
OpenShift or the various Fedora SIGs' "spins"), it should provide
personal—perhaps even private—repositories for users, and
it should facilitate automated rebuilds of upstream packages for
testing purposes. These missing features are what will make COPR
useful for teams of developers, for individuals doing testing or
branch development, and for monitoring the build-ability of packages.
Those useful outcomes constitute the real goals of the system,
after all: making COPR a service that improves the development
process, not just automating a single task. But the problem is that
COPR is currently a standalone system, not integrated with the other
pieces of the Fedora infrastructure. To meet the community's real
needs, Suchý said, COPR should either be rebased on Koji or Fedora
should adopt OBS. Koji, of course, is already a Fedora project, but
OBS offers more features.
In a second
post, Suchý explored the costs and benefits of rebasing COPR on
Koji. There would need to be changes to Koji itself, including adding
support for multiple packages of the same name scattered among
different user accounts (and for multiple branches of a project within
one user account) and adapting Koji to build packages on virtual
machines instead of in chroots. On the other hand, those changes to
Koji would improve it in its own right. In addition, the Fedora
project would benefit from improved tooling all around, because other
components (like mock and
various other existing build scripts) will receive attention. The
downside would be that Koji cannot easily be modified to add support
for monitoring and automatically rebuilding packages, and its support
for collaborative team features is minimal.
Suchý then examined
the prospect of rebuilding COPR on top of OBS. In the plus column,
OBS already supports many of the features missing from Koji (such as
teams, package monitoring, and multiple branches of a package). It is
also more mature, has six full-time developers, and has a more
full-featured web interface. In fact, OBS already supports virtually
the full feature set desired for COPR, including automatically
publishing a Yum repository for built packages; the existing COPR
prototype code would not really be necessary.
In the minus column, however, OBS uses a significantly different
build system than the one used to build Fedora packages elsewhere.
The workflow would be different than the one used to build official
packages on Koji, and dependency resolution is performed using a
different utility. A different workflow would require re-educating
packagers, but using a different dependency resolver brings in the
possibility that Koji and OBS would actually produce different
packages.
The great debate
Suchý does not propose an answer in his posts. There are pros and
cons to each path forward, and he estimated six to seven months would
be required to implement either solution. Ultimately, he put the
question to the community to decide—which, for a
community-driven distribution like Fedora, is probably the most
appropriate approach.
The options received quite a bit of debate on the fedora-devel
list. Some, like Colin Walters, found
the prospect of differences between the OBS and Koji build systems to
be a major problem (and a point in favor of the Koji-based approach).
Toshio Kuratomi said
that asking the Fedora system administrators to add OBS to their
already-full plates would be too costly, while others pointed out that
asking packagers to learn two build systems would increase the
packager's workload (in addition to the technical differences between
the systems). There were also security questions raised, since OBS uses virtual machines rather than Koji's chroot-based approach.
In the end, Suchý concluded that
the consensus leaned toward rolling out the existing version of COPR
in the short term, and working on integration with Koji over the
coming seven months. But he did not give up on OBS altogether.
Although few in the mailing list discussion were excited by it, Suchý
said he still thought it offered more than COPR and Koji can together,
so he also vowed to "try to get (in spare time) OBS to Fedora
anyway and build some community around it. And revisit the decision in
two-three years." So Fedora packagers actually have more than
one option to look forward to, regardless of which build system they
prefer.
Comments (12 posted)
Brief items
If you can, have dinner with your upstreams at least once a year.
— Enrico Zini
Comments (none posted)
GNU Linux-libre 3.11-gnu has been released. "As usual, upstream
introduced new drivers that request non-Free
Software, and modified other drivers so as to request additional
non-Free programs, or newer versions of previously-requested ones.
These requests are disabled in 3.11-gnu. However, our goal is not to
keep users from running these programs, but rather to avoid asking users
to install and use non-Free Software. We regard this limitation of the
current implementation as a bug, and a fix that enables firmware to be
loaded without inducing users to install non-Free Software is expected
in a future release."
Full Story (comments: none)
RebeccaBlackOS is a live CD that showcases Wayland. "I have added
the Orbital shell by Giucam (https://github.com/giucam/orbital)
and Hawaii by Plfiorini (https://github.com/hawaii-desktop/) as selectable shells along with the default Weston-Desktop-Shell, to try out. These can be selected at login, by a menu provided by the waylandloginmanager, similar to xsession selection."
Full Story (comments: 1)
Distribution News
Fedora
For reasons that are not entirely clear no matter how closely one looks,
the Fedora project would appear to have chosen "Heisenbug" as the name of
the upcoming Fedora 20 release.
Full Story (comments: 28)
Newsletters and articles of interest
Comments (none posted)
Ars technica has posted
a look at the new "Google Play Services" mechanism running on most
Android devices. "Google's strategy is clear. Play Services has
system-level powers, but it's updatable. It's part of the Google apps
package, so it's not open source. OEMs are not allowed to modify it, making
it completely under Google's control. Play Services basically acts as a
shim between the normal apps and the installed Android OS. Right now Play
Services handles the Google Maps API, Google Account syncing, remote wipe,
push messages, the Play Games back end, and many other duties. If you ever
question the power of Google Play Services, try disabling it. Nearly every
Google App on your device will break."
Comments (39 posted)
Michal Hrusecky and Jos Poortvliet team up to
describe the openSUSE release process on the SUSE openSUSE blog. They cover the nuts and bolts of the software side, as well as the marketing activity preparing for a release. "There are several bots that check packages [sent] to Factory. First is factory-auto, which does some basic checks regarding you spec file, rpmlint warnings and similar. If you pass this quick test, it’s time for more thorough testing. A bot named factory-repo-checker tries to actually install your package in a testing environment to make sure it is possible and it also looks for possible file conflicts, so you wouldn’t overwrite somebody [else's] package. Last check before a package gets in front of the review team is legal-auto. This one checks the licence (did it change? is it a new package?) and if needed calls in our legal team to take a look at the package. The final step is manual review by members of review team which will try to spot mistakes that automatic checks overlook."
Comments (16 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
September 5, 2013
As Ubuntu continues its march toward Ubuntu-based mobile
devices, it has undertaken a number of efforts designed to attract
third-party mobile app developers. The latest development is the beta
launch of the project's mobile app store, which is tied to an
app-development contest slated to run through September 15. Perhaps
more interesting in the long run, though, is the publication of the
specifications for how the platform's mobile apps are to be packaged.
The format is called Click, and it is related to the .deb package
format used for standard Ubuntu packages—but with a few key
changes.
Martin Albisetti announced
the opening of the app store and the publication of the package format
documentation in an August 30 blog post. The Ubuntu Touch store is only accessible by
registered developers for the time being; they can upload
packages to the Ubuntu "MyApps" service.
Uploads are then subject to a manual review
process before they are published.
The process is much the same as the one offered to third-party
developers of desktop applications seeking publication in the Ubuntu
Software Center service, except that mobile apps can only be written
for the APIs supported by the Ubuntu SDK. At the moment, just two
app frameworks are supported—QML and HTML5—with HTML5
support relying on the Apache
Cordova library layer.
Support for compiled apps is apparently under discussion, although
no clear timetable has been publicized. One open question on that
front is how to encourage app developers to license their code under
GPL-compatible terms; for the contest, this is important because the
winning entries will be included in subsequent Ubuntu Touch builds. Since apps written in QML and HTML are (at least
in theory) their own source code, not handling binary apps neatly
sidesteps the question. For the time being, the app store does not
support selling apps, although that feature is expected to arrive
eventually as well.
Click, here
To put an app store in place, of course, one must have some idea of
what format apps will take. The format defined for Ubuntu Touch is
called "Click," using the .click file
extension—although there are copious caveats that the actual
name of the format could still change before the Ubuntu Touch app
store opens to the public. Click is derived from the .deb Debian
package format, and in most ways a .click file is a valid .deb file
too. Consequently, the distinct file extension helps to make it less
likely that a user will attempt to install a .click app using
dpkg or Apt.
There is HTML documentation
on the format available, but a newer revision of the specification can
be found
in the Click source repository on Launchpad. There was a lengthy
debate on the ubuntu-devel mailing list about the suitability of
existing package formats, such as Android's .apk and cross-platform
efforts like Listaller. In the end, Colin Watson said
that:
[...] existing app packaging systems are more of a reflection of
the system they were built for than anything else. If you look at, say,
Android's APK format, it's essentially a zip file with a manifest and
some conventions about Dalvik class installation and the like. Most of
the other mobile app formats are similar.
Similarly, by basing Click on the Debian package format already in
use, many of the existing package tools can be reused. Like a .deb, a
.click package is an ar archive, which itself contains "control" and
"data" tar archives. The control archive includes package management
information, while the data archive holds the app's filesystem.
Click apps are designed to be self-contained; the directory tree in the
package's data archive is unpacked into a single directory and cannot
access the filesystem above its own root. The app filesystem
is uncompressed and stays on disk, though—as opposed to
being stored as a compressed image and loop-mounted.
The control archive must contain both a control file and a
manifest.json file. The manifest includes several required
fields like app name, app version, and installed size, and may
include several optional fields as well (such as maintainer
information, a reference to the app's icon, or descriptions in various
languages). The control file duplicates some of this
manifest information for use by the package-management tools, and it
also includes a field listing the version number of the Click format
used.
Because Click apps are meant to be stand-alone entities, some of
the manifest and control fields diverge somewhat from their
equivalents in a standard Debian package. The app name is specified
in reverse-network-domain name format like
com.example.catpictures, which is the same scheme used in
D-Bus, in Android app naming, and in plenty of other places. The
"framework" field is a simplified version of the Debian package
format's way to specify dependencies. Click apps cannot have dependencies on
anything other than base system services. In other words, they cannot
introduce dependencies on other apps or on components installed
through other means, and they cannot depend on specific versions of
system libraries (which in the worst-case scenario could make dependencies unresolvable). As such, for now the only valid value for "framework" is the
current Ubuntu Touch SDK framework. Perhaps in the future, different
versions of the framework will need to be designated, or non-phone
form factors will require separate frameworks, but this has not been explored in the
documentation.
But the fact that Click apps cannot have dependencies has benefits
beyond simply avoiding dependency hell. Since Click apps are isolated
from system packages, the app installer can install them without
querying or searching the system's dpkg database—which can be
quite slow relative to unpacking a tar archive to disk.
Another—perhaps major—distinction from the Debian
package format is that maintainer
scripts (such as preinst, postinst, or
prerm) are forbidden. This is to isolate Click apps from the
system; since they are intended to be self-contained, they should not
generally trigger any system-level events which could potentially be
exploited. Not allowing them makes the task of auditing app
submissions considerably simpler. There is one exception to the
general maintainer-script moratorium: apps can include a
preinst script that prevents the package from being installed
directly with dpkg.
Hook 'em
On the downside, however, one useful feature lost in disabling
maintainer scripts is the ability to easily hook into system services
at install time. Even for self-contained apps, there are a number of
system packages that need to be informed of the installation, such as
updating the icon cache to include the new app's icon, locating the
.desktop launcher to add to the app menu, or registering a service
with D-Bus.
But Ubuntu Touch still needs to alert system services to the
app-installation event somehow. Currently the plan is to use a "hook"
mechanism inspired by dpkg's triggers. Triggers are calls out to
other programs that are generally placed in a Debian package's
postinst script. A trigger mechanism could, for example,
tell Fontconfig to update the font info cache and pick up a
newly-installed font. The problem with this approach is that Click
packages can be installed to any directory on the system. That is to say,
there is no fixed location for them; the directory name for an app is
created based on a hash. This approach enables the updating of apps by installing the new version in a separate directory, then garbage-collecting the old one. The unpleasant side-effect is that, in this example,
Fontconfig needs some way to learn which directory it should scan to update its cache.
Dpkg triggers cannot pass user data, so apps cannot simply report
their location through the trigger mechanism.
The "hooks" solution works by enumerating a stable, well-known set
of services that need to perform such post-app-installation updates.
Each app is responsible for listing the hooks that it needs to call in
its manifest file. The hooks field lists the hooks used and the
relative pathname of each new resource (e.g., fonts, icons, or
.desktop files) within the app's filesystem. The app installer
is then responsible for creating a symlink from the newly-installed
app resource to the canonical location for such resources. Based on
the examples in the hook documentation, there will be separate
directories for these symlinks, so that new .desktop entries might be
placed in /opt/click.ubuntu.com/.click/desktop-files/ as
opposed to /usr/share/applications/.
Still to come
The Click format is still in rapid development; although the
current app store contest ends in mid-September, there is no indication when the mobile app store itself
will go live to the public. But that probably matters little while there are no
Ubuntu Touch devices in widespread distribution.
There are still several unanswered questions, such as how best to
support
"fat" packages built for multiple architectures, and how to implement
security signatures. For now, the selected approach to signatures is
to use debsigs.
Developers will sign their uploads to the app store, and the store
maintainers will verify those signatures. When the app is pushed out
to users, it will carry the store's signature, which is all that
end-user devices are expected to verify.
The format may still undergo significant changes before any Ubuntu
Touch devices hit the market, as may the upload and review processes
used for Click apps. However, it does seem likely that the format
will retain its roots in the .deb package format; the Click installer
reuses dpkg code in a number of places, even if third-party apps are
destined to remain in total isolation from the rest of an Ubuntu Touch
system.
Comments (10 posted)
Brief items
Specifically, no court anywhere in the world, to my knowledge, has
sat down and lined two Free Software licenses up next to each
other and tried to determine if, upon creating a whole work based
on two works under the two licenses, if the terms of any license
was violated and thus the distributor of that whole work infringed
copyright of one party or the other.
Thus, people argue about what a court might say. Some lawyers
bluster and claim they know the answer when they really don't.
[...] In the meantime, though, we have to operate, share code,
and (hopefully) uphold software freedom -- with the tools we
have.
—
Bradley Kuhn
Comments (none posted)
Version 3.8.0 of the SQLite database management library is out. "Do
not fear the zero in this version number! The 3.8.0 release might easily
have been called 3.7.18 instead. We decided to increase the minor version
from 7 to 8 because of the rewrite of the query planner. But the software
is quite stable and is ready for production use." Along with the
new query planner, this
release adds
partial
indexes and more; see
the changelog for details.
Full Story (comments: none)
The Apache Software Foundation has
announced
the release of version 2.0 of the Cassandra distributed database system.
"
New features in Apache Cassandra v2.0 include lightweight
transactions, triggers, and CQL (Cassandra Query Language) enhancements
that increase productivity in creating modern, data-driven
applications."
Comments (none posted)
Version 1.0 of the peer-to-peer file sharing client gtk-gnutella has been released. The release notes list a handful of new features, such as the ability to define maximum lease times for UPnP and NAT-PMP mappings, persistent DHT keys, and the ability to prioritize the rarest chunks of a file for download first. Nevertheless, the announcement also highlights what comes next, saying "this new release is an important milestone because it is the last version that will be mono-threaded. Future releases will use a new runtime that will allow multiple threads to run concurrently, to be able to exploit common multi-core systems nowadays."
Full Story (comments: none)
Version 3.0 of the GTK+ audio player application Rhythmbox has been released. Changes include the migration of plugins to Python 3, support for "composer" tags in audio metadata, separation of constant- and variable-bitrate encoding options into separate presets, and numerous UI improvements.
Full Story (comments: none)
Version 0.5 of MediaGoblin is now available. This release takes initial steps toward true network federation by supporting the pump.io flavor of OAuth, and it adds support for logins using OpenID or Mozilla Persona. Also new are notifications of new comments, comment previews, and Unicode filename support. Finally, starting with this release, all media types and authentication schemes are supported via plugins, which should make the system easier to extend.
Comments (none posted)
KDE.News
announces the release of Plasma Active 4, KDE's interface for consumer electronics devices like tablets, smartphones, set-top boxes, and more. The release brings improvements to the File application and a better on-screen keyboard, adds
ownCloud integration, is based on the
Mer core, and has additional new features and improvements described in the post. "Plasma Active 4 is a stabilization and performance release, the result of developers focusing on fit and finish throughout. This release is intended to complete the evolution of Plasma Active to a polished product relative to its proof of concept first release. The emphasis is now on providing a solid foundation for third party applications, additional content and device adaptations."
Comments (2 posted)
Newsletters and articles
Comments (none posted)
KDE News
presents
a report on the successful conclusion of the
ALERT Project. The project aimed to
help open source developers to work more effectively and to produce better
software by improving bug tracking, resolution, and software quality tools.
"KDE's involvement in the early stages was mostly in the form of contributing to documents describing the problems that should be solved. For example, Bugzilla is a popular tool to track bugs, but for large projects such as KDE there are problems with duplicate reports and with reports being filed to the wrong team (it is not always easy for a user to understand that an apparent problem with a web browser failing to show pages is actually due to a separate software handling wireless connections). With this in mind, the KDE experts decided to focus on Solid (the KDE software components dealing with hardware interaction) as a base for KDE's testing of the ALERT software."
Comments (5 posted)
KDE is
changing
the way it does releases. "After the 4.x series, the KDE Release Team expects the releases of the Frameworks, Workspaces and Applications to diverge. Separate release cycles will benefit both users and developers. Individual components can skip releases if they require a longer development cycle. Separate cycles will encourage developers to have an always-releasable master while work goes on porting to KDE Frameworks 5. There is more emphasis on continuous integration and other automated testing to improve development work."
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
Michael Meeks
announces
that SUSE's LibreOffice team is moving over to Collabora, which will be
providing commercial LibreOffice support going forward. "It seems to
me that the ability to say 'no' to profitable but peripheral business in
order to strategically focus the company is a really important management
task. In the final analysis I'm convinced that this is the right business
decision for SUSE. It will allow Collabora's Productivity division to focus
exclusively on driving LibreOffice into Windows, Mac and Consulting markets
that are peripheral to SUSE. It will also retain the core of the existing
skill base for the benefit of SUSE's customers, and the wider LibreOffice
community, of which openSUSE is an important part." See also the
press releases from
Collabora
and
SUSE.
Comments (36 posted)
Articles of interest
The Free Software Foundation's monthly newsletter for August covers the GNU
30th anniversary celebration, the Free JavaScript campaign, videos from
LibrePlanet, gNewSense 3.0, an interview with Bernd Kreuss of TorChat, and
several other topics.
Full Story (comments: none)
The Free Software Foundation Europe newsletter covers F-Droid: Privacy
aware software repository for Android, New Zealand bans software patents,
Groklaw shutdown, interviews and talks, and more.
Full Story (comments: none)
The Free Software Foundation has launched the
Free JavaScript campaign.
"
For some time now, free software users have been concerned about the
increasing number of Web sites that cannot run without nonfree
JavaScript programs downloaded and executed on the visitor's
computer. Richard Stallman first raised the concern with his article
The JavaScript Trap, pointing out that most JavaScript programs
are not freely licensed, and that even free software Web browsers are
usually configured to download and run these nonfree programs without
informing the user. We've recently started organizing free software
users around the issue."
Full Story (comments: 2)
Simon Phipps
looks
at Microsoft's Nokia acquisition with an emphasis on Nokia's patents.
"
By divesting its devices business yet retaining ownership of the
patents that relate to them, Nokia has immunized itself from retaliatory
action when it makes future patent offensives. In the past, a company such
as Google -- most likely Nokia's primary target -- could retaliate by
attacking Nokia's infringement of its own patents, but that line of defense
is no longer available since all the products now belong to
Microsoft. That's the background to Nokia's statement that it 'plans to
continue to build Nokia's patent portfolio [and] to expand its
industry-leading technology licensing program.'"
Comments (9 posted)
Luis Villa has posted
a set
of notes from the Creative Commons Global Summit. "Conversation
around the revised CC 4.0 license drafts was mostly quite positive. The
primary expressed concerns were about fragmentation and
cross-jurisdictional compatibility. I understand these concerns better now,
having engaged in several good discussions about them with folks at the
conference. That said, I came away only confirmed on my core position on
CC’s license drafting: when in doubt, CC should always err on the side of
creating a global license and enabling low-complexity sharing."
Comments (7 posted)
Education and Certification
The Linux Professional Institute (LPI) has launched
LPI Academy, "a Linux education program for accredited degree / diploma-granting
academic institutions, high schools, middle schools, and government
training programs."
Full Story (comments: none)
Calls for Presentations
PyConZA will take place October 3-4 in Cape Town, South Africa. The call
for talks has been extended until September 15.
Full Story (comments: none)
The KDE community is
looking for a
host for Akademy 2014. "Hosting Akademy is a rare opportunity. The local hosting team plays a key and highly visible role in producing this event, being actively involved within the KDE community and with the local community as well. The hosting community gains financially with attendees visiting from all over the world. In addition, the local community can benefit from the value of Free and Open Source Software and the close involvement of one of its premier organizations. People from local businesses, educators, students, government officials and technology enthusiasts are encouraged to attend and are warmly welcomed. As has been proven by previous Akademies, the opportunity is as big and varied as the hosting team can create."
Comments (none posted)
CFP Deadlines: September 6, 2013 to November 5, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| September 6 | October 4-5 | Open Source Developers Conference France | Paris, France |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15-16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3-4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22-24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9-17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1-2 | FOSDEM 2014 | Brussels, Belgium |
| October 1 | November 28 | Puppet Camp | Munich, Germany |
| October 4 | November 15-17 | openSUSE Summit 2013 | Lake Buena Vista, FL, USA |
| November 1 | January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
The linux.conf.au (LCA) team has announced the selection of LCA 2014's miniconfs. As the name suggests, miniconfs are smaller, one-day conference events that are organized in conjunction with LCA. Each miniconf is focused on a specialized topic and selects its own program. The 2014 selections include the kernel, open government, browsers, astronomy, multimedia, and more. LCA 2014 is scheduled for January 6–10 in Perth, Western Australia.
Comments (none posted)
Events: September 6, 2013 to November 5, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| September 6–8 | State Of The Map 2013 | Birmingham, UK |
| September 6–8 | Kiwi PyCon 2013 | Auckland, New Zealand |
| September 10–11 | Malaysia Open Source Conference 2013 | Kuala Lumpur, Malaysia |
| September 12–14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16–18 | CloudOpen | New Orleans, LA, USA |
| September 16–18 | LinuxCon North America | New Orleans, LA, USA |
| September 18–20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19–20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19–20 | Open Source Software for Business | Prato, Italy |
| September 19–20 | Linux Security Summit | New Orleans, LA, USA |
| September 20–22 | PyCon UK 2013 | Coventry, UK |
| September 23–25 | X Developer's Conference | Portland, OR, USA |
| September 23–27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–25 | Kernel Recipes 2013 | Paris, France |
| September 24–26 | OpenNebula Conf | Berlin, Germany |
| September 25–27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–29 | EuroBSDcon | St Julian's area, Malta |
| September 27–29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–5 | Open Source Developers Conference France | Paris, France |
| October 7–9 | Qt Developer Days | Berlin, Germany |
| October 12–13 | PyCon Ireland | Dublin, Ireland |
| October 14–19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–20 | PyCon PL | Szczyrk, Poland |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21–23 | Open Source Developers Conference | Auckland, New Zealand |
| October 21–23 | KVM Forum | Edinburgh, UK |
| October 21–23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 22–23 | GStreamer Conference | Edinburgh, UK |
| October 22–24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23–24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 23–25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 24–25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24–25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24–25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25–27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 25–27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 26–27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 26–27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| October 28–31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| October 28–November 1 | Linaro Connect USA 2013 | Santa Clara, CA, USA |
| October 29–November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| November 3–8 | 27th Large Installation System Administration Conference | Washington DC, USA |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol