By Jake Edge
July 31, 2013
Amidst all the hubbub surrounding the Ubuntu Edge
crowdfunding effort, we came across another, similar effort that merits a
look: the Fairphone. Its vision is
different than Canonical's convergence concept, but it is also using
crowdfunding to jumpstart production. Fairphone is more than just
technology, however, as the company seeks to redefine the economics and
supply chains of
phones (and other electronics) with the goals of more transparency and
... well ... fairness.
The goals are ambitious, but one milestone has already been reached: the crowdfunding
target of 5,000 phones (at €325) was easily met in June, three weeks into the
month-long campaign. Over 10,000 were eventually sold. The money raised has
allowed the non-profit to build
20,000 phones, so there are still phones available—at the original price.
The phones are only available in Europe, at least currently, though there
are hints that other regions will be added eventually. The delivery date
is expected to be in October.
For phone
hardware, the first Fairphone model is solid, but not
spectacular: a quad-core 1.2 GHz ARM processor running a customized Android
4.2 with 1GB
of RAM, 16GB of
storage, 4.3" display (960x540), dual SIM slots, removable battery, 8 and
1.3 megapixel
cameras, the usual
array of sensors, and so on. No chargers or headphones are
included and
the phone is said to use "minimal packaging". Both of those are in keeping
with the low-impact mission of Fairphone.
The company got its start in 2010 as a research project of several Dutch
non-profit
organizations to gather information and raise awareness of the conflicts
and wars fueled by the
extraction of minerals used in consumer electronics. That research, which
focused on minerals from the Democratic Republic of the Congo, took three
years. The phone project came about in 2013 with the "aim of
designing, creating and producing our first smartphone and taking the next
crucial step in uncovering the story behind the sourcing, production,
distribution and recycling of electronics", according to the company
web site.
So Fairphone wants to produce a long-lasting phone (to reduce waste) that
is made from "conflict-free" minerals, mined by workers who are paid a fair
wage. In fact, the goal is that all of the workers in the supply chain are
paid a fair wage and work under reasonable conditions (both from a safety
and environmental protection standpoint). The company also aims to reduce
e-waste by recycling and reusing any materials that can be.
"Our end
goal is
fewer phones in circulation – not more".
Obviously, those are some ambitious goals—overly ambitious, some would
say. But they are worthwhile goals. Anything that can be learned from
pursuing them will be valuable information that can be used by other device
makers. This is an area where
Fairphone clearly shines, as transparency is yet another goal of the
project. That leads
to blog
posts detailing the production process, including sourcing conflict-free
tin paste and tantalum capacitors, packaging issues, and more. In fact,
the blog has a wealth of
information about various facets of Fairphone, its mission, and its progress.
Transparency is not limited to the production process. Pricing, and how
the €325 figure was arrived at, is part of what Fairphone will be disclosing. The
design of the phone is open as well. As might be guessed for a company
whose manifesto is "if you can't open it, you don't own it",
the phone is rootable and the OS is easily replaceable. There is mention
of both Ubuntu Touch and Firefox OS as possible replacements—CyanogenMod
seems like it should be a slam dunk.
Like many of the goals, the transparency goals have not been completely
met. More information is pending on pricing, for example, and the list
of suppliers [PDF] is incomplete, but the intentions seem good. Given
that it all started as a research project, which morphed into an actual
product, it may take some time to fully realize all of the goals.
In fact, full realization of the project's goals is probably many
years away, if ever.
Not all of the components will be "conflict free", for example, at
least in the first model. As described in a ZDNet
article, the company is running into many of the same issues that other
phone and device-makers have hit—it's simply not easy to change the parts
that go into a device. But, that doesn't mean that it isn't worth trying.
From a cost perspective, the Fairphone seems fairly reasonable. Many
smartphones are substantially more expensive. The extra effort in making a
cleaner and more fair device seems to come almost for free. It's a bit
hard to see major phone makers switching to conflict-free tin paste (or
fair pay throughout the supply chain) any
time soon, as it might impact the all-important bottom line. Over time,
though, efforts like Fairphone may help bring the costs down to a level
where the "big boys" will start using them. It may also raise consumer
awareness to a point where there is demand for devices of this nature.
Either outcome would certainly be a step
in the right direction.
Comments (18 posted)
July 31, 2013
This article was contributed by Martin Michlmayr
OSCON 2013
O'Reilly celebrated 15 years of its OSCON open source convention this
year. The success of OSCON mirrors that of the wider open-source community, and that growth has prompted many open source
projects to investigate ways of formalizing their governance and corporate
structures. Several sessions at OSCON covered aspects of open source
non-profits, such as reasons for establishing a non-profit for a project as
well as reasons for looking at alternative solutions. In addition, several
of those alternatives — existing non-profit "umbrella" organizations — described the services they provide to
open source projects.
Should you start a new non-profit?
Dave Neary, a board member of the Yorba
Foundation
and a long-time member of the GNOME Foundation, gave a talk that tackled
whether a project should start a
new non-profit (slides
[SlideShare]). Neary shared some of the reasons for starting a
non-profit, separating them into three categories. The first is to provide a
financial infrastructure for a project. This includes opening a bank
account, being able to make contracts with venues as a
legal entity rather than as an individual, and reimbursing volunteers for
travel. These activities are difficult for individuals. While some
projects put money into the personal account of a volunteer, Neary stressed
that this is "not a good idea".
Second, a corporate structure helps insulate members from liability. If
you're working in IT, you're exposing yourself to risk, he said. Similarly,
you're
exposing yourself to risk when you sign contracts for a conference venue.
If something goes wrong, you're liable. Neary said that "you want an entity
that protects you from that".
Finally, there are reasons related to governance and pooling of resources.
A non-profit organization can help to formalize the governance rules of the
project, although Neary believes that this should be done within the
community rather than the organization. A non-profit entity can also
provide a level playing field for projects with major corporate
involvement, and it can be used to pool resources from corporations
participating in a project. It can also be an entity to assign trademarks
or copyrights to.
Costs
While there are good reasons for starting a non-profit, there are also
significant costs. Neary believes that the costs outweigh the benefits in
most cases. First, you need money to start a non-profit. You need a lawyer
to draft the by-laws of the organization and to do various other paperwork.
A bookkeeper has to be paid to do annual returns and possibly run payroll
(if you're planning to hire staff).
Neary remarked that interesting projects can usually find the funds
required to start a non-profit, but there is a much bigger cost: time,
which is "something you cannot get back". According to Neary, the amount of
time you'll spend is significant — one volunteer will spend all of
their volunteer time on non-profit-related activities and bureaucracy every
year. Furthermore, it's quite likely that the volunteer will burn out after
a year or two. Many people underestimate the amount of ongoing work that
running an organization requires, such as making sure elections happen and
that paperwork is filed on time. Another problem is that the approval time
for open-source-related 501(c)(3)
organizations (US-based charities)
has to be measured in years at the moment, partly because open source has
found its
way onto a watch list.
Finally, another significant cost lies in the risks and responsibilities borne by
the people involved. The
president and board are accountable for the organization, both fiscally and
legally. This is a great responsibility and you have to think carefully
about it.
Alternatives
Given the significant costs of starting and running a non-profit
organization, projects will want to consider alternatives; Neary outlined three possibilities. The obvious alternative
is to join an existing umbrella organization. This is a good way to "get
most of the goodies and avoid most of the bad stuff". Neary cited some
established organizations, such as the Software Freedom
Conservancy, Software in the Public
Interest, and the Outercurve
Foundation. He also noted that several
projects have grown to a point where they provide services to other
projects, such as the Apache
Software Foundation, the
GNOME Foundation, and KDE e.V.
The second option is to find a "sugar daddy" — a benevolent corporate
sponsor. In an ideal scenario, the corporate sponsor would provide
financial resources (directly and by employing developers) and other
services, such as organizing events and helping with legal matters. At the
same time, the company would provide a level playing field by leaving it up
to the community to manage the project.
Unfortunately, there are considerable risks with this approach. Neary
remarked that winds of change often blow through companies, for example
when a new CEO
comes in, and this may lead to a desire to monetize the project. He cited
OpenOffice, MySQL, and Symbian to illustrate the dangers. It's also
possible that the project might get neglected if it's no longer part of the
core of what the company does.
Neary remarked that this model has worked well for several projects in the
short term, but that the long-term viability is questionable. If you want a
vibrant community with individuals and companies contributing, this is
probably not the model to follow, he said.
The third option is to use management services. You can offload financial
and administrative work to another organization, but such services will
obviously cost money. The Outercurve Foundation, which has outsourced the
majority of its administrative work, was given as a successful example of
this approach. One problem, according to Neary, is that you're not getting
the benefit of the organization being aligned with your community.
Given the work involved in running a non-profit and the problematic aspects
of the alternatives, Neary suggested that joining an umbrella organization
is probably the way to go for most projects. In the discussion
following Neary's talk, the question of whether there are
circumstances under which it makes sense to start a non-profit was
raised. Simon
Phipps commented that a separate organization may be the right solution if
the administrative needs of a project
would overwhelm the fiscal sponsor. He named OpenStack as a project that
required its own foundation given the level of activity around it and its
desire to hire several staff members. While forming an organization may
make sense for large projects, Phipps cautioned that "everyone thinks they
are one of those and honestly you're not".
Non-profit organizations for FLOSS projects
Bradley M. Kuhn, the Executive Director of the Software Freedom
Conservancy, organized a session at OSCON in which several non-profit
organizations introduced themselves. This allowed projects wishing to join
an existing organization to get an overview of the range of options to
choose from.
Josh Berkus, the Assistant Treasurer of Software in the Public Interest
(SPI), introduced SPI as a "minimalist financial sponsor". SPI was
initially created to act as the fiscal sponsor for the Debian project, but
it has over 30 projects at this point. It provides basic services, such as
the ability to receive charitable donations, collective ownership of
project assets, and light legal assistance.
SPI does not provide project infrastructure, start-up funds, liability protection, or advice on project governance. An
advantage of SPI is that it does not dictate any specific governance or
infrastructure, and that it does not require exclusive representation. Some
projects use SPI to hold funds in the US while making use of other
organizations to do the same in Europe. Berkus said that SPI would be a
good choice for projects that just need a bank account or want to run a
fundraising campaign.
Kuhn followed Berkus and
described SPI as a grantor/grantee fiscal sponsor. Software Freedom
Conservancy, on the
other hand, is a "direct project", or comprehensive, sponsor. What this
means is that projects affiliate with SPI whereas a project joining
Conservancy actually becomes part of the organization. (See this recent LWN
article for a detailed explanation of
these two models.) Kuhn compared it to becoming a wholly owned subsidiary
— a project is a "division" within Conservancy and has its own
committee, but ultimately Conservancy is the legal entity, which has to
ensure that regulations are followed.
The benefits of this approach are that Conservancy can offer liability
protection for volunteer developers, and that it can officially act in the
name of the project, for example when signing a venue contract. The downside is
that there is more oversight of the project as Conservancy takes on the
project's liability.
Conservancy offers a wide range of
services from which projects
can choose à la carte. Its services include asset management, legal
assistance, help with conferences, fundraising, and more. Kuhn concluded
that "you want us if you want full service".
Noirin Plunkett represented the Apache Software
Foundation. She said that Apache offers
indemnity to developers, infrastructure, and independence. There are
several requirements for becoming an Apache project. Plunkett said that there
is "no negotiation" with regard to three of them: the use of
the Apache license, a collaborative, consensus-driven development process,
and a diverse community. Commenting on the latter requirement, Plunkett
remarked that "we don't have projects that have one sole source of
contributors". Apache projects are also expected to use Apache
infrastructure. There is diversity in other areas, though, such as the
technology focus of a project, ranging from an office suite to a web
server. Joining Apache involves an incubation process to ensure the
project meets Apache's legal and community standards.
Ian Skerrett spoke about the Eclipse Foundation. Unlike the
first three organizations, Eclipse is a
501(c)(6)
organization, which is a US trade
association — its goal is to promote the Eclipse community. Eclipse
offers various services, including infrastructure, IP management, and
community development. It has a lawyer and two paralegals on staff
— Eclipse puts a lot of focus on IP management, scanning all code
that comes into its repositories to ensure license compatibility and
copyright pedigree.
Skerrett clarified some misconceptions people often have about Eclipse.
First, Eclipse is technology neutral — while its focus used to be
on Java, it has a lot of projects in other languages these days,
including Lua and JavaScript. Second, Eclipse is forge neutral, having
embraced GitHub in addition to its own infrastructure "just last month".
Finally, Eclipse is flexible in terms of licensing. While the Eclipse
Public License (EPL) is the default, exceptions can be granted to use other
licenses.
Skerrett explained how Eclipse views success, as this influences the
projects it is interested in. In addition to large numbers of users and
contributions, Eclipse sees commercial adoption of its projects as a key
factor of success. Specifically, Eclipse is a great place for building
industry platforms.
Paula Hunter explained that the Outercurve
Foundation provides business operations, technical
services, and a legal structure for its projects. Outercurve is open to
many projects
— it is not tied to a particular license, technology base, or
development process. The only requirement in terms of the development
process is that the project needs to have one. Outercurve offers a neutral
place for people to collaborate and Hunter believes that neutrality
encourages contributions. She mentioned that the projects hosted by Outercurve
(which was originally started by Microsoft) now have almost 400 developers, and that
fewer than 40% of them are employed by Microsoft.
Hunter said that the key question for her is how Outercurve can help
projects be successful. In order to support projects, Outercurve maps out
services throughout the lifecycle of a project. This includes concept
stage, launch, building community, and adoption.
Jim Zemlin was the last speaker and he joked that the Linux
Foundation provides the "same things" as
the other organizations, "only better". Instead of running through the
service catalog of the Linux Foundation, Zemlin talked about the importance
of FOSS foundations. He discussed the role of standards bodies, like ISO,
in supporting collaborative standards development and noted that FOSS
foundations play a similar role for open source. They support a
collaborative development process, which is a "better, faster way to
innovate", according to Zemlin. Noting that people use Linux multiple times
every day and don't even know it, he said that it's the "coal and steel of
our time" — but "instead of being owned by the Carnegies, it's owned
by
us".
Discussion
The non-profit sessions at OSCON led to various interesting discussions.
One question that came up several times was about non-profit options in
Europe. There are various open source organizations in Europe, such as KDE
e.V., the Document
Foundation, and the OW2
Consortium. Unfortunately, it's difficult to
have a Europe-wide organization, since each country has different
non-profit structures. For example, Phipps mentioned the concept of a
Community Interest
Company in the UK
and suggested that it deserves further investigation.
Another takeaway is that it is important to consider
how well aligned the project is with the organization. When the Vert.x
project was looking for a home, several organizations offered. One such
organization was the Software Freedom Conservancy, but Kuhn (its Executive Director) openly admitted that Eclipse or Apache were a better fit due to
their stronger connections to the Java community.
Finally, it is important to remember that projects give up some control by
joining an umbrella organization. How much, and what kind, depends on the
particular organization they are joining. Projects interested in joining an
umbrella organization are therefore
advised to carefully evaluate their options.
Comments (3 posted)
Jolla's Vesa-Matti Hartikainen came to Akademy 2013 to talk about—and show
off—the new Jolla phone with Sailfish
OS. Like a number of related projects, Jolla (pronounced as
"Yo-la") was created in the wake of Nokia's move away from MeeGo. While
Sailfish OS is based on Linux and many other open source technologies, the
"user experience" (UX) layer is (currently) closed, but it's clear that
Jolla has
put a lot of thought into how to interact with a mobile phone. Whether
that translates to success in the marketplace remains to be seen, but
Hartikainen's demo was impressive.
Hartikainen began by noting that he is an engineer at Jolla, so he does "code—real
stuff". Some of the software he will be showing was written by him, and much of
the rest was written by his friends and colleagues. His talk was about
"Sailfish OS, open source, and Qt" which covers "who we are", "what we do",
and "how we do it", he said.
He switched to a bit of history, going back to the beginning of 2011, when
"Nokia was a coward" that "didn't believe in themselves", and killed its
own platforms. That was when Nokia switched to Windows Phone for its
phones. Several MeeGo people who were working on the N9 phone (which ran a MeeGo
derivative of sorts) recognized that the MeeGo platform was a good one and
was open source. They
wanted to continue working with MeeGo, so they started the process of
creating a company to do so. That company is Jolla.
Starting the company
In order to start a company, you need people, money, and technology, Hartikainen
said. Initially it looked like MeeGo would be the technology. Lots of
people were getting laid off from Nokia and other MeeGo contractors, which
would help fill the people requirement. Money took a bit longer. After a
year or so, though, there was enough money and confidence to announce the
company as a continuation of the legacies of the N9 and MeeGo.
At roughly the same time, Nokia had "another strategy change" and
decided to give up on Qt. It sold Qt to Digia and many of the Nokia Qt
employees also made the switch, but some did not, including teams
working on QML and Qt Mobility. So Jolla recruited some of those people
and was able to build a Qt team that way, he said.
Somewhere in all of that, Intel also gave up on MeeGo and moved on to Tizen
(which was not Qt-based), which resulted in the creation of Mer—a stripped-down version of
MeeGo. Another project is Nemo mobile, which
continues the MeeGo handset path. Nemo mobile provides packages for
applications like a dialer and for SMS messaging. Jolla uses Mer for the core
of its OS and Nemo mobile for some of the "middleware" applications.
But Jolla also needed something unique to offer, Hartikainen said. It was
determined that the UX would be the differentiator for the phone. Luckily,
there were a lot of talented designers available as well, due to the MeeGo
fallout. The folks at Jolla already knew many of those
designers, knew "they were easy to work with", and those designers could
create "innovative ideas" that the other companies were too afraid to try.
In June there was a public launch of the resulting "Jolla phone". It is a
"real thing", he said, and held one up to show.
Currently, the team is working on the "final stretch": last minute bugs,
optimizations, and small features, to get it ready to go into stores.
Demo
At that point, he asked a colleague to run a video camera while he
operated the phone. That allowed the audience to see the phone and the
UX as exercised by Hartikainen. He started with the "basic lockscreen", which has
notifications that can be accessed by pulling right, a "pulley menu" that is
accessed by pulling down, and the main screen, which is reached by pulling
up to "open the phone". Unlike Android's notifications, he said, you don't
need to pull all
the way from the
top; pulling (i.e. a one-finger swipe-like gesture) from anywhere on the
screen in the
proper direction will activate the feature.
The interface seems to be more gesture-oriented than that of other phones. As he
was showing different features, there were numerous different gestures
used, which may require users to learn more before they can get the most
out of the phone. For example, there is no back button, so swiping from
the left goes back. In addition, status items like time, power, network
status, and so on, are on their own screen, rather than lined up across the
top as in most phones. That screen can be "peeked" at by pushing partly
across the screen from the right. Once the finger is lifted, the display
returns to wherever it was.
Leaving the status off each screen is part of the design philosophy of
using the screen real estate for user or app content, rather than
controls. The "pulley menu" which can be pulled down from the top is another
example. It is application specific, with haptic and sound feedback for
each menu item that
makes it "almost" able to be used without looking, Hartikainen said. When it is
not needed, though, it takes up no screen real estate for a button or
control. One can also interact with running apps from the multitasking
screen (which shows thumbnails of each running app): rather than switching to
an app by tapping, you can use gestures on the thumbnail to perform certain
actions (e.g. switching songs).
There are various ways to personalize the device, as well. The "ambience"
feature allows you to choose a photo from the gallery as a background, and
the phone will switch its interface colors to match the "mood" of the
photo. Ambience will also be tied in with the idea of the "other half",
which is an intelligent back cover for the phone (aka "active covers").
Attaching different
covers will change the ambience of the phone, but it may also do more than
that. Hartikainen described a "party profile" that might be associated with a red
cover; when it is attached, perhaps work email and certain incoming phone
calls would be disabled along with
other changes that correspond to a party mood.
The covers have both data and power connections, so they could serve other
purposes as well. Additional battery power or a hardware keyboard are two
that were mentioned. The protocols and pinout information will be
available, so the hope is that other companies come up with their own innovative
ideas.
Hartikainen showed a few more features of the interface, including the event feed,
which is accessed by pulling up. It is similar to that of the N9, he said,
but you
can "+1", "Like", or comment on an event directly in the feed, there is no need
to open an app or web site. The app store is meant to be a "social store",
he said, with true recommendations from friends. But that is "hard to get
right".
More details
The device he showed is a prototype; the final device will be smaller, but
even now it is "so nice" to use, he said. That ended the demo, and he
moved into a
rundown of the specifications of the device: 4.5" display, dual-core
processor, "nice camera", user-replaceable battery, and so on. There is a
runtime for Android apps, so any that are absolutely required and only
available for that platform can still be run on the phone. Those apps will
provide an "Android experience", rather than the normal Jolla experience
(no ambience, pulley menu, or interaction with an active cover, for example).
To sum up the user interface, Hartikainen said, it is meant to be beautiful and
personal. It maximizes the screen space for user content and uses gestures
rather than tapping for control. It provides a modern
look, he said, but is "not boring".
Before his talk, everyone told him that he "needed technical details" in
it, he said, so he
turned to the architecture of Sailfish OS. The user interface layer is
Jolla-specific and is currently closed, but it will not remain that way
forever. Parts of the Sailfish Silica QML components were released
under a BSD license with the alpha SDK; the native (C++) code parts will
follow "soon". Silica is what is used internally as well;
there is "no secret magic", as everything uses the Silica API. The Jolla
team in Australia has been working on QML performance and has gotten it to
work "extremely well", even on older hardware.
On the "middleware side", there is Nemo, he said. It provides services
like Tracker for indexing multimedia, Maliit for virtual keyboards,
lipstick for building home screens, Grilo for multimedia, Gecko for the
web, and so on. Under that is Mer, which provides GStreamer, ConnMan,
Qt 5, Wayland, PulseAudio, and systemd "for startup". It is, he said,
a very normal Linux distribution, just with a "tight set of packages" so
that a small company like Jolla can maintain it.
The Sailfish SDK is based on Qt Creator and virtual machines: one as a
build engine
and the other as an emulator. The emulator runs a full x86 version of Sailfish
OS. Because of that, much of the development for the phone can be done
just using the SDK, he said.
Jolla is not just "sitting alone at home coding", but instead does a lot of
collaboration with other projects. The main projects Jolla works with are
Qt, Mer, and Nemo. Jolla started with Qt 4.8 and added some
hardware-specific code and optimizations, eventually backporting some
Qt 5 code to 4.8. Since then, it has moved on to Qt 5 and the
main focus right now is to "get Qt 5 fast as hell" on Wayland and ARM.
"We are a small company", Hartikainen said, so right now the intent is to ship
products before it does more ambitious things, such as helping out more on
Qt upstream. Jolla is the main contributor to the Mer project; "we love
it". It is also a "significant contributor" to Nemo, mostly in the
middleware layer (as Jolla does not use the Nemo UX).
Jolla has two offices, in Tampere and Helsinki,
Finland, and a "bunch of people
working from home" in various locations around the world.
There is "light
process" in the company and it takes an iterative approach to solving
problems, especially in the user interface design. Most importantly, though,
it is not afraid of change, he said. Everything is set up to try new
things quickly. It is an "open source way of working", he said; without
that and open source software itself, "we wouldn't be here". Pull
requests, patch reviews, IRC, and other distributed working environment
techniques have made it all work for Jolla.
[Thanks to KDE e.V. for travel assistance to Bilbao for Akademy.]
Comments (18 posted)
Page editor: Jonathan Corbet
Security
By Nathan Willis
July 31, 2013
Firefox users have been able to synchronize various browser
features between multiple desktops and mobile devices for several
years: bookmarks, history, preferences, and even installed add-ons.
But that synchronization comes with a risk: the data stored remotely
on the synchronization server must be protected—some of it has
privacy implications, while other data (such as saved passwords) poses
much greater problems if stolen. Mozilla only stores encrypted data
on the server, but it is still working to make improvements. It
recently began to publicize its
plans for the future of the Firefox Sync service, called "Profile in
the Cloud" (PiCL). The plan calls for a number of changes to the
security architecture. One is a move away from
requiring full-strength cryptographic keys, while another is the
ability to separate high-value and low-value data for separate classes
of security.
Mozilla developer Brian Warner posted a blog entry
about PiCL on July 23. The architecture document on the Mozilla wiki
provides an overview of the revised system, which will evidently add
to the data types currently synchronized by Firefox Sync (including
WebRTC bridging providers, social API preferences, and file storage).
Warner's post, however, focuses on the security model.
The biggest user-visible change is likely to be the dropping of
Firefox Sync's existing credentials (a username, email address, and
separate encryption key) in favor of a simpler
email-address–plus–password approach. But the real magic takes place
behind the scenes: PiCL will offer multiple security levels and will
better protect the cryptographic key that protects user data, but
users will only need to remember their chosen password.
Currently, Firefox Sync randomly
generates a full-strength cryptographic key on the browser, encrypts
user data with it, and uploads the encrypted file to the Sync server.
The key itself is stored on the user's computer, not the server. In addition, users
also set up an account on the Firefox Sync server, using an email
address and a user-selected password. This email/password combination
is only used to completely reset an account if the key is lost.
Building levels
The revised plan as described by Warner rearranges the pieces
somewhat. Users will still set up an account using their email
address and a selected password, although the account setup system
will make use of Mozilla's Persona, which did not exist
when the current Firefox Sync system was rolled out. But, more
importantly, the email/password pair will be used to derive encryption
keys. Yes, there are keys, plural: for starters, one (known
as kA) corresponds to the "class A" or low-value data set and one (kB)
corresponds to the "class B" or high-value data set.
An important facet of the new
design is that it offers users the choice between two storage options: class A data can
be recovered by Mozilla, while class B data cannot. Users can choose
which synchronization data is assigned to which class; Mozilla may
default to saving passwords as class B and everything else as class A,
but that decision is not yet final. Users can also change their minds
after the initial setup, and move data from one class to another. In
addition to this feature, the
plan also takes steps to make brute-force attacks against account
passwords as expensive as possible, even in the event of compromised
servers at Mozilla.
The higher-security kB key is created client side by the
browser, which then sends a verification code—but not kB
itself—to the PiCL server. The server, in turn, creates kA when
it creates the user's account, which is what allows class A data to be
recovered in the event that the user forgets his or her password.
Warner describes the key-generation and account-setup process on the
Mozilla wiki. Among the salient points is that kB, even though it is
derived from a user-selected password, must be strengthened as much as
possible to make it resistant to brute-force guessing. PiCL does this
by salting and then stretching the email/password pair on the client side.
Warner cites Password-Based Key Derivation Function 2 (PBKDF2, from RFC 2898), which is
computationally expensive, and scrypt,
which is memory-expensive, as the stretching techniques. There are
some initial parameters for the stretching algorithms under discussion,
but they do not appear to be final as of now.
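As a rough illustration of what that client-side stretching could look like, here is a hedged sketch (in C, using OpenSSL 1.1.0 or later for EVP_PBE_scrypt()) that runs the password through PBKDF2-HMAC-SHA256 and then scrypt, with the email address used as the salt. The iteration count and the scrypt N/r/p values are placeholder assumptions for illustration, not the parameters PiCL will actually adopt.
    /*
     * Hedged sketch of PiCL-style credential stretching: PBKDF2 followed
     * by scrypt, salted with the email address.  Parameter values are
     * assumptions for illustration; requires OpenSSL >= 1.1.0 for
     * EVP_PBE_scrypt().  Returns 1 on success, 0 on failure.
     */
    #include <string.h>
    #include <openssl/evp.h>

    int stretch_credentials(const char *email, const char *password,
                            unsigned char out[32])
    {
        unsigned char pbkdf2_out[32];

        /* Computationally expensive pass: PBKDF2-HMAC-SHA256.  The email
           address serves as the salt so that identical passwords on
           different accounts stretch to different values. */
        if (!PKCS5_PBKDF2_HMAC(password, strlen(password),
                               (const unsigned char *)email, strlen(email),
                               20000 /* assumed iteration count */,
                               EVP_sha256(), sizeof(pbkdf2_out), pbkdf2_out))
            return 0;

        /* Memory-expensive pass: scrypt over the PBKDF2 output.  N, r,
           and p are assumed example parameters. */
        if (!EVP_PBE_scrypt((const char *)pbkdf2_out, sizeof(pbkdf2_out),
                            (const unsigned char *)email, strlen(email),
                            16384, 8, 1, 0 /* default maxmem */, out, 32))
            return 0;

        return 1;
    }
Whatever the final composition and ordering of the two functions turns out to be, the point is that the expensive work happens on the client, before anything derived from the password is sent to the server.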
Naturally, even after stretching, kB will not be as random as the
cryptographically strong key currently used by Firefox Sync, but the
overall PiCL system takes more effort to protect kB against discovery.
For comparison, the Firefox Sync system relays the user's key to each synchronized
client using the J-PAKE protocol, meaning it is stored locally on
multiple devices as well as occasionally being sent across the
network. It is not sent in the clear, of course, but it is sent, so
there are always possible attack vectors. In contrast, kB does not leave the local machine, and is
never stored on any device.
In a synchronization session, the client proves to the server that
it possesses kB using the Secure
Remote Password (SRP)
protocol. The client prompts the user for the
email/password combination, derives kB, then calculates an SRP verifier
code to send to the server. The server can then send encrypted class
B data back to the client to use or modify as desired.
The situation for class A data is much simpler, as the server
generates and possesses
kA. The current plan is to use kA and kB to derive several
distinct AES and HMAC encryption keys (using HKDF, the HMAC-based
Key Derivation Function), one for each type of data stored (e.g.,
passwords or preferences). Whether kA or kB is used to create the
per-datatype key
depends on the class in which the user wishes to save the data.
Encrypting each type (e.g., bookmarks, passwords, or history)
separately permits the user to have a change of heart about whether a
particular data type should be saved in class A or class B.
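The HKDF step itself is small. Below is a hypothetical sketch of deriving one 32-byte per-datatype key from a 32-byte master key (kA or kB) with HKDF-SHA256 (RFC 5869), built on OpenSSL's one-shot HMAC() function; the "picl/bookmarks"-style info labels are invented for illustration and are not names defined by the PiCL documents.
    /*
     * Hypothetical sketch: derive a single 32-byte per-datatype key from
     * a 32-byte master key (kA or kB) using HKDF-SHA256 (RFC 5869).  The
     * info label (e.g. "picl/bookmarks") is an assumption, not a name
     * taken from the PiCL design.  A single expand block is enough
     * because 32 bytes fit in one SHA-256 output.
     */
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/sha.h>

    static void derive_datatype_key(const unsigned char master[32],
                                    const char *info, unsigned char out[32])
    {
        unsigned char salt[SHA256_DIGEST_LENGTH] = { 0 };
        unsigned char prk[SHA256_DIGEST_LENGTH];
        unsigned char t1[SHA256_DIGEST_LENGTH];
        unsigned char block[128];
        size_t infolen = strlen(info);  /* assumed shorter than 127 bytes */
        unsigned int len;

        /* Extract: PRK = HMAC-SHA256(salt = zeros, IKM = master key) */
        HMAC(EVP_sha256(), salt, sizeof(salt), master, 32, prk, &len);

        /* Expand, first block: T(1) = HMAC-SHA256(PRK, info || 0x01) */
        memcpy(block, info, infolen);
        block[infolen] = 0x01;
        HMAC(EVP_sha256(), prk, sizeof(prk), block, infolen + 1, t1, &len);

        memcpy(out, t1, 32);
    }
With a structure like that, moving a data type from one class to the other amounts to decrypting it with a key derived from one master key and re-encrypting it with the corresponding key derived from the other.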
Architecture and trade-offs
PiCL also differs from Firefox Sync in that it uses two separate
servers: a keyserver, which stores kA and the kB verification code for
each account, and a storage server, which retains the actual encrypted
data. The benefit is that both the keyserver and the storage server
must be compromised for an attacker to gain access to any data, and
if that happened, only the class A data would be revealed. This, Warner notes,
makes class A data no more vulnerable than any "service provider holds
everything" scenario, while also offering a higher level of protection
with class B. Another benefit is that an attacker compromising the
storage server alone would not be able to steal any keys.
Warner also points out the known vulnerabilities of the system. A
user's email provider, for instance, can intercept the account-setup
and account-reset emails, and obtain access to class A data. For that
matter, the email provider can fake the entire password-reset process
and hide that fact from the user. As for
class B data, even though kB is never sent over the network (nor are
the password and salt that generate it), the keyserver does retain a
copy of the kB verifier code, so an attacker could attempt to guess
the password by brute force. Furthermore, an attacker that compromises the
keyserver can perform this attack offline. The password-stretching
process is meant to make this attack as expensive as possible, but
ultimately if you choose a guessable password you will make the
attacker's life easier.
PiCL is still under heavy development; Warner asked for feedback on
the keyserver
protocol in his blog post. But there are already several features
of note. First, the ability to separate high-value and low-value data
will likely garner fans, especially among those who have lost an
irrecoverable Firefox Sync password previously. Interestingly enough,
some earlier draft documents on the wiki discuss even more security
levels, so Mozilla clearly sees this as a feature desired by its
users.
Second, separating the keyservers and storage servers on
Mozilla's end does offer some additional protections, although it does
so at the cost of additional complexity. Mozilla can presumably
afford to manage this complexity, but one of the nicest features of
Firefox Sync is that it is possible to run a private
synchronization server. It will be harder for self-hosting users to
take advantage of the split-server model. Finally, it is an
interesting question (probably open for lengthy debate) whether or not
the stretched-password kB key in PiCL is more secure than the random
key in Firefox Sync. One is more random and thus harder to guess, but
it is also transmitted over the network and stored. Since users
cannot memorize the random Firefox Sync key, it must be
stored—but that makes it more vulnerable to theft. The one thing
everyone will agree on is that the two systems illustrate the
classic security/convenience trade-off.
The name "Profile in the Cloud" certainly suggests the ability to
synchronize lots and lots more data, and "the cloud" is not exactly
synonymous with secure storage of private data. Thus, it will be
interesting to watch where PiCL heads next. Perhaps simply forcing users
to actively think about security "classes" will, if nothing else,
raise awareness of the risks of storing personal information remotely.
Comments (4 posted)
Brief items
The two [Jeremiah Grossman and Matt
Johansen] discovered that even reputable ad networks do a poor job of vetting
the JavaScript that is bundled with ad images. "As long as it looks
pretty, they have no problem with it," Johansen said. "The folks we were
dealing with (at the ad networks) didn't really have the javascript reading
skills to know the difference anyway."
—
ITworld
reports on a Black Hat security conference presentation
In the [Bradley] Manning case, the prosecution used Manning's use of a standard, over
15-year-old Unix program called Wget to collect information, as if it were
a dark and nefarious technique. Of course, anyone who has ever called up
this utility on a Unix machine, which at this point is likely millions of
ordinary Americans, knows that this program is no more scary or spectacular
(and far less powerful) than a simple Google search. Yet the court
apparently didn't know this and seemed swayed by it.
We've seen this trick before. In a case EFF handled in 2009, Boston College police used the fact that our client worked on a Linux operating system with "a black screen with white font" as part of a basis for a search warrant. Luckily the Massachusetts Supreme Court tossed out the warrant after EFF got involved, but who knows what would have happened had we not been there.
—
Cindy
Cohn of the Electronic Frontier Foundation (EFF)
What would a spoofing attack look like in practice? Suppose the spoofer's
goal is to run the target vessel aground on a shallow underwater
hazard. After taking control of the ship's GPS unit, the spoofer induces a
false trajectory that slowly deviates from the ship's desired
trajectory. As cross-track error accumulates, the ship's autopilot or
officer of the watch maneuvers the ship back into apparent alignment with
the desired trajectory. In reality, however, the ship is now off
course. After several such maneuvers, the spoofer has forced the ship onto
a parallel track hundreds of meters from its intended one. Now as the ship
moves into shallow waters, the ECDIS display and the down-looking depth
sounder may indicate plenty of clearance under the keel when in truth a
dangerous shoal lies just underwater dead ahead. Maybe the officer of the
watch will notice the strange offset between the radar overlay and the
underlying electronic charts. Maybe, thinking quickly, he will reason that
the radar data are more trustworthy than the ship's GPS-derived position
icon displayed on the ECDIS. And maybe he will have the presence of mind to
deduce the ship's true location from the radar data, recognize the looming
danger, and swing clear of the shoal to avert disaster. Or maybe not.
—
Todd
Humphreys on GPS spoofing as reported by
ars technica
To call Prime Minister Cameron a "clown" at all might reasonably be taken by some as an affront to clowns and jesters reaching back through history. Because Cameron's style of clowning is far more akin to the nightmarish, sneering "clowns" of "B" horror movies, not the bringers of entertainment under the big top.
Cameron, through a series of inane and grandstanding statements and pronouncements both deeply technically clueless and shamelessly politically motivated, has been channeling Napoleon by placing the clown prince crown on his own head.
Laughing at his antics would be a terrible mistake. For his wet dream of
Internet censorship poses an enormous risk not only to the UK, but to other
nations around the world who might seek comfort in his idiocy for their own
censorship regimes (already, calls have been made in Canada to emulate
Cameron's proposed model).
—
Lauren Weinstein
Comments (10 posted)
On his blog, Tim Janik
reports on his efforts to run a
Tor exit node. Unfortunately, he was shut down quickly—because of the terms of service of his server provider.
"
It turned out the notice had a twist to it. It was actually my virtual server provider who sent that notice on behalf of a complaining party and argued that I was in violation of their general terms and conditions for purchasing hosting services. Checking those, the conditions read:
"Use of the server to provide anonymity services is excluded."
Regardless of the TMG [German telecommunications law], I was in violation of the hosting provider’s terms and conditions which allowed premature termination of the hosting contract. At that point I had no choice but stopping the Tor services on this hosting instance."
Comments (24 posted)
Canonical has
announced
the return of the Ubuntu forums to normal service; there is also a detailed
description of how the system was compromised. "
In summary, the root
cause was a combination of a compromised individual account and the
configuration settings in vBulletin, the Forums application software.
There was no compromise of Ubuntu itself, or any other Canonical or Ubuntu
services. We have repaired and hardened the Ubuntu Forums, and as the
problematic settings are the default behaviour in vBulletin, we are working
with vBulletin staff to change and/or better document these
settings." It all started with a cross-site scripting attack.
Comments (2 posted)
Wired has a
report on Google's
response [PDF] to Douglas McClendon's complaint about "servers" being prohibited on Google Fiber. The response, which the US Federal Communications Commission (FCC) ordered Google to file, argues that Google Fiber can prohibit servers, noting in part that other providers' terms of service do likewise. But that flies in the face of the company's earlier espousal of network neutrality. "
But, it turns out that Google's real net neutrality policy is that big corporate services like YouTube and Facebook shouldn't get throttled or banned by evil ISPs like Verizon, but it's perfectly fine for Google to control what devices citizens can use in their homes.
We, it seems, are supposed to be good consumers of cloud services, not hosting our own Freedom Boxes, media servers, small-scale commercial services or e-mail servers."
Comments (58 posted)
New vulnerabilities
389-ds-base: information disclosure
Package(s): 389-ds-base
CVE #(s): CVE-2013-2219
Created: July 31, 2013
Updated: July 31, 2013
Description:
From the Red Hat advisory:
It was discovered that the 389 Directory Server did not honor defined
attribute access controls when evaluating search filter expressions. A
remote attacker (with permission to query the Directory Server) could use
this flaw to determine the values of restricted attributes via a series of
search queries with filter conditions that used restricted attributes.
Alerts:
Comments (none posted)
bind9: denial of service
Package(s): bind9
CVE #(s): CVE-2013-4854
Created: July 29, 2013
Updated: August 19, 2013
Description:
From the CVE entry:
The RFC 5011 implementation in rdata.c in ISC BIND 9.7.x and 9.8.x before 9.8.5-P2, 9.8.6b1, 9.9.x before 9.9.3-P2, and 9.9.4b1, and DNSco BIND 9.9.3-S1 before 9.9.3-S1-P1 and 9.9.4-S1b1, allows remote attackers to cause a denial of service (assertion failure and named daemon exit) via a query with a malformed RDATA section that is not properly handled during construction of a log message, as exploited in the wild in July 2013.
Alerts:
Comments (none posted)
fdupes: file permission overwrite
Package(s): fdupes
CVE #(s):
Created: July 31, 2013
Updated: July 31, 2013
Description:
From the Red Hat bugzilla:
A SUSE bug report noted a problem with how fdupes is used in the %fdupes RPM macro. When there are two files with identical content that differs in owner/group/permissions, the %fdupes macro overwrites one of the files with a link that effectively gives both files the same owner/group/permissions. If one of the files has tighter permissions than the other, this could result in one of the files having more relaxed permissions than appropriate.
Alerts:
Comments (none posted)
gnupg: information leak
Package(s): gnupg
CVE #(s): CVE-2013-4242
Created: July 30, 2013
Updated: August 15, 2013
Description:
From the Debian advisory:
Yarom and Falkner discovered that RSA secret keys could be leaked via
a side channel attack, where a malicious local user could obtain private
key information from another user on the system.
Alerts:
Comments (none posted)
java-1_6_0-ibm: multiple unspecified vulnerabilities
Package(s): java-1_6_0-ibm
CVE #(s): CVE-2013-3009, CVE-2013-3011, CVE-2013-3012, CVE-2013-4002
Created: July 26, 2013
Updated: July 31, 2013
Description:
From the SUSE bug reports:
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 1.4.2 before 1.4.2 SR13-FP18, 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3011 and CVE-2013-3012. (CVE-2013-3009)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 1.4.2 before 1.4.2 SR13-FP18, 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3009 and CVE-2013-3012. (CVE-2013-3011)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 1.4.2 before 1.4.2 SR13-FP18, 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3009 and CVE-2013-3011. (CVE-2013-3012)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 allows remote attackers to affect availability via unknown vectors. (CVE-2013-4002)
Alerts:
Comments (none posted)
java-1_7_0-ibm: multiple unspecified vulnerabilities
Package(s): java-1_7_0-ibm
CVE #(s): CVE-2013-3006, CVE-2013-3007, CVE-2013-3008, CVE-2013-3010
Created: July 26, 2013
Updated: July 31, 2013
Description:
From the SUSE bug reports:
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3008. (CVE-2013-3006)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 6.0.1 before 6.0.1 SR6 and 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3006. (CVE-2013-3007)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3006. (CVE-2013-3008)
Unspecified vulnerability in the Java Runtime Environment (JRE) in IBM Java 6.0.1 before 6.0.1 SR6 and 7 before 7 SR5 allows remote attackers to affect confidentiality, availability, and integrity via unknown vectors, a different vulnerability than CVE-2013-3007. (CVE-2013-3010)
Alerts:
Comments (none posted)
lcms2: denial of service
Package(s): lcms2
CVE #(s): CVE-2013-4160
Created: July 30, 2013
Updated: August 12, 2013
Description:
From the Ubuntu advisory:
It was discovered that Little CMS did not properly verify certain memory
allocations. If a user or automated system using Little CMS were tricked
into opening a specially crafted file, an attacker could cause Little CMS
to crash.
Alerts:
Comments (none posted)
mysql: multiple vulnerabilities
Package(s): mysql-5.5, mysql-dfsg-5.1
CVE #(s): CVE-2013-2162, CVE-2013-3783, CVE-2013-3793, CVE-2013-3809, CVE-2013-3812
Created: July 26, 2013
Updated: August 14, 2013
Description:
From the Debian and Ubuntu bug reports:
CVE-2013-2162: The file "/etc/mysql/debian.cnf", which contains plain text credentials
for the "debian-sys-maint" mysql user, is created in an insecure manner
during the package installation phase. This can lead a non-privileged
local user to disclose its content and use this special account to
perform administration tasks.
CVE-2013-3783: Unspecified vulnerability in the MySQL Server component in Oracle MySQL
5.5.31 and earlier allows remote authenticated users to affect availability
via unknown vectors related to Server Parser.
CVE-2013-3793: Unspecified vulnerability in the MySQL Server component in Oracle MySQL
5.5.31 and earlier and 5.6.11 and earlier allows remote authenticated users
to affect availability via unknown vectors related to Data Manipulation
Language.
CVE-2013-3809: Unspecified vulnerability in the MySQL Server component in Oracle MySQL
5.5.31 and earlier and 5.6.11 and earlier allows remote authenticated users
to affect integrity via unknown vectors related to Audit Log.
CVE-2013-3812: Unspecified vulnerability in the MySQL Server component in Oracle MySQL
5.5.31 and earlier and 5.6.11 and earlier allows remote authenticated users
to affect availability via unknown vectors related to Server Replication.
Alerts:
Comments (none posted)
openafs: two encryption flaws
Package(s): openafs
CVE #(s): CVE-2013-4134, CVE-2013-4135
Created: July 25, 2013
Updated: July 31, 2013
Description:
From the Scientific Linux advisory:
OpenAFS uses Kerberos tickets to secure network traffic. For historical
reasons, it has only supported the DES encryption algorithm to encrypt these
tickets. The weakness of DES's 56 bit key space has long been known, however
it has recently become possible to use that weakness to cheaply (around $100)
and rapidly (approximately 23 hours) compromise a service's long term key. An
attacker must first obtain a ticket for the cell. They may then use a brute
force attack to compromise the cell's private service key. Once an attacker
has gained access to the service key, they can use this to impersonate any
user within the cell, including the super user, giving them access to all
administrative capabilities as well as all user data. Recovering the service
key from a DES encrypted ticket is an issue for any Kerberos service still
using DES (and especially so for realms which still have DES keys on their
ticket granting ticket). (CVE-2013-4134)
The -encrypt option to the 'vos' volume management command should cause it to
encrypt all data between client and server. However, in versions of OpenAFS
later than 1.6.0, it has no effect, and data is transmitted with integrity
protection only. In all versions of OpenAFS, vos -encrypt has no effect when
combined with the -localauth option. (CVE-2013-4135)
Alerts:
Comments (none posted)
phpmyadmin: multiple vulnerabilities
Package(s): phpmyadmin
CVE #(s): CVE-2013-4995, CVE-2013-4996, CVE-2013-4998, CVE-2013-5000, CVE-2013-5002, CVE-2013-5003
Created: July 30, 2013
Updated: July 31, 2013
Description:
From the Mandriva advisory:
* XSS due to unescaped HTML Output when executing a SQL query
(CVE-2013-4995).
* 5 XSS vulnerabilities in setup, chart display, process list, and
logo link. If a crafted version.json would be presented, an XSS could
be introduced (CVE-2013-4996).
* Full path disclosure vulnerabilities (CVE-2013-4998, CVE-2013-5000).
* Self-XSS due to unescaped HTML output in schema export
(CVE-2013-5002).
* SQL injection vulnerabilities, producing a privilege escalation
(control user) (CVE-2013-5003).
Alerts:
Comments (none posted)
rubygem-passenger: insecure temporary directory usage
Package(s): rubygem-passenger
CVE #(s): CVE-2013-4136
Created: July 31, 2013
Updated: August 23, 2013
Description:
From the Red Hat bugzilla:
It was reported [1],[2] that Phusion Passenger would reuse existing server instance directories (temporary directories) which could cause Passenger to remove or overwrite files belonging to other instances. This has been corrected in upstream version 4.0.8 via two fixes (the initial fix and a regression fix; both are required to fully fix the issue). This is an issue similar to CVE-2013-2119.
Alerts:
Comments (none posted)
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2013-4927, CVE-2013-4929, CVE-2013-4930, CVE-2013-4931, CVE-2013-4932, CVE-2013-4933, CVE-2013-4934, CVE-2013-4935
Created: July 29, 2013
Updated: September 30, 2013
Description:
From the Mageia advisory:
The Bluetooth SDP dissector could go into a large loop (CVE-2013-4927).
The DIS dissector could go into a large loop (CVE-2013-4929).
The DVB-CI dissector could crash (CVE-2013-4930).
The GSM RR dissector (and possibly others) could go into a large loop (CVE-2013-4931).
The GSM A Common dissector could crash (CVE-2013-4932).
The Netmon file parser could crash (CVE-2013-4933, CVE-2013-4934).
The ASN.1 PER dissector could crash (CVE-2013-4935).
Alerts:
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.11-rc3,
released on July 28. "
Anyway,
remember how I asked people to test the backlight changes in rc2 because
things like that have really bad track records? Yup. That all got
reverted. It fixed things for some people, but regressed for others, and we
don't do that 'one step forward, two steps back' thing. But never fear, we
have top people looking at it."
Stable updates: 3.10.3 was released
on July 25; it was followed by 3.10.4,
3.4.55, and 3.0.88 on July 28. The 3.2.49 release came out on July 27.
3.2.50 is in the review process as of this
writing; it can be expected sometime after August 2.
Comments (none posted)
I'm not going to add new race conditions today that I need to fix
up tomorrow, my patch count is high enough as it is.
—
Greg Kroah-Hartman
If you have gotten to the point where you have to make
this decision you should probably call it a work day, go
home, have a nice drink and spend some time with a loved
one. In the morning take a good hard look at your network
configuration. You may end up with a different security
policies being enforced with IPv4 and IPv6 communications.
—
Casey Schaufler
We of course bit more than we could chew and it started pouring so
my car is sitting on +Johannes Weiner's drive way with two belts
undone, a wheel bolt broken and an engine mount bolt missing
somewhere in the maze of pulleys.
—
Tejun
Heo, whose kernel work must certainly be in better shape
Comments (2 posted)
The 0.87
release of the multipath TCP patch set is available. Improvements
include better hardware offload support, zero-copy sendfile/splice support,
working NFS support, better middlebox handling, and more. See
this article for an overview of multipath TCP.
Comments (3 posted)
Kernel development news
By Jonathan Corbet
July 30, 2013
Writing device drivers can be a painful process; hardware has a tendency to
behave in ways other than those described in the documentation. The job
can be even harder, though, in the absence of the hardware itself.
Developing a complete driver without the hardware can require a simulator
built into a tool like QEMU — a development project in its own right. For
simpler situations, though, it may be enough to fool the driver about the
contents of a few device registers. Rui Wang's recently posted
I/O hook patch set aims to make that
functionality available.
The I/O hook module works by overriding the normal functions used to access
I/O memory, I/O ports, and the PCI configuration space. When kernel code
calls one of those functions, the new version will check to see whether
an override has been configured for the address/port of interest; if so,
the operation will be redirected. In the absence of an explicit override,
the I/O operation will proceed as usual. Needless to say, adding this kind
of overhead to every I/O operation executed by the kernel could slow things
down significantly. In an attempt to minimize the impact, the static key mechanism is used to patch the
kernel at run time. So the I/O hooks will
not run unless they are in active use at the time.
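The patch's exact implementation is not reproduced here, but the general shape of a
static-key-guarded fast path is straightforward; the sketch below is a generic
illustration of that technique (the function and key names are invented, not taken
from the patch set):
    #include <linux/jump_label.h>
    #include <linux/io.h>

    /* The key starts out false, so the override branch is patched out. */
    static struct static_key ovrd_active = STATIC_KEY_INIT_FALSE;

    /* Hypothetical slow path that consults the list of overrides. */
    u32 ovrd_check_and_read(const volatile void __iomem *addr);

    static inline u32 hooked_readl(const volatile void __iomem *addr)
    {
            /* Nearly free when no overrides are armed. */
            if (static_key_false(&ovrd_active))
                    return ovrd_check_and_read(addr);
            return readl(addr);
    }

    /* Enabling or disabling the mechanism patches the branch at run time. */
    static void ovrd_enable(void)  { static_key_slow_inc(&ovrd_active); }
    static void ovrd_disable(void) { static_key_slow_dec(&ovrd_active); }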
There is an in-kernel interface that can be used to set up register
overrides; it is a simple matter of calling:
    void hook_add_ovrd(int spaceid, u64 address, u64 value, u64 mask,
                       u32 length, u8 attrib);
Here, spaceid is one of OVRD_SPACE_MEM for regular I/O
memory, OVRD_SPACE_IO for an I/O port, or
OVRD_SPACE_PCICONF for the PCI configuration space. The
combination of address, mask, and length
describe the range of addresses to be overridden, while value is
the initial value to be set in the overridden space. By using the mask
value it is possible to override a
space as narrow as a single bit. The attrib parameter describes
how the space is to behave: OVRD_RW for a normal read/write
register, OVRD_RO for read-only, OVRD_RC for a register
whose bits are cleared on being read, or OVRD_WC to clear bits on
a write.
There are two functions, hook_start_ovrd() and
hook_stop_ovrd(), that are used to turn the mechanism on and off.
Any number of overrides can be set up prior to turning the whole thing on,
so a complicated set of virtual registers can be configured. It's worth
noting, though, that the overrides are stored internally in a simple linked
list, suggesting that the number of overrides is expected to be relatively
small.
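As a concrete (and purely hypothetical) illustration of that interface, a test
might pin a 32-bit memory-mapped status register to a known value while the code
under test runs. The address and value below are invented; the constant and
function names follow the description in the patch posting and may differ in
later revisions:
    static void test_with_fake_status_register(void)
    {
            /* Make the 32-bit register at (made-up) physical address
               0xfed00040 read back as 0x1, read-only. */
            hook_add_ovrd(OVRD_SPACE_MEM, 0xfed00040, 0x1, 0xffffffff, 4, OVRD_RO);

            hook_start_ovrd();
            /* ... exercise the driver; reads of that register now see 0x1 ... */
            hook_stop_ovrd();
    }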
While the in-kernel interface may be useful, it will probably be more common
to control this facility through the debugfs interface. The module
provides a set of files through which overrides can be set up; see the documentation file for details on the
syntax. The debugfs interface also provides a mechanism by which a
simulated interrupt can be delivered to the driver; if an interrupt number
is given to the system (by writing it to the appropriate debugfs file),
that interrupt will be triggered once the overrides
are enabled.
A system like this clearly cannot be used to emulate anything other than
the simplest of devices. A real device has a long list of registers and,
importantly, the contents of those registers will change as the device
performs the operations requested of it. One could imagine enhancing this
module with an interface by which a user-space process could supply
register values on demand, but there comes a point where it is probably
better just to add a virtual device to an emulator like QEMU.
So where, then, does a tool like this fit in? The use cases provided with
the patch posting mostly have to do with the testing of hotplug operations
on hardware without hotplug support. A hotplug event typically involves an
interrupt and a relatively small number of registers; by overriding just
those registers, the I/O hook mechanism can convince a driver that its
hardware just went away (or came back). That allows testing the hotplug
paths without needing to have suitably capable hardware.
Similarly, overrides can be used to test error paths by injecting various
types of errors into the system. Error paths are often difficult to
exercise; there are almost certainly large numbers of error paths in the
kernel that have never been executed. Code that has never run has a
higher-than-average chance of containing bugs. The fault injection framework can be used to test
a wide range of error paths, but it is not comprehensive; the I/O hook
module could be useful to fill in the gaps.
But, then, anecdotal evidence suggests that relatively few developers even
use the fault injection facilities, so uptake of a more complex mechanism
may be limited. But, for those who use it, the I/O hook subsystem might
well prove to be a useful addition to the debugging toolbox.
Comments (3 posted)
By Jonathan Corbet
July 31, 2013
Transparent compression is often found on the desired feature list for new
filesystems; compressing data on the fly allows the system to make better
use of both storage space and I/O bandwidth, at the cost of some extra CPU
time. The "transparent" in the name indicates that user space need not be
aware that the data is compressed, making the feature easy to use.
Thus, filesystems like btrfs support transparent compression, while Tux3
has
a draft design toward that end. A
recent proposal to add compression support to ext4, however, takes a bit of
a different approach. The idea may run into trouble on its way into a mainline
kernel, but it is indicative of how some developers are trying to get
better performance out of the system.
Dhaval Giani's patch does not implement
transparent compression; instead, the feature is transparent
decompression. With this feature, the kernel will allow an
application to read a file that has been compressed without needing to know
about that
compression; the kernel will handle the process of decompressing the data
in the background. The creation of the compressed file is not transparent,
though; that must be done in user space. Once the file has been
created and marked as compressed (using chattr), it cannot be
changed, only deleted and replaced. So this feature
enables the transparent use of read-only compressed files, but only after
somebody has taken the time to set those files up specially.
This feature is aimed at a rather narrow use case: enabling Firefox to
launch more quickly. Desktop users will (as Taras Glek notes) benefit from this feature, but the
target users are on Android. Such systems tend to have relatively slow
storage devices — slow enough that compressing the various shared objects
that make up the Firefox executable and taking the time
to decompress them on the CPU is a net win. Decompression at startup time
slows things down, but it is still faster than reading the uncompressed
data from a slow drive. Firefox currently uses its own
custom dynamic linker to load compressed libraries (such as libxul.so)
during startup. Moving the decompression code into the filesystem would
allow the Firefox developers to dispense with their custom linker.
Dhaval's implementation has a few little problems that could get in the way
of merging. Decompression must happen in a single step into a single
buffer, so the application must read the entire file in a single
read() call; that makes the feature a bit less than fully
transparent. Mapping compressed files into memory with mmap() is
not supported. The "szip" compression format is hardwired into the
implementation. A new member is added to the file_operations
structure to read compressed files. And so on. These shortcomings are
understood and acknowledged from the outset; Dhaval's main purpose in
posting the code at this time was to get feedback on the general design.
He plans to fix these issues in subsequent versions of the patch.
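To make the single-read() restriction concrete, a hypothetical user-space
consumer of such a file might look like the sketch below. Note that the patch
posting does not say how an application would learn the uncompressed size in
advance, so that value is simply assumed here:
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Assumed to be known to the application; how it would actually be
       discovered is not specified in the patch posting. */
    #define UNCOMPRESSED_SIZE (16 * 1024 * 1024)

    /* Fetch a transparently-decompressed file in one read() call, since
       the posted patch supports neither partial reads nor mmap().
       Returns the number of bytes read, or -1 on error. */
    static ssize_t read_whole_file(const char *path, char **out)
    {
            char *buf = malloc(UNCOMPRESSED_SIZE);
            ssize_t n = -1;
            int fd;

            if (!buf)
                    return -1;

            fd = open(path, O_RDONLY);
            if (fd >= 0) {
                    n = read(fd, buf, UNCOMPRESSED_SIZE);
                    close(fd);
            }

            if (n < 0) {
                    free(buf);
                    return -1;
            }

            *out = buf;
            return n;
    }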
But fixing all of those problems will not help if the core filesystem
maintainers (who have, thus far, remained silent) object to the
intent of the patch. A normal expectation when dealing with filesystems
is that data written with write() will look the same when
retrieved by a subsequent read() call. The transparent
decompression patch violates that assumption by having the kernel interpret
and modify the data written to disk — something the kernel normally tries
hard not to do.
Having the kernel interpret the data stream could perhaps be countenanced
if there were a compelling reason to add this functionality to the kernel.
But, if such a reason exists, it was not presented with the patch set.
Firefox has already solved this problem with its own dynamic linker; that
solution lives entirely in user space. A fundamental rule of kernel design
is that work should not be done in the kernel if it can be done equally
well in user space; that suggests that an in-kernel implementation of file
decompression would have to be somehow better than what Firefox is using
now. Perhaps an in-kernel implementation is better, but that case
has not yet been made.
The end result is that Dhaval's patch is unlikely to receive serious
consideration at this point. Before kernel developers look at the details
of a patch, they usually want to know why the patch exists in the first
place — how does that patch make the system better than before? That "why"
is not yet clear, so the contents of the patch itself are not entirely
relevant. That may be part of why this particular patch set has not
received much in the way of feedback in the first week after it was
posted. Transparent decompression is an interesting idea for speeding
application startup with a relatively easy kernel hack; hopefully the next
iteration will contain a stronger justification for why it has to be a
kernel hack in the first place.
Comments (13 posted)
By Jonathan Corbet
July 30, 2013
Last week's device tree article introduced
the ongoing discussion on the status of device tree maintainership in the
kernel and how things needed to change. Since then, the discussion has
only intensified as more developers consider the issues, especially with
regard to the stability of the device tree interface. While it seems
clear that most (but not all) participants believe that device tree
bindings should be treated like any other user-space ABI exported by the
kernel, it is also clear that they are not treated in this way currently.
Those seeking to change this situation will have a number of obstacles to
overcome.
Device tree bindings are a specification of how the hardware is described
to the kernel in the device tree data structure. If they change in
incompatible ways, users may find that newer kernels no longer boot on older
systems (or vice versa). The device tree itself may be buried deeply
within a system's firmware, making it hard to update, so incompatible
binding changes may be more than slightly inconvenient for users. The
normal kernel rule is that systems that work with a given kernel should
work with all releases thereafter; no explicit exception exists for device
tree bindings. So, many feel, bindings should be treated like a stable
kernel ABI.
Perhaps the strongest advocate of the position that device tree bindings
should be treated as any other ABI right now (rather than sometime in the
future) is ARM maintainer Russell King:
We can draw the line at an interface becoming stable in exactly the
same way that we do every other "stable" interface like syscalls -
if it's in a -final kernel, then it has been released at that point
as a stable interface to the world. [...]
If that is followed, then there is absolutely no reason why a
"Stable DT" is not possible - one which it's possible to write a DT
file today, and it should still work in 20 years time with updated
kernels. That's what a stable interface _should_ allow, and this
is what DT _should_ be.
As is often the case, though, there is a disconnect between what should be
and what really is. The current state of device tree stability was perhaps
best summarized by Olof Johansson:
Until now, we have been working under the assumption that the
bindings are _NOT LOCKED_. I.e. they can change as needed, and we
_ARE_ assuming that the device tree has to match the kernel. That
has been a good choice as people get up to speed on what is a good
binding and not, and has given us much-needed room to adjust things
as needed.
Other developers agreed with this view of the situation: for the first few
years of the ARM migration from board files to device trees, few developers
(if any) had a firm grasp of the applicable best practices. It was a
learning experience for everybody involved, with the inevitable result that
a lot of
mistakes were made. Being able to correct those mistakes in subsequent
kernel releases has allowed the quick application of lessons learned and
the creation of better bindings in current kernels. But Olof went on to
say that the
learning period is coming to a close: "That obviously has to change,
but doing so needs to be done carefully." This transition will need
to be done carefully indeed, as can be seen from the issues raised in the
discussion.
Toward stable bindings
For example: what should be done about "broken" bindings that exist in the
kernel currently? Would they immediately come under a guarantee of
stability, or can they be fixed one last time? There is a fair amount of
pressure to stop making incompatible changes to bindings immediately, but
to do so would leave kernel developers supporting bindings that do not
adequately describe the hardware, are not extensible to newer versions of
the hardware, and are inconsistent with other bindings. Thus, Tomasz Figa
argued, current device tree bindings should
be viewed as a replacement for board files, which were very much tied to
a specific kernel version:
We have what we have, it is not perfect, some things have been
screwed up, but we can't just leave that behind and say "now we'll
be doing everything correctly", we must fix that up.
Others contend that, by releasing those bindings in a stable kernel, the
community already committed itself to supporting them. Jon Smirl has advocated for a solution that might satisfy
both groups: add a low-level "quirks" layer that would reformat old device
trees to contemporary standards before passing them to the kernel. That
would allow the definitive bindings to change while avoiding breaking older
device trees.
Another open question is: what is the process by which a particular set of
bindings achieves stable status, and when does that happen? Going back to
Olof's original message:
It's likely that we still want to have a period in which a binding
is tentative and can be changed. Sometimes we don't know what we
really want until after we've used it a while, and sometimes we,
like everybody else, make mistakes on what is a good idea and
not. The alternative is to grind most new binding proposals to a
halt while we spend mind-numbing hours and hours on polishing every
single aspect of the binding to a perfect shine, since we can't go
back and fix it.
Following this kind of policy almost certainly implies releasing drivers in
stable kernels with unstable device tree bindings. That runs afoul of the
"once it's shipped, it's an ABI" point of view, so it will not be popular with
all developers. Still, a number of developers seem to think that, with the
current state of the art, it still is not possible to create bindings that
are long-term supportable from the beginning. Whether bindings truly
differ from system calls and other kernel ABIs in this manner is a topic
of ongoing debate.
Regardless of when a binding is recognized as stable, there is also the
question of who does this recognition. Currently, bindings are added to
the kernel by driver developers and subsystem maintainers; thus, in some
eyes, we have a situation where the community is being committed to support
an ABI by people who do not fully understand what they are doing. For this
reason, Russell argued that no device tree
binding should be merged until it has had an in-depth review by somebody
who not only understands device tree bindings, but who also understands the
hardware in question. That bar is high enough to make the merging of new
bindings difficult indeed.
Olof's message, instead, proposed the creation of a "standards committee"
that would review bindings for stable status. These bindings might already
be in the kernel but not yet blessed as "locked" bindings. As Mark Rutland
(one of the new bindings maintainers) pointed
out, this committee would need members from beyond the Linux community;
device tree bindings are supposed to be independent of any specific
operating system, and users may well want to install a different system
without having to replace the device tree. Stephen Warren (another new
bindings maintainer) added that
bootloaders, too, make use of device trees, both to understand the hardware
and to tweak the tree before passing it to the kernel. So there are a lot
of constituents who would have to be satisfied by a given set of bindings.
Tied to this whole discussion is the idea of moving device tree bindings
out of the kernel entirely and into a repository of their own. Such a move
would have the effect of decoupling bindings from specific kernel releases;
it would also provide a natural checkpoint where bindings could be
carefully reviewed prior to merging. Such a move does not appear to be
planned for the immediate future, but it seems likely to happen eventually.
There are also some participants who questioned the value of stable
bindings in the first place. In particular, Jason Gunthorpe described the challenges faced by companies
shipping embedded hardware with Linux:
There is no way I can possibly ship a product with a DT that is
finished. I can't tie my company's product release cycles to the
whims of the kernel community.
So embedded people are going to ship with unfinished DT and upgrade
later. They have to. There is no choice. Stable DT doesn't change
anything unless you can create perfect stable bindings for a new
SOC instantaneously.
In Jason's world, there is no alternative to being able to deal with device
trees and kernels that are strongly tied together, and, as he sees it, no
effort to stabilize device tree bindings is going to help. That led him to
ask: "So who is getting the benefit of
this work, and is it worth the cost?" That particular question went
unanswered in the discussion.
Finally, in a world where device tree bindings have been stabilized, there
is still the question of how to ensure that drivers adhere to those
bindings and add no novelties of their own. The plan here appears to be
the creation of a schema to
provide a formal description for bindings, then to augment the dtc
device tree compiler to verify device trees against the schema. Any
strange driver-specific bindings would fail to compile, drawing attention
to the problem.
The conversation quickly acquired
a number of interesting side discussions on how the schema itself should
be designed. A suggestion that XML could
be used evoked far less violence than one might imagine; kernel developers
are still trying hard to be nice, it seems. But David Gibson's suggestion that a more C-like language be used
seems more likely to prevail. The process of coming up with a comprehensive
schema definition and checking that it works with all device tree bindings is
likely to take a while.
Reaching a consensus on when device tree bindings should be stabilized,
what to do about substandard existing bindings, and how to manage the whole
process will also probably take a while. The topic has already been
penciled in for an entire afternoon during
the ARM Kernel Summit, to be held in Edinburgh this October. In the
meantime, expect a lot of discussion without necessarily binding the
community to more stable device trees.
Comments (36 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jonathan Corbet
July 31, 2013
The Fedora distribution prides itself on its image as a leading-edge Linux
distribution
where new stuff shows up first. But, as some developers recently
discovered to their chagrin, that image does not necessarily extend to
deleting features first. After an extensive debate, the Fedora project has
decided, for now, to retain the sendmail mail transfer agent — a program
that is many years older than Linux — in the default installation. Whether
this outcome is a victory for Fedora's "stone age faction" or a careful
defense of Fedora's Unix tradition is very much in the eye of the beholder.
The no
default sendmail proposal, intended for the Fedora 20 development
cycle, had a simple goal: remove sendmail from the "core" and "standard"
package groups, so that sendmail would not be installed by default in
Fedora's minimal or desktop configurations. Sendmail would, of course,
remain as an option for anybody who wanted to install it separately. The
simplest ideas are often those most subject to discussion, though; in this
case, the proposal set off an
email thread that was epic even by the standards of the Fedora
development list.
The proponents of the change (Lennart Poettering and Matthew Miller) argued
that there was little use for a mail transfer agent (MTA) installed by
default. On today's Internet, there is no way to configure an MTA so that
it comes up in a working configuration by default. Many or most Fedora
users run in a situation where mail cannot be sent directly from (or to) their
systems anyway, due to Internet service provider blocking policies and
anti-spam measures taken at remote sites. Sendmail is set up to deliver
mail into /var/spool/mail — a location that no
mail user agent in the default Fedora install reads. So, it is said, there
is little value in having sendmail around.
Going on, they argue that there are some costs to installing sendmail by
default. Sendmail's time as a constant
source of severe security problems is long past, but it is still a
privileged program that need not be installed much of the time. Removing
sendmail would reduce the distribution's disk space requirements and
decrease system
boot time. Ubuntu has not installed an MTA by default for years; systems
like Mac OS X also install without an MTA. And in the end,
Lennart and Matthew argued, even if Fedora were to install an MTA by
default, sendmail is a bit of a peculiar choice.
The opposition to the proposal is a bit harder to summarize. A number of
participants cited the fact that cron jobs send email if they
produce unexpected output. In the absence of an MTA, that output goes
instead to the system log which, for some, is a place where it can get lost
in the noise. Miloslav Trmač suggested
that the /usr/sbin/sendmail binary forms a sort of API by which
applications can reliably send email; in its absence, those applications
would have to grow their own SMTP implementations, which might not lead to
a better system overall.
Some of the arguments against the removal of sendmail expressed a vague
feeling that an MTA is an important and traditional component of a Unix
system. Removing sendmail would take Fedora (further) away from that
tradition, making some people uncomfortable. Having sendmail in the mix
does not bother people who are not using it, they said, and anybody who
really objects to its presence can always uninstall it. So, rather than
making such a fundamental change to the system, it would be better to come
up with a better set of default configurations for the mail transfer and
user agents installed with the distribution.
The discussion went back and forth for some time while, seemingly,
convincing few of the participants. Some Fedora users evidently value
getting cron output in email, while others point out that, on most
systems, it piles up unnoticed in /var/spool/mail and might as
well be discarded. For the latter camp, the logging subsystem is clearly a
better way for system daemons to communicate results to users or
administrators, but discussions around the logging system tend to turn into
an
even bigger can of worms in short order. Arguments based on
tradition almost never get anywhere with those who are determined to change
that tradition — or leave it behind altogether.
The Fedora Engineering Steering Committee (FESCo) met on July 24 to (among other agenda
items) make a decision on this issue. The discussion there was rather
shorter; it can be found in the
IRC log. In the end, the board voted unanimously to remove sendmail
from the minimal "core" group. When it came to the "standard" group (which
is what matters for most installations), though, the proposal faltered,
eventually failing on a 4-4 vote after Matthew, despite having proposed the
change, abstained from the actual vote as a gesture of respect for those
who wanted to take a slower, more careful approach. So a Fedora 20 desktop
installation will include sendmail.
It is fair to say that Lennart did
not react well to the decision:
If FESCO decides that Fedora's home is the stone age, then that's
fine, I don't think I have to care anymore. They should really
drop the "F" for "First" though from their 4-F motto, it's a blunt
lie. Fedora is seldom first on anything non-trivial, anymore. It is
just another conservative distribution. Sendmail is just the
pinnacle of it. I mean, really? sendmail???
In truth, some progress has been made toward Lennart's goal, and some
developers have expressed an interest in trying again for the
Fedora 21 development cycle.
Fedora, as a Linux distribution, does have a lot of roots in the Unix
tradition. It also has a number of users who have been with the
distribution (and its predecessor, Red Hat Linux) for a very long time.
Ways of working that have been established over decades can be awfully hard
to change, especially when the people involved see no reason why they
should change. It is, thus, unsurprising that the people who are trying to
drive significant changes run into a certain type of conservatism at times.
What matters is how the system evolves over the long term. There are few
who would say that Unix (or Linux) in any of its forms is the pinnacle of
computing. The Linux systems we use ten years from now will certainly look
quite different from what we have now — either that, or we'll not be using
Linux at all. Chances are that distant-future Linux distributions will not
have sendmail installed by
default. In the meantime, though, proponents of change can expect to have
to work through some resistance at times; that is, after all, how human
communities work.
Comments (131 posted)
Brief items
But if they have a vested interest in Fedora, RHEL, CentOS and other
downstreams using some bit of technology, if they have a vested interest in
getting their code and ideas out to that userbase... participate. Convince.
Collaborate. To do otherwise, and just assume someone else will do that for
them, seems foolish to me.
--
Bill Nottingham
Differentiation is key... floppy support is a key differentiator and a really smart move. Someone is going to kickstart a retro-computing tablet that looks just like a 5 1/4 inch floppy, with a slot for a 3 1/2 inch not-so-floppy disk.. and we will be there on day one..and it will be glorious.
--
Jef
Spaleta (in the comments)
Comments (none posted)
Distribution News
Fedora
Fedora 17 has reached its end-of-life. There will be no further updates,
including security updates. See
this
page for instructions on upgrading to Fedora 18 or later using FedUp.
Full Story (comments: none)
Red Hat Enterprise Linux
Red Hat has sent out a notice that Red Hat Enterprise Linux 3 Extended
Lifecycle Support will end in six months.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
One significant change in the Android 4.3 release is that it has become
harder to obtain and make use of root privileges. Steve "Cyanogen" Kondik
ponders
how CyanogenMod will respond to this change; it may not involve
restoring easy root access. "
+Koushik Dutta and +Chainfire are
working hard to permit root in some way on 4.3, but I feel that anything
done at this point might severely compromise the security of the system and
we should start considering better options. Going forward, I'm interested
in building framework extensions and APIs into CM to continue to abolish
the root requirement."
Comments (28 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
July 31, 2013
The GNU project has released version 0.3 of Guix, its
package-management tool for GNU-based systems. Like most other
package managers, Guix is responsible for installing, updating, and
removing software packages, but it brings some uncommon features to
the task, such as unprivileged package management, per-user
installation, and garbage collection. Guix is usable today to install
and manage the current palette of GNU packages, but could be used to
keep track of others as well.
The 0.3 release was announced on
July 17 by lead developer Ludovic Courtès. Launched in
mid-2012, the project has now reached the point where it can be used
to install packages maintained in a "GNU Distribution" repository.
That repository is also maintained by Courtès, who is using it
to (eventually) bring the entire collection of GNU project software
directly to users, without waiting for downstream Linux distributions
to package it.
Guix is written in Guile, the GNU implementation of Scheme. Its
low-level package installation and removal functionality is based on
Nix, but Guix replaces much of Nix's
package-management architecture (including the package description
format) with its own Guile code. The Guix framework adds a set of
features that manages multiple installed versions of each package,
enabling unprivileged users to install and update their own versions
of packages.
The design of the package-management
system (which is explained in detail in a white paper [PDF])
refers to the core concept of "functional package
management." Several of Guix's features are said to derive
directly from its functional approach, such as garbage collection and
the rollback of updates. In a nutshell, this concept means that each
operation (e.g., installation, update, or removal) is treated as a
function: it is atomic (a property most other package managers offer),
but its outcome is also idempotent—reproducible when the
operation is repeated with the same inputs. In Guix's case, this is
important because the program offers the guarantee of idempotence even
for packages compiled from source.
The system relies on a fairly specific definition of
function "inputs," which are captured in its package description
format. Each installed package is placed in its own directory within
the Guix "package store," for example
/nix/store/hashstring-gcc-4.7.2/. The string
prepended to the package name in this directory is a hash of the
inputs used to build the package: the compiler, the library
versions, build scripts, and so on. Consequently, two versions of a
package compiled with identical inputs would result in the
same hash value, allowing Guix to re-use the same binary. But any
distinction between the inputs (say, one user using a different
version of a library) would cause Guix to install a completely new
package in the store.
A privileged Guix daemon handles the actual package maintenance
operations, with clients talking to it. The default guix
command-line client allows each user on a system to install his or her own version
of any particular package. So if one user wants to use GCC 4.8.0 and
another prefers to use 4.7.2, no problem. Moreover, in such a
scenario, both users can install their own versions of GCC without
touching the version installed by the system itself. Guix manages
this by installing every package in its own self-contained directory,
and linking each user's requested version in with symlinks.
Whenever a user installs a package, a link to the appropriate real
executable (in the package store) is created in
~/.guix-profile/bin/. Consequently,
~/.guix-profile/bin/ must be in the user's PATH. By default,
updating a package to a new release (or rebuilding it with different
options) creates a new entry in the package store. For a single-user
system, such superfluous packages may never take up a significant
percentage of disk space, but on a multi-user machine, the chaff could
certainly build up over time.
To deal with this problem, Guix supports garbage collection of
packages. Running guix gc will locate all of the packages
that are no longer used by any user profiles and remove them. In
multi-user setups, the user profile directories (e.g.,
~/.guix-profile) are actually symbolic links into another
directory designated as Guix's garbage-collection root. That allows
the garbage collector to run even if the user's home directory is not
mounted or is encrypted.
Package proliferation in the store is a potential problem, but the
fact that garbage collection is not automatic has an up side: it
allows an install or update to be rolled back completely. The
--roll-back option undoes the previous transaction, whatever
it was. So if the user installs a buggy update, it can be rolled back
immediately afterward with little effort. Undoing a buggy update
several transactions back, however, requires stepping through each
intermediate rollback, and possibly re-applying any unrelated
operations caught in the middle.
Packages, distributions, and repositories
The Guix "distribution" system is akin to the binary package
repositories used by Apt or RPM, but Guix allows users to install binary
versions of packages when they are available in the repository, and
fall back to building packages from source when binaries are
unavailable—and, at least in theory, Guix's atomic transactions,
rollbacks, and other features work for source-installed packages, too.
The package
definition format uses Scheme, but it is pretty straightforward to
decipher even for those unfamiliar with Lisp-like languages. The Guix
manual lists the following example for the GNU Hello package:
(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             (guix licenses))

(define hello
  (package
    (name "hello")
    (version "2.8")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-" version
                                  ".tar.gz"))
              (sha256
               (base32 "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6"))))
    (build-system gnu-build-system)
    (inputs `(("gawk" ,gawk)))
    (synopsis "GNU Hello")
    (description "Yeah...")
    (home-page "http://www.gnu.org/software/hello/")
    (license gpl3+)))
Some of
the key fields to consider are build-system, which tells Guix
how to compile the package from source if no binary is available, and
inputs, which defines the function inputs that contribute to
the installation function (and, consequently, the hash value used in
the package store). As one might expect, the only build system
defined so far is the canonical GNU autotools build system.
The GNU
Distribution is available to Guix users, including the core GNU
toolchain and a range of common GNU applications (plus several non-GNU
projects). Currently it is usable on 32-bit and 64-bit
Intel Linux systems, but the project says more platforms are to come.
Installing an available package is as simple as guix
package -i gcc, although there are a few wrinkles to consider.
For example, Guix's package store model works by linking a single
directory in the system-wide store to a directory in the user's
profile. This assumes that a package only installs files to one
location, which is often not the case. Packages that install content
in multiple output directories (e.g., /usr/bin and /usr/share/doc) are
split up into separate pieces for Guix, so the GNU Distribution's
glib package contains the GLib binaries, while
glib:doc contains its corresponding documentation.
Because the package definition format specifies the build system,
it is possible for Guix to transparently support the mixed deployment
of binary and source packages—that is, when a binary package is
available, Guix can fetch and install it, but when unavailable, Guix
can build from source. Appending the --fallback switch to a
guix package --install command will tell Guix to build the package
from source if there is no binary package or if the binary fails to
install. Users can query the repository to see which packages are
available with guix package --list-available[=regexp], optionally
providing a regular expression to search for.
Admittedly, in practice a package-management system is only as good
as its packages. For example, the --fallback option is of
little value if the package does not compile successfully, and the
Guix GNU Distribution repository can currently only be deployed on
x86 Linux systems. But it is growing; the repository's package
list stands at just over 400 packages at press time, all built using
the Hydra continuous-integration
system.
The repository is perhaps the most interesting aspect of the Guix
project. Other package managers may pick up ideas like per-user
installation and garbage collection (which is significantly more
important in a per-user installation setup) in due time. But for many years,
GNU itself has reached the computers of the majority of its users via
Linux distributions. Guix offers an alternative distribution
channel—in particular, an alternative that allows one user to
install GNU packages that have not yet worked their way through the
distribution release process, and to do so in a way that does not
overwrite the distribution's package. That may benefit GNU
as a project, as well as provide inspiration for other large
free software projects (such as GNOME, which is not currently packaged
in the GNU Distribution repository) that also struggle from time to
time with the process of getting freshly-released software into the eager hands of
users.
Comments (14 posted)
Brief items
One day I'll quit my job and write a psychological study on why tech journalists love stock Android. It's fascinating.
—
Philip Berne
But, today, who really cares about Unity/GNOME/KDE or GTK+/Qt when all you need to do is to launch a browser full screen? All I need, all I want are web based versions of the free software I use.
—
Lionel Dricot
Comments (3 posted)
LibreOffice 4.1 has been
released. More information about the changes in 4.1 can be found on the "
New Features and Fixes" page. "
LibreOffice 4.1 is also importing some AOO [Apache OpenOffice] features, including the Symphony sidebar, which is considered experimental. LibreOffice developers are working at the integration with the widget layout technique (which will make it dynamically resizeable and consistent with the behaviour of LibreOffice dialog windows)."
Comments (35 posted)
Luiz Henrique de Figueiredo posted a brief note
to the Lua list on July 28 to note that the date marked 20 years since
the first known implementation of Lua. He went on to thank the Lua
community on behalf of the entire development team.
Comments (none posted)
Guillaume Lesniak
describes
the interesting new features to be found in the "Focal" camera app,
soon to make its appearance in CyanogenMod nightly builds. "
Timer
mode lets you set up a countdown timer before taking a shot, and our
favorite Voice Trigger is back to take a shot as soon as you say 'Cheese',
'Cid', or 'Whiskey'. The burst mode, as its name says, makes a burst of
shots. The number of shots can be 5, 10, 15, or an infinite number of shots
(stops when you press the shutter button again)."
Comments (7 posted)
Version 2.3 of systemtap has been released. Among the changes included, systemtap will now suggest alternative functions when a function probe fails to resolve, the regular expression engine has been overhauled, and a host of tapsets have been updated. Plus, there is one particularly colorful change; as the announcement notes: "Has life been a bit bland lately? Want to spice things up? Why not write a few faulty probes and feast your eyes upon the myriad of colours adorning your terminal as SystemTap softly whispers in your ear... 'parse error'. Search for '--color' in 'man stap' for more info."
Full Story (comments: none)
Version 1.4.3 of the Scribus desktop-publishing (DTP) application
has been released.
Although referred to as a bugfix release, 1.4.3 rolls in quite a few
significant changes, such as QR code generation, support for the
Galaxy Gauge color-matching system, the removal of page-size limits in
TeX-rendered graphics, and a port to the Haiku operating system. On
the down side, support for automatic hyphenation on Linux has been disabled.
Comments (1 posted)
Newsletters and articles
Comments (none posted)
Mozilla's Doug Belshaw introduces
the browser-maker's formal RFC for the Web
Literacy Standard, which is an attempt to formally describe a
number of web development competencies. Although parts of the
standard are decidedly non-technical, others cover oft-overlooked
areas like accessibility and privacy issues. Interested parties are
encouraged to provide feedback.
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
The Electronic Frontier Foundation (EFF) and a coalition of organizations
and law schools have launched
Trolling
Effects, an online resource that aims to "
unite and empower
would-be victims of patent trolls through a crowdsourced database of demand
letters and to serve as a clearinghouse of information on the troll epidemic."
Full Story (comments: none)
The Free Software Foundation has
announced
a fundraising effort for the Replicant project. "
While most of
Android is already free software, device manufacturers distribute the OS
with some key nonfree parts. Those parts are in the layer of Android that
communicates with the phone or tablet hardware, such as the WiFi and
Bluetooth chips. In addition, every commonly available Android device comes
pre-loaded with a variety of proprietary applications running on top of the
operating system. Replicant seeks to provide all of the same functionality
using only free software."
Comments (88 posted)
Videos from the
recently-concluded GNU Tools Cauldron event are now publicly
available. Many topics are covered, including re-architecting GCC, data
race detection in GDB, the impact of compiler options on energy
consumption, and more.
Comments (none posted)
Articles of interest
The Ada Initiative has a report on its progress during the first half of
2013. "
2012 was a tipping point for women in open technology and culture. In 2013 the Ada Initiative has worked hard to build on that momentum, through the AdaCamp conference, Impostor Syndrome training, workshops, speeches, interviews in the mainstream media, and more. With your help, we're continuing to make a difference for women in open technology and culture. Thank you so much for your support of our work!"
Full Story (comments: none)
A coalition led by Microsoft has recently submitted an antitrust complaint
to the European Commission claiming that the distribution of Free
Software free of charge hurts competition. The Free Software Foundation
Europe has
written
a letter to the European Commission's competition authorities refuting
that claim. "
In its letter, FSFE urges the Commission to consider
the facts properly
before accepting these allegations at face value. "Free Software is a
boon for humankind. The only thing that it is dangerous to is
Microsoft's hopelessly outdated, restrictive business model," says
Karsten Gerloff, FSFE's president."
Full Story (comments: none)
The Massachusetts Institute of Technology has
released
its report on its actions regarding the Aaron Swartz case; the
conclusion is that MIT did nothing wrong. "
However, the report says
that MIT’s neutrality stance did not consider factors including 'that the
defendant was an accomplished and well-known contributor to Internet
technology'; that the law under which he was charged 'is a poorly drafted
and questionable criminal law as applied to modern computing'; and that
'the United States was pursuing an overtly aggressive prosecution.' While
MIT’s position 'may have been prudent,' the report says, 'it did not duly
take into account the wider background' of policy issues 'in which MIT
people have traditionally been passionate leaders.'"
Comments (18 posted)
New Books
Pragmatic Bookshelf has released "OpenGL ES 2 for Android" by Kevin Brothaler.
Full Story (comments: none)
Pragmatic Bookshelf has released "Rapid Android Development" by Daniel Sauter.
Full Story (comments: none)
Contests and Awards
The recipients of the 2013 O’Reilly Open Source Awards have been
announced.
Awards go to Behdad Esfahbod (HarfBuzz), Jessica McKellar (Python Software
Foundation), Limor Fried (Adafruit Industries), Valerie Aurora (Ada
Initiative), Paul Fenwick (Perl), and Martin Michlmayr (Debian Project).
Comments (none posted)
Calls for Presentations
CFP Deadlines: August 1, 2013 to September 30, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| August 7 | September 12 - September 14 | SmartDevCon | Katowice, Poland |
| August 15 | August 22 - August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 18 | October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| August 19 | September 20 - September 22 | PyCon UK 2013 | Coventry, UK |
| August 21 | October 23 | TracingSummit2013 | Edinburgh, UK |
| August 22 | September 25 - September 27 | LibreOffice Conference 2013 | Milan, Italy |
| August 30 | October 24 - October 25 | Xen Project Developer Summit | Edinburgh, UK |
| August 31 | October 26 - October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| August 31 | September 24 - September 25 | Kernel Recipes 2013 | Paris, France |
| September 1 | November 18 - November 21 | 2013 Linux Symposium | Ottawa, Canada |
| September 6 | October 4 - October 5 | Open Source Developers Conference France | Paris, France |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15 - November 16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3 - October 4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22 - November 24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9 - April 17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1 - February 2 | FOSDEM 2014 | Brussels, Belgium |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
The schedule for DebConf13 (August 11-18 in Vaumarcus, Switzerland) has
been posted. "
DebConf13 talks will mostly happen in two rooms
simultaneously, except for a few plenaries which will be presented in the
main room with no parallel event. A third room will be available for
discussion groups and ad-hoc sessions. As usual, we’ve tried to cluster
related activities in sequence, even though it was not always possible."
Full Story (comments: none)
The Python Game Programming Challenge,
PyWeek, will run its 17th challenge during
the week of September 1-8, 2013. The PyWeek challenge invites entrants to
write a game in one week from scratch either as an individual or in a team.
Full Story (comments: none)
Ohio LinuxFest (September 13-15 in Columbus, Ohio) has announced that Kirk
McKusick will be a keynote speaker. "
Kirk McKusick is a longtime promoter of Free Software, particularly the
Berkeley Software Distribution (BSD) of Unix. In the early days he shared
an office with Bill Joy (later founder of Sun Microsystems), and is
credited with designing the original Berkeley Fast File System (FFS). He
implemented soft updates, an alternative approach to maintaining disk
integrity after a crash or power outage, in FFS, and a revised version of
UFS known as "UFS2", and is primarily responsible for creating the
complementary features of filesystem snapshots and background fsck (file
system check and repair)."
Full Story (comments: none)
Events: August 1, 2013 to September 30, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| July 31 - August 4 | OHM2013: Observe Hack Make | Geestmerambacht, the Netherlands |
| August 1 - August 8 | GUADEC 2013 | Brno, Czech Republic |
| August 3 - August 4 | COSCUP 2013 | Taipei, Taiwan |
| August 6 - August 8 | Military Open Source Summit | Charleston, SC, USA |
| August 7 - August 11 | Wikimania | Hong Kong, China |
| August 9 - August 11 | XDA:DevCon 2013 | Miami, FL, USA |
| August 9 - August 12 | Flock - Fedora Contributor Conference | Charleston, SC, USA |
| August 9 - August 13 | PyCon Canada | Toronto, Canada |
| August 11 - August 18 | DebConf13 | Vaumarcus, Switzerland |
| August 12 - August 14 | YAPC::Europe 2013 “Future Perl” | Kiev, Ukraine |
| August 16 - August 18 | PyTexas 2013 | College Station, TX, USA |
| August 22 - August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 23 - August 24 | Barcamp GR | Grand Rapids, MI, USA |
| August 24 - August 25 | Free and Open Source Software Conference | St.Augustin, Germany |
| August 30 - September 1 | Pycon India 2013 | Bangalore, India |
| September 3 - September 5 | GanetiCon | Athens, Greece |
| September 6 - September 8 | State Of The Map 2013 | Birmingham, UK |
| September 6 - September 8 | Kiwi PyCon 2013 | Auckland, New Zealand |
| September 10 - September 11 | Malaysia Open Source Conference 2013 | Kuala Lumpur, Malaysia |
| September 12 - September 14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16 - September 18 | CloudOpen | New Orleans, LA, USA |
| September 16 - September 18 | LinuxCon North America | New Orleans, LA, USA |
| September 18 - September 20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19 - September 20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19 - September 20 | Open Source Software for Business | Prato, Italy |
| September 19 - September 20 | Linux Security Summit | New Orleans, LA, USA |
| September 20 - September 22 | PyCon UK 2013 | Coventry, UK |
| September 23 - September 25 | X Developer's Conference | Portland, OR, USA |
| September 23 - September 27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24 - September 25 | Kernel Recipes 2013 | Paris, France |
| September 24 - September 26 | OpenNebula Conf | Berlin, Germany |
| September 25 - September 27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26 - September 29 | EuroBSDcon | St Julian's area, Malta |
| September 27 - September 29 | GNU 30th anniversary | Cambridge, MA, USA |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol