By Jonathan Corbet
December 12, 2012
Your editor has frequently written that, while Android is a great system
that has been highly beneficial to the cause of open mobile devices, it
would be awfully nice to have a viable, free-software alternative. Every
month that goes by makes it harder for any such alternative system to
establish itself in the market, but that does not keep people from trying.
One of the more interesting developments on the horizon has been FirefoxOS
— formerly known as Boot2Gecko — a system under development at Mozilla.
In the absence of any available hardware running this system, the recent
1.0
release of the FirefoxOS simulator seemed like a good opportunity to
get a feel for what the Mozilla folks are up to.
Naturally enough, the simulator
is distributed as a Firefox add-on. At 93MB, it's a bit larger than a
typical extension, but, then, it's supposed to be an entire operating
system. The extension refused to install on the archaic iceweasel shipped
with Debian Testing, but it works well enough on more recent Firefox
browsers. Running the extension yields a mostly-empty page with the
opportunity to load software modules and a button to run the simulator
itself. What is one to do in such a situation other than to push that
button and see what happens?
In this case, what happens is the arrival of a handset-shaped popup window
with a clock (two clocks, actually) and a battery indicator. Many
FirefoxOS features look a lot like their Android equivalents — a
resemblance that starts with the initial screen. Perhaps there is no
practical alternative to the notification/status bar at the top of the
screen; certainly it will help to make the experience familiar to users
coming over from an Android device.
That familiarity runs into a hitch at unlock time, though. As with other
devices, one starts by making a swipe gesture (upward, in this case) on the
screen. But then one must tap a padlock icon to actually unlock the
device. There is no explanation of why things were done this way, of
course. But it is not hard to imagine that the FirefoxOS developers did
not wish to start their foray into handset systems with a dispute over one
of Apple's higher-profile patents. So, likely as not, anybody who finds
the extra tap irritating has the US patent system to blame.
Like Android, the FirefoxOS home screen is split into several virtual
screens; one can move between them by dragging the background to the left
or the right. The actual implementation, though, more closely resembles
webOS, in that those screens have different purposes. The initial home
screen appears to be reserved for the clock, a standard-issue launcher bar
at the bottom, and a bunch of empty space. There does not appear to be any
provision for adding icons or widgets to this screen.
Dragging the home screen to the left yields a screen full of application
launcher icons. In fact, there are three such screens to be found in that
direction. Installing an application adds its icon to one of those
screens. As with webOS, one can, with a long press, drag icons around to
rearrange or delete them. The icons gravitate toward the upper left,
though; there is no way to arrange a gap in the middle. They can be dragged
from one launcher screen to another, but they refuse to move to the home
screen. Icons can also be dragged to the launcher bar, which, amusingly,
will accept far more icons than it can hold, causing some to be pushed off
the side of the screen.
On the other side of the home screen is something that announces itself as
"Everything.me". It appears to be a way to search for resources locally
and remotely. The icons there can be supplemented with such useful
functions as "Celebs" and "Astrology." There is a search bar that will
yield a completely different set of icons with no real clue as to what is
behind them. Unfortunately, none of these icons appears to actually do
anything in version 1.0 of the simulator, so it's hard to evaluate the
functionality of this subsystem.
As one would expect, there is a "marketplace" from which additional
applications may be loaded. Also as one might expect, the list of
applications does not come close to what a more established system would
provide, but, if FirefoxOS is successful, that will presumably change. The
application installation process is relatively straightforward; just click
and it's there. The FirefoxOS privilege
model appears to be still evolving; certainly there are no signs of it
at the application installation level. Interestingly, there is a menu
under "settings" where those permissions can be viewed — and toggled, if
desired.
Actually running applications in the simulator is a hit-or-miss matter;
some of them work a lot better than others do. Switching between running
applications is accomplished by holding down the "home" button in a way
similar to how older Android releases behaved.
The impression one gets from the simulator is that the FirefoxOS developers
have managed to put together a credible system for handsets and other
mobile devices. Users of current systems will probably find gaps in
functionality and in the set of available applications, but that can be
expected to change if this platform takes off and becomes widely available
on real hardware. Anybody wanting a system that is more "Linux-like" than
Android may well be disappointed; there is not likely to be much
traditional Linux user-space functionality to be found behind the FirefoxOS
user interface. But this system may prove interesting indeed for users in
search of an alternative system based on free software and Mozilla's
commitment to its users' privacy and security.
By Nathan Willis
December 12, 2012
There are three new books about free software thanks to Google's 2012
Summer of Code Documentation
Camp. The week-long event started off with an unconference, but the
main objective was for each participating project to produce a
cohesive, book-length work of documentation. All three projects
delivered, and thanks to the arrangement made by FLOSSManuals with a local
printer, 30 copies of each book were in print late Friday evening.
FLOSSManuals has the sprint process down to a science, which is good
news for open projects of all stripes, but it is still feeling out how
best to sustain the sprint's energy after the participants part
company.
The three projects at this year's camp were the integrated library
system Evergreen, the educational programming environment Etoys, and
the type design application FontForge. FLOSSManuals has been
facilitating book sprints in
a variety of formats since 2008; the most common format is a retreat
where eight to ten project members congregate for a five-day writing and
editing session. The Documentation Camp format is a bit smaller in that
regard — each team had five or six participants and only three
days were devoted to book creation, with the rest spent on a
documentation unconference.
One purpose of the unconference was to get the three teams to swap
information and share insights and best practices about documentation,
but another was to jump start each team's collaboration. As is often
the case with open source projects, many team members had never met in
person and were used to interacting online. Sharing a small
conference room for ten to twelve hours a day and trading edits is
hardly common practice. But by the end of the unconference sessions,
facilitator Adam Hyde had each team focused on the preliminary steps
to writing a coherent book as a group.
Master plan
The teams were first tasked with coming up with a title and subtitle
for their books. Although titles can be arbitrary, the brief was more
specific: avoid "clever" titles; decide on a clear title that
addresses a specific audience. Too broad of an audience or too broad
of a scope, Hyde advised, makes for either an unfinished book or one
that is sloppy and difficult to read. He also advised the teams to
pick a topic that would be useful in attracting new people to their
respective projects.
I participated in the camp as a member of the FontForge group;
although we struggled (too long) to find our eventual title, we did
establish our target audience quickly, which provided focus for the
book. We decided to write an introduction to the font design process,
using FontForge as the example software. Experienced type designers,
we decided, can already make some use of FontForge's existing,
reference-like documentation. It is certainly imperfect, but, on the
other hand, writing a comprehensive FontForge manual of use to domain
experts would take more time than was available. At the same time, an
introduction to font design would be useful to the interested amateur
— particularly considering that FontForge is the only free font
design application. Currently, newcomers to the field who cannot
afford US $400 proprietary applications either struggle to learn
FontForge, or they give up without exploring type design at all.
The other two teams also picked well-defined target audiences and
subjects. The Evergreen project targeted system administrators tasked
with installing and maintaining an Evergreen installation (as opposed
to, for example, library staff members). The Etoys project targeted
school teachers wanting practical help integrating Etoys into their
classroom curriculum.
With a title and concept in hand, the next order of business was to
generate a rough sketch of the book's table of contents (TOC). The
TOC is essentially an outline of the narrative, so writing it as a
group forces the group to structure the subject matter, work it into
an orderly shape, and start deciding where to cut material. That is
by no means a simple task, as we discovered in the FontForge group.
Type design is a highly iterative process that involves multiple rounds
of testing, evaluation, and adjustment; unrolling that workflow into a
linear series of steps is fundamentally impossible.
Instead, we had to settle for arranging the workflow into a roughly
linear form, starting with a lot of conceptual material for the reader
to keep in mind, then doing our best to minimize the amount of jumping
back-and-forth between chapters. The result requires the reader to
get familiar with several parts of FontForge's interface at once (the
drawing tools, the spacing tools, the validation tools, and the
font generation tools) rather than learning one at a time. That may
sound less than ideal, but after several days of rearranging the order
of materials, we were at least convinced that it was the best
arrangement possible.
Manual labor
The actual writing process occupied about a day and a half, and there
is not much to say about it other than that it was what one would
expect: gruntwork at the keyboard. All three of the groups had some
documentation that they could incorporate and adapt for some of the book
content, but for the most part, the content-creation process was
writing, rewriting, asking questions of other team members, and
building images to use for illustrations.
The software FLOSSManuals uses for book sprints (and other writing
projects) is the collaborative editor Booktype. We looked at Booktype's initial release in
February 2012. The software has evolved since then, but the basic
feature set is essentially the same. It offers a web-based WYSIWYG
editor for authoring, supports multiple users, and locks each chapter
while an editor has it open. It has a drag-and-drop TOC interface
allowing users to rearrange chapters and sections, keeps old versions
of every edit, and offers basic statistics on usage participation.
Perhaps the most unusual aspect of Booktype is the fact that anyone
can edit any book. This is a conscious decision on FLOSSManuals'
part; the goal of the project is to encourage open participation and
collaboration. That does not mean it sits well with everyone,
though. One of the teams expressed some concern that vandals (perhaps
outsiders, perhaps disgruntled community members) would erase or
destroy the text. To that, Hyde replied that incidents of destructive
behavior have hardly ever happened in the course of FLOSSManuals'
fifty-plus book sprints — in reality, he said, it is very
difficult to get anyone to contribute at all, and it is extremely rare
to see anyone willing to take the time to be destructive. Besides, he
added, encouraging positive contributions is a social
problem, and building a technical solution for it into Booktype simply
would not work. Fellow facilitator Allen Gunn compared it to the open
nature of Wikipedia, which had languished in obscurity when editing
was the purview of approved gatekeepers only. In any case, Booktype
does allow contributors to roll back any vandalism with minimal
fuss.
The book sprint editing process involved assigning two proofreaders
(not counting the author) to every chapter, and keeping track of their
progress on a whiteboard. Since there was a strict deadline at which
time the content had to be sent to the print shop, the
editing process became quite a rush as well. Hyde advised all of the
groups at the outset to avoid the temptation to start writing a
"style guide" at the beginning of the sprint, and instead to push
stylistic clean-up to the very end.
English majors might chafe at that suggestion, but in reality the
proofreading and editing process already involves so much work
(including unifying multiple writers' tones into a consistent voice) that it
was little trouble to push formatting issues all the way to the end.
Hyde made a formatting pass of his own at the end of the final
evening, solely to clean up the HTML in Booktype. By late Thursday
night, the content was declared finished, and rendered to final output.
Booktype uses HTML internally as its file format, and renders it to
various output formats with a transformation engine called Objavi. Objavi can
create print-ready PDF, EPUB, Mobi, and a wide variety of other output
formats. Hyde created EPUB and Mobi versions of the books
immediately, while the hard copies were printed and bound overnight.
Wait for the sequel
The week ended with each team assessing the state of the completed
project, and planning how to proceed in the coming weeks and months.
Obviously three days is hardly sufficient to cover everything that a
quality book would need, much less to proofread and correct all of the
typos and human errors. There are also layout issues that can only be
revealed after the HTML has been rendered, as well as potential
localizations and translations to think about.
Hyde said that his hope was that all three projects continue to refine
and update their books, but that it requires intentionality. Open
source software is updated quickly, the teams are scattered around the
globe, and most participants have day jobs. Add to that the fact that
documentation remains an afterthought in many open source projects,
and it is all too easy for even a well-written book with motivated
authors to get out of date.
The theory behind the camp, after all, is for the projects to learn a
different and better way to produce documentation in a sustainable
fashion. Although that goal encompasses continuing to write new
material, it also includes maintaining the latest book going forward,
which is not a simple thing. Hyde highlighted past projects that have
excelled at the job (such as the CiviCRM manual and How To Bypass
Internet Censorship). He suggested several strategies for
transforming the documentation camp book into a sustainable, updated
work: how to select a maintainer, how to ask for volunteers, and how
to market the book to people outside of the project itself.
All three projects worked on their own plan of attack, and they met
together one last time to provide feedback on the sprint process and
camp as a whole. Finally, Hyde demonstrated some of the advanced
output rendering features of Objavi and showed some of the
still-in-development enhancements coming to Booktype.
The response to the camp from the teams was uniformly positive;
speaking as a member of the FontForge team, the process was a lot of
fun even if it did include a lot of late nights. In addition to
producing a worthwhile manual, it was also highly educational to
compare notes with other users while hammering out chapters. One team
member also observed that the process of writing out the how-to
material forced him to distill and organize a lot of information that
he carried around in his head, but had never looked at systematically
before. That is surely a worthwhile takeaway, one that would hold even
apart from the book.
Nevertheless, the documentation camp produced tangible results of use
to readers immediately. You can see all three of the books online
(and generate your own output version). The Evergreen manual is
entitled Evergreen in
Action, the Etoys book is entitled Learning
with Etoys, and the FontForge manual Start Designing with
FontForge. Only time will tell whether each team continues
to maintain and expand its documentation, but I can report that I
started receiving emails about expanding the FontForge book before the
end of the last night of camp. For his part, Hyde was off to
facilitate another book sprint the following week, as part of
FLOSSManuals' never-ending campaign to improve free documentation.
[The author would like to thank Google for support to attend the 2012 Summer of Code Documentation Camp]
By Michael Kerrisk
December 12, 2012
Here is LWN's fifteenth annual timeline of significant events in the
Linux and free software world. We will be breaking the timeline up into
quarters, and this is our report on July-September 2012. A timeline for the
remaining quarter of the year will appear next week.
This is version 0.8 of the 2012 timeline. There are almost certainly
some errors or omissions; if you find any, please send them to timeline@lwn.net.
LWN subscribers have paid for the development of this timeline, along
with previous timelines and the weekly editions. If you like what you see
here, or elsewhere on the site, please consider subscribing to LWN.
If you'd like to look further back in time, our timeline index page has links to the
previous timelines and some other retrospective articles going all the way
back to 1998.
Popular pet names Rover, Cheryl and Kate could be a thing of the
past. Banks are now advising parents to think carefully before naming their
child’s first pet. For security reasons, the chosen name should have at
least eight characters, a capital letter and a digit. It should not be the
same as the name of any previous pet, and must never be written down,
especially on a collar as that is the first place anyone would
look. Ideally, children should consider changing the name of their pet
every 12 weeks.
[...] We tried to call Barclays’ security expert R0b Ste!nway for a
comment, but he was not available for 24 hours, having answered his phone
incorrectly three times in succession.
-- NewsBiscuit
Akademy 2012 is held in Tallinn, Estonia, June 30-July 6 (LWN
coverage: Defensive publications, Plasma Active and Make Play Live; The Qt Project and KDE; KWin scripting; Freedom and the internet; Contour and Plasma Active; KDE successes and areas for improvement).
Oracle Linux 6.3 is released (announcement,
release
notes, and LWN article on Oracle's attempt
to draw users away from CentOS to its own RHEL clone).
Mozilla surprises Thunderbird users by announcing that it is pulling
developers from the project (LWN article).
The first patches adding support for
64-bit ARM processors are posted (LWN article).
Open Font Library 0.5 is released (announcement).
Michael Kerrisk joins LWN as an editor (LWN article).
CUPS 1.6 is released (announcement, LWN article).
Firebug 1.10.0 is released (LWN blurb).
A number of the developers all went to a climbing gym one
evening, and I found myself climbing with another kernel developer who
worked for a different company, someone whose code I had rejected in the
past for various reasons, and then eventually accepted after a number of
different iterations. So I've always thought after that incident, "always
try to be nice in email, you never know when the person on the other side
of the email might be holding onto a rope ensuring your safety."
-- Greg
Kroah-Hartman
Linux 3.5 is released (announcement; KernelNewbies summary; LWN
merge window summaries: part 1, part 2, and part
3; LWN development statistics article).
The Debian project launches a new effort to clarify why Debian is
not on the Free Software Foundation's free distribution list, though
little has changed since then (LWN article).
Bison 2.6 is released (LWN blurb).
Motion tracking with Skeltrack (LWN article).
CRtools 0.1 is released (LWN article).
GUADEC is held in A Coruña, July 26-August 1 (LWN coverage: Open source and open "stuff"; Imagining Tor built-in to GNOME; New funding models for open source software;
Porting GNOME to Android; GNOME OS conversations).
Trust me: every problem in computer science may be solved
by an indirection, but those indirections are *expensive*. Pointer chasing
is just about the most expensive thing you can do on modern CPU's.
-- Linus Torvalds
The KDE project releases KDE Plasma Workspaces, KDE Applications,
and KDE Platform 4.9 (announcement).
Texas Linux Fest is held in San Antonio (LWN coverage: TexOS teaching open source).
LibreOffice 3.6 is released (announcement,
LWN blurb and an earlier
article looking at the branding challenge facing LibreOffice).
Starting next week, we will begin taking into account a
new signal in our rankings: the number of valid copyright removal notices
we receive for any given site. Sites with high numbers of removal notices
may appear lower in our results. This ranking change should help users find
legitimate, quality sources of content more easily—whether it’s a
song previewed on NPR’s music website, a TV show on Hulu or new music
streamed from Spotify.
-- Google
SCO files for Chapter 7 liquidation (LWN blurb).
CyanogenMod 9.0 is released (LWN blurb and earlier article previewing the release).
The GNOME project turns 15 (LWN article).
Calligra 2.5 is released (announcement,
LWN blurb).
Valgrind 3.8.0 is released (announcement).
Digia acquires Qt from Nokia (LWN blurb).
PowerTop 2.1 is released (LWN article).
Ben Hutchings announces plans to support the 3.2 kernel until Debian
7.0 reaches end of life, which probably means end of 2015 (announcement).
FreedomBox 0.1 is released (announcement,
earlier LWN article on FreedomBox as an
alternative to commercial home routers).
A critical Java zero-day exploit emerges (The
H article).
The third GStreamer Conference is held in San Diego, California,
August 27-28 (LWN coverage: The approach of
GStreamer 1.0; The road ahead; Linux media subsystems).
The 2012 Linux Kernel Summit is held in San Diego, California,
August 27-29 (LWN provided extensive coverage of the main summit, as well as the
associated ARM
minisummit, Linux Security
Summit, and memcg/mm
minisummit).
Most importantly, a series of leaks over the past few
years containing more than 100 million real-world passwords have provided
crackers with important new insights about how people in different walks of
life choose passwords on different sites or in different settings. The
ever-growing list of leaked passwords allows programmers to write rules
that make cracking algorithms faster and more accurate; password attacks
have become cut-and-paste exercises that even script kiddies can perform
with ease.
-- Dan
Goodin in ars technica
LinuxCon North America is held in San Diego, California, August
29-31 (LWN coverage: Funding development;
Open hardware for open hardware; Dragons and penguins in space; The tragedy of the commons gatekeepers).
The Linux Plumbers Conference is held in San Diego, California,
August 29-31 (LWN coverage: Realtime
microconference).
MongoDB 2.2 is released (announcement).
The jury in the Apple v. Samsung patent suit finds in favor of Apple on
almost all claims (LWN blurb, LWN article on look-and-feel lawsuits).
So yeah, I do acknowledge that both modes of working make
sense, I just believe the default approach should be one where focus is on
stabilizing things, not on developing new stuff all the time.
-- Lennart Poettering
Linux From Scratch 7.2 is released (announcement).
openSUSE 12.2 is released (LWN blurb).
Qubes 1.0 is released (LWN blurb).
QEMU 1.2 is released (LWN blurb).
Twisted 12.2.0 is released (announcement).
Yes I have now read kernel bugzilla, every open bug (and
closed over half of them). An interesting read, mysteries that Sherlock
Holmes would puzzle over, a length that wanted a good editor urgently, an
interesting line in social commentary, the odd bit of unnecessary bad
language. As a read it is however overall not well explained or structured.
-- Alan
Cox
PostgreSQL 9.2 is released (announcement, LWN article on the 9.2 beta).
GNU patch 2.7 is released (announcement).
SyncEvolution 1.3 is released (announcement).
Cinnamon 1.6 is released (announcement).
The Linux Foundation announces the creation of the Automotive Grade Linux workgroup (LWN blurb).
Rackspace announces that it is handing over the OpenStack project to the
OpenStack Foundation (LWN blurb).
The OpenStreetMap project completes relicensing of its database to the
Open Database License (announcement and 2008 LWN article on the motivation for the
license change).
The second Automotive Linux Summit is held in Gaydon, England
(LWN coverage: First signs of actual code;
Automotive Grade Linux).
The X.Org Developers Conference is held in Nuremberg, Germany
(LWN coverage: Status report from the X.Org
Board; Graphics stack security; Programming languages for X application
development; OpenGL futures).
GeeXboX 3.0 is released (LWN blurb).
Canonical decides to include Amazon search results in the Ubuntu
Dash (LWN blurb).
If by "intuitive" you mean "the same as the old
interface" then I must agree. Otherwise, I think you are just trying to
hold on to what you know.
-- David Lehman
Tent 0.1 is released (LWN blurb
and article).
GStreamer 1.0 is released (LWN blurb and article previewing the release).
GTK+ 3.6.0 is released (announcement).
GNOME 3.6 is released (LWN blurb).
Slackware 14 is released (LWN blurb).
Open webOS 1.0 is released (announcement).
It is an accepted fact that memcg sucks. But can it suck
faster?
-- Glauber
Costa
Calibre 0.9.0 is released (announcement).
Python 3.3.0 is released (announcement, what's new in 3.3
document).
CIA.vc shuts down (LWN article).
Joomla 3.0 is released (LWN blurb).
Linux 3.6 is released (announcement; KernelNewbies summary; LWN
merge window summaries: part 1, part 2, and part
3; LWN development statistics article).
Page editor: Jonathan Corbet
Security
December 11, 2012
This article was contributed by Marko Myllynen and Simo Sorce
It is well understood that centralized management of user identity
information offers numerous benefits for networks of almost any size, but
Linux has traditionally lacked an "out of the box" solution in this area.
This article will examine the FreeIPA system, which is meant to provide
that solution using well-established free software components.
A workable solution for the problem of central identity management (IdM)
necessarily consists of integrated components and interfaces to store
and manage authentication, identity, and policy information as well as
allowing delegation of various tasks to different stakeholders as
appropriate. And in today's cloudy atmosphere, a plain user identity and
authentication management solution would fall flat without addressing,
among other things, the needs of secure computer-to-computer and
service-to-service communications.
While in the Windows world our cousins have long enjoyed a coherent
solution in the form of Active Directory
(AD)
to tackle these issues,
no such integrated, free solution has been available for Linux.
From a
technical perspective it has been possible to set up a centralized IdM
server on Linux by configuring multiple services and components
individually. However a comparison between the deployment of standards
like LDAP and Kerberos for IdM on Linux and Windows is illustrative:
both are ubiquitous in the Windows world while still far from the norm
in the Linux world.
If we reject the idea that this disparity is due to the
superior skills of Windows administrators compared to their Linux
counterparts, the most convincing explanation must be the lack of proper
tools on Linux. And quite often what is hard to deploy is hard to
manage; in other words, the real question is not whether something can be
done (it can) but whether it can be effectively and reliably maintained (it
depends).
Enter FreeIPA
FreeIPA (Free Identity, Policy, and Audit)
builds on
existing components and services to create a coherent and easy-to-deploy
identity management system.
Manually configuring services such as certificate management, DNS, LDAP
and Kerberos on a Linux server (which represent only a subset of FreeIPA
functionality) would be a significant task even for a skilled administrator,
especially considering that, in the case of IdM, securing and tuning the
services according to best practices is a necessity. And the follow-up
task of making all this work fault tolerant does not exactly sound like a
pleasure cruise
either. However, with FreeIPA all this can be achieved in a matter of
minutes by answering a few simple questions (such as domain name or
administrator passwords) asked by the ipa-server-install tool, which
will then configure, secure, and integrate all the needed IdM components
and services.
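As a rough sketch of what that looks like in practice (the realm, domain, and passwords below are placeholders, and the exact options vary between FreeIPA versions), a basic server deployment boils down to:

```shell
# Install the server packages (Fedora/RHEL package names).
yum install ipa-server bind bind-dyndb-ldap

# Run the installer; any options omitted here are prompted for
# interactively.  --setup-dns also configures the integrated BIND.
ipa-server-install \
    --realm=EXAMPLE.COM \
    --domain=example.com \
    --ds-password=directory-manager-password \
    --admin-password=admin-password \
    --setup-dns
```

The installer then configures and secures the directory server, the Kerberos KDC, the certificate authority, and (optionally) DNS in a single pass.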
In addition to this server configuration capability,
FreeIPA provides a web UI and a unified command-line tool
which can be used to manage data and services. For FreeIPA clients, a
configuration tool, called ipa-client-install, is provided which
will enroll a Linux system into the IPA domain and enable services like
SSSD (although using traditional
client-side components to a certain
extent is also possible) with the needed certificates and Kerberos keys
to enable secure client-to-server communications.
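A hypothetical client enrollment, with hostnames standing in for a real deployment, might look like:

```shell
# Enroll this machine in the IPA domain; the tool prompts for the
# credentials of a user allowed to add hosts (e.g. admin).
# --mkhomedir creates home directories on first login.
ipa-client-install \
    --domain=example.com \
    --server=ipa.example.com \
    --mkhomedir
```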
Features and use cases
FreeIPA does not try to reinvent the wheel when providing IdM features;
instead, it adds integration and functionality between
production-hardened services like the MIT Kerberos, 389 LDAP Directory,
Certificate System, Apache, BIND DNS, NTPD, and certain Samba
components.
The use of Kerberos
for authentication and LDAP
for
account and information management should be unsurprising; these
standards are very widely established so it makes perfect sense to put
them at the heart of FreeIPA. While the standards themselves are in wide
use already, details often differ when deployment is done manually by
different administrators. This is where FreeIPA comes to the rescue by
providing predefined configurations, freeing up administrators
to concentrate on higher-level aspects of IdM and also providing
consistency across deployments. Together with SSSD, IPA also easily
allows using LDAP for host-based
access control (HBAC),
SSH host key management, and sudo
rules.
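As an illustration of that central management, HBAC and sudo rules can be created with the unified ipa command-line tool; the rule, group, and host-group names here are invented for the example:

```shell
# Authenticate as an IPA administrator first.
kinit admin

# Allow members of the "sysadmins" group to log in via sshd
# on hosts in the "webservers" host group.
ipa hbacrule-add ssh_to_web
ipa hbacrule-add-user ssh_to_web --groups=sysadmins
ipa hbacrule-add-host ssh_to_web --hostgroups=webservers
ipa hbacrule-add-service ssh_to_web --hbacsvcs=sshd

# A centrally managed sudo rule for the same group.
ipa sudorule-add web_admin --cmdcat=all
ipa sudorule-add-user web_admin --groups=sysadmins
ipa sudorule-add-host web_admin --hostgroups=webservers
```

Enrolled clients running SSSD then pick up and enforce these rules without any per-host configuration.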
Using Kerberos authentication with services like Apache, CIFS file
shares, and SSH allows single sign-on (SSO) for users and provides
strong security in the form of mutual authentication.
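Once a user holds a ticket-granting ticket, Kerberized services can be reached without further password prompts; a sketch, with the user and hostnames as placeholders:

```shell
# Obtain a ticket-granting ticket for the user.
kinit alice@EXAMPLE.COM

# SSH to a Kerberized host using GSSAPI; no password prompt.
ssh -o GSSAPIAuthentication=yes web.example.com

# A Kerberos-authenticated HTTP request against a Kerberized Apache.
curl --negotiate -u : https://intranet.example.com/
```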
On the IPA server side, the Dogtag
Certificate System is used to manage certificates,
including certificate issuance and revocation. On the client side,
certmonger can be used
to track and
renew client certificates. With these two components as part of a
FreeIPA deployment, certificate management becomes a lot easier than
running homemade scripts and manually transferring the certificate
files around, usually in haste after getting complaints that a
certificate has expired and is blocking a production system. This should
also make users, at least in an ideal world, less likely to blindly
ignore certificate-related warnings when they become a very rare
occurrence. With certificates and Kerberos
principals for servers and
services in place, FreeIPA enables reliable service-to-service and
computer-to-computer communications.
DNS integration is a good example of the flexibility administrators
have when deploying FreeIPA. BIND,
configured with the bind-dyndb-ldap plugin, can optionally be set up as
the domain DNS server during deployment, but whether it makes sense to
use it for controlling a delegated DNS domain, or to take control of the
entire DNS infrastructure, depends on the environment. The
FreeIPA-managed DNS setup automatically provides SRV records for
autodiscovery, and IPA clients can also be configured to update their
current IP addresses using GSS-TSIG-secured DNS updates.
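The autodiscovery records that a FreeIPA-managed zone publishes look roughly like the following zone-file fragment (the example.com zone and the ipa host name are hypothetical):

```
; SRV records for client autodiscovery in a FreeIPA-managed zone
_ldap._tcp.example.com.      IN SRV  0 100 389 ipa.example.com.
_kerberos._tcp.example.com.  IN SRV  0 100  88 ipa.example.com.
_kerberos._udp.example.com.  IN SRV  0 100  88 ipa.example.com.
; clients can also discover the Kerberos realm name itself
_kerberos.example.com.       IN TXT  "EXAMPLE.COM"
```

With records like these in place, a client needs only its DNS domain to locate the LDAP and Kerberos servers, which is what makes near-zero-configuration enrollment possible.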
In addition to integrating components on a FreeIPA server, with the
recently released FreeIPA version 3 it is now also possible to integrate
FreeIPA itself with an existing Active Directory-based IdM
infrastructure by using the new IPA-AD trust
feature. This means that once a
trust between FreeIPA and AD domains has been established by
administrators, users from the trusted AD domain are allowed SSO- and
password-based access to services in the FreeIPA domain. And this of
course works the other way around: FreeIPA users are able to access
services in the Windows domain with their Kerberos credentials obtained
from the FreeIPA domain. At this point, the platform of any given
service becomes irrelevant to users, as any service is seamlessly
accessible, considerably lowering the barriers to Linux and Windows
integration.
Another notable benefit is that administrators will be able to enroll
their Linux systems into their FreeIPA domain instead of joining them
directly to Microsoft AD — something that is known to cause slight
organizational challenges every now and then. Naturally, though,
operating-system-specific characteristics provided by FreeIPA and AD,
such as SELinux policies and Windows group policies (GPOs), apply only
to their respective client systems.
Using FreeIPA
After the initial installation, it is possible to use both the web UI and
command-line interface for administration. An experienced administrator
might prefer using the command-line approach but the browser-based web
UI makes delegating certain tasks — such as user and group creation and
management — to less seasoned operators feasible. Both interfaces utilize
the same internal framework so, apart from a few seldom-used tasks
provided only by the command-line interface, both interfaces can be used
to achieve the same results.
Depending on a single server for IdM in an entire organization would, of
course, be asking for serious trouble. Although the offline caching
features provided by SSSD mitigate this risk, the
ipa-replica-install command can be used to easily set up IdM
server replicas as appropriate for a given environment. The replication
topology can also be adjusted later to allow for optimized
configurations when multiple geographical locations are involved.
Although the full benefits of FreeIPA are available only when using SSSD on
clients, tools are available to ease migration from existing solutions
like NIS. One plugin serves data from the LDAP database over the NIS
protocol; a compatibility plugin provides the same LDAP data using the
older RFC2307 schema for those older LDAP clients that cannot use the
RFC2307bis extensions. The compatibility plugin also provides netgroup
maps built from FreeIPA's internal grouping model. So the rather typical
use case of NIS managing users, netgroups, and automounter maps can be
migrated to FreeIPA-controlled domains on a system-by-system basis as
feasible.
Conclusions and Future
FreeIPA offers an integrated solution built on proven components for
centralized identity management. It provides a wide range of features
and also allows for Windows domain integration in mixed environments.
The approach taken by FreeIPA — integrating existing, proven components
and greatly facilitating setup and management — makes FreeIPA an
appealing IdM solution for small and larger on-site and cloud-based
environments alike. The full server and client packaging is already
available for distributions like Fedora and included in RHEL 6. Client
packages are available in varying states of maturity for Ubuntu, Debian,
and Arch Linux, with the server side expected to follow a bit later.
What the future holds for FreeIPA is, of course, open to user needs and
community feedback. The 'A' part (audit) of IPA is currently not being
actively worked on but it might be another case of integrating a proven
component into FreeIPA. Other notable areas of future work include, for
example, DHCP integration and support for two-factor authentication with
one-time passwords, smart cards, and user certificates.
Compared to manually configuring a large number of individual
components, FreeIPA already offers many benefits for administrators and
users. As computing environments keep growing in scale, the need for a
centralized IdM solution becomes ever more important, and FreeIPA is
being actively developed to allow Linux administrators to scale with
their ever-increasing responsibilities.
Comments (13 posted)
Brief items
I’ve learned that there is a “website intelligence” network that
tracks form submissions across their customer network. So, if a
visitor fills out a form on Site A with their name and email, Site
B knows their name and email too as soon as they land on the site.
—
Darren
Nix
Crucially, vulnerability information has a higher market value if
it is withheld from the maker of the vulnerable product. If the
maker finds out, they might close the hole and render the
information worthless. So the market in vulnerabilities rewards
researchers for making sure that the problems they discover are not
fixed–exactly the opposite of the traditional view in the field.
Policymakers should be taking a serious look at this market and
thinking about its implications. Do we want to foster an atmosphere
where researchers turn away from disclosure, and vulnerability
information is withheld from those who can fix problems? Do we want
to increase incentives for finding vulnerabilities that won’t be
fixed? Do we think we can keep this market from connecting bad guys
with the information they want to exploit?
—
Ed Felten
My whole life is on Google. My money, my history, my photos, my
memories, my books, my identity, my relationships. Even a simple
movement or administrative access requires my Google account.
And, starting tonight, trying to connect brings me a message: "Your
account has been disabled."
—
Lionel Dricot
Comments (20 posted)
NCSU Professor Xuxian Jiang has posted
an assessment of
the application verification service featured in the Android 4.2
release. "
However, based on our evaluation results, we feel this
service is still nascent and there exists room for improvement.
Specifically, our study indicates that the app verification service mainly
uses an app's SHA1 value and the package name to determine whether it is
dangerous or potentially dangerous. This mechanism is fragile and can be
easily bypassed. It is already known that attackers can change with ease
the checksums of existing malware (e.g., by repackaging or mutating
it)."
Comments (none posted)
New vulnerabilities
bind9: denial of service
| Package(s): | bind9 |
CVE #(s): | CVE-2012-5688
|
| Created: | December 6, 2012 |
Updated: | December 31, 2012 |
| Description: |
From the Ubuntu advisory:
It was discovered that Bind incorrectly handled certain crafted queries
when DNS64 was enabled. A remote attacker could use this flaw to cause Bind
to crash, resulting in a denial of service. |
| Alerts: |
|
Comments (none posted)
bogofilter: code execution
| Package(s): | bogofilter |
CVE #(s): | CVE-2012-5468
|
| Created: | December 12, 2012 |
Updated: | December 21, 2012 |
| Description: |
From the Debian advisory:
A heap-based buffer overflow was discovered in bogofilter, a software
package for classifying mail messages as spam or non-spam. Crafted
mail messages with invalid base64 data could lead to heap corruption
and, potentially, arbitrary code execution. |
| Alerts: |
|
Comments (none posted)
chromium: multiple vulnerabilities
| Package(s): | Chromium |
CVE #(s): | CVE-2012-5130
CVE-2012-5131
CVE-2012-5132
CVE-2012-5133
CVE-2012-5134
CVE-2012-5135
CVE-2012-5136
CVE-2012-5137
CVE-2012-5138
|
| Created: | December 12, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the openSUSE advisory:
Chromium was updated to 25.0.1343
* Security Fixes (bnc#791234 and bnc#792154):
- CVE-2012-5131: Corrupt rendering in the Apple OSX driver for Intel GPUs
- CVE-2012-5133: Use-after-free in SVG filters
- CVE-2012-5130: Out-of-bounds read in Skia
- CVE-2012-5132: Browser crash with chunked encoding
- CVE-2012-5134: Buffer underflow in libxml
- CVE-2012-5135: Use-after-free with printing
- CVE-2012-5136: Bad cast in input element handling
- CVE-2012-5138: Incorrect file path handling
- CVE-2012-5137: Use-after-free in media source handling
|
| Alerts: |
|
Comments (none posted)
cups: privilege escalation
| Package(s): | cups, cupsys |
CVE #(s): | CVE-2012-5519
|
| Created: | December 6, 2012 |
Updated: | March 11, 2013 |
| Description: |
From the Ubuntu advisory:
It was discovered that users in the lpadmin group could modify certain CUPS
configuration options to escalate privileges. An attacker could use this to
potentially gain root privileges. |
| Alerts: |
|
Comments (none posted)
gimp: code execution
| Package(s): | gimp |
CVE #(s): | CVE-2012-5576
|
| Created: | December 7, 2012 |
Updated: | February 21, 2013 |
| Description: |
From the Ubuntu advisory:
It was discovered that GIMP incorrectly handled malformed XWD files. If a
user were tricked into opening a specially crafted XWD file, an attacker
could cause GIMP to crash, or possibly execute arbitrary code with the
user's privileges.
|
| Alerts: |
|
Comments (none posted)
gnome-system-log: privilege escalation
| Package(s): | gnome-system-log |
CVE #(s): | CVE-2012-5535
|
| Created: | December 10, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the Red Hat bugzilla:
gnome-system-log-3.6.0-1.fc18 is set up so that
$ gnome-system-log
executes "logview" as root through pkexec, only asking for the invoking user's password (because the org.gnome.logview.config.date.pkexec.run (sic) action has default policy auth_self_keep).
Running an X11 application as root in a session of a completely unprivileged user is risky enough in itself; however logview also allows (via the "wheel" button/Open) opening any file on the system, including /etc/shadow. This is at least a confidentiality violation; reading various authentication cookies or ssh private keys might even allow this to be amplified into a privilege escalation. |
| Alerts: |
|
Comments (none posted)
horde4-imp: cross-site scripting
| Package(s): | horde4-imp |
CVE #(s): | CVE-2012-5565
|
| Created: | December 7, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the openSUSE advisory:
This version update to version 5.0.24 addresses
CVE-2012-5565 (bnc#791179) to fix XSS vulnerabilities on
the compose page (traditional view), the contacts popup
window, and with certain IMAP mailbox names. |
| Alerts: |
|
Comments (none posted)
horde4-kronolith: cross-site scripting
| Package(s): | horde4-kronolith |
CVE #(s): | CVE-2012-5566
CVE-2012-5567
|
| Created: | December 7, 2012 |
Updated: | January 23, 2013 |
| Description: |
From the openSUSE advisory:
This version update to version 3.0.18 addresses bnc#791184:
Two sets (3.0.17 and 3.0.18) of XSS flaws |
| Alerts: |
|
Comments (none posted)
kernel: firewall bypass
| Package(s): | kernel |
CVE #(s): | CVE-2012-4444
|
| Created: | December 11, 2012 |
Updated: | December 19, 2012 |
| Description: |
From the Ubuntu advisory:
Zhang Zuotao discovered a bug in the Linux kernel's handling of overlapping
fragments in ipv6. A remote attacker could exploit this flaw to bypass
firewalls and initiate new network connections that should have been blocked
by the firewall. |
| Alerts: |
|
Comments (none posted)
libtiff: code execution
| Package(s): | tiff |
CVE #(s): | CVE-2012-5581
|
| Created: | December 6, 2012 |
Updated: | December 31, 2012 |
| Description: |
From the Ubuntu advisory:
It was discovered that LibTIFF incorrectly handled certain malformed
images using the DOTRANGE tag. If a user or automated system were
tricked into opening a specially crafted TIFF image, a remote attacker
could crash the application, leading to a denial of service, or possibly
execute arbitrary code with user privileges. |
| Alerts: |
|
Comments (none posted)
mc: command execution
| Package(s): | mc |
CVE #(s): | CVE-2012-4463
|
| Created: | December 7, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the CVE entry:
Midnight Commander (mc) 4.8.5 does not properly handle the (1) MC_EXT_SELECTED or (2) MC_EXT_ONLYTAGGED environment variables when multiple files are selected, which allows user-assisted remote attackers to execute arbitrary commands via a crafted file name.
|
| Alerts: |
|
Comments (none posted)
openshift-console: code execution
| Package(s): | openshift-console |
CVE #(s): | CVE-2012-5622
|
| Created: | December 11, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the Red Hat advisory:
It was found that the OpenShift Management Console did not protect against
Cross-Site Request Forgery (CSRF) attacks. If a remote attacker could trick
a user, who was logged into the OpenShift Management Console, into visiting
an attacker controlled web page, the attacker could make changes to
applications hosted within OpenShift Enterprise with the privileges of the
victim which may lead to arbitrary code execution in the OpenShift
Enterprise hosted applications. |
| Alerts: |
|
Comments (none posted)
openstack-keystone: file permissions flaw
| Package(s): | openstack-keystone |
CVE #(s): | CVE-2012-5483
|
| Created: | December 11, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the Red Hat advisory:
When access to Amazon Elastic Compute Cloud (Amazon EC2) was configured,
a file permissions flaw in Keystone allowed a local attacker to view the
administrative access and secret values used for authenticating requests to
Amazon EC2 services. An attacker could use this flaw to access Amazon EC2
and enable, disable, and modify services and settings. |
| Alerts: |
|
Comments (none posted)
php-symfony2-HttpFoundation: multiple vulnerabilities
| Package(s): | php-symfony2-HttpFoundation |
CVE #(s): | |
| Created: | December 10, 2012 |
Updated: | December 12, 2012 |
| Description: |
Symfony v2.1.4 fixes multiple bugs, some of which could be security issues. See the symfony changelog for the details.
Version 2.0.19 also fixes lots of bugs. See this changelog for details. |
| Alerts: |
|
Comments (none posted)
php-symfony-symfony: information disclosure
| Package(s): | php-symfony-symfony |
CVE #(s): | CVE-2012-5574
|
| Created: | December 6, 2012 |
Updated: | December 12, 2012 |
| Description: |
From the Red Hat bugzilla:
An information disclosure flaw was found in the way Symfony, a open-source PHP web framework, sanitized certain HTTP POST request values. A remote attacker could use this flaw to obtain (unauthorized) read access to arbitrary system files, readable with the privileges of the web server process. |
| Alerts: |
|
Comments (none posted)
plexus-cipher: insufficiently random salt
| Package(s): | plexus-cipher |
CVE #(s): | |
| Created: | December 6, 2012 |
Updated: | December 12, 2012 |
| Description: |
getSalt() falls back to Random (seeded by the current time) instead of SecureRandom.
These bugs just decrease the randomness of the salt/IV, so they may not actually result in an exploitable security vulnerability. But that depends on how this class is used.
See the Red Hat bugzilla for details. |
| Alerts: |
|
Comments (none posted)
tor: denial of service
| Package(s): | tor |
CVE #(s): | CVE-2012-5573
|
| Created: | December 7, 2012 |
Updated: | March 25, 2013 |
| Description: |
From the openSUSE advisory:
Tear down the circuit when receiving an unexpected SENDME
cell. Prevents circumvention of the network's flow
control, exhaustion of network resources and possible
denial-of-service attacks on entry nodes |
| Alerts: |
|
Comments (none posted)
xen: multiple vulnerabilities
| Package(s): | Xen |
CVE #(s): | CVE-2012-5510
CVE-2012-5511
CVE-2012-5512
CVE-2012-5514
CVE-2012-5515
|
| Created: | December 6, 2012 |
Updated: | December 24, 2012 |
| Description: |
From the SUSE advisory:
- CVE-2012-5510: Grant table version switch list
corruption vulnerability (XSA-26)
- CVE-2012-5511: Several HVM operations do not validate
the range of their inputs (XSA-27)
- CVE-2012-5512: HVMOP_get_mem_access crash /
HVMOP_set_mem_access information leak (XSA-28)
- CVE-2012-5514: Missing unlock in
guest_physmap_mark_populate_on_demand() (XSA-30)
- CVE-2012-5515: Several memory hypercall operations
allow invalid extent order values (XSA-31)
|
| Alerts: |
|
Comments (none posted)
wireshark: multiple vulnerabilities
| Package(s): | wireshark |
CVE #(s): | CVE-2012-5592
CVE-2012-5593
CVE-2012-5594
CVE-2012-5595
CVE-2012-5596
CVE-2012-5597
CVE-2012-5598
CVE-2012-5599
CVE-2012-5600
CVE-2012-5601
CVE-2012-5602
|
| Created: | December 10, 2012 |
Updated: | January 23, 2013 |
| Description: |
From the openSUSE advisory:
Wireshark security update to 1.8.4:
https://www.wireshark.org/docs/relnotes/wireshark-1.8.4.html
http://seclists.org/oss-sec/2012/q4/378
CVE-2012-5592 Wireshark #1 pcap-ng hostname disclosure
(wnpa-sec-2012-30)
CVE-2012-5593 Wireshark #2 DoS (infinite loop) in the USB
dissector (wnpa-sec-2012-31)
CVE-2012-5594 Wireshark #3 DoS (infinite loop) in the sFlow
dissector (wnpa-sec-2012-32)
CVE-2012-5595 Wireshark #4 DoS (infinite loop) in the SCTP
dissector (wnpa-sec-2012-33)
CVE-2012-5596 Wireshark #5 DoS (infinite loop) in the EIGRP
dissector (wnpa-sec-2012-34)
CVE-2012-5597 Wireshark #6 DoS (crash) in the ISAKMP
dissector (wnpa-sec-2012-35)
CVE-2012-5598 Wireshark #7 DoS (infinite loop) in the iSCSI
dissector (wnpa-sec-2012-36)
CVE-2012-5599 Wireshark #8 DoS (infinite loop) in the WTP
dissector (wnpa-sec-2012-37)
CVE-2012-5600 Wireshark #9 DoS (infinite loop) in the RTCP
dissector (wnpa-sec-2012-38)
CVE-2012-5601 Wireshark #10 DoS (infinite loop) in the
3GPP2 A11 dissector (wnpa-sec-2012-39)
CVE-2012-5602 Wireshark #11 DoS (infinite loop) in the
ICMPv6 dissector (wnpa-sec-2012-40)
|
| Alerts: |
|
Comments (none posted)
Page editor: Michael Kerrisk
Kernel development
Brief items
The 3.7 kernel is out,
released by Linus
on December 10.
"
Anyway, it's been a somewhat drawn out release despite the 3.7 merge
window having otherwise appeared pretty straightforward, and none of
the rc's were all that big either. But we're done, and this means that
the merge window will close on Christmas eve." Of course, "drawn
out" is a relative term; at 72 days, this cycle is only a few days
above average in length.
Headline features in this kernel include
64-bit
ARM support, improved security with
supervisor-mode access prevention,
SMB 2.1 support, server-side
TCP fast
open support,
signed kernel modules,
and more. See
the
KernelNewbies 3.7 page for details.
Stable updates:
3.0.55
and 3.4.22, containing a build error fix,
were released on December 5. The rather larger 3.2.35 update was released on
December 7.
3.0.56,
3.4.23 and
3.6.10 were released on December 10.
No stable updates are in the review process as of this writing.
Comments (none posted)
Modern network hardware has often sprouted various "offload"
engines, unfortunately now often enabled by default, which tend to
do more damage than good except for extreme benchmarking fanatics,
often primarily on big server machines in data centers. Start by
turning them off. We'll write more on this topic soon. The
implementors of this "smart" hardware are less "smart" than they
think they are.
—
The
Bufferbloat project on CoDel benchmarking best practices
When we enter the kernel mode, we start with saving CPU state.
Usually (and you are going to hate that word well before you read
to the end) it's stored in struct pt_regs, but it might be more
complicated. For our purposes it's better to think of it as
abstract saved state, leaving aside the question of how it's
represented.
—
Al Viro teaches a class on signal
handling
Comments (3 posted)
If (maintainer thinks their patch is right) {
patch doesn't need review
} else {
/* maintainer thinks the patch is wrong. */
/* XXX: why would you think your own patch is wrong? */
patch needs review
}
—
Dave Chinner
Review is part of the way we work as a community and we should
figure out how to fix our review process so that we can have
meaningful results from the review or we lose confidence in the
process and it makes it much harder to get reviewers to spend time
reviewing when their reviews are ultimately ignored.
—
Ric Wheeler
Anybody who claims that our "process" requires that things like
that go on the mailing list and pass long reviews and discussions
IS JUST LYING.
Because it's not true. We discuss big features, and we want code
review, yes, but the fact is, most small obvious patches do *not*
get reviewed, they just get fixed. You all know that, why the hell
are you suddenly claiming that this is so magically different?
—
Linus Torvalds
This is why this discussion reminds me so much of the wakelocks
discussion, and why I've made the same decision the Android folks
made, except they wasted far more time and got far more frustrated
--- I'll just keep the damned thing as a out-of-tree patch, until
there are enough other people willing to say that they need and are
using this patch because their workloads and use cases need it. It
will save me a whole lot of time.
—
Ted Ts'o
Comments (none posted)
Kernel development news
By Jonathan Corbet
December 12, 2012
The 3.8 merge window looks to be an interesting time. In theory, it closes
just before the Christmas holiday, though Linus has threatened to start his
celebrations early. Despite the possibly shortened window, there are,
according to linux-next maintainer Stephen
Rothwell, "
more commits in linux-next than ever before."
So expect to see a lot of changes flowing into the mainline in a relatively
short period of time.
As of this writing, some 3800 of those changes have been merged by
Linus. The most significant user-visible changes include:
- The cpuidle subsystem is now able to
associate different drivers with each CPU. This capability is needed
to support asymmetric architectures like big.LITTLE.
- Linux running as a Microsoft Hyper-V guest now supports memory-use
reduction via the Hyper-V balloon driver.
- Applications accessing huge pages via the mmap() or SYSV IPC
interfaces can now specify which page size they want.
- The x86 architecture code can, finally, support hotplugging of the
initial boot CPU ("CPU0").
- On the other hand, as discussed at the
2012 Kernel Summit, support for the venerable 386 architecture has been
removed from the kernel. Peter Anvin informed Linus of an important
loss of functionality from this change: "Unfortunately there's a
nostalgic cost: your old original 386 DX33 system from early 1991
won't be able to boot modern Linux kernels anymore." Linus was
unmoved, though, and merged the change.
- The XFS filesystem has gained a new verification mechanism that can
detect corrupted data read from the storage device.
- New hardware support includes:
- Processors and systems:
Broadcom BCM281XX SoCs,
Allwinner A1X SoCs,
USI Topkick boards,
ZyXEL NSA-310 boards,
MPL CEC4 boards, and
Samsung EXYNOS5440 SoCs.
Support for SH-Mobile SH7367 and SH7377 CPUs has been removed.
Also removed is support for the
Intel
PXA2xx/PXA3xx Marvell PXA95x architecture
on the assumption that nobody will miss it; anybody who disagrees
may want to speak up in the near future.
- Memory-technology devices:
Wondermedia SD/MMC host controllers, and
Realtek PCI-E SD/MMC and Memstick card interfaces.
- Miscellaneous:
Texas Instruments ADS7830 analog-to-digital converters (ADCs),
Texas Instruments ADC081C021/027 ADCs,
Dialog Semiconductor DA9055 ADCs,
Analog Device AD54xx digital-to-analog converters,
ST Microelectronics SPEAr PLGPIO controllers,
Dialog Semiconductor DA9055 GPIO controllers,
Cirrus Logic CLPS711x/EP721x/EP731x-based GPIO controllers,
Technologic Systems TS-5500 digital I/O controllers,
Exar XR17V35x multi-port PCIe UARTs,
ARC (Synopsys) UARTs,
SystemBase PCI Multiport UARTs,
Commtech Fastcom Async-335 and Fastcom Async-PCIe cards,
ACPI enumerated SDHCI controllers,
Firewire-attached TTY devices,
Analog devices ADIS16136 gyroscopes, and
Analog Devices ADIS16480 inertial measurement units.
- Thermal: The kernel has a new thermal governor subsystem
capable of responding when the system gets too hot. A driver has
been added for ST-Ericsson DB8500 thermal regulators.
- USB:
Renesas R-Car USB phys.
- Staging graduations: the IndustryPack bus driver,
Maxim max1363 ADC driver,
Analog Devices AD7793 ADC driver, and
Analog Devices AD7298 ADC driver
have moved out of the staging tree. The RealTek PCI-E card reader
driver has been removed from staging since that functionality is
now provided by a separate mainline driver.
Changes visible to kernel developers include:
- The __devinit, __devinitdata,
__devinitconst,
__devexit, and __devexit_p() macros are on their way
out; many drivers have been fixed to stop using them. In the future,
the CONFIG_HOTPLUG option will no longer exist, so
initialization and finalization code needs to be kept around forever.
- The power management quality-of-service subsystem can now support
device-specific QOS flags. Two flags have been defined in 3.8:
PM_QOS_FLAG_NO_POWER_OFF and
PM_QOS_FLAG_REMOTE_WAKEUP.
- The devfreq subsystem now supports devices that can be suspended (or
placed into an idle state) independently of the rest of the system.
- The UIO driver subsystem has a new generic platform driver allowing
UIO devices to access memory allocated by CMA or the IOMMU subsystem.
- The per-entity load-tracking patch set has been merged; this code
allows the scheduler to better understand which processes (and control
groups) are putting load on the system, thus improving load balancing
and related decisions.
- The callback-free RCU implementation
has been merged, allowing the offloading of some read-copy-update
overhead from a subset of CPUs in the system.
The 3.8 merge window has just begun; there are a lot of subsystem trees yet
to be pulled into the mainline. LWN will continue to follow the kernel
repository as Linus pulls in more patches and establishes the feature set
for the 3.8 release; stay tuned.
Comments (9 posted)
By Michael Kerrisk
December 13, 2012
The results of the user namespaces work on Linux have been a long time in
coming, probably because they are the most complex of the various namespaces that have been
added to the kernel so far. The first pieces of the implementation started
appearing when Linux 2.6.23 (released in late 2007) added the
CLONE_NEWUSER flag for the clone() system call. By
Linux 2.6.29, that flag also became meaningful for the
unshare() system call. However, until now, many of the pieces
necessary for a complete implementation have remained absent.
We last looked at user
namespaces back in April, when Eric Biederman was working to push a raft of
patches into the kernel with the goal of bringing the implementation closer
to completion. Eric is now engaged in pushing further patches into the
kernel with the goal of having a more or less complete implementation of
user namespaces in Linux 3.8. Thus, it seems to be time to have another look
at this work. First, however, a brief recap of user namespaces is probably
in order.
User namespaces allow per-namespace mappings of user and group IDs. In
the context of containers, this means that
users and groups may have privileges for certain operations inside the
container without having those privileges outside the container. (In other
words, a process's set of capabilities for operations inside a user
namespace can be quite different from its set of capabilities in the
host system.) One of the specific goals of user namespaces is to allow a
process to have root privileges for operations inside the container, while
at the same time being a normal unprivileged process on the wider system
hosting the container.
To support this behavior, each of a process's user IDs has, in effect,
two values: one inside the container and another outside the
container. Similar remarks hold true for group IDs. This duality is
accomplished by maintaining a per-user-namespace mapping of user IDs: each
user namespace has a table that maps user IDs on the host system to
corresponding user IDs in the namespace. This mapping is set and viewed by
writing and reading the /proc/PID/uid_map
pseudo-file, where PID is the process ID of one of the processes in
the user namespace. Thus, for example, user ID 1000 on the host system
might be mapped to user ID 0 inside a namespace; a process with a user ID
of 1000 would thus be a normal user on the host system, but would have root
privileges inside the namespace. If no mapping is provided for a particular
user ID on the host system, then, within the namespace, the user ID is
mapped to the value provided in the file
/proc/sys/kernel/overflowuid (the default value in this file is
65534). Our earlier article went into more details of the
implementation.
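The mapping just described can be modeled as a small translation function: each uid_map line carries an inside-namespace start ID, an outside (host) start ID, and a range length, with unmapped host IDs falling back to the overflow value. A minimal sketch of those semantics (a model only, not the kernel's implementation):

```python
OVERFLOW_UID = 65534  # default value in /proc/sys/kernel/overflowuid

def host_to_ns(host_uid, uid_map):
    """Translate a host (outside) UID into its in-namespace value.

    uid_map entries mirror /proc/PID/uid_map lines:
    (inside_start, outside_start, count).
    """
    for inside, outside, count in uid_map:
        if outside <= host_uid < outside + count:
            return inside + (host_uid - outside)
    return OVERFLOW_UID  # unmapped host IDs appear as the overflow UID

# The article's example: host UID 1000 is root inside the namespace.
print(host_to_ns(1000, [(0, 1000, 1)]))  # 0
print(host_to_ns(1234, [(0, 1000, 1)]))  # 65534 (unmapped)
```

The set-user-ID behavior discussed later in the article applies this same translation to the user ID of the executable file.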
One further point worth noting is that the description given in the
previous paragraph looks at things from the perspective of a single user
namespace. However, user namespaces can be nested, with user and group ID
mappings applied at each nesting level. This means that a process might
have distinct user and group IDs in each of the nested user namespaces in
which it is a member.
Eric has assembled a number of namespace-related patch sets for
submission in the upcoming 3.8 merge window. Chief among these is the set that completes the main pieces of the
user namespace infrastructure. With the changes in this set,
unprivileged processes can now create new user namespaces (using
clone(CLONE_NEWUSER)). This is safe, says Eric, because:
Now that we have been through every permission check in the kernel
having uid == 0 and gid == 0 in your local user namespace no
longer adds any special privileges.
Even having a full set of caps in your local user namespace is safe because
capabilities are relative to your local user namespace, and do not confer
unexpected privileges.
The point that Eric is making here is that following the work
(described in our earlier article) to implement the kuid_t and
kgid_t types within the kernel, and the conversion of various
calls to capable() to its namespace analog, ns_capable(),
having a user ID of zero inside a user namespace no longer grants special
privileges outside the namespace. (capable() is the kernel
function that checks whether a process has a capability;
ns_capable() checks whether a process has a capability
inside a namespace.)
The creator of a new user namespace starts off with a full set of
permitted and effective capabilities within the namespace, regardless of
its user ID or capabilities on the host system. The creating process thus
has root privileges, for the purpose of setting up the environment inside
the namespace in preparation for the creation or the addition of other
processes inside the namespace. Among other things, this means that the
(unprivileged) creator of the user namespace (or indeed any process with
suitable capabilities inside the namespace) can in turn create all other
types of namespaces, such as network, mount, and PID namespaces (those
operations require the CAP_SYS_ADMIN capability). Because the
effect of creating those namespaces is limited to the members of the user
namespace, no damage can be done in the host system.
Other notable user-space changes in Eric's patches include extending the unshare()
system call so that it can be employed to create user namespaces, and extensions that allow a process to use the setns()
system call to enter an existing user namespace.
Looking at some of the other patches in the series gives an idea of
just how subtle some of the details are that must be dealt with in order to
create a workable implementation of user namespaces. For example, one of the patches deals with the behavior of
set-user-ID (and set-group-ID) programs. When a set-user-ID program is
executed (via the execve() system call), the effective user ID of
the process is changed to match the user ID of the executable file. When a
process inside a user namespace executes a set-user-ID program, the effect
is to change the process's effective user ID inside the namespace to
whatever value was mapped for the file user ID. Returning to the example
used above, where user ID 1000 on the host system is mapped to user ID 0
inside the namespace, if a process inside the user namespace executes a
set-user-ID program owned by user ID 1000, then the process will assume an
effective user ID of 0 (inside the namespace).
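The translation performed here can be modeled in a few lines. In this Python sketch, host_to_ns_uid() is a hypothetical helper (not a kernel interface); each map entry is the (ID-inside-ns, ID-outside-ns, length) triple found in /proc/PID/uid_map:

```python
def host_to_ns_uid(host_uid, uid_map):
    """Translate a host user ID to its value inside a user namespace.

    uid_map is a list of (inside, outside, count) tuples, mirroring the
    "ID-inside-ns ID-outside-ns length" lines of /proc/PID/uid_map.
    Returns None when the host ID has no mapping in the namespace.
    """
    for inside, outside, count in uid_map:
        if outside <= host_uid < outside + count:
            return inside + (host_uid - outside)
    return None

# The example above: host user ID 1000 appears as root (0) inside.
print(host_to_ns_uid(1000, [(0, 1000, 1)]))  # -> 0
print(host_to_ns_uid(1001, [(0, 1000, 1)]))  # -> None (unmapped)
```

The None case is exactly the situation the next paragraph considers: a file user ID with no mapping inside the namespace.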
However, what should be done if the file user ID has no mapping inside
the namespace? One possibility would be for the execve() call to
fail. However, Eric's patch implements another approach: the set-user-ID
bit is ignored in this case, so that the new program is executed, but the
process's effective user ID is left unchanged. Eric's reasoning is that
this mirrors the semantics of executing a set-user-ID program that resides
on a filesystem that was mounted with the MS_NOSUID flag. Those
semantics have been in place since Linux 2.4, so the kernel code paths
for this behavior should be well tested.
Another notable piece of work in Eric's patch set concerns the files in
the /proc/PID/ns directory. This directory
contains one file for each type of namespace of which the process is a
member (thus, for each process, there are the files ipc, mnt,
net, pid, user, and uts). These files
already serve a couple of purposes. Passing an open file descriptor for one
of these files to setns() allows a process to join an existing
namespace. Holding an open file descriptor for one of these files, or bind
mounting one of the files to some other location in the filesystem, will
keep a namespace alive even if all current members of the namespace
terminate. Among other things, the latter feature allows the piecemeal
construction of the contents of a container. With this patch in Eric's recent series, a single
/proc inode is now created per namespace, and the
/proc/PID/ns files are instead implemented as
special symbolic links that refer to that inode. The practical upshot is
that if two processes are in, say, the same user namespace, then calling
stat() on the respective
/proc/PID/ns/user files will return the same inode
numbers (in the st_ino field of the returned stat
structure). This provides a mechanism for discovering if two processes are
in the same namespace, a long-requested feature.
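Assuming the symlink/inode scheme just described, a user-space check for shared membership might look like the following sketch (in_same_namespace() is a hypothetical helper; comparing st_dev alongside st_ino guards against inode-number collisions across filesystems):

```python
import os

def in_same_namespace(pid_a, pid_b, ns="user"):
    """Return True if two processes are members of the same namespace.

    Compares the inodes behind /proc/PID/ns/<ns>; with the patch
    described above, equal (st_dev, st_ino) pairs mean one namespace.
    """
    sa = os.stat("/proc/%d/ns/%s" % (pid_a, ns))
    sb = os.stat("/proc/%d/ns/%s" % (pid_b, ns))
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# A process is trivially in the same user namespace as itself:
print(in_same_namespace(os.getpid(), os.getpid()))  # -> True
```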
This article has covered just the patch set to complete the user
namespace implementation. However, at the same time, Eric is pushing a
number of related patch sets towards the mainline, including: changes to
the networking stack so that user namespace root users can create network
namespaces; enhancements and clean-ups of the PID namespace code that,
among other things, add unshare() and setns() support for PID namespaces;
enhancements to the mount namespace code that allow user namespace root
users to call chroot() and to create and manipulate mount namespaces; and
a series of patches that add support for user namespaces to a number of
filesystems that do not yet provide that support.
It's worth emphasizing one of the points that Eric noted
in a documentation patch for the user
namespace work, and elaborated on in a private mail. Beyond the
practicalities of supporting containers, there is another significant
driving force behind the user namespaces work: to free the UNIX/Linux API
of the "handcuffs" imposed by set-user-ID and set-group-ID programs. Many
of the user-space APIs provided by the kernel are root-only simply to
prevent the possibility of accidentally or maliciously distorting the
run-time environment of privileged programs, with the effect that those
programs are confused into doing something that they were not designed to
do. By limiting the effect of root privileges to a user namespace, and
allowing unprivileged users to create user namespaces, it now becomes
possible to give non-root programs access to interesting functionality that
was formerly limited to the root user.
There have been a few Acked-by: mails sent in response to
Eric's patches, and a few small questions, but the patches have otherwise
passed largely without comment, and no one has raised objections. It seems
likely that this is because the patches have been around in one form or
another for a considerable period, and Eric has gone to considerable effort
to address objections that were raised earlier during the user namespaces
work. Thus, it seems that there's a good chance that Eric's pull request to have the patches merged in the
currently open 3.8 merge window will be successful, and that a complete
implementation of user namespaces is now very close to reality.
December 11, 2012
This article was contributed by Neil Brown
When thinking about filesystems for modern flash storage devices, as
we have recently done with
f2fs and
NILFS2, two other filesystems that are likely
to quickly spring
to mind, and be almost as quickly discarded, are JFFS2 and UBIFS.
They spring to mind because they were designed specifically to work
with flash, and are discarded because they require access to "raw
flash" whereas the flash devices we have been considering have a
"flash translation layer" (FTL) which hides some of the details of the
flash device and which needs to be accessed much like a disk drive.
This quick discarding may well not be appropriate — these are open-source
filesystems after all and are thus free to be tinkered with. If the
Apollo 13 technicians were able to link the lithium hydroxide
canisters from the command module to the CO₂ scrubber in the Lunar
module, it shouldn't be too hard for us to link a raw-flash filesystem
to an FTL-based storage chip — if it seemed like a useful thing to do.
Raw access to a flash device goes through the
"mtd" (Memory Technology
Devices) interface in Linux and, while this is a rich interface,
the vast majority of accesses from a filesystem are via three functions:
mtd_read(),
mtd_write() and mtd_erase(). The first two are easily
implemented by a block device — though you need to allow for the fact
that the mtd interface is synchronous while the block layer interface
is asynchronous — and the last can be largely ignored as an FTL
handles erasure internally. In fact, Linux provides a "block2mtd" device
which will present an arbitrary block device as an mtd device. Using this
might not be the most efficient way to run a filesystem on new
hardware, but it would at least work as a proof-of-concept.
So it seems that there could be some possibility of using one of these
filesystems, possibly with a little modification, on an FTL-based flash
device, and there could certainly be value in understanding them a
little better as, at the very least, they could have lessons to teach
us.
A common baseline
Despite their separate code bases, there is a lot of similarity
between JFFS2 and UBIFS — enough that it seems likely that the
latter was developed in part to overcome the shortcomings of the
former. One similarity is that, unlike the other filesystems we have
looked at, neither of these filesystems has a strong concept of a "basic block
size". The concept is there if you look for it, but it isn't
prominent.
One of the main uses of a block size in a filesystem is to manage free
space. Some blocks are in use, others are free. If a block is only
partially used — for example if it contains the last little bit of a
file — then the whole block is considered to be in use. For flash
filesystems, blocks are not as useful for free-space management as
this space is managed in
terms of "erase blocks," which are much larger than the basic blocks of
other filesystems, possibly as large as a few megabytes.
Another use of blocks in a filesystem is as a unit of metadata management.
For example NILFS2 manages
the ifile (inode file) as a sequence of blocks (rather than
just a sequence of inodes), while F2FS manages each directory as a set
of hash tables, each of which contains a fixed number of blocks.
JFFS2 and UBIFS don't take this approach at all. All data
is written consecutively to one or more erase blocks
with some padding to align things to four-byte boundaries, but with no
alignment so large that it could be called a block. When indexing of
data is needed, an erase-block number combined with a byte offset meets
the need, so the lack of alignment does not cause an issue there.
Both filesystems further make use of this freedom in space allocation
by compressing the data before it is written. Various compression
schemes are available including LZO and ZLIB together with some
simpler schemes like run-length encoding. Which scheme is chosen
depends on the desired trade off between space saving and execution
time. This compression can make a small flash device hold nearly
twice as much as you might expect, depending on the compressibility of
the files of course. Your author still recalls the pleasant surprise
he got when he found out how much data would fit on the JFFS2
formatted 256MB flash in the original
Openmoko Freerunner: a
reasonably complete Debian root filesystem with assorted development
tools and basic applications still left room for a modest amount of
music and some OSM map tiles.
In each case, the data and metadata of the filesystem are collected
into "nodes" which are concatenated and written out to a fresh erase
block. Each node records the type of data (inode, file, directory
name, etc), the address of the data (such as inode number), the type of
compression and a few other details.
This makes it possible to identify the contents of the
flash when mounting and when cleaning, and effectively replaces the
"segment summary" that is found in f2fs and NILFS2.
Special note should be made of the directory name nodes. While the
other filesystems we have studied store a directory much like a file,
with filenames stored at various locations in that file, these two
filesystems do not. Each entry in the directory is stored in its own
node, and these nodes do not correspond to any particular location in
a "file" — they are simply unique entries. JFFS2 and UBIFS each have
their own particular way of finding these names as we shall see, but
in neither case is the concept of a file offset part of that.
The one place where a block size is still visible in these filesystems
is in the way they chop a file up into nodes for storage. In JFFS2, a
node can be of any size up to 4KB so a log file could, for example, be
split up as one node per line. However the current implementation
always writes whole pages — to quote the in-line commentary,
"It sucks, but it's simple".
For UBIFS, data nodes must start at a 4KB-aligned
offset in the file so they are typically 4KB in size (before
compression) except when at the end of the file.
JFFS2 — the journaling flash filesystem
A traditional journaling filesystem, such as ext3 or xfs, adds a
journal to a regular filesystem. Updates are written first to the
journal and then to the main filesystem. When mounting the filesystem
after a shutdown, the journal is scanned and anything that is found is
merged into the main filesystem, thus providing crash tolerance.
JFFS2 takes a similar approach with one important difference — there
is no "regular filesystem". With JFFS2 there is only a journal, a
journal that potentially covers the entire device.
It is probably a little misleading to describe JFFS2 as "just one
journal". This is because it might lead you to think that when it
gets to the end of the journal it just starts again at the beginning.
While this was true of JFFS1, it is not for JFFS2.
Rather it might be clearer to think of each erase block as a little
journal. When one erase block is full, JFFS2 looks around for another
one to use. Meanwhile if it notices that some erase blocks are nearly
empty it will move all the active nodes out of them into a clean erase
block, and then erase and re-use those newly-cleaned erase blocks.
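That recycling scheme can be captured in a toy model. Everything below is illustrative rather than taken from the JFFS2 code (the names and the 25% liveness threshold are invented): erase blocks hold (node, live) pairs, and blocks with too few live nodes have those nodes relocated so the block can be erased and reused.

```python
def clean(blocks, threshold=0.25):
    """Toy model of JFFS2-style cleaning.

    blocks maps an erase-block number to a list of (node, live) pairs.
    Blocks whose live fraction falls below threshold have their live
    nodes copied to a fresh block; the emptied blocks become reusable.
    Returns the list of freed erase-block numbers.
    """
    fresh, freed = [], []
    for ebnum, nodes in list(blocks.items()):
        live = [n for n, is_live in nodes if is_live]
        if nodes and len(live) / len(nodes) < threshold:
            fresh.extend(live)      # relocate still-valid nodes
            freed.append(ebnum)     # this block can now be erased
            del blocks[ebnum]
    if fresh:
        blocks[max(blocks, default=-1) + 1] = [(n, True) for n in fresh]
    return freed

blocks = {0: [("a", True)] + [(c, False) for c in "bcde"],
          1: [("x", True)]}
print(clean(blocks))  # -> [0]  (block 0 was only 20% live)
```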
When a JFFS2 filesystem is mounted, all of these journals, and thus
the entire device, are scanned and every node found is incorporated
into an in-memory data structure describing the filesystem. Some
nodes might invalidate other nodes; this may happen when a file is
created and then removed: there will be a node recording the new
filename as belonging to some directory, and then another node
recording that the filename has been deleted. JFFS2 resolves all
these modifications and ends up with a data structure that describes
the filesystem as it was that last time something was written to it,
and also describes where the free space is. The structure is kept as
compact as possible and naturally does not contain any file data; instead,
it holds
only the addresses where the data should be found and so, while it
will be much smaller than the whole filesystem, it will still grow
linearly as the filesystem grows.
This need to scan the entire device at mount time and
store the skeleton of the filesystem in memory puts a limit on the
size of filesystem that JFFS2 is usable for. Some tens of megabytes,
or even a few hundred megabytes, is quite practical. Once the device
gets close to, or exceeds, a gigabyte, JFFS2 becomes quite impractical.
Even if memory for storing the tree were cheap, time to mount the
filesystem is not.
This is where UBIFS comes in. While the details are quite different,
UBIFS is a lot like JFFS2 with two additions: a tree to index all the
nodes in the filesystem, and another tree to keep track of free
space. With these two trees, UBIFS avoids both the need to scan the entire
device at mount time and the need to keep a skeleton of the
filesystem in memory at all times. This allows UBIFS to scale to much
larger filesystems — certainly many tens of gigabytes and probably more.
But before we look too closely at these trees it will serve us well to
look at some of the other details and in particular at "UBI", a layer
between the MTD flash interface layer and UBIFS. UBI uses an unsorted
collection of flash erase blocks to present a number of file system
images; UBI stands for Unsorted Block Images.
UBI — almost a Flash Translation Layer
The
documentation
for UBI explicitly states that it is not a flash translation
layer. Nonetheless it shares a lot of functionality with an FTL,
particularly wear leveling and error management. If you imagined UBI
as an FTL where the block size was the same as the size of an erase
block, you wouldn't go far wrong.
UBI uses a flash device which contains a large number of Physical
Erase Blocks (PEBs) to provide one or more virtual devices (or "volumes")
which each
consist of a smaller number of Logical Erase Blocks (LEBs),
each slightly smaller than a PEB. It maintains a mapping from LEB to
PEB and this mapping may change from time to time due to various
causes including:
-
Writing to an LEB. When an LEB is written,
the data will be written to a new, empty, PEB and the mapping
from LEB to PEB will be updated. UBI is then free to erase the old PEB at its
leisure. Normally, the first new write to an LEB will make all the data
previously there inaccessible. However, a feature is available where
the new PEB isn't committed until the write request completes. This
ensures that after a sudden power outage, the LEB will either have the
old data or the complete new data, never anything else.
-
Wear leveling. UBI keeps a header at the start of each PEB which is
rewritten immediately after the block is erased. One detail in the
header is how many times the PEB has been written and erased. When UBI notices
that the difference between the highest write count and the lowest
write count in all the PEBs gets too high (based on a compile-time
configuration parameter: MTD_UBI_WL_THRESHOLD), it will move
an LEB stored in a PEB with a low write count (which is assumed to be
stable since the PEB containing it has not been rewritten often) to one
with a high write
count. If this data continues to be as stable as it has been, this will
tend to reduce the variation among write counts and achieve wear
leveling.
-
Scrubbing. NAND flash includes an error-correcting code (ECC) for
each page (or sub-page) which can detect multiple-bit errors and correct single-bit
errors. When an error is reported while reading from a PEB, UBI will
relocate the LEB in that PEB to another PEB so as to guard against a
second bit error, which would be uncorrectable. This process happens
transparently and is referred to as "scrubbing".
The functionality described above is already an advance on the flash
support that
JFFS2 provides. JFFS2 does some wear leveling but it is not precise.
It keeps no record of write counts but, instead, decides to relocate an
erase block based on the roll of a die (or, actually, the sampling of a
random number). This probably provides some leveling of wear, but
there are no guarantees. JFFS2 also has no provision for scrubbing.
The mapping from LEB to PEB is stored spread out
over all active erase blocks in the flash device. After the PEB header
that records the write
count there is a second header which records the volume identifier and
LEB number of the data stored here. To recover this mapping at mount
time, UBI needs to read the first page or two from every PEB. While
this isn't as slow as reading every byte like JFFS2 has to, it would still
cause mount time to scale linearly with device size — or nearly
linearly as larger devices are likely to have larger erase block
sizes.
Recently this situation has improved. A new
feature known as "fastmap" made its
way into the UBI driver for Linux 3.7. Fastmap stores a recent copy
of the mapping in some erase block, together with a list of the erase
blocks (up to 256) that will be written next, known as the pool.
The mount process then needs to examine the first 64 PEBs to find a
"super block" which points to the mapping, read the mapping, and then
read the first page of each PEB in the pool to find changes to the
mapping. When the pool is close to exhaustion, a new copy of the
mapping with a new list of pool PEBs is written out.
This is clearly a little more complex, but puts a firm cap
on the mount time and so ensures scalability to much larger devices.
UBIFS — the trees
With UBIFS, all the filesystem content — inodes, data, and directory
entries — is stored in nodes in various arbitrary Logical Erase Blocks,
and the addresses of these blocks are stored in a single B-tree. This is
similar in some ways to reiserfs (originally known as "treefs") and
Btrfs, and contrasts with filesystems like f2fs, NILFS2 and ext3
where inodes, file data, and directory entries are all stored with
quite different indexing structures.
The key for lookup in this B-tree is 64 bits wide, formed from a 32-bit inode
number, a three-bit node type, and a 29-bit offset (for file data)
or hash value (for directory entries). This last field, combined with a 4KB
block size used for indexing, limits the size of the largest file to two
terabytes, probably the smallest limit in the filesystem.
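The arithmetic behind that limit is easy to check. The sketch below packs the three fields into 64 bits; the particular field order is an assumption for illustration (the real layout lives in fs/ubifs/key.h), but the widths match the description above:

```python
# Field widths of the UBIFS B-tree key as described in the text.
INO_BITS, TYPE_BITS, OFF_BITS = 32, 3, 29

def make_key(inum, node_type, offset):
    """Pack inode number, node type, and block offset/hash into 64 bits."""
    assert node_type < (1 << TYPE_BITS) and offset < (1 << OFF_BITS)
    return (inum << (TYPE_BITS + OFF_BITS)) | (node_type << OFF_BITS) | offset

# 2^29 addressable 4KB data blocks per file gives the 2TB file-size limit:
print((1 << OFF_BITS) * 4096)  # -> 2199023255552 bytes, i.e. 2TB
```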
Nodes in this B-tree are, like other nodes, stored in whichever erase
block happens to be convenient. They are also like other nodes in that they are not
sized to align with any "basic block" size. Rather the size is chosen
based on the fan-out ratio configured for the filesystem. The default
fan-out is eight, meaning that each B-tree node contains eight keys
and eight pointers to other nodes, resulting in a little under 200
bytes per node.
Using small nodes means that fewer bytes need to be written when
updating indexes. On the other hand, there are more levels in the tree so more
reading is likely to be required to find a node. The ideal trade off
will depend on the relative speeds of reads and writes. For flash
storage that serves reads a lot faster than writes — which is not
uncommon, but seemingly not universal — it is likely that this fan-out
provides a good balance. If not, it is easy to choose a different
fan-out when creating a filesystem.
New nodes in the filesystem do not get included in the indexing B-tree
immediately. Rather, their addresses are written to a journal, to
which a few LEBs are dedicated. When the filesystem is mounted, this
journal is scanned, the nodes are found, and based on the type and
other information in the node header, they are merged into the indexing
tree. This merging also happens periodically while the filesystem is
active, so that the journal can be truncated.
Those nodes that are not yet indexed are sometimes referred to as
"buds" — a term which at first can be somewhat confusing. Fortunately
the UBIFS code is sprinkled with some very good documentation so it
wasn't too hard to discover that "buds" were nodes that would soon be
"leaves" of the B-tree, but weren't yet —
quite an apt botanical joke.
Much like f2fs, UBIFS keeps several erase blocks open for writes at
the same time so that different sorts of data can be kept separate
from each other, which, among other things, can improve cleaning
performance. These open blocks are referred to as different "journal heads".
UBIFS has one "garbage collection" head where the cleaner writes nodes
that it moves — somewhat like the "COLD" sections in f2fs. There is
also a "base" head where inodes, directory entries, and other non-data
nodes are written — a bit like the "NODE" sections in f2fs.
Finally, there are one or more "data" heads
where file data is written, though the current code doesn't appear to
actually allow the "or more" aspect of the design.
The other tree that UBIFS maintains is used for keeping track of free
space or, more precisely, how many active nodes there are in each
erase block. This tree is a radix tree with a fan-out of four. So if
you write the address of a particular LEB in base four (also known as
radix-four), then each digit would correspond to one level in the tree,
and its value indicates which child to follow to get down to the next
level.
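In other words, walking the tree amounts to reading off base-four digits. A small helper, invented here purely for illustration, shows the digit extraction:

```python
def radix4_path(leb, height):
    """Base-four digits of an LEB number, most-significant first.

    Each digit selects one of the four children at the corresponding
    level of a radix-four tree of the given height.
    """
    return [(leb >> (2 * (height - 1 - level))) & 3
            for level in range(height)]

print(radix4_path(27, 4))  # 27 is 0123 in base four -> [0, 1, 2, 3]
```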
This tree is stored in a completely separate part of the device with
its own set of logical erase blocks, its own garbage collection, and
consequently its own table of LEB usage counters. This last table
must be small enough to fit in a single erase block and so imposes a
(comfortably large) limit on the filesystem size. Keeping this tree
separate seems like an odd decision, but doubtlessly simplifies the
task of keeping track of device usage. If the node that records the
usage of an LEB were to be stored in that LEB, there would be
additional complexity which this approach avoids.
A transition to FTL?
While JFFS2 clearly has limits, UBIFS seems to be much less limited.
With 32 bits to address erase blocks which, themselves, could
comfortably cover several megabytes, the addressing can scale to petabyte
devices. The B-tree indexing scheme should allow large directories
and large files to work just as well as small ones. The two terabyte
limit on individual files might one day be a limit but that still
seems a long way off. With the recent addition of fastmap for UBI,
UBIFS would seem ready to scale to the biggest flash storage we have
available. But it still requires raw flash access while a lot of flash
devices force all access to pass through a flash translation layer.
Could UBIFS still be useful on those devices?
Given that the UBI layer looks a lot like an FTL it seems reasonable
to wonder whether UBI could be modified slightly to talk to a regular
block device instead, allowing it to drive an SD card or similar. Could
this provide useful performance?
Unfortunately such a conversion would be a little bit more than an
afternoon's project. It would require:
- Changing the expectation that all I/O is synchronous. This might
be as simple as waiting immediately after submitting each request,
but it would be better if true multi-threading could be achieved.
Currently, UBIFS disables readahead because it is incompatible with
a synchronous I/O interface.
- Changing the expectation that byte-aligned reads are possible.
UBIFS currently reads from a byte-aligned offset into a buffer,
then decompresses from there. To work with the block layer it
would be better to use a larger buffer that was sector-aligned, and
then understand that the node read in would be found at an offset into that
buffer, not at the beginning.
- Changing the expectation that erased blocks read as all ones.
When mounting a filesystem, UBIFS scans various erase blocks and
assumes anything that isn't 0xFF is valid data. An
FTL-based flash store will not provide that guarantee, so UBIFS would need to
use a different mechanism to reliably detect dead data. This is
not conceptually difficult but could be quite intrusive to the
code.
- Finding some way to achieve the same effect as the atomic LEB
updates that UBI can provide. Again, a well understood problem,
but possibly intrusive to fix.
So without a weekend to spare, that approach cannot be experimented
with. Fortunately there is an alternative.
As mentioned, there already exists a "block2mtd" driver which can be
used to connect UBIFS, via UBI and mtd, to a block device. This driver
is deliberately very simple and consequently quite inefficient. For
example, it handles the mtd_erase() function by writing blocks
full of 0xFF to the device. However, it turns out that it is
only an afternoon's project to modify it to allow for credible testing.
This patch modifies the block2mtd driver to:
- handle mtd_erase() by recording the location of erased blocks in memory,
- return 0xFF for any read of an erased block, and
- not write out the PEB headers until real data is to be written to the PEB.
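The modified behavior amounts to a small state machine, which can be mimicked in memory (this toy class is purely illustrative; the real driver operates on an underlying block device):

```python
class ErasedTrackingDevice:
    """Toy model of the modified block2mtd behavior described above."""

    def __init__(self, nblocks, block_size=512):
        self.block_size = block_size
        self.blocks = {i: bytes(block_size) for i in range(nblocks)}
        self.erased = set(range(nblocks))   # start fully "erased"

    def erase(self, blkno):
        # Record the erasure in memory; nothing is written to the device.
        self.erased.add(blkno)

    def read(self, blkno):
        if blkno in self.erased:
            return b"\xff" * self.block_size  # synthesize erased contents
        return self.blocks[blkno]

    def write(self, blkno, data):
        # Real data arriving clears the erased mark.
        self.erased.discard(blkno)
        self.blocks[blkno] = data

dev = ErasedTrackingDevice(4, block_size=4)
print(dev.read(0))       # -> b'\xff\xff\xff\xff'
dev.write(0, b"abcd")
print(dev.read(0))       # -> b'abcd'
```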
The result of these changes is that the pattern of reads and, more importantly,
writes to the block device will be much the same as the pattern of
reads and writes expected from a more properly modified UBIFS. It is
clearly not useful for real usage as important information is kept in
memory, but it can provide a credible base for performance testing.
The obvious choice of what to test it against is f2fs. Having
examined the internals of both f2fs and UBIFS, we have found substantial
similarity which is hardly surprising as they have both been designed
to work with flash storage. Both write whole erase blocks at a time
where possible, both have several erase blocks "open" at once, and
both make some efforts to collect similar data into the same erase
blocks. There are of course differences though:
UBIFS probably scales better to large directories,
it can compress data being written, and
it does not currently support exporting via NFS, partly because of the
difficulty of providing a stable index for directory entries.
The compression support is probably most interesting. If the CPU is
fast enough, compression might be faster than writing to flash and
this could give UBIFS an edge in speed.
I performed some testing with f2fs and UBIFS; the latter was tested twice,
with and without the use of compression (the non-compression case is marked
below as "NC").
Just for interest's sake I've added NILFS2, ext4 and
Btrfs. None of these are particularly designed for FTL based flash, though
NILFS2 can align writes with the erase blocks and so might perform well.
The results of the last two should be treated very
cautiously. No effort was made to tune them to the device used, and
all the results are based on writing to an empty device. For f2fs,
UBIFS, and NILFS2 we know that they can "clean" the device so they always write to
unused erase blocks. ext4 and Btrfs do not do the same cleaning so it
is quite possible that the performance will degrade on a more "aged"
filesystem. So the real long term values for these filesystems
might be better, and might be worse, than what we see here.
For testing I used a new class 10 16GB microSD card, which claims 10MB/s
throughput and seems to provide close to that for sequential IO. According
to the flashbench
tool, the card appears to have an 8MB erase block size; five erase blocks
can be open at a time, and only the first erase block is optimized for a
PC-style file attribute table. The kernel used
was 3.6.6 for openSUSE with the above-mentioned patch and the
v3 release of f2fs.
The tests performed were very simple. To measure small file performance,
a tar archive of the Linux kernel (v3.7-rc6) was unpacked ten times and then —
after unmounting and remounting — the files were read back in again
and "du" and "rm -r" were timed to check metadata performance. The
"rm -r" test was performed with a warm cache, immediately after the "du -a", which was performed on a cold cache.
The average times in seconds for these operations were:
| | ubifs | ubifs — NC | f2fs | NILFS2 | ext4 | Btrfs |
| Write kernel | 72.4 | 139.9 | 118.4 | 140.0 | 135.5 | 93.6 |
| Read kernel | 72.5 | 129.6 | 175.7 | 95.6 | 108.8 | 121.0 |
| du -s | 9.9 | 8.7 | 48.6 | 4.4 | 4.4 | 13.8 |
| rm -r | 0.48 | 0.45 | 0.36 | 11.0 | 4.9 | 33.6 |
Some observations:
- UBIFS, with compression, is clearly the winner at reading and writing
small files. This test was run on an Intel Core i7 processor running at
1GHz; on
a slower processor, the effect might not be as big. Without
compression, UBIFS is nearly the slowest, which is a little surprising,
but that could be due to the multiple levels that data passes through
(UBI, MTD, block2mtd).
- f2fs is surprisingly poor at simple metadata access (du -s). It is
unlikely that this is due to the format chosen for the filesystem — the
indirection of the Node Address Table is the only aspect of the design that
could possibly cause this slowdown and it could explain at most a factor of two.
This poor performance is probably due to some simple implementation issue. The number is
stable across the ten runs, so it isn't just a fluke.
- Btrfs is surprisingly fast at writing. The kernel source tree is
about 500MB in size, so this is around 5.5MB/sec, which is well
below what the device can handle but is still faster than anything
else. This presumably reflects the performance-tuning efforts that
the Btrfs team have made.
- "rm -r" is surprisingly slow for the non-flash-focused
filesystems, particularly Btrfs. The
variance is high too. For ext4, the slowest "rm -r"
took 32.4 seconds, while, for Btrfs, the slowest was 137.8 seconds —
over 2 minutes. This seems to be one area where tuning the design
for flash can be a big win.
So there is little here to really encourage spending that weekend to
make UBIFS work well directly on flash. Except for the compression
advantage, we are unlikely to do much better than f2fs, which can be
used without that weekend of work. We would at least need to see how
compression performs on the processor found in the target device
before focusing too much on it.
As well as small files, I did some even simpler large-file tests. For
this, I wrote and subsequently read two large, already compressed,
files. One was an mp4 file with about one hour of video. The other was an
openSUSE 12.2 install ISO image. Together they total about 6GB. The total
times for each filesystem were:
| | ubifs | ubifs — NC | f2fs | NILFS2 | ext4 | Btrfs |
| write files | 850 | 876 | 838 | 1522 | 696 | 863 |
| read files | 1684 | 1539 | 571 | 574 | 571 | 613 |
The conclusions here are a bit different:
- Now ext4 is a clear winner on writes. It would be very
interesting to work out why. The time translates to about 8.8MB/sec, which
is getting close to the device's theoretical maximum of 10MB/sec.
- Conversely, NILFS2 is a clear loser, taking nearly twice as long as the
other filesystems. Two separate runs showed similar results so it looks
like there is room for some performance tuning here.
- UBIFS is a clear loser on reads. This is probably because nodes
are not aligned to sectors so some extra reading and extra copying
is needed.
- The ability for UBIFS to compress data clearly doesn't help with these
large files. UBIFS did a little better with compression enabled,
suggesting that the files were partly compressible, but it wasn't
enough to come close to f2fs.
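The throughput figures in these conclusions follow directly from the table; assuming the times are in seconds and taking "about 6GB" as 6144MB, a quick back-of-the-envelope check looks like this:

```python
# Back-of-the-envelope throughput check for the large-file table.
# Assumes the table's times are in seconds and the two test files
# total roughly 6GB (taken here as 6144MB).
TOTAL_MB = 6144

write_times = {"ubifs": 850, "f2fs": 838, "nilfs2": 1522, "ext4": 696, "btrfs": 863}
read_times  = {"ubifs": 1684, "f2fs": 571, "nilfs2": 574, "ext4": 571, "btrfs": 613}

for fs, t in sorted(write_times.items(), key=lambda kv: kv[1]):
    print(f"{fs:7s} write: {TOTAL_MB / t:5.1f} MB/s")
# ext4's 696-second write works out to about 8.8 MB/s, close to the
# device's 10 MB/s ceiling; NILFS2's 1522 seconds is roughly 4 MB/s.
```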
In summary, while f2fs appears to have room for improvement in some
aspects of the implementation, there seems little benefit to be
gained from pushing UBIFS into the arena of FTL-based devices. It
will likely remain the best filesystem for raw flash, while f2fs
certainly has some chance of positioning itself as the best filesystem
for FTL-based flash. However, we certainly shouldn't write off ext4 or
Btrfs. As noted earlier, these tests are not expected to give a firm
picture of these two filesystems so we cannot read anything conclusive
from them. However, it appears that both have something to offer, if
only we can find a way to isolate that something.
Comments (21 posted)
Patches and updates
Kernel trees
- Linus Torvalds: Linux 3.7 (December 11, 2012)
Page editor: Jonathan Corbet
Distributions
By Jonathan Corbet
December 12, 2012
Canonical's plan to raise revenue by advertising products sold by Amazon to
Ubuntu users has been the source of persistent grumbles across the net for
a few months. The volume of that grumbling increased considerably on
December 7, though, when Richard Stallman
criticized
the company for this practice. In turn, Richard has been criticized as
"childish" or as one trying to force his own morals on others. In truth,
this situation brings forward a number of questions on how to pay for free
software development and how users can "pay" for a free-of-charge
service.
The service in question is tied to the Ubuntu "Dash" application that, in a
default installation, is the user's window into the system as a whole. Both
applications and local files can be found by way of a dash search. In the
12.10 release, the dash can be hooked into online service accounts, meaning
that a search can find documents in network folders, web-hosted
photographs, and more. There are potential privacy issues associated with
such searches, of course, but these searches should only happen if the user
has provided his or her login information to the Ubuntu system. It is an
opt-in situation.
The Amazon searches are another story, though. By default, searches that
would otherwise be local
are reported back to an Ubuntu server, which then employs the
user's search terms to locate products on Amazon that the user might just
want to buy. The results are sent back to the user's system, which then
proceeds to load the associated product images directly from Amazon and do
its best to inspire a bit of retail activity — with Canonical getting a cut
of the proceeds, naturally. See the image to the right for an example; the
results can be surprisingly diverse.
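In rough pseudocode form, the two-step flow just described looks like the sketch below. The endpoint and image-URL formats here are hypothetical reconstructions (note the "example" hostnames), not Canonical's or Amazon's documented interfaces:

```python
# Illustrative sketch of the 12.10 Dash search flow. The URL formats
# are hypothetical reconstructions for demonstration only.
from urllib.parse import quote

def dash_query_url(terms):
    # Step 1: the Dash forwards the user's search terms to a
    # Canonical-run server, which queries Amazon on the user's behalf.
    return "https://productsearch.example.ubuntu.com/v1/search?q=" + quote(terms)

def image_fetch_url(product_id):
    # Step 2: the Dash loads result thumbnails directly from Amazon,
    # so Amazon's servers see the user's IP address paired with
    # search-derived product IDs, which is the leak discussed below.
    return "https://images.example.amazon.com/P/" + quote(product_id)

print(dash_query_url("camera lens"))
print(image_fetch_url("B00EXAMPLE"))
```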
Back in September, Canonical founder Mark Shuttleworth defended this
behavior, claiming that the retail offers from Amazon "are not
ads, they are results to your search." The idea that these results
are not advertisements is justified by saying that there is no payment for
their placement; the fact that Canonical only gets paid when a purchase is
made apparently changes the situation somehow. But the real concern is not
the obnoxiousness of being not-advertised at; it is the privacy
implications. Mark addressed that worry this way:
We are not telling Amazon what you are searching for. Your
anonymity is preserved because we handle the query on your
behalf. Don’t trust us? Erm, we have root. You do trust us with
your data already.
One can certainly argue that Mark has a point; if one does not trust
Canonical, installing an operating system provided by Canonical would
appear to be contraindicated. But he has also glossed over a couple of
important issues:
- The loading of images directly from Amazon will have the effect
of associating searches with specific IP addresses. There is a
reasonable chance that the user will connect directly to Amazon's web
site at some point, enabling Amazon to associate searches with
customers. Canonical may be withholding the search strings themselves,
but there is still a fair amount of information being leaked.
- Canonical's "terms of service" allow
it to send search terms to "selected third parties." Likely as not,
those searches are also being archived — the terms allow both
Canonical and the "selected third parties" to store the information.
That gives Canonical (and others) a database of what
their users are trying to find on their own computers. Even if
Canonical flat-out refuses to exploit that database, and even if
Canonical has somehow managed to put together a truly secure
infrastructure for the management of that data, and even if all the
"selected third parties" are somehow equally virtuous,
the simple fact is
that such databases constitute attractive nuisances for governments.
If that data exists, it will be subpoenaed and otherwise rifled
through by the authorities.
Given those little problems, it seems possible that those who are concerned
about the behavior of the Ubuntu Dash are not just in the thrall of
unreasonable paranoia. Maybe, just maybe, there is a reason for more sober
minds to be at least minimally concerned about what their operating system
is telling others about them.
Richard Stallman's broadside was arguably neither sober nor minimally
concerned; he called Ubuntu's code "spyware," described it as a violation
of the user's privacy, and called for a boycott of Ubuntu in general. To
do any less, he said, would be to legitimize this sort of "abuse" of
Ubuntu's users and damage the good name of free software in general. And,
besides, Ubuntu recommends non-free software and
Richard, naturally, doesn't like that either.
It is not uncommon for people to disagree with Richard's missives; that was
certainly the case this time around. Ubuntu community manager Jono Bacon
fired
back, describing Richard's views as "childish" and "FUD" (he has since
apologized
for the "childish" part). Phillip Van Hoof described
Canonical's approach as simply "another ethic" and also
tossed out the "childish" epithet. Richard's posting, it seems, was seen
as a sort of tantrum.
One can agree with Richard or not (your editor often does not), but
dismissing his concerns over the treatment of users' private data seems
uncalled for. We as a community need to (continue to) have a discussion
about a couple of related issues: how can we pay for free software
development at all levels of the stack, and how do we guarantee our users'
rights as the pendulum continues to swing toward centralized,
highly-connected computing?
Whether or not one likes Canonical's specific approach, one has to give the
company credit for trying to improve Linux and make it more attractive to a
wide range of users. Ubuntu has raised the bar for usability for all
distributions and, arguably, has brought Linux into settings where it was
not used before. In the process, a lot of money has been spent and a lot
of free software developers have been employed. That money needs to come
from somewhere; even Mark's personal fortune will not sustain it forever.
So Canonical needs to gain revenue from somewhere.
In these web-centric days, revenue seems to come from two sources: from the
users directly, or from advertisements. Canonical has been trying both
approaches in various ways. If the Amazon non-advertisements approach
yields real revenue for Canonical, it would be hard not to conclude that
some users, at least, are happy to be informed about how Amazon might have
what they appear to be looking for. If nobody likes the feature, it will
presumably go away. So, arguably, the real question is whether this
behavior should be enabled by default (though Richard dislikes it even as
an opt-in service). It is, it could be said, an easy way for users to help
fund the creation of their distribution.
The counterpoint, obviously, is that Canonical's business model challenges
are not anybody else's problem and that trying to resolve those challenges
through the sale of users' private information is not appropriate. Perhaps
that is true, but one can also suggest that those who want access to
Ubuntu free of charge, but do not want to be part of this kind of scheme,
could come up with a better idea for how the company should fund its
operations.
In general, the proliferation of centralized network services presents a
long list of privacy and freedom concerns. It often seems that many of the
companies involved are fighting to control how we interact with the rest of
the digital world. Systems that are built to be an intermediary between a
user and networked services arguably fall into that category as well. One could
easily point at recent Ubuntu distributions — nicely equipped to collect
login credentials and intermediate between the user and multiple services —
as an example of this type of system. But one could say the same about,
say, an Android handset. As is so often the case, convenience encourages
people to give up information that, otherwise, they would prefer to keep to
themselves. The success of many privacy-compromising services demonstrates
that clearly.
Members of the free software community like to think that, among other
things, they are building systems that are designed to safeguard the
interests of their users rather than those of some third party. Most of
the time, that turns out to be true. Sometimes we find surprises —
software that phones home with user information or otherwise fails to
properly respect its users; such software tends to get fixed quickly, often
by distributors before users ever encounter it. But software freedom is no
guarantee of absence of user-hostile behavior; we still need to pay
attention to what is going on. That is doubly true for software from any
distributor (since distributors are in a position of special trust) or from
company-controlled projects.
Whether the behavior of the Ubuntu Dash is user-hostile seems to be at
least partly in the eyes of the beholder. Certainly it would have been more
respectful to ask the user whether this behavior was desired before
communicating back to the mothership. In this case, at least, the behavior
is not hidden and is easily disabled at multiple levels (see this
EFF posting from October for more details on how this service works and
how to turn it off). The next example of questionable behavior may be more
subtle and harder to detect; free software does not free us from the need
to be vigilant.
Comments (46 posted)
Brief items
Curses! My plan to make Debian's default init system phone home has been
foiled!
--
Steve Langasek
Comments (none posted)
Version 7.0 of the
Slax distribution has
been released. "Slax 7.0 is the major update of Slax Linux live operating
system. It
includes newest Linux Kernel, KDE4 desktop, GCC compiler and lots of other
stuff and that all in just a ~210MB download. Furthermore it's available in
more than 50 localizations, so you can get a Slax that speaks your
language."
Comments (none posted)
This alpha features Raring Ringtail (13.04) images for Edubuntu and
Kubuntu. "
At the end of the 12.10 development cycle, the Ubuntu
flavour decided that it would reduce the number of milestone images going
forward and the focus would concentrate on daily quality and fortnightly
testing rounds known as cadence testing. Based on that change, The Ubuntu
product itself will not have an Alpha-1 release. Its first milestone
release will be the FinalBetaRelease on the 28th of March 2013.
Other Ubuntu flavours have the option to release using the usual
milestone schedule."
Full Story (comments: none)
Distribution News
Debian GNU/Linux
The latest bits from the Debian Project Leader cover the debian-cloud
initiative, Debian Squeeze images for Amazon EC2, DebConf13 organization,
Kevin Carrillo's newcomer survey, the "dpl-helpers" initiative, and several
other topics.
Full Story (comments: none)
Fedora
Fedora elections are over. Jaroslav Reznik and Michael Scherer have been
elected to the Fedora Board. Toshio Kuratomi, Miloslav Trmac, Marcela
Mašláňová and Stephen Gallagher have been elected to FESCo (Fedora
Engineering Steering Committee). Alejandro Perez, Buddhika Chandradeepa
Kurera and Truong Anh Tuan have been elected to FAmSCo (Fedora Ambassadors
Steering Committee).
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Richard Stallman has
come out
against Ubuntu's Amazon partnership on the Free Software Foundation's
site. "
But there's more at stake here than whether some of us have
to eat some words. What's at stake is whether our community can effectively
use the argument based on proprietary spyware. If we can only say, 'free
software won't spy on you, unless it's Ubuntu,' that's much less powerful
than saying, 'free software won't spy on you.' It behooves us to give
Canonical whatever rebuff is needed to make it stop this. Any excuse
Canonical offers is inadequate; even if it used all the money it gets from
Amazon to develop free software, that can hardly overcome what free
software will lose if it ceases to offer an effective way to avoid abuse of
the users."
Comments (77 posted)
Charles H. Schulz
marks
the official launch of the OpenMandriva Association. "
It is not
everyday you see an example of a community who gains its independence with
the blessing and dedication of its former steward. But I probably would not
be writing these lines if I hadn’t witnessed what it takes to fulfill this
kind of commitment. The OpenMandriva project, foundation, community,
association is taking off. The best is yet to come. But just like with
every FOSS project out there, and especially Linux distributions, the
community will have to strive to prove it can bring its longstanding
promise: to deliver an innovative, user-friendly Linux distribution
developed by an inclusive and friendly community."
Comments (1 posted)
Katherine Noyes
takes
a quick look at six projects that were started this year. "
More
than 30 new distros joined our sphere in rapid succession thanks just to
the “31 Flavors of
Fun” experiment in August, but there were also several notable arrivals
that come to light over the course of the year with the potential to make a
lasting mark."
Comments (none posted)
Linux From Scratch has a
new
blog. "
The purpose of the blog is to expand upon LFS/BLFS by
providing examples of configuration and use that go beyond the books. New
articles will appear periodically to give practical examples of how to use
applications in an LFS environment."
Full Story (comments: none)
Page editor: Rebecca Sobol
Development
By Nathan Willis
December 12, 2012
The Ekiga project unveiled a new stable
release of its free software softphone on November 26, its first new
update in three years. There are certainly improvements, both in
usability and in technical prowess, but as 2013 draws near it is hard
to shake the feeling that desktop Session Initiation Protocol (SIP)
applications are no longer particularly cutting edge.
New caller
The new
release is tagged as Ekiga 4.0, a version number bump that seems
appropriate considering some of the larger changes. For example, a
lot of work has gone into better integrating Ekiga with GNOME 3,
including the replacement of the application's custom icons with
standard (and theme-aware) GNOME icons, and use of the new GNOME
notification system. The notifications spawned by the application
include some nice touches, such as a notification when a new audio
device is detected (a common occurrence with USB and Bluetooth
headsets). The 4.0 release can also connect to evolution-data-server
address books, and uses Avahi to discover other chat clients
available on the local network.
Naturally, there are improvements on the multimedia and
connectivity front, too. Skype's SILK audio codec is newly supported,
although this does not make Ekiga Skype-compatible, since the project
cannot implement Skype's proprietary protocols. Also new are the
G.722.1 and G.722.2 audio codecs and partial support for multiple
video streams with H.239. For the non-codec-junkies, the more
memorable improvements are support for RTP's "type of service" field
(a traffic-shaping mechanism) and the SIP service route discovery
protocol (which allows service providers to supply information to
client applications about proxy routing).
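That "type of service" support amounts to tagging outgoing media packets' IP headers so routers can prioritize them. As a minimal illustration (this is not Ekiga's actual code, which presumably does the equivalent inside its PTLib stack), a UDP socket can be marked this way on Linux:

```python
# Minimal illustration of IP "type of service" tagging on a UDP
# socket, the traffic-shaping mechanism mentioned above.
import socket

EF = 46  # "Expedited Forwarding" DSCP class, commonly used for voice media

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP value occupies the top six bits of the old ToS byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)

tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"ToS byte now 0x{tos:02x}")
sock.close()
```

Whether routers along the path actually honor the marking is, of course, up to the network.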
Perhaps the most visible new feature in the 4.0 release is
auto-call-answering. The feature is basic at the moment (it is either
all on or all off); people who use Ekiga regularly will no doubt
appreciate it, but it will be more valuable if it gains some more
flexibility — such as the ability to auto-answer known callers.
Ekiga does already support some call-forwarding rules, so perhaps this
is not out of the question. On a related note, the release notes
indicate that Ekiga 4.0 now both un-registers from the SIP server and
publishes the user's presence as "offline" whenever the application is
shut down. This was a problem in earlier releases, particularly if
one used the same SIP account from multiple locations.
The new release also boasts a new-installation setup routine that
runs at the first start-up (though it can be re-run from the "Edit"
menu). It is centered around setting up an account at the ekiga.net
service (complete with options for both a free SIP address and a
refillable outbound-call account for dialing "real" phones). This is
certainly one of Ekiga's strong points; too many other SIP softphones
offer no simple way to actually set up an account, which effectively
makes a new install incomplete. SIP is not commonplace enough that
the average new user already has an active account somewhere, after
all. Consider that Mozilla recently learned
that the majority of Thunderbird users were surprised to find that a
new email account did not come built into the application. Multiply that
factor by a thousand and it approximates the utility of a built-in SIP
account.
The setup process itself is pretty painless, with the possible
exception of the step that asks the user to select the proper
PulseAudio/ALSA audio devices. Despite the best efforts of intrepid
audio developers, the automatically populated list of choices is still
dominated by obtuse choices like HDA Intel (PTLIB/ALSA) and
HDA Intel (1) (PTLIB/ALSA) — good luck deciding between
those two — and ALSA options truncated for being too long to fit
into the drop-down menu's list (such as
alsa_output.pci-000_00_1b.analog-stereo...). If the default
settings do not work, the user is immediately stranded in the
wilderness.
Phone GNOME
After three years, one could be forgiven for forgetting that Ekiga was
still alive and kicking as a project. But the new release is a good
one that is worth careful consideration. For the first time, Ekiga
actually feels like a GNOME application. In recent years, I
have used Jitsi most often as my
softphone, but by comparison it has
never felt like anything other than an alien invader from the Java
realm. But Jitsi retains at least one advantage over Ekiga: it
supports call and chat encryption, plus an array of other security
features. Newer codecs like Opus and VP8 are nice, to be sure, but
one of the few bullet points that proprietary VoIP services like Skype
can never match is preserving the user's privacy and confidentiality.
Speaking of Skype, that service (owned in turn by eBay, an investor
group, and now Microsoft) also surprised Linux users in November when it
bumped
its Linux client release to 4.1. That still leaves Linux a version
behind the other platforms, but it does address a number of lingering
complaints from users, adding support for skype: URIs and conference
calling and fixing several stability problems. On
the other hand, the new release fully merges Skype accounts into
Microsoft's existing MSN/Xbox/Outlook/Hotmail account system, giving
the user access to one and all, even if he or she is only (and barely)
interested in one of them.
Free software diehards have long objected to Skype's closed
protocols (and justifiably so), but ignoring its existence
is probably only practical for employees of the Free Software
Foundation and the like. For the rest of us, the choice is "install
Skype and use it when it is necessary, or repeatedly argue about Skype
with friends and family."
Then again, Skype is not quite the hot commodity
it was three or four years ago. These days, ad-hoc video chatting is
the sweet spot, through Google Hangouts and services of that ilk.
That is possible partly because the majority of humanity is
already signed in to a Google service whenever it sits down at a keyboard, but
even for smaller players web-based services may be making standalone
SIP clients a thing of the past. Embedding SIP into hardware devices
is still a popular alternative to POTS
telephone service, but the SIP protocol suite has
never been easy to configure, and letting a web service provider
handle the details is enticing.
On that front, it is a good thing
Mozilla expended its energy pushing for non-royalty-bearing codecs in
WebRTC. At least it will be possible for the next generation of VoIP
applications to be free software. Of course, Ekiga may surprise us
again in a few years by being one of the better alternatives in that
fight as well.
Comments (9 posted)
Brief items
A reasonable subset of my audience believes that I should be deprived of niceness and instead substitute the righteous feeling I get from using Free software instead. As if it doesn’t matter whether I have a thing that does what I want: if I had a thing that involved free software instead then I could ignore that I can’t do what I care about and instead feel happy that at least I wasn’t doing what I wanted in a Free way.
—
Stuart Langridge
It's been a long and bumpy ride. I hardly recognize the people in this team photo from 2003.
—
Jelmer Vernooij, on the ten-year development cycle of Samba 4.0.0.
Comments (none posted)
Version 2.65 of the Blender 3D modeling and animation studio has been released. This latest version includes a fire simulation tool (to accompany the existing smoke simulation), plus improvements to motion blur rendering, mesh modeling, and many other editing features.
Comments (none posted)
Version 1.0 of the
SparkleShare
network-shared folder system has been
announced. "SparkleShare uses the version control system Git under the
hood, so
people collaborating on projects can make use of existing infrastructure,
and setting up a host yourself will be easy enough. Using your own host
gives you more privacy and control, as well as lots of cheap storage space
and higher transfer speeds." LWN last
reviewed SparkleShare in 2010.
Comments (16 posted)
The long-awaited Samba 4.0 release is out. "
As the culmination of
ten years' work, the Samba Team has created the first compatible Free
Software implementation of Microsoft’s Active Directory protocols. Familiar
to all network administrators, the Active Directory protocols are the heart
of modern directory service implementations." See the announcement
(click below) for lots of details.
Full Story (comments: 16)
Google has released a project called RE2, an alternative regular expression matching engine that it describes as a "mostly drop-in replacement for PCRE's C++ bindings." RE2 implements regular expression matching without a backtracking search; the backtracking approach, used by most other implementations, can have an exponential run time in worst-case scenarios.
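The difference is easy to demonstrate with Python's re module, which uses a backtracking matcher; the sketch below shows the failure mode that RE2's automaton-based approach avoids:

```python
# Demonstration of the exponential worst case that backtracking
# matchers can hit. The pathological pattern forces Python's re
# module to try every way of splitting the input between two
# identical alternatives before it can report failure.
import re
import time

text = "a" * 20          # no trailing "b", so the match must fail

start = time.monotonic()
assert re.match(r"(a|a)*b", text) is None   # ~2^20 backtracking paths
slow = time.monotonic() - start

start = time.monotonic()
assert re.match(r"a*b", text) is None       # equivalent pattern, linear time
fast = time.monotonic() - start

print(f"pathological: {slow:.4f}s, linear: {fast:.6f}s")
```

An automaton-based engine such as RE2 handles both patterns in time linear in the input length, at the cost of dropping backreferences and some other backtracking-only features.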
Comments (5 posted)
Version 2.7 of the Bison parser generator is out. New features include
improved diagnostics, (experimental) exception handling, better graphical
state presentation, and more.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
At his blog, Max Jonas Werner examines the recent claim by Simon Phipps that half of all GitHub projects have no discernible license attached to the code, with just 20% including an actual license file. Werner arrives at different numbers, with 80% of projects having license information. Although his data mining was not comprehensive, he does supply the raw data for further analysis.
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
ROSA representatives have joined the Steering Committee of the Automotive
Grade Linux project. "ROSA has achieved good results in developing software for desktop computers and mobile devices. We have a vision how to make our products useful and attractive for automobiles" - says Vladimir Kryukov, ROSA Product Marketing Director.
In particular, at the exhibition in London in autumn 2012, ROSA demonstrated a prototype of ROSA Sputnik in-vehicle infotainment system. The working prototype allows to play multimedia files, see the places of interest by means of excursion modules and construct routes taking into account a large set of criteria. All information can be displayed either at the screen of on-board computer or at the car's windshield. For this purpose, specialized user interface modules were developed that use the Augmented Reality concepts."
Full Story (comments: 13)
Videos and slides from the
2012 LLVM Developers' Meeting (November 7-8, San Jose, CA) have been
posted. Topics covered include LLVM on supercomputers, AArch64 support,
C/C++ modules, integrated security, and more.
Comments (none posted)
Articles of interest
The Free Software Foundation Europe reports that the European Parliament has
adopted a proposal to create a patent with unitary effect for Europe.
"
This decision will leave Europe with a patent system that is both
deeply flawed and prone to overreach. It also ends democratic control of
Europe's innovation policy." The proposal still needs to be
ratified before it will take effect. "According to the European
Parliament's website, 'the international agreement creating a unified
patent court will enter into force on 1 January 2014 or after thirteen
contracting states ratify it, provided that UK, France and Germany are
among them.'"
Full Story (comments: 8)
A brief signed by Google, Facebook,
Red Hat, and several other companies has been submitted to the US Court of
Appeals, stating that the combination of an
abstract idea and a computer should not be eligible for patent protection.
The H
takes
a look. "
The companies argued that such bare-bones claims grant exclusive rights over the abstract idea itself with no limit on how the idea is implemented, and that granting patent protection for such claims would impair, not promote, innovation. In their 30-page brief to the US Court of Appeals for the Federal Circuit, the signatories explain that this often grants exclusive rights to people who haven't themselves contributed significantly to a development, punishing those who later create innovation and cannot market the concrete applications of these abstract ideas unless they pay royalties."
Comments (4 posted)
Calls for Presentations
DjangoCon Europe will be held in Warsaw, Poland, May 15-19, 2013. The call
for papers closes January 8. "
We're looking for Django and Python enthusiasts, pioneers, adventurers
and anyone else who would like to share their Django achievements and
experiments with the rest of the community.
We are particularly keen to invite submissions from potential speakers
who have not previously considered speaking at an event like this - so
if you haven't, please consider it now!"
Full Story (comments: none)
PGCon 2013 will take place May 23-24 in Ottawa, Canada. The CfP deadline
is January 19. "
If you are doing something interesting with PostgreSQL, please submit
a proposal. You might be one of the backend hackers or work on a
PostgreSQL related project and want to share your know-how with
others. You might be developing an interesting system using PostgreSQL
as the foundation. Perhaps you migrated from another database to
PostgreSQL and would like to share details. These, and other stories
are welcome. Both users and developers are encouraged to share their
experiences."
Full Story (comments: none)
Upcoming Events
The Apache Software Foundation (ASF) has announced the program for
ApacheCon NA, to be held February 26-28 in Portland, Oregon. "
This
year's theme is "Open Source Community Leadership Drives Enterprise-Grade
Innovation", reflecting the enormous reach and influence of the ASF. Apache
products power half the Internet, petabytes of data, teraflops of
operations, billions of objects, and enhance the lives of countless users
and developers."
Full Story (comments: none)
Events: December 13, 2012 to February 11, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| December 9-14 | 26th Large Installation System Administration Conference | San Diego, CA, USA |
| December 27-29 | SciPy India 2012 | IIT Bombay, India |
| December 27-30 | 29th Chaos Communication Congress | Hamburg, Germany |
| December 28-30 | Exceptionally Hard & Soft Meeting 2012 | Berlin, Germany |
| January 18-19 | Columbus Python Workshop | Columbus, OH, USA |
| January 18-20 | FUDCon: Lawrence 2013 | Lawrence, Kansas, USA |
| January 20 | Berlin Open Source Meetup | Berlin, Germany |
| January 28-February 2 | Linux.conf.au 2013 | Canberra, Australia |
| February 2-3 | Free and Open Source software Developers' European Meeting | Brussels, Belgium |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol