By Nathan Willis
December 5, 2012
Google played host to its second annual "Summer of Code
Documentation Camp" at its Mountain View offices over the first week
of December, at which teams from three free software projects gathered
to sort out and write sorely-needed documentation in an intensive
workshop setting. This year's participating projects were the Evergreen integrated library
system (ILS), the Etoys
educational programming environment for children, and the open source
font creator FontForge.
During the last three days of the camp each team was tasked with
producing its own self-contained manual through a "book sprint," but
the first two days were composed of an unconference-style workshop
about tackling the challenges of documentation in an open project.
Despite the name of the event, the camp is a distinct entity from
Google's university-oriented Summer of Code mentorship program
(although warm in December, Mountain View is demonstrably not
in the Southern hemisphere...). But the two programs do share a
common theme. The facilities and amenities were donated by Google,
but the week's activities were run by Adam Hyde of FLOSSManuals and Allen "Gunner"
Gunn of Aspiration Tech.
FLOSSManuals, of course, is a project dedicated to enabling and
encouraging open source communities to write and publish quality
documentation; Aspiration Tech is a training and consulting group
focused on working with nonprofits and software.
Unconferring
The unconference portion of the week consisted of a series of breakout
sessions exploring documentation-related subjects. The unconference
format means, first and foremost, that the agenda for the sessions is
not fixed in advance, but is created by the participants during the
event itself. The result is a program that tackles the specific needs
of the participants — which, considering the time constraints
of the week, was especially important.
Gunn first led the participants in a series of brainstorming exercises
in order to generate a pool of topics of interest. The participants
then read through all of the suggestions and attempted to group them
into recurring subjects. From the resulting ad-hoc taxonomy, Gunn and Hyde
selected the initial breakout session topics. But after the first
round of breakouts was over, it was up to participants themselves to
facilitate the sessions, exploring the topic at hand and deciding what
(if any) additional sessions needed to follow later in the day.
At first glance, it might seem like the three projects represented
at the unconference would have very little in common to discuss. Yes,
every open source project struggles with writing documentation, just
like every open source project struggles with recruiting new talent.
But the projects themselves differed considerably in scope, purpose,
and technical detail. Evergreen is an integrated, server-based
application suite with a web interface, used and managed by a very
specific type of professional institution. Etoys is a self-contained
interpreted programming environment designed for classrooms and
tailored toward young children. FontForge is a graphical desktop
application used for an isolated task typically undertaken only by
specialists.
But the brainstorming exercises quickly revealed that the projects
grapple with a nearly identical set of documentation challenges. All
three struggle with accessibility and translation tasks; all three
have difficulty integrating updates to documentation with updates to
the codebase; and all three have trouble setting up and maintaining a
workflow among documentation volunteers. Consequently, in each
breakout session (there were three or four simultaneously during each
session slot) Gunn ensured that there was at least one
representative from each of the three teams, so the teams were able
to exchange information and learn from each other's experiences.
Prep work
None of the session topics lent itself to simple "how to"
answers. Rather, the goal of the breakouts was to prime each group's
thinking on the documentation tasks, in preparation for the writing
portion of the week that followed. But there were still insights to be
found in each discussion.
For example, in the translation session (in which I participated as a
representative of the FontForge team), it came out that all of the
projects have found success with open source tools for translating
application strings and in-program help messages: Pootle, Launchpad,
and other tools handle this task quite well, providing an overview
that indicates the percentage of translation completed for each
available language, offering translation suggestions based on other
projects' work, and so on. But none of those features are available
in the systems that projects use to maintain their documentation.
These string translation tools are designed to handle
short snippets of text only, and typically fail on paragraph-length content: long sections of text do not fit into the boxes in the interface, and the features that track completion expect the entire text to be translated at once (which is rarely possible or desirable with lengthy discussions). Conversely, the content management systems that support
multi-lingual web sites do not offer the percentage-completion
tracking or suggestions that make application translation more
manageable.
The other topics examined during the breakout sessions included
soliciting and incorporating audience feedback, iterating and
versioning documentation, developing and maintaining a healthy
community of documentation writing, finding tools to support remote
documentation efforts, and many more. By Tuesday, however, the
program shifted away from general topics and toward the specifics that
each team needed to address for its own book sprint. This meant
nailing down a specific target audience and drafting an outline of
what subject matter would be covered, start to finish. That can be a
daunting task, particularly in light of the knowledge that only three
days will be available for content creation and editing, and with
teams of only five or six people.
The writing itself began in earnest late on Tuesday, and although the
deadline still looms large and ominous, it is remarkable how quickly
the work starts to take on a concrete shape. But that is the secret
that FLOSSManuals has set out to share with the free software
community: writing documentation may seem hard, but then again writing
software is hard, too. And just like writing software, once the
community sets its mind to the task, it can accomplish considerable
feats. That lesson does not end on Friday, either; there has already
been talk among the projects about what subjects ought to be tackled
in future book sprints. And while Evergreen, Etoys, and
FontForge will each walk away from the week with a new book, with luck
the lesson will spread well beyond those three projects.
[The author would like to thank Google for support to attend the
2012 Summer of Code Documentation Camp]
By Nathan Willis
December 5, 2012
Darktable is an open source
raw photo converter with built-in image-library-management features,
which puts it up against stiff competition. Previous releases boasted
a wealth of functionality but were hampered by the application's
peculiar user interface. The just-released 1.1
version, however,
makes big strides forward in usability while still adding several
interesting new features. Among them are new front-end
functionality, a command-line interface, and GPU hardware
acceleration through OpenCL.
We last looked at Darktable in
November 2011, shortly after the release of version 0.9.3. At that
time, the application offered a substantial collection of photo
adjustment tools via plugins — including several tools not
offered by competing raw converters like Rawstudio or RawTherapee.
But the interface made them difficult to use: unlabeled controls,
nonstandard widgets and status indicators, and a plugin selection
palette composed entirely of cryptic, similar-looking logos.
Consequently, the biggest news for most users is that Darktable has
evolved into a far more usable product, with interface updates
touching most areas of functionality.
Photo editing
All of the same plugins and their icons are still available; they have
simply been organized into a scrollable list, with the name of the
plugin next to its logo. The downside is that it might look a wee bit less like a TIE
fighter's control panel — but it is far more usable.
Darktable's editing interface allows you to stack multiple
adjustments on top of each other by activating their plugins. This
approach is different from the model employed by other raw editors
(Rawstudio, for example) in which the adjustments available are
presented as intrinsic qualities of the image. In Rawstudio's
approach, for instance, an image has one tone curve, and you can
change it or leave it alone. In Darktable, you can adjust the "base
curve," but you can also apply (for example) high-pass or low-pass
filters, each of which adjusts the curve in its own particular way.
Order is important in Darktable's approach; if you desaturate an
image then try to adjust its color balance, you will not have much to
play with.
One of the benefits of Darktable's approach is that the developers can
implement quirky and original features as plugins — effects that
incorporate several types of adjustment at once. In older
Darktable releases, though, some of these unique effects filters exhibited
the most troublesome usability hangups. The new release fixes almost all of
the issues: most of the curves, axes, and units are labeled, and in most cases
it is clear what effect changing a widget will have on the image.
Where controls remain unclear, there is usually a pop-up tooltip with
a decent explanation.
Many of the controls now feature a combination label-slider-spinbox
akin to the "spin scales" (or as I call them, "spladers"...) now found in GIMP.
There are new plugins and adjustment features on display as well,
including conditional
blending — which is precisely what it sounds like.
Conditional blending allows you to apply a blend mode (e.g.,
"multiply" or "soft light") only to regions of the image that fall
within a particular color or brightness range. There is also a nice
equalizer plugin that enhances local contrast, bringing out small
image details without radically affecting the overall tone of the
picture.
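To make the idea concrete, here is a minimal sketch in C of what a
conditional blend amounts to conceptually. This is purely
illustrative — it is not Darktable's code, and the function name and
parameters are invented for the example:

    /* Illustrative sketch, not Darktable code: apply a "multiply"
     * blend only where the base pixel's brightness falls within a
     * chosen range, leaving other pixels untouched. */
    float conditional_multiply(float base, float overlay,
                               float low, float high)
    {
        if (base < low || base > high)
            return base;           /* outside the range: no blending */
        return base * overlay;     /* inside the range: multiply blend */
    }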
Speaking of user interfaces, the 1.1 release also introduces a command-line interface, darktable-cli. At the moment, it is only capable of resizing images, but the potential for using Darktable in scripting is intriguing.
Photo juggling
Those of us not blessed with obsessive-compulsive tendencies tend to
let our file storage get messy. In the old days, photographers would
have called this the "shoebox problem" in reference to stacks of boxes
filled with negatives and prints. A fully digital workflow alleviates
this to some degree, but hunting for a half-remembered image in the
desktop environment's file manager remains a slow and frustrating
ordeal. Although an entire category of application has sprung up to
offer a hand (the "image manager" like Shotwell or Digikam), most raw
photo editors are still forced to incorporate some file collection
management and search functionality, simply to save the user from
switching back and forth repeatedly.
Darktable 1.1 adds muscle to its file management skills. By far
the flashiest new feature is similar-image-search, which scours the
database of imported image files looking for photos that appear
"similar" visually — as scored by histogram, color, and lightness . I
am particularly partial to this feature because it was one of the main
selling points of imgSeek, a now-defunct project that was the subject
of my first-ever published review,
and remains a rarely-seen feature.
As was the case with imgSeek, the results of similar-image-searching
are imprecise, but if you have imported your entire collection, it
would surely assist you in finding the odd mis-labeled image buried in
a strange directory.
Darktable also allows you to categorize images in
"film rolls" (which despite the now-archaic terminology are merely
named collections), in addition to applying keywords, tags, and other
metadata. How keywords differ from tags is not explained, other than
the fact that they reside on opposite sides of the screen. What
is more distinctive is support for
geotagging images, complete with a colorful map widget. The map
feature is not fully integrated into the other image management tools
— instead, it is one of the four top-level application tabs (the
other three being image management, photo editing, and tethering).
Finally, the application has a new "Group images" mode, toggled
on or off with a G button in the upper toolbar,
which hides redundant images, such as the JPEG
versions of existing raw photos.
Darktable 1.1 thus gives you multiple ways to find the image you are
thinking of, based on automatic or user-assigned metadata, image
properties, and the photo's point of origin. The application also
gives you quick access to common editing tasks from within the image
management tab. Direct export is the most obvious task, but Darktable
can also take a select set of differently-exposed images and blend
them into a high dynamic range (HDR) image, or it can immediately
apply a user-defined "style" with one click of a button. These styles
amount to templates; to define one you select an image to which you
have made changes and save its current state as a new style.
Shoot first, accelerate later
The last of the new user-visible features in the new release is
support for live previews in tethered shooting mode. Tethered
shooting refers to capturing images from a camera connected to the
computer over USB. There are several practical reasons for tethering,
including the ability to see larger and higher-quality output than can
be displayed on a camera's LCD screen. But tethered shooting
can also be helpful when setting up complicated studio shots, such as
time-lapses, macro-photography, or tricky-to-capture phenomena
(imagine capturing the arrow-striking-an-apple shot, for instance).
Live preview makes setting up and double-checking these
carefully-managed shoots far simpler.
Darktable 1.1 has new features under the hood, too. The most
prominent is support for GPU-based hardware acceleration, which
arrives courtesy of OpenCL. OpenCL support benefits users by
speeding up image transformations. When working with high-megapixel
raw images, every speed increase is important, because users want to
see even minute changes in settings reflected as soon as possible
on the screen. The trend in camera-making is to add more
megapixels with every release, of course, and these days the low
end of the price spectrum offers more pixels than the high end did a
few years ago — so time-saving is not the concern of
professional photographers alone.
Darktable, like GIMP, uses the GEGL library to perform image
transformations. As we mentioned in
May 2012, GEGL has been slowly but surely adding OpenCL support to its
operations in recent years, most recently through the work of
developer Victor Oliveira. OpenCL "kernels" are functions which can
be executed in parallel on GPUs or CPUs, and they are
architecture-independent (unlike, for example, NVIDIA's CUDA). Thus,
systems with a supported GPU automatically get GPU acceleration, but all
multi-core CPU systems automatically get multi-threading, too. At the
moment, the proprietary graphics drivers from NVIDIA and ATI offer the
best support for OpenCL, although the Nouveau driver project is making
progress on its own.
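As an illustration of what such a kernel looks like, here is a minimal
OpenCL C sketch (not taken from GEGL or Darktable); the same source can
be compiled at run time for either a GPU or a multi-core CPU:

    /* Minimal OpenCL kernel sketch (not GEGL/Darktable code): scale
     * each pixel value by a gain factor, one work-item per element.
     * The OpenCL runtime compiles this for whatever device is
     * available, which is what makes it architecture-independent. */
    __kernel void brighten(__global const float *in,
                           __global float *out,
                           const float gain)
    {
        size_t i = get_global_id(0);
        out[i] = in[i] * gain;
    }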
The list of changes since the Darktable 0.9 series includes other
features, too, but the most significant for the average user will no
doubt be the improved user interface. There are still quirks, but the
team has done an excellent job of fixing the biggest usability
blockers — and doing so without sacrificing the design aesthetic
that earlier releases established. Darktable's approach to image
editing has always been different from the other open source raw
converters. When its own user interface does not get in the way, it
makes a much stronger case — and, more importantly, it lets the
user experiment with the unique features, and stumble across
interesting effects.
By Michael Kerrisk
December 5, 2012
Here is LWN's fifteenth annual timeline of significant events in the
Linux and free software world. We will be breaking the timeline up into
quarters, and this is our report on April-June 2012. Timelines for the
remaining quarters of the year will appear in the coming weeks.
This is version 0.8 of the 2012 timeline. There are almost certainly
some errors or omissions; if you find any, please send them to timeline@lwn.net.
LWN subscribers have paid for the development of this timeline, along
with previous timelines and the weekly editions. If you like what you see
here, or elsewhere on the site, please consider subscribing to LWN.
If you'd like to look further back in time, our timeline index page has links
to the previous timelines and some other retrospective articles
going all the way back to 1998.
The 2012 Linux Storage, Filesystem, and Memory Management Summit is held
in San Francisco, April 1-2 (LWN coverage: day 1
and day 2).
Debian joins the Open Source Initiative as an affiliate (announcement).
The udev maintainer announces that the udev and systemd projects
will merge, noting that it will still be possible to run udev on a
system that is not using systemd (announcement).
I think one of the things that makes Debian off-putting
and unwelcoming is that we're a little *too* obsessed with criticizing
everyone's ideas, and what some people see as "healthy discussion" other
people see as "hurtful flamewars over bike shed colors."
-- Russ Allbery
Yukihiro "Matz" Matsumoto, creator of the Ruby language, wins the
2011 Free Software Foundation award for the advancement of free
software (announcement).
Red Hat celebrates becoming the first open source
company to turn over one billion dollars in a fiscal year with a US$100,000
donation to open source projects (LWN blurb).
The Kubuntu project acquires a new sponsor, as Blue Systems
hires two former Kubuntu developers away from Canonical (LWN blurb and article).
The 2012 Linux Foundation Collaboration Summit takes place in San
Francisco, April 3-5 (LWN coverage: Trademarks for free software projects; The kernel panel; X and Wayland; The
Linux System Definition; The future of
GLIBC; LLVM and Linux).
We don't need a system to help us ignore bug reports; our
existing process handles that with admirable efficiency.
-- Robert Haas
Maintenance of the Linux 2.4 kernel comes to an end, eight years
after the release of Linux 2.6.0 (announcement).
PostGIS 2.0.0 is released (announcement).
The Samba team announces a fix for a remote code execution
vulnerability (LWN blurb).
The 2012 Linux Audio Conference takes place in Palo Alto,
California, April 12-15 (LWN coverage).
Stefano Zacchiroli is re-elected for a third term as leader of the Debian
Project (announcement).
A couple of times I've said "It looks like you could use
some help. Would you like me to co-maintain with you?" and have generally
gotten a positive response. If it's put in terms of "Looks like you're
busy, I can help" and not "You suck and should be fired so I can take over"
people seem to be pretty open to it.
-- Scott Kitterman
MythTV 0.25 is released (LWN article).
FreeBSD 8.3 is released (announcement, release
notes).
Calligra 2.4 is released (LWN blurb
and article).
Nathan Willis joins LWN as an editor (LWN article).
gitolite v3.0 is released (announcement).
OpenSSH 6.0 is released (announcement).
Geary 0.1 is released (LWN article on this GNOME-based email client).
The Defensive Patent License is released (LWN article).
OpenBSD 5.1 is released (announcement).
An Apple programmer, apparently by accident, left a debug
flag in the most recent version of the Mac OS X operating system. In
specific configurations, applying OS X Lion update 10.7.3 turns on a
system-wide debug log file that contains the login passwords of every user
who has logged in since the update was applied. The passwords are stored in
clear text.
-- Emil
Protalinski
The Tizen project announces the 1.0 ("Larkspur") release of its SDK
and platform source code (LWN blurb and
article).
Ubuntu 12.04 LTS "Precise Pangolin" is released (announcement).
Yocto Project 1.2 is released (announcement).
Xfce 4.10 is released (LWN blurb).
The Libre Graphics Meeting 2012 is held in Vienna, Austria, May
2-5 (LWN coverage: Inkscape quietly evolves
into a development platform; GIMP's new
release, new über-core, and future; Unusual
typography).
The inaugural Tizen conference takes place in San Francisco, May
7-9 (LWN coverage: Pitching HTML5 as a
development framework).
Dell announces Project Sputnik, which is aimed at creating a
commercial, Linux-based developer laptop (LWN blurb).
Apache OpenOffice 3.4 is released (LWN blurb, pointer to
an earlier timeline of the work on the project, and an earlier article looking at progress of the
project).
The GNU nPth project makes a first release of its GNU portable threads
library (announcement).
Open Build Service version 2.3 is released (announcement).
GIMP 2.8 is released (release notes,
LWN blurb and article previewing the release).
The Document Foundation announces a certification program "to
foster the provision of professional services around LibreOffice" (announcement).
Red Hat Enterprise Linux turns 10 (press
release).
Enough data has come in to satisfy me that with all the
improvements in Linux over the last year, and with BQL, codel and fq_codel,
that we've won a major battle in the war against bufferbloat
-- Dave Täht
ConnMan 1.0 is released (LWN blurb).
Kdenlive 0.9 is released (announcement).
PowerTOP v2.0 is released (LWN blurb).
PulseAudio 2.0 is released (announcement).
PGCon 2012 is held in Ottawa, Canada, May 17-18 (LWN coverage).
Mandriva SA announces it will return control of the distribution
back "to the community". However, the Mageia community distribution
that earlier forked from Mandriva declines to work with Mandriva's
community effort (announcement,
LWN article on the announcement and an earlier article on the status of Mandriva).
When I helped to develop the open standards that
computers use to communicate with one another across the Net, I hoped for
but could not predict how it would blossom and how much human ingenuity it
would unleash. What secret sauce powered its success? The Net prospered
precisely because governments — for the most part — allowed the Internet to
grow organically, with civil society, academia, private sector and
voluntary standards bodies collaborating on development, operation and
governance.
-- Vint
Cerf
The printerd project is announced (LWN article).
Linux 3.4 is released (announcement; KernelNewbies summary; LWN
merge window summaries part 1, part 2, and part 3; LWN development statistics article).
Mageia 2 is released (announcement and LWN article).
LLVM 3.1 is released (announcement, release
notes).
Nmap version 6 is released (announcement).
ownCloud 4 is released (LWN blurb).
Perl 5.16.0 is released (announcement
and LWN article).
The jury in Oracle v. Google finds that Google did not
infringe any of Oracle's patents (LWN blurb and earlier article on
the case, Groklaw
follow-up).
Simon Phipps becomes president of the Open Source Initiative (The
H article).
The LibreOffice project embarks on a project to rebase and relicense
the LibreOffice source code (LWN article).
I couldn't have told you the first thing about Java
before this problem. I have done, and still do, a significant amount of
programming in other languages. I've written blocks of code like rangeCheck
a hundred times before. I could do it, you could do it. The idea that
someone would copy that when they could do it themselves just as fast, it
was an accident. There's no way you could say that was speeding them along
to the marketplace. You're one of the best lawyers in America, how could
you even make that kind of argument?
-- Judge
Alsup (Oracle v. Google) has a clue
The Software Freedom Conservancy announces that it is expanding its
license compliance efforts after signing up multiple Linux kernel and
Samba developers whose copyrights can be used in license compliance
efforts (article).
Fedora 17 is released (announcement).
GCC explorer is released (LWN blurb).
RPM 4.10 released (LWN blurb).
systemd 183 is released; this release merges the udev and
systemd projects (announcement).
The Linux Foundation announces the existence of the FOSS Bar Code
Tracker, a tool for tracking free and open source software components
(announcement).
In the Oracle v. Google suit, Judge Alsup rules that the Java
APIs are not copyrightable (LWN blurb).
Managing a volunteer open source project is a lot like
herding kittens, except the kittens randomly appear and disappear because
they have day jobs.
-- Matt Mackall
Obnam 1.0 is released (LWN blurb
and article on this backup system).
LinuxCon Japan is held in Yokohama, June 6-8 (videos; LWN coverage: Making kernel developers less grumpy; OpenRelief launches; One
zImage to rule them all; Advice for new
kernel hackers; The business of
contribution).
From the tone of the hearing, and the language of the
House resolution, we are being asked to believe that "the position of the
United States Government has been and is to advocate for the flow of
information free from government control."
If only it were true. The reality is that Congress increasingly has its
paws all over the Internet. Lawmakers and regulators are busier than ever
trying to expand the horizons of cyber-control across the board: copyright
mandates, cybersecurity rules, privacy regulations, speech controls, and
much more.
-- Jerry
Brito and Adam Thierer
Debian accepts a diversity statement (announcement).
Linus Torvalds co-wins the Millennium Technology Prize (BBC report).
The Apple versus Google-owned Motorola patent litigation takes a
surprising turn as Judge Richard Posner dismisses the case, calling the patent
system "dysfunctional" (GigaOm
article).
Emacs 24.1 is released (announcement).
MPlayer 1.1 is released (LWN blurb).
X11R7.7 is released (announcement
and LWN article).
SystemTap 1.8 is released (announcement).
Ulogd 2.0.0 is released (announcement).
The Electronic Frontier Foundation announces the Defend
Innovation patent reform project (press
release).
The Fedora and Ubuntu distributions outline their plans for dealing
with UEFI secure boot (LWN article on the Fedora plan and the Ubuntu plan).
Red Hat Enterprise Linux 6.3 is released (LWN blurb, release
notes).
GRUB 2.00 is released (announcement).
Documentation is the sort of thing that will never be
great unless someone from outside contributes it (since the developers can
never remember which parts are hard to understand).
-- Avery Pennarun
The GNU C library (glibc) version 2.16 is released (announcement).
Many Linux servers misbehave as a result of the leap second added at the
end of the month (LWN article).
Security
By Jonathan Corbet
December 5, 2012
The
FreedomBox project
is working toward the creation of an inexpensive, in-home device that can
be used for secure and private communications. The initial plan is to
create a version of the Debian distribution that can be installed on a
device like the
DreamPlug;
the resulting configuration should "just work" for nontechnical users in
potentially hostile situations. The project has many challenges to
overcome, one of which — the choice of MAC address for the network
interface — shows how tricky this problem space can be.
An interface's MAC address is a unique number identifying the interface to
any other devices it may communicate directly with. Ethernet-style MAC
addresses are six-byte quantities; half of those bytes identify the
manufacturer while the other half are meant to be a unique serial number.
The MAC address for the Ethernet interface on the system where this article
is being typed is:
18:03:73:be:76:4a
This MAC address identifies the relevant system as having been manufactured
by Dell. If Dell has done its job properly (and there is no evidence to
the contrary), no other Ethernet interface on the planet should have that
same MAC address.
FreedomBox developer Nick Daly recently started
pondering the question of how a FreedomBox should set its MAC address.
The hardware will come with an address provided by the manufacturer, of
course, but that address can be changed by the kernel and there may well be
good reasons for doing so. Many of those were outlined in this lengthy message from John Gilmore, which
is well worth reading in its entirety; it forms the basis of this summary.
One obvious problem is that a static MAC address is a unique number
identifying a particular system. Most interfaces never operate with
anything but the vendor-supplied address; if a hostile party learns that
address, they can quickly identify the system it belongs to. So, while a
FreedomBox device might move around, a suitably informed observer will
always know which device it is. That allows the correlation of activities
over time and the monitoring of specific devices.
Current technologies make things worse. Quoting John:
Apple iPhones record the MAC addresses that are nearby, report
these to Apple, and Apple uses them to return a physical position
fix. This is used to more rapidly cause the GPS algorithm to
converge on a position, and also used when GPS isn't working. The
phones often report their GPS position and any nearby MAC addresses
back to Apple servers... It's easy for hackers to query that
database of MAC addresses and locations, by pretending to be an
iPhone seeking its location.
In other words, a hostile entity might not have to drive around a city in
an attempt to detect a device with a specific MAC address; instead, it is
just a matter of asking Apple, which has a widespread surveillance network
in place and can simply say where that device is to be found. Similar
information is maintained by other parties — Google, for example.
John also pointed out that it is often trivial to determine which IP
address is assigned to a device; frequently it is just a matter of sending a DNS
query to the MAC address of interest. That can enable the identification
of the location from which specific network activity has been generated.
Finally, there is the matter of that manufacturer identification number
found in every MAC address. If FreedomBox becomes a widely used and
effective system, certain authorities might develop a strong interest in
knowing where DreamPlug systems are to be found. The identifying
information found in the MAC address makes that identification a relatively
simple task. Turning on a DreamPlug could be a way of painting a target on
a specific location — not the sort of dream the owner may have been looking
for.
The obvious conclusion is that FreedomBox systems should not normally run
with the default MAC address provided by the vendor. They should, instead,
generate a new address, and that address should be changed frequently.
Fortunately, much of this is easy to do; any even remotely contemporary
hardware will allow the host system to provide a new MAC address, and the
data link layer (and above) protocols are pretty good about responding to
MAC address changes. So there is no obvious technical barrier preventing
frequent changing of a system's MAC address.
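As a rough illustration of how easy the mechanics are, the following C
sketch changes an interface's MAC address using the standard
SIOCSIFHWADDR ioctl(). The interface name and address bytes are
placeholders, root privileges are required, and the interface normally
must be brought down before the change:

    /* Sketch: assign a new MAC address to an interface via
     * SIOCSIFHWADDR. "eth0" and the address bytes are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_arp.h>

    int main(void)
    {
        unsigned char mac[6] = { 0x18, 0x03, 0x73, 0x00, 0x00, 0x01 };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) { perror("socket"); return 1; }
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_hwaddr.sa_family = ARPHRD_ETHER;
        memcpy(ifr.ifr_hwaddr.sa_data, mac, 6);
        if (ioctl(fd, SIOCSIFHWADDR, &ifr) < 0)
            perror("SIOCSIFHWADDR");
        close(fd);
        return 0;
    }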
But there is still the question of what that address should be. Nick had
suggested using 00:00:00:00:00:00 as the default, a choice
that would clearly prevent the identification of specific FreedomBoxes.
But there are problems with that choice, starting with the fact that
confusion would result as soon as two FreedomBoxes appeared on the same
network. So something a little smarter is needed.
One obvious possibility is to simply generate a six-byte random number and
use that. Care would have to be taken to avoid MAC address collisions on
any given net, but that is not a particularly hard problem to solve. There
are also the usual issues with having enough
entropy available to generate
a proper random number at boot time; without an adequate level of care,
that random address might be far less random than people expect. Once
again, that is a problem that should be amenable to a proper solution.
But, as John pointed out, there is another problem: real-world MAC
addresses follow a specific pattern; a random address, being unlikely to
fit that pattern, would probably stand out like a neon sign to anybody who
is looking for it. To be convincing, a system-chosen MAC address
cannot be completely random. It should have a recognized manufacturer
number, preferably a manufacturer that actually makes contemporary wireless
network interfaces. The serial number also needs to fit into a range that
was actually shipped by that manufacturer. In other words, a random MAC
address will only blend in if it makes the device look like some other
random piece of real-world hardware.
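A sketch of that approach might look like the following. The vendor
prefix shown is simply the one from the Dell example above, standing in
for a table of OUIs belonging to real wireless-interface vendors, and
rand() is far too predictable for genuine privacy use, as the entropy
concerns above suggest:

    /* Sketch: build a MAC address from a real-looking vendor prefix
     * (OUI) plus a random serial number. A real implementation would
     * pick the prefix from a table of plausible vendors and draw the
     * random bytes from a well-seeded source, not rand(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        unsigned char mac[6] = { 0x18, 0x03, 0x73 };  /* placeholder OUI */
        int i;

        srand(time(NULL));          /* too predictable for real use */
        for (i = 3; i < 6; i++)
            mac[i] = rand() & 0xff;
        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }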
These problems are all tractable, but the solution requires a great deal of
due care if it is not to expose its users to unwanted consequences.
Indeed, the whole system must be designed and implemented with that level
of care; that is part of why the FreedomBox has not come to fruition as
quickly as many would have liked. Privacy is a surprisingly difficult
problem, with many pitfalls for those who try for a quick solution.
Brief items
My computer was arrested before me
-- Syrian protester
Dr. Taymour Karim
… the FBI has access to … the
emails of virtually everybody in the country.
-- NSA whistle-blower
William
Binney, interviewed on RT.com
Jellyfish are interesting to trojan writers. Deep at their heart they
are colony creatures, with stealth capabilities. What could be more
pertinent to those of us who feed on the world's information like
opportunistic predators?
Right now my feeling is that the world has been lucky because most of
the malicious software on the internet has been, at worst, a
rapscallion, or a scofflaw, or perhaps a ne'er-do-well. And there are
modern networks who fare pretty well against that kind of adversary. But
longer term, there's going to be malware that resembles science fiction
<http://www.immunityinc.com/downloads/TheLongRun.pdf>...or maybe
jellyfish? :>
--
Dave Aitel
These companies own us, so they can sell us off
-- again, like serfs -- to rival lords... or turn us in to
the authorities.
--
Bruce
Schneier
New vulnerabilities
apache2: denial of service
Package(s): apache2
CVE #(s): CVE-2012-4557
Created: November 30, 2012
Updated: December 5, 2012
Description: From the Debian advisory:
A flaw was found when mod_proxy_ajp connects to a backend
server that takes too long to respond. Given a specific
configuration, a remote attacker could send certain requests,
putting a backend server into an error state until the retry
timeout expired. This could lead to a temporary denial of
service.
claws-mail: user credential leak
Package(s): claws-mail
CVE #(s): CVE-2012-5527
Created: December 3, 2012
Updated: January 18, 2013
Description: From the Red Hat bugzilla:
A security flaw was found in the way vCalendar plug-in of Claws Mail displayed user credential information in the system tray display when using https scheme. A local attacker could use this flaw to obtain user credentials (username and password) used for connection to remote point.
firefox: multiple vulnerabilities
Package(s): Mozilla Firefox
CVE #(s): CVE-2012-5837, CVE-2012-4206
Created: November 29, 2012
Updated: December 5, 2012
Description: From the Mozilla advisory:
MFSA 2012-102 / CVE-2012-5837: Security researcher
Masato Kinugawa reported that when script is entered into
the Developer Toolbar, it runs in a chrome privileged
context. This allows for arbitrary code execution or
cross-site scripting (XSS) if a user can be convinced to
paste malicious code into the Developer Toolbar.
MFSA 2012-98 / CVE-2012-4206: Security researcher
Robert Kugler reported that when a specifically named DLL
file on a Windows computer is placed in the default
downloads directory with the Firefox installer, the Firefox
installer will load this DLL when it is launched. In
circumstances where the installer is run by an
administrator privileged account, this allows for the
downloaded DLL file to be run with administrator
privileges. This can lead to arbitrary code execution from
a privileged account.
kernel: information leak
Package(s): kernel
CVE #(s): CVE-2012-4530
Created: December 3, 2012
Updated: January 15, 2013
Description: From the Red Hat bugzilla:
A memory disclosure flaw has been found in the way binfmt_script load_script()
function handled excessive recursions. An unprivileged local user could use
this flaw to leak kernel memory.
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2012-5513
Created: December 5, 2012
Updated: December 24, 2012
Description: From the Red Hat advisory:
A flaw in the way the Xen hypervisor implementation range checked guest
provided addresses in the XENMEM_exchange hypercall could allow a
malicious, para-virtualized guest administrator to crash the hypervisor or,
potentially, escalate their privileges, allowing them to execute arbitrary
code at the hypervisor level.
keystone: multiple vulnerabilities
Package(s): keystone
CVE #(s): CVE-2012-5571, CVE-2012-5563
Created: November 29, 2012
Updated: December 11, 2012
Description: From the Ubuntu advisory:
Vijaya Erukala discovered that Keystone did not properly invalidate
EC2-style credentials such that if credentials were removed from a tenant,
an authenticated and authorized user using those credentials may still be
allowed access beyond the account owner's expectations. (CVE-2012-5571)
It was discovered that Keystone did not properly implement token
expiration. A remote attacker could use this to continue to access an
account that is disabled or has a changed password. This issue was
previously fixed as CVE-2012-3426 but was reintroduced in Ubuntu 12.10.
(CVE-2012-5563)
libxml2: code execution
Package(s): libxml2
CVE #(s): CVE-2012-5134
Created: November 30, 2012
Updated: March 1, 2013
Description: From the Red Hat advisory:
A heap-based buffer underflow flaw was found in the way libxml2 decoded
certain entities. A remote attacker could provide a specially-crafted XML
file that, when opened in an application linked against libxml2, would
cause the application to crash or, potentially, execute arbitrary code with
the privileges of the user running the application. (CVE-2012-5134)
lynx: multiple vulnerabilities
Package(s): lynx-cur
CVE #(s): CVE-2010-2810, CVE-2012-5821
Created: November 30, 2012
Updated: December 5, 2012
Description: From the Ubuntu advisory:
Dan Rosenberg discovered a heap-based buffer overflow in Lynx. If a user
were tricked into opening a specially crafted page, a remote attacker could
cause a denial of service via application crash, or possibly execute
arbitrary code as the user invoking the program. This issue only affected
Ubuntu 10.04 LTS. (CVE-2010-2810)
It was discovered that Lynx did not properly verify that an HTTPS
certificate was signed by a trusted certificate authority. This could allow
an attacker to perform a "man in the middle" (MITM) attack which would make
the user believe their connection is secure, but is actually being
monitored. This update changes the behavior of Lynx such that self-signed
certificates no longer validate. Users requiring the previous behavior can
use the 'FORCE_SSL_PROMPT' option in lynx.cfg. (CVE-2012-5821)
mod_security: multipart/invalid part ruleset bypass
Package(s): mod_security
CVE #(s): CVE-2012-4528
Created: December 3, 2012
Updated: January 1, 2013
Description: From the Red Hat bugzilla:
ModSecurity <= 2.6.8 is vulnerable to a multipart/invalid part ruleset bypass; this was fixed in 2.7.0 (released on 2012-10-16).
mysql: code execution
Package(s): mysql-5.1
CVE #(s): CVE-2012-5611
Created: December 4, 2012
Updated: February 10, 2013
Description: From the CVE entry:
Stack-based buffer overflow in MySQL 5.5.19, 5.1.53, and possibly other versions, and MariaDB 5.5.2.x before 5.5.28a, 5.3.x before 5.3.11, 5.2.x before 5.2.13 and 5.1.x before 5.1.66, allows remote authenticated users to execute arbitrary code via a long argument to the GRANT FILE command.
perl: code execution
Package(s): perl
CVE #(s): CVE-2012-5195
Created: November 30, 2012
Updated: January 28, 2013
Description: From the Ubuntu advisory:
It was discovered that Perl's 'x' string repeat operator is vulnerable
to a heap-based buffer overflow. An attacker could use this to execute
arbitrary code. (CVE-2012-5195)
Kernel development
Brief items
The current development kernel is 3.7-rc8, reluctantly
released by Linus on December 3.
"
I really didn't want it to come to this, but I was uncomfortable
doing the 3.7 release yesterday due to last-minute issues, and decided to
sleep on it. And today, I ended up even *less* comfortable about it due to
the resurrection of a kswapd issue, so I decided that I'm going to do
another -rc after all." As he points out, that implies that the 3.8
merge window will run close to the holidays.
Stable updates:
3.6.9,
3.4.21 and 3.0.54 were released on December 3.
Meanwhile, 3.2.35 is in the review process;
its release can be expected at any time.
I'm all in favour of "whence", which is indeed the name of that
lseek argument - since mediaeval times I believe.
It's good to have words like that in the kernel source: while
you're in the mood, please see if you can find good homes for
"whither" and "thrice" and "widdershins".
—
Hugh Dickins
Took vacation last week, spent most of it doing userspace coding.
It was joyous.
—
Rusty
Russell
If yes, yet again this illustrates why the use of atomic types
leads people down the path of believing that their code somehow
becomes magically safe through the use of this smoke-screen. IMHO,
every use of atomic_t must be questioned and carefully analysed
before it gets into the kernel - many are buggy through assumptions
that atomic_t buys you something magic.
—
Russell King
By Jonathan Corbet
December 5, 2012
Last week's edition included
an article on the
addition of the FALLOC_FL_NO_HIDE_STALE flag to the
fallocate() system call. Some developers, objecting to the
patch and the way it got into the kernel, had called for it to be reverted
before the 3.7 release went final. At the time, Linus had not made any
remarks in the discussion or indicated whether he would accept the revert.
That situation changed after Linus was prompted by Martin Steigerwald. His response was clear enough:
If you want something reverted, you show me the *technical* reason
for it. Not the "ooh, I'm so annoyed by how this was done" reason
for it.
And if your little feelings got hurt, get your mommy to tuck you
in, don't email me about it. Because I'm not exactly known for my
deep emotional understanding and supportive personality, am I?
There were some technical reasons offered in the discussion, along with the
more general process-oriented complaints. But it seems clear that Linus
has not found that discussion convincing. So, in the absence of a surprise
from somewhere, it seems that the new fallocate() flag will remain
for the 3.7 release, at which point it will become part of the kernel's
user-space ABI.
Kernel development news
By Michael Kerrisk
December 5, 2012
The abstract goal of containers is, in
effect, to provide a group of processes with the illusion that they
are the only processes on the system. When fully implemented, this feature
has the potential to realize many practical benefits, such as light-weight
virtualization and checkpoint/restore.
In order to give the processes in a container the illusion that there
are no other processes on the system, various global system resources must
be wrapped in abstractions that make it appear that each container has its
own instance of the resources. This has been achieved by the addition of
"namespaces" for a number of global resources. Each namespace provides an
isolated view of a particular global resource to the set of processes that
are members of that namespace.
Step by step, more and more global resources have been wrapped in
namespaces, and before we look at another step in this path it's worth
reviewing the progress to date.
Namespaces so far
The first step
in the journey was mount namespaces, which can be used to provide a group
of processes with a private view of the mount points that make up the
filesystem hierarchy. Mount namespaces first appeared in the mainline
kernel in 2002, with the release of Linux 2.4.19. The clone()
flag used to create mount namespaces was given the rather generic name
CLONE_NEWNS for "new namespace", implying that no one was then
really considering the possibility that there might be other kinds of
namespaces; at that time, of course, containers were no more than a gleam
in the eyes of some developers.
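A minimal example of the mechanics, assuming root privileges
(CAP_SYS_ADMIN): the child created below starts in a private mount
namespace, so any filesystems it mounts are invisible to the rest of
the system.

    /* Sketch: start a shell in a new mount namespace with clone().
     * Needs CAP_SYS_ADMIN; error handling is abbreviated. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[1024 * 1024];     /* child's stack */

    static int child(void *arg)
    {
        /* Mounts performed here stay private to this namespace. */
        execlp("sh", "sh", (char *) NULL);
        perror("execlp");
        return 1;
    }

    int main(void)
    {
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWNS | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); exit(EXIT_FAILURE); }
        waitpid(pid, NULL, 0);
        return 0;
    }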
However, as the concept of containers took hold, a number of other
namespaces have followed. Network
namespaces were added to provide a group of processes with a private
view of the network (network devices, IP addresses, IP routing tables, port
number space, and so on). PID namespaces
isolated the global "PID number space" resource, so that processes in
separate PID namespaces can have the same PIDs—in particular, each
namespace can have its own 'init' (PID 1), the "ancestor of all
processes". PID namespaces also allow techniques such as freezing the
processes in a container and then restoring them on another system while
maintaining the same PIDs.
Several other global resources have likewise been wrapped in
namespaces, so that there are also IPC
namespaces (initially implemented to isolate System V IPC identifiers
and later to isolate instances of the
virtual filesystems used in the implementation of POSIX message queues) and
UTS namespaces (which wrap the
nodename and domainname identifiers returned by uname(2)).
Work on one of the more complex namespaces, user namespaces, was started in about Linux
2.6.23 and seems to be edging towards
completion. When complete, user namespaces will allow per-namespace mappings
of user and group IDs, so that, for example, it will be possible for a process
to be root inside a container without having root privileges in the system
as a whole.
Of course, a Linux system has a large number of global resources, each
of which could conceivably be wrapped in a namespace. At the more extreme
end, for example, even a resource such as the system time could be wrapped,
so that different containers could maintain different concepts of the
time. (A time namespace was once proposed,
but the implementation was not merged.) The trick is to determine the
minimum set of resources that need to be wrapped for the practical
implementation of containers. (Of course, this "minimum set" may well grow
over time, as people develop new uses for containers.) A related question
is how those wrappings should be grouped so as to avoid an explosion of
namespaces that would increase application complexity. So, for example,
System V IPC and POSIX message queues could conceivably have been
wrapped in different namespaces, but the kernel developers concluded that
it makes practical sense to group them in a single "IPC" namespace.
The global kernel log problem
What is necessary for the practical implementation of containers
sometimes only becomes clear when one starts doing, well, practical
things. Thus it was that in early 2010 Jean-Marc Pigeon reported that he had written a small utility
to build containers using the clone() system call that worked
fine, except that "HOST and all containers share the SAME /proc/kmsg,
meaning kernel syslog information are scrambled (useless)".
What Jean-Marc was discovering is that the kernel log is one of the
global resources that is not wrapped in a namespace. He went on to note
another ill-effect: "I have in iptables, reject packet logging on the
HOST, [but as soon as] rsyslog is started on one container, I can't see my
reject packet log any more." In other words, starting a
syslog daemon on the host or any container sucks up all of the
kernel log messages produced on the host or in any container. The point
here about iptables is particularly relevant: the inability to
isolate kernel log messages from iptables is a significant
practical problem when trying to employ the network namespaces facility
that the kernel already provides.
In response to Jean-Marc's question about how the problem could be
fixed, Serge Hallyn replied:
Well, the results of do_syslog() should be containerized. Kernel messages (oopses for
instance) should always go to the initial container. Shouldn't be hard to
do, but the question is what do we tie it to? User namespace? Network
namespace? … I'm tempted to say userns makes the most sense - if
you start a new userns you likely always want private syslog, whereas with
netns and pidns you may not.
do_syslog() is the kernel function that encapsulates the main
logic of the syslog(2)
system call. That system call retrieves messages from the kernel log ring
buffer (and performs a range of control operations on the log buffer) that
is populated by messages created using the kernel's printk()
function. Thus, though discussions on this topic have tended to use the
term "syslog namespace", that is something of a misnomer: what is really
meant is wrapping the kernel log resource in a namespace.
To avoid possible confusion, it is probably worth noting that the
syslog(2) system call is a quite different thing from the syslog(3)
library function, which writes messages to the UNIX domain datagram socket
(/dev/log) from which the user-space syslog daemon
(rsyslogd or similar) retrieves messages. (Because of this
collision of names, the GNU C library exposes the syslog(2) system
call under a quite different name: klogctl().)
[A diagram in the original article illustrates the two logging paths.]
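For the curious, here is a minimal sketch of the system-call side of
that picture, reading the kernel log via klogctl(); the
SYSLOG_ACTION_* values are the documented syslog(2) command numbers,
defined here because glibc's headers do not expose them:

    /* Sketch: dump the kernel log ring buffer using klogctl(),
     * glibc's name for the syslog(2) system call. Reading may
     * require privilege, depending on the dmesg_restrict sysctl. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/klog.h>

    #define SYSLOG_ACTION_READ_ALL    3   /* non-destructive read */
    #define SYSLOG_ACTION_SIZE_BUFFER 10  /* ring buffer size */

    int main(void)
    {
        int size = klogctl(SYSLOG_ACTION_SIZE_BUFFER, NULL, 0);
        char *buf;
        int n;

        if (size < 0) { perror("klogctl"); return 1; }
        buf = malloc(size);
        if (buf == NULL) { perror("malloc"); return 1; }
        n = klogctl(SYSLOG_ACTION_READ_ALL, buf, size);
        if (n < 0) { perror("klogctl"); free(buf); return 1; }
        fwrite(buf, 1, n, stdout);
        free(buf);
        return 0;
    }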
First attempts at a solution
In the event, "containerizing" do_syslog() turned out to be
more difficult than Serge thought. His first
shot at addressing the problem (a "gross hack" to "provide each
user namespace with its own syslog ring buffer") quickly uncovered
a further difficulty: the kernel's
printk() is sometimes called in contexts where there is no way to
determine in which of the per-namespace ring buffers a message should be
logged. For example, if the kernel is executing a network interrupt (to
process an incoming network packet) and wants to log a message, that
message should not be sent to the per-namespace kernel log of the
interrupted process. Rather, the message should be sent to the kernel log
associated with the network namespace for the network device; however,
the kernel data structures provide no way to obtain a reference to that
kernel log.
Jean-Marc himself also made an attempt
at implementing a solution. However, Serge pointed out that Jean-Marc's patch suffered
some of the same problems as his own earlier attempt. Serge went on to
describe what he thought would be the correct solution, which would require
the creation of a separate syslog namespace. His proposed solution can be
paraphrased as follows (a code sketch follows the list):
- The core of vprintk_emit() (which contains most of the
implementation of the printk() function) should be moved into
a new nsvprintk_emit() function that takes an argument that specifies a
syslog namespace.
- vprintk_emit() would then become a wrapper around
nsvprintk_emit() that specifies the "initial" syslog namespace
(i.e., the syslog namespace of the host system).
- A namespace-aware version of printk(), called (say)
nsprintk(), should be created. That function would take a syslog
namespace argument and pass it to nsvprintk_emit().
- The kernel log ring buffer should be "containerized" as per Serge's
initial patch. Thus each syslog namespace would have its own ring buffer,
and syslog(2) would operate on the per-namespace ring buffer of
the calling process.
- At call sites in the kernel code where it is not appropriate to use the
syslog namespace of the currently executing process, calls to
printk() should be replaced with calls to nsprintk() that
pass a suitable syslog namespace argument.
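In outline, the refactoring Serge described would look something like
the following kernel-style sketch. The names nsvprintk_emit(),
nsprintk(), and struct syslog_ns come from the proposal and do not
exist in mainline, and the signatures are simplified:

    /* Sketch of the proposed refactoring (simplified signatures;
     * these interfaces are not in the mainline kernel). */

    /* The real logging logic moves here and takes an explicit
     * syslog namespace argument. */
    int nsvprintk_emit(struct syslog_ns *ns, int facility, int level,
                       const char *fmt, va_list args);

    /* Existing callers are unchanged: log to the host's namespace. */
    int vprintk_emit(int facility, int level,
                     const char *fmt, va_list args)
    {
        return nsvprintk_emit(&init_syslog_ns, facility, level,
                              fmt, args);
    }

    /* Namespace-aware call sites pass an explicit target, for
     * example the syslog namespace reached via a struct net. */
    int nsprintk(struct syslog_ns *ns, const char *fmt, ...)
    {
        va_list args;
        int r;

        va_start(args, fmt);
        r = nsvprintk_emit(ns, 0, -1, fmt, args);
        va_end(args);
        return r;
    }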
Although Jean-Marc made a few more efforts to rework his patch in the
following weeks, the effort ultimately petered out without much further
comment or consensus on a solution. It seems that Serge and other kernel developers realized that the
problem was more complex than first thought, and that they had neither the
time to implement a solution themselves nor the time to guide Jean-Marc
toward one.
The main difficulty lies in the last of the points above, and its
solution was not really elaborated in Serge's mail. The kernel data
structures and code need to be modified to add suitable hooks to handle the
"no current process context problem"—the cases where
printk() is called from a context in which the currently executing
process can't be used to identify a suitable syslog namespace to which a
message should be logged.
Restarting work on a solution
Work in this area then seems to have gone quiet for more than two
years, until a few days ago when Serge proposed a new proof-of-concept patch set, pretty much
along the lines he described two years earlier. His description of the
patch noted that:
The syslog ns is tied to a user
namespace. You must create a new user namespace before you can create a
new sylog ns. The syslog ns is created through a new command (11) to
the __NR_syslog system call.
Once a task enters a new syslog ns, it's "dmesg", "dmesg -c" and /dev/kmsg
actions affect only itself, so that user-created syslog messages no longer
are confusingly combined in the host's syslog.
In other words, Serge's patch provides isolation for the kernel log by
implementing a new dedicated namespace for that purpose (rather than
providing the isolation by attaching the implementation to one of the
existing namespaces). Each syslog namespace instance would be tied to a
particular user namespace.
Normally, new namespaces of each type are created by suitable flags to
the clone() system call. Thus, for example, there are clone flags
such as CLONE_NEWUTS and CLONE_NEWUSER. However, a while
ago, the kernel developers realized that the flag space for
clone() was exhausted. (Providing additional flag space was one of
the motivations behind the proposal to add
an eclone() system call, a proposal that was ultimately
unsuccessful.) For this reason, Serge proposed instead to use a new command
to the syslog() system call to create syslog namespace instances.
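Based on that description, creating a syslog namespace would presumably
look something like the sketch below; command 11 exists only in the
proposed patch, not in any mainline kernel:

    /* Hypothetical sketch per Serge's description: command 11 to
     * the syslog(2)/klogctl() system call creates a new syslog
     * namespace. Only meaningful with the proposed patch applied. */
    #include <sys/syscall.h>
    #include <unistd.h>

    static int new_syslog_ns(void)
    {
        return syscall(SYS_syslog, 11, (char *) 0, 0);
    }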
Serge went on to note:
"printk" itself always goes
to the initial syslog_ns, and consoles belong only to the initial
syslog_ns. However printks relating to a specific network namespace, for
instance, can now be targeted to the syslog ns for the user ns which owns
the network ns, aiding in debugging in a container.
Serge's patch would solve the "no current process context problem" as
follows. As noted above, this case is handled by an
nsprintk()-style function that takes an argument (of type
struct syslog_ns *) that identifies the syslog namespace
to which the log message should be sent. The value for that argument can be
obtained via the struct net structure for the network
namespace instance: in the current user namespace implementation (git
tree), when a network namespace is created using clone(), a
pointer to the corresponding user namespace instance of the caller is
stored in the net structure. Serge's patch in turn provides a
linkage from that user namespace structure to the corresponding syslog
namespace.
Eric Biederman, the maintainer of the user namespace git tree, agreed with Serge's overall approach, but
queried one particular point:
I am not a fan of how this ties into the user namespace. I would prefer
closer or looser ties. The recursive reference count loop where a userns
refers to a syslogns and that syslogns refers to the same userns is
unpleasant.
In Serge's implementation, the syslog and user namespaces are
maintained as separate structures, but, as the recursive pointers between
the two namespace structures and the need to create a new user namespace
before creating a syslog namespace indicate, instances of each namespace
are not truly independent. In Eric's view then, the syslog and user
namespace structures should either be more fully decoupled, or they should
be much more tightly coupled.
Eric went on later to note that:
There is an argument to be made that syslog messages are the kind of
security identifiers like uid, gids, and keys that should be part of a user
namespace. I'm not fully convinced but there are some DOS attacks that
this would naturally prevent.
The discussion ultimately led Serge to conclude that the syslog resource should
instead be grouped as part of the user namespace rather than as a separate
namespace:
I can't really think of a good case for not putting the syslogns straight
into the userns (i.e. not having a separate syslogns), so I'd say let's go
that route.
Serge's patch seems to have inspired another group to try implementing
syslog namespaces. A couple of days after Serge's patch, Rui Xiang posted some patches that he and his colleague
Libo Chen had developed to implement similar functionality. Rui began by
noting a couple of the obvious differences in their patch set:
In Serge's patch [...] syslog_namespace was tied to a user namespace. We add
syslog_ns tied to nsproxy instead, and implement ns_printk in ip_table
context.
We add syslog_namespace as a part of nsproxy, and a new flag
CLONE_SYSLOG to unshare syslog area.
Using nsproxy is the conventional way of dealing with the
namespaces associated with a process: it is a structure that contains
pointers to structures describing each of the namespaces that a process is
associated with. This contrasts with Serge's original approach, which hung
the syslog namespace off the user namespace.
Rui's team also took advantage of a detail that Serge perhaps
overlooked: there happens to be one spare bit in the flag space for
clone() because the CLONE_STOPPED flag was removed
several kernel releases ago. Therefore, Rui's team repurposed that
bit. Normally, it would not be safe to recycle flag bits in this way, but
the CLONE_STOPPED flag has a special history. It was initially
proposed for use specifically in the NPTL threading implementation, but the
final implementation abandoned the flag in favor of a different
approach. As such, CLONE_STOPPED is likely never to have had
serious user-space users.
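If these patches were applied, detaching a process into its own syslog
namespace would presumably be a one-line unshare() call, as in the sketch
below. CLONE_SYSLOG was never merged, so the flag value, taken from the old
CLONE_STOPPED bit as just described, is defined by hand here.

    #define _GNU_SOURCE
    #include <sched.h>      /* unshare() */
    #include <stdio.h>      /* perror() */

    /* The bit formerly occupied by CLONE_STOPPED; CLONE_SYSLOG exists
       only in Rui and Libo's patches, not in any kernel header. */
    #ifndef CLONE_SYSLOG
    #define CLONE_SYSLOG 0x02000000
    #endif

    int main(void)
    {
        /* On a patched kernel this would give the caller a private
           kernel log; on mainline it fails with EINVAL. */
        if (unshare(CLONE_SYSLOG) == -1) {
            perror("unshare(CLONE_SYSLOG)");
            return 1;
        }
        return 0;
    }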
Unsurprisingly, the overall approaches of the two patch sets have many
similarities, but there are differences in details such as how a syslog
namespace is associated with a struct net in order to solve
the "no current process context problem".
Although kernel flame wars between competing implementations are what
often make the biggest headlines in the online press, the subsequent
exchange between Serge, Rui, and Libo demonstrated that life on developer
mailing lists is usually more cordial. Serge asked:
I understand that user namespaces aren't 100% usable yet, but looking
long term, is there a reason to have the syslog namespace separate
from user namespace?
In response, Rui noted:
Actually we don't have strong preference. We'll think more about it. Hope
we can make consensus with Eric.
That in turn led Serge to ask Rui and
Libo if his patch set might suffice for their needs, with the gracious note
that:
I'm not at all wedded to my patchset. I'm happy to go with something else
entirely. My set was just a proof of concept.
There is one other notable difference in functionality between the two
patch sets. In Serge's patch set, system consoles belonged (by intention)
only to the initial syslog namespace, meaning that kernel log
messages from other syslog namespace instances can't be displayed on
consoles. By contrast, Rui and Libo's patches include consoles in the
syslog namespace, so that kernel messages from syslog namespaces other than
the initial namespace can be displayed on consoles. Rui and Libo would like
this functionality in order to be able to obtain kernel log messages from
containers when monitoring embedded devices that provide access to the
console over a serial port.
The summary of the discussion is that there are useful pieces in
both patches. Serge plans to revise his
patch to merge the syslog namespace functionality into user namespaces, add
the console functionality desired by Rui and Libo, and add some in-kernel
uses of the namespace-aware printk() interface as a
proof-of-concept for the implementation (as was done in the patches by Rui
and Libo).
Concluding remarks
The history of the work to provide syslog namespaces (or as it might
better be termed, namespace isolation for the kernel log) presents a
microcosm of work on namespaces in general. As has often been the case, the
implementation of namespaces turns out to be surprisingly
complex. Much of that complexity hinges on detailed questions of
functionality (for example, the behavior of consoles in this case) and the
question of whether resources should be grouped inside a new namespace or
within an existing namespace. In the case of syslog namespaces, it looks
like a number of decisions have been made; there will probably be a few
more rounds of patches, but there seems to be general consensus on the
direction forward. Thus, there is a reasonable chance that proper namespace
isolation of kernel logging will appear in the kernel sometime around Linux
3.9 or soon afterward.
Comments (8 posted)
By Jonathan Corbet
December 5, 2012
The term "stable pages" refers to the concept that the system should not
modify the data in a page of memory while that page is being written out to
its backing store. Much of the time, writing new data to in-flight pages
is not actively
harmful; it just results in the writing of the newer data sooner than might
be expected. But sometimes, modification of in-flight pages can create
trouble; examples include hardware where data integrity features are in
use, higher-level RAID implementations, or filesystem-implemented
compression schemes. In those cases, unexpected data modification can
cause checksum failures or, possibly, data corruption.
To avoid these problems, the stable pages
feature was merged for the 3.0 development cycle. This relatively
simple patch set ensures that any thread trying to modify an
under-writeback page blocks until the pending write operation is complete.
This patch set, by Darrick Wong, appeared to solve the problem; by blocking
inopportune data modifications, potential problems were avoided and
everybody would be happy.
Except that not everybody was happy. In early 2012, some users started reporting performance problems associated with
stable pages. In retrospect, such reports are not entirely surprising;
any change that causes processes to block and wait for asynchronous events
is unlikely to make things go faster. In any case, the reported problems
were more severe than anybody expected, with multi-second stalls being
observed at times. As a result, some users (Google, for example) have added patches to
their kernels to disable the feature. The performance costs are too high,
and, in the absence of a use case like those described above, there is no
real advantage to using stable pages in the first place.
So now Darrick is back with a new patch set
aimed at improving this situation. The core idea is simple enough: a new
flag (BDI_CAP_STABLE_WRITES) is added to the
backing_dev_info structure used to describe a storage device. If
that flag is set, the memory management code will enforce stable pages as
is done in current kernels. Without the flag, though, attempts to write a
page will not be forced to wait for any current writeback activity. So the
flag gives the ability to choose between a slow (but maybe safer) mode or a
higher-performance mode.
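The resulting write-path decision is easy to model. In the sketch below,
only the BDI_CAP_STABLE_WRITES name comes from Darrick's patch; the
structures and helpers are simplified stand-ins for their kernel
counterparts.

    #include <stdbool.h>
    #include <stdio.h>

    #define BDI_CAP_STABLE_WRITES 0x1    /* illustrative value */

    struct backing_dev_info { unsigned long capabilities; };
    struct page { bool under_writeback; };

    /* Stand-in for the kernel's wait_on_page_writeback(), which sleeps
       until the pending write completes. */
    static void wait_on_page_writeback(struct page *page)
    {
        page->under_writeback = false;   /* pretend the write finished */
    }

    /* Called before a page is modified: only devices that asked for
       stable pages force the writer to wait. */
    static void maybe_wait_for_stable_page(struct page *page,
                                           struct backing_dev_info *bdi)
    {
        if (bdi->capabilities & BDI_CAP_STABLE_WRITES)
            wait_on_page_writeback(page);
    }

    int main(void)
    {
        struct backing_dev_info raid  = { .capabilities = BDI_CAP_STABLE_WRITES };
        struct backing_dev_info plain = { .capabilities = 0 };
        struct page page = { .under_writeback = true };

        maybe_wait_for_stable_page(&page, &plain);  /* returns immediately */
        maybe_wait_for_stable_page(&page, &raid);   /* waits for writeback */
        printf("page is now stable: %s\n", page.under_writeback ? "no" : "yes");
        return 0;
    }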
Much of the discussion around this patch set has focused on just how that
flag gets set. One possibility is that the driver for the low-level
storage device will turn on stable pages; that can happen, for example,
when hardware data integrity features are in use. Filesystem code could
also enable stable pages if, for example, it is compressing data
transparently as that data is written to disk. Thus far, things work fine:
if either the storage device or the filesystem implementation requests
stable pages, they will be enforced; otherwise
things will run in the faster mode.
The real question is whether the system administrator should be able to
change this setting. Initial versions of the patch gave complete control over
stable pages to the user by way of a sysfs attribute, but a number of
developers complained about that option. Neil Brown pointed out that, if the flag could change at
any time, he could never rely on it within the MD RAID code; stable pages
that could disappear without warning at any time might as well not exist at
all. So there was
little disagreement that users should never be able to turn off the
stable-pages flag. That left the question of whether they should be able
to enable the feature, even if neither the hardware nor the
filesystem needs it, presumably because it would make them feel safer
somehow. Darrick had left that capability in, saying:
I dislike the idea that if a program is dirtying pages that are
being written out, then I don't really know whether the disk will
write the before or after version. If the power goes out before
the inevitable second write, how do you know which version you get?
Sure would be nice if I could force on stable writes if I'm feeling
paranoid.
Once again, the prevailing opinion seemed to be that there is no actual
value provided to the user in that case, so there is no point in making the
flag user-settable in either direction. As a result, subsequent updates
from Darrick took that feature out.
Finally, there was some disagreement over how to handle the ext3
filesystem, which is capable of modifying journal pages during writeback
even when stable pages are enabled. Darrick's patch changed the
filesystem's behavior in a significant way: if the underlying device
indicates that stable pages are needed and the filesystem is to be mounted
in the data=ordered mode, the filesystem will complain and be mounted
read-only. The idea was that, now that the kernel could determine that
a specific configuration was unsafe, it should refuse to operate in that
mode.
At this point, Neil returned to point out
that, with this behavior, he would not be able to set the "stable pages
required" flag in the MD RAID code. Any system running an ext3 filesystem
over an MD volume would break, and he doesn't want to deal with the
subsequent bug reports. Neil has requested a variant on the flag whereby
the storage level could request stable pages on an optional basis. If
stable pages are available, the RAID code can depend on that behavior to
avoid copying the data internally. But that code can still work without
stable pages (by copying the data, thus stabilizing it) as long as it knows
that stable pages are unavailable.
Thus far, no patches adding that feature have appeared;
Darrick did, however, post a patch set
aimed at simply fixing the ext3 problem. It works by changing the stable
page mechanism to not depend on the PG_writeback page flag;
instead, it uses a new flag called PG_stable. That allows the
journaling layer to mark its pages as being stable without making them look
like writeback pages, solving the problem. Comments from developers have
pointed out some issues with the patches, not the least of which is that
page flags are in extremely short supply. Using a flag to work around a
problem with a single, old filesystem may not survive the review process.
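Schematically, the change moves the wait condition from one page flag to
another, as in this model; the flag values and the helper are illustrative
rather than the kernel's own.

    #include <stdbool.h>
    #include <stdio.h>

    enum {
        PG_writeback = 1 << 0,   /* page is being written back */
        PG_stable    = 1 << 1,   /* proposed: page must not be modified */
    };

    struct page { unsigned long flags; };

    /* Under the proposed scheme, writers wait on PG_stable rather than
       on PG_writeback. */
    static bool must_wait_before_modify(const struct page *page)
    {
        return page->flags & PG_stable;
    }

    int main(void)
    {
        /* An ext3 journal page can be marked stable by the journaling
           layer without looking like a writeback page. */
        struct page journal_page = { .flags = PG_stable };
        struct page data_page    = { .flags = PG_writeback };

        printf("journal page: wait=%d\n",
               must_wait_before_modify(&journal_page));
        printf("plain writeback page: wait=%d\n",
               must_wait_before_modify(&data_page));
        return 0;
    }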
The end result is that, while the form of the solution to the stable page
performance issue is reasonably clear, there are still a few details to be
dealt with. There appears to be enough interest in fixing this problem
to get something worked out. Needless to say, that will not happen for the
3.8 development cycle, but having something in place for 3.9 looks like a
reasonable goal.
Comments (26 posted)
By Jonathan Corbet
December 5, 2012
The kernel's power management subsystem has become increasingly effective
over recent years, to the point that our CPU power management is said to be
second to none. But, while the kernel endeavors to minimize the power
consumed by a given workload, it lacks mechanisms to put an overall limit
on the amount of power consumed. The recently-announced
PowerClamp driver by Jacob Pan and Arjan van
de Ven is intended to change that situation on Intel processors.
Most users will never want to use PowerClamp. As a general rule,
when one has purchased hardware with a given computational capability, one
wants that full capability to be available when needed. But there are
situations where it makes sense to run a system below its full speed. Data
centers have power-consumption and cooling constraints that can argue
against running all systems flat-out all the time. Even the owner of an
individual laptop or handheld system may wish to ensure that its operating
temperature does not exceed a given value; an overly hot laptop can be
uncomfortable to work with, even if it is still working within its
specified temperature range. So there can be value in telling the system
to run slower at times.
The PowerClamp driver allows the system administrator to set a desired idle
percentage by way of a sysfs attribute. That percentage is capped at 50%
in the current implementation. Once a percentage has been set, the kernel
monitors the actual idle time for each processor in the system. Should a
processor's idle time fall below the desired idle percentage, a special
kernel thread
(called kidle_inject/N, where N is the number of the CPU
to which the thread is assigned) is created to take corrective
action.
That thread operates as a high-priority realtime process, so it is able to
respond quickly when needed. Its job is relatively simple: look at the amount
of idle time on its assigned CPU and calculate the difference from the
desired idle time. Then, periodically, the thread will run, disable the
clock tick, and force the CPU into a sleep state for the required amount
of time. The sleeping is done for a given number of jiffies, so
the sleep states tend to be relatively long — a necessary condition for an
effective reduction in power usage.
Naturally, the PowerClamp thread will continue to monitor actual idle time
as it operates, adjusting the amount of forced sleep time as needed. It
also monitors the amount of desired sleep time that is lost to interrupts.
Interrupts remain enabled during the forced sleep, so they can bring the
processor back to an operational state before the PowerClamp driver would
have otherwise done so. Over time, the amount of sleep time lost in this
manner is tracked; the driver will then attempt to compensate by increasing
the amount of forced sleep time to try to pull the CPU back to the original
idle time target.
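The compensation arithmetic amounts to something like the following sketch.
The real kidle_inject threads run in the kernel, with realtime priority and
jiffies-based sleeps; the structure and numbers here are purely
illustrative.

    #include <stdio.h>

    /* All values are percentages of one control period. */
    struct clamp_state {
        int target_idle;     /* idle percentage requested via sysfs */
        int lost_to_irqs;    /* forced idle cut short by interrupts */
    };

    /* How much forced idle to inject in the next period. */
    static int forced_idle_needed(const struct clamp_state *s, int measured_idle)
    {
        int deficit = s->target_idle - measured_idle;

        if (deficit <= 0)
            return 0;                    /* CPU is already idle enough */

        /* Compensate for sleep time lost to interrupts during the
           previous period by asking for a little more this time. */
        return deficit + s->lost_to_irqs;
    }

    int main(void)
    {
        struct clamp_state s = { .target_idle = 40, .lost_to_irqs = 5 };

        printf("inject %d%% forced idle\n", forced_idle_needed(&s, 25));
        return 0;
    }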
By itself, PowerClamp can come close to achieving the desired level of idle
time on a system with a changing workload. Often, though, the real goal is
not idle time as such; instead, the purpose is to keep the system within a
given level of power consumption or a set of thermal limits. Doing that
will require the implementation of additional logic in user space. By
monitoring the parameter of interest, a user-space process can implement a
control loop that adjusts the desired level of idle time as needed. The
PowerClamp driver can respond relatively quickly to those changes, giving the
control process an effective tool for the management of the amount of power
used by the system.
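Such a control loop could be as simple as the sketch below, which nudges
the idle percentage up or down to hold a temperature target. The sysfs
paths are placeholders: the article does not name PowerClamp's attribute,
and thermal zone layout varies from system to system.

    #include <stdio.h>
    #include <unistd.h>

    static int read_millicelsius(const char *path)
    {
        FILE *f = fopen(path, "r");
        int temp = 0;

        if (f) {
            if (fscanf(f, "%d", &temp) != 1)
                temp = 0;
            fclose(f);
        }
        return temp;
    }

    static void set_idle_percent(const char *path, int pct)
    {
        FILE *f = fopen(path, "w");

        if (f) {
            fprintf(f, "%d\n", pct);
            fclose(f);
        }
    }

    int main(void)
    {
        const char *temp_attr = "/sys/class/thermal/thermal_zone0/temp";
        const char *idle_attr = "/sys/.../powerclamp/idle_pct";  /* hypothetical */
        int idle = 0;

        for (;;) {
            int temp = read_millicelsius(temp_attr);

            /* Crude proportional control: clamp harder above 80°C, back
               off below 70°C.  PowerClamp caps the value at 50 anyway. */
            if (temp > 80000 && idle < 50)
                set_idle_percent(idle_attr, ++idle);
            else if (temp < 70000 && idle > 0)
                set_idle_percent(idle_attr, --idle);

            sleep(1);
        }
    }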
The driver has been through a couple of revisions with little in the way of
substantive comments. This patch poses a relatively small risk to the
system, since it
does not do anything if the feature is not in use. It could thus conceivably
be ready for merging as soon as the 3.8 development cycle. Some more
information can be found in the documentation
file included with the patch.
Comments (14 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
- Lucas De Marchi: kmod 12.
(December 5, 2012)
Page editor: Jonathan Corbet
Distributions
By Jonathan Corbet
December 5, 2012
Last week's review of Ubuntu core 12.10 on
the Nexus 7 tablet showcased one of the alternative operating systems
that can be installed onto this particular device. But Ubuntu is certainly
not the only choice out there. While he was busy installing software onto
the tablet, your editor decided to give the latest CyanogenMod build a
try. As is usual with CyanogenMod, the results were good, but also a bit
discouraging with regard to how the Android development community works.
The cyanogenmod.org page suggests that
there is no stable build for the Nexus 7, but there is a 10.0.0
release listed on the get.cm stable
releases page. Beyond that, "nightly" development builds are available
under the "grouper" code name. Naturally, the nightly build was chosen;
when would a self-respecting editor pick a stable build over something
leading-edge and potentially dangerous? As it happens, the installation of
the December 4 nightly build went without a hitch. Or, at least, it
did once your editor remembered to wipe the device prior to trying to boot
the new system; otherwise it simply hung at the boot splash screen. As
usual, one also needs to install the Google applications separately.
The CM10 nightly release works flawlessly, as far as your editor can tell.
It has some interesting differences from the stock Android install, many of
which are reminiscent of a handset-oriented system. For example, the number of
applications is far below what stock Android has; CyanogenMod
lacks Google+, Chrome (it has the standard Android browser), Maps and more,
but it does include the camera application by default. The missing
applications can, naturally, be installed easily from the "Play" store
afterward.
As reported here in July, recent
CyanogenMod builds seem to have fewer shiny features above stock Android
than they did in the past. There is still a whole set of configuration options,
especially with regard to how the interface works. CyanogenMod also adds
profiles, a more configurable lock screen (though stock Android is catching
up and taking its own direction in this area), a more useful user space for
those who get to the command-line level, and a set of scary "performance"
knobs. That is about it; many users might not ever notice or make use of
the additional features that CyanogenMod offers. Given that, many users
might well wonder why they should bother installing CyanogenMod; for many
of them, the best answer might be that they shouldn't.
That is doubly true for Nexus 7 users at this particular point in time;
CM10, while not yet
released in stable form, is already obsolete: it is based on the Android
4.1.2 release. Anybody running a stock Nexus 7 is likely to have
already been updated to 4.2, which offers a
number of new features. The CyanogenMod developers are busily trying
to catch up with this release and the list of devices supported by the
experimental, 4.2-based CM10.1 release is growing, but the Nexus 7
does not yet appear there. So running CyanogenMod on this device means
accepting a net loss in new features: no fancy lock screen, no swipe
typing, no screen magnifier, etc.
Unfortunately, that state of affairs looks to be a permanent part of the
experience of running CyanogenMod (or any other Android derivative). As
has been pointed out many times, Android is (mostly) open source, but it is
not run as an open-source project. Instead, the world outside of Google
gets an occasional code dump after an official Android release is made.
Thanks to the heroic efforts of the Google folks working on the Android Open Source Project, those
code dumps are both timely and useful for the community. They are a great
gift, and we should never forget the value of that gift.
It is worth keeping in mind why things are done that way as well. Clearly,
it is easier to run a large software project without having to involve all
those pesky community people; there is a whole level of bikeshedding
behavior that the Android developers simply do not have to deal with.
Keeping the code under wraps also allows Google to control when it first
appears on devices — and which devices those will be. The Nexus-branded
handsets and tablets have a lot of nice features, including their relative
openness. Not the least of those features is that they tend to be the
first showcase for new versions of the Android system. If the public
Android repositories were always current, a new Android release could be
old news by the time it appeared on an officially blessed device.
So Google's reasoning is understandable, but it is still hard not to wish
for a different situation. An always-current public repository would allow
the CyanogenMod developers to keep up with the tree as it evolved, rather
than having to figure out a new code dump a couple of times each year.
Perhaps they could even manage to upstream more of their interesting work,
helping Android to evolve more quickly, and in more interesting
directions. It would help Android to be a real open-source project.
That, however, does not appear to be in the cards. So CyanogenMod and
others will tend to lag a bit behind what official Android can do, at least
if Google continues to develop and release the system at the current fast
pace.
The result is that, for those who have devices running current, relatively
pure Android
software, CyanogenMod may not have a lot to offer.
On the other hand,
CyanogenMod retains its value as a laboratory where new features can
be tested. It is unparalleled in its support for older devices that
are no longer supported by their manufacturers — and, in this industry,
"older" can have a value of less than one year. Users who have devices
that are infected with manufacturer- or carrier-supplied "enhancements"
will continue to appreciate the work that the CyanogenMod developers do.
So there is an important role for CyanogenMod, even if that role is
changing over time.
Comments (8 posted)
Brief items
So next time you're not happy about something: just prefix your criticism with "I think". You may be surprised what difference it makes to the conversation.
Oh, two other magic words: "for me". Compare "This workflow is completely broken" vs "This workflow is completely broken for me". Amazing what difference those two words make...
--
Peter Hutterer
Documenting a distribution is a lot like law enforcement. You might get
results from routine patrols, but it is largely a complaint-driven
venture.
--
pete
Paraphrasing Star Trek's Bones, that would be
"Gentoo, Jim, but not as we know it."
--
Duncan
Comments (none posted)
The NetBSD Project has
announced
the release of NetBSD 5.2. "
NetBSD 5.2 is the second feature update of the NetBSD 5.0 release branch. It represents a selected subset of fixes deemed critical for security or stability reasons, as well as new features and enhancements."
Comments (none posted)
Red Hat has
released
a beta version of Red Hat Enterprise Linux (RHEL) 6.4. "
The beta release includes a broad set of updates to the platform's existing feature set, while also providing rich new functionality in the areas of identity management, file system, virtualization, storage and productivity tools."
Comments (none posted)
Distribution News
Mageia Linux
Mageia 1 has reached its end-of-life. There will be no further updates.
Users of Mageia 1 are encouraged to upgrade to Mageia 2.
Full Story (comments: none)
Red Hat Enterprise Linux
Red Hat Enterprise Linux Extended Update Support Add-On 6.0 is no
longer supported.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
The H
looks
at the latest releases of two distributions based on Debian unstable
(sid),
aptosid and
siduction. "
Shortly after that release, the developers of siduction, a distribution forked from aptosid to be more community-focused, announced its second release candidate of siduction 2012.2 "Riders on the Storm". The final release of the distribution will add the Razor-qt desktop environment for the first time to its range of desktops. Both distributions aim to provide a usable desktop Linux solution based on Debian Sid with a stable upgrade path."
Comments (none posted)
Brian Warner
likes
Linux Mint with Cinnamon. "
The developers behind Linux Mint have historically done a great job in striking a comfortable balance between beauty and usability, but it's also worth calling out the team's ability to execute. While its roots go back to a number of Mint-specific GNOME Shell extensions, Cinnamon itself is quite stable despite being not quite one year old."
Comments (1 posted)
Page editor: Rebecca Sobol
Development
December 5, 2012
This article was contributed by Dave Phillips
The community of Linux audio developers is a relatively small group that
includes a core of programmers committed to the evolution of the Linux
audio ecosystem. Some developers work on the unglamorous but necessary
system components — sound card drivers, kernel interfaces, function
libraries, etc. — while others work on the shinier applications
level. Developer/musician Sean M. Bolton wears hats in both domains, with a
special attachment to the
DSSI plugin
interface. His contributions include code for the interface specification,
a DSSI plugin host, and three software synthesizers.
This article profiles Sean's synthesizers, Xsynth, WhySynth, and Hexter. As
we might expect, they are DSSI-compatible plugins, so any host that
supports DSSI should run them without complaint. You'll need the DSSI
library and utilities first, but most mainstream distributions include the
software in their package repositories. Check your system's package manager
for an installable bundle. If you need to build the DSSI system yourself,
have no fear: the software is easy to build, with no extraordinary
dependencies. The synthesizers are equally straightforward to build and
install.
For those who want to quickly play with these synthesizers,
the complete DSSI system includes a handy command-line utility for running
DSSI plugins as standalone applications. My tests were made with this
invocation from an xterm command prompt:
jack-dssi-host /usr/lib/dssi/somesynth.so
The synthesizers were correctly listed in the audio and ALSA MIDI tabs in QJackCtl. I
connected my Akai LP25 keyboard and various sequencers to play each synthesizer,
and I'm happy to report that I experienced no problems. Sean's synthesizers are
stable applications, ALSA and JACK compatible, with great sound and copious
presets. Let's look at each one in some detail.
Xsynth 0.9.4
Xsynth
is a 2-oscillator subtractive
synthesizer based on an original design by Steve Brookes. The synthesizer's
architecture is revealed by its patch editor (right) — the output from
the oscillators is mixed, filtered, amplified, and modulated before
reaching the audio output stage, a typical arrangement for a subtractive
synthesizer. Each oscillator can select one of seven cyclic waveforms, the
pulse-width duty
cycle is user-definable, and oscillator sync is
available. The filter is relatively simple, with controls only for cutoff
frequency, resonance, and rolloff mode. The amplifier envelope is likewise
uncomplicated, with a typical ADSR (attack/decay/sustain/release) envelope
to control the evolution of the sound's amplitude. The LFO
includes a set of six waveforms, a single control for the frequency of the
selected waveform, and two controls for pitch modulation and filter
modulation sensitivity. No on-board effects are available, but I prefer
mine external anyway. JACK-Rack or the awesome Rakarrack make good processing
companions for Xsynth.
Xsynth's documentation consists of a README file in the source package and
a default
collection of 50+ presets. The README contains much valuable information
regarding the synthesizer's architecture and controls, and the presets include
excellent examples of commonly encountered analog synthesizer sounds such as
string pads, resonant filter sweeps, and fat basses. From the README and
the presets you can learn all you need to know to master the synthesis
method. You can also learn that Xsynth responds to various MIDI messages,
including note-on/off, velocity, aftertouch, program change, mod wheel,
volume control, and others. See the file for the complete details.
Xsynth is a standard item in the full DSSI package, so if you've installed
DSSI you've already installed Xsynth and you're ready to roll with it. The
DSSI software is available in the repositories for most mainstream Linux
distributions, but if yours doesn't have it you can pick up the source code
on the DSSI site.
There's not much more to say about Xsynth. It's an uncomplicated
realization of a classic analog synthesis method, presented in an
uncluttered interface, easy to learn and use. I like the sound of many of
its presets, especially when they're routed through an external effects
processor, and it's great fun to program for my own sounds. But Xsynth is
no mere toy — even simple subtractive synthesis is capable of making
wonderful rich sounds, and you may well lose track of time while you
explore its capabilities.
WhySynth 120903
WhySynth
is what
Xsynth dreams of becoming when it grows up. Both synthesizers are designed with
the same large-scale architecture, that of the analog subtractive
synthesizer, but WhySynth's implementation of the synthesis method differs
profoundly at the detail level. The number of oscillators has been doubled
over what Xsynth offers,
the waveform selection has increased to more than 168 waves, and each
oscillator can be configured to a unique modality (e.g., wavecycle,
frequency modulation (FM),
noise). The number of filters has increased to two, we now have three LFOs,
and the number of envelope generators (EGs) has grown from two to
five. WhySynth adds an effects stage to the classic design, though it is
wisely restricted to only two reverbs and a delay line. The parameter
set for each stage has likewise expanded for considerably finer control
over your sounds.
WhySynth's UI follows the design set by Xsynth. The program opens to a
single panel with tabs for the preset patch list and the synthesizer's global
configuration. The patch editor is considerably more complex
than Xsynth's, so it gets its own window, available from the Edit
menu. Like its sibling, WhySynth's editor clarifies the organization of the
synthesis method. WhySynth is a deep synthesizer, meaning that it allows
very fine
control of the sound-shaping process, and it is capable of timbres and
effects not available with a simpler architecture.
WhySynth can import Xsynth patches and banks, so work done in that synthesizer
can be brought into WhySynth for more detailed design. WhySynth also
includes a facility for interpreting patches in sysex format for Kawai's K4
synthesizer. Actually, Sean indicates that WhySynth
mis-interprets them, but it's obvious that those patches provided a
great resource during the creation of WhySynth's 280+ presets, of which
more than 130 have been derived from patches for the K4.
The documentation is slim but informative. The WhySynth web site and the source
package README contain the same material describing the synthesizer's design in
detail. Given the complexity of the program, I suggest reading the
documentation thoroughly if you plan on making your own sounds. Of course
the default patches are instructive, and WhySynth provides a number of
"development" patches to be used as starter material. You'll also want to
read the docs to find out what MIDI controllers have been mapped to
WhySynth's synthesis parameters for dynamic control of your sound's
evolution.
Hexter 1.0.1
According to its "About" panel, Hexter is a
"Yamaha DX7 modeling software synthesizer for the DSSI Soft Synth
Interface". The DX7 was Yamaha's most
famous synthesizer built with
FM (frequency modulation) audio technology licensed from Stanford
University, where Dr. John Chowning
invented the method. This review is not the place for an explanation of FM;
please see the Wikipedia article on FM
synthesis for a good summary and some excellent external links. It
suffices here to note that FM differs substantially from the synthesis
methods seen and heard in Xsynth and WhySynth.
Hexter opens to a display similar to its siblings, a single panel with tabs
for patch selection, global configuration, and performance settings. The
patch selector is self-explanatory. The "Configuration" tab sets the
synthesizer master tuning, output volume, and polyphony. It also toggles
access to ALSA "sysex" editing, about which I'll have more to say later. The
additional "Performance" tab includes settings for pitch bend, modulation
wheel, foot controller, and breath controller. The bend and mod wheels were
parts of the original DX7 hardware; the controllers were external
devices. All these devices were designed to add greater expressive
capabilities during performance, for example, by modulating a sustained tone by
controlling its vibrato with the mod wheel.
The DX7 spawned an industry of third-party extensions, add-ons, and designer
patch libraries. The hardware-based DX7 additions are history, but a vast
number of
DX7 patches are still available as system-exclusive (sysex) data
files. Hexter can read a variety of DX-related sysex files, but only insofar
as they contain the basic elements of the original DX7. Alas, there's no
support for the extended features of other members of Yamaha's DX
family. However, Hexter also supports the file format for the Sideman D/TX
patch editor/librarian by Voyetra. That's great news
for me, as I used that software for many years to program my beloved TX802, a rack-mount
synthesizer in the mkII family of Yamaha's DX/TX synthesizers. Alas, the 802 is
gone, but my patch collection has received a new lease on life thanks to
Hexter.
Catching up with Xsynth and WhySynth, Hexter now has a graphic editor for
basic DX7 patch parameters. Unlike its siblings, Hexter's editor
includes graphic envelope displays for relevant parameters. Alas, the
envelope breakpoints can't be adjusted directly, but the visual feedback is
immediate when values are set in the scroll boxes.
As I mentioned earlier, the Configuration tab includes a toggle for
enabling patch edits via system-exclusive messages. A separate ALSA MIDI
port is opened for receiving sysex messages from an appropriate device or
program, such as a hardware DX/TX synthesizer or a software
editor/librarian. The integrated editor will be sufficient for most users,
but I found a very helpful purpose for the sysex connection. The Voyetra
Sideman editor includes a well-designed patch randomizer, and Hexter
conveniently provides a compatible target synthesizer for the editor's
patches. The Sideman software runs smoothly under DOSemu; I use a MIDI connection utility
such as aconnect or QJackCtl to route its output to Hexter's sysex port,
and voilà: I can program Hexter in realtime with an editor running in an
emulated MS-DOS environment.
Hexter includes no on-board effects processors. No problem, it's a
JACK-savvy application, so just route its output to JACK-Rack or
Rakarrack. The original DX never sounded so good.
Regarding external control: Hexter's synthesis parameters are controllable
with a set of pre-assigned MIDI continuous controllers, including a lengthy
list of parameters addressed by the NRPN (non-registered parameter
numbers). The default assignments can be found in the source package's
README. Hexter also supports OpenSound
Control (OSC) but I
haven't yet looked into its possibilities.
I've owned four FM synthesizers from Yamaha. My first synthesizer was a TX81Z, followed by
an FB01, both of
which were 4-operator FM boxes. I stepped up to the 6-operator machines
with the TX802 and the massively powerful TG77. I've also
tested FM7 and FM8, excellent FM
synthesis programs from Native
Instruments. Alas, my hardware boxes are gone now, and the proprietary
software isn't made for Linux. However, I'm not complaining at all — I have
Hexter. I won't compare it to the hardware or the Native Instruments
synthesizers; I'll just say that I'm happy to hear my favorite FM sounds again.
Sounding Out
By now I hear you say, "But how do they sound?" Well, you can check out
the following links for some demonstrations of the Sean Bolton synthesizer
collection:
Subtractive and FM synthesizers are staple items in the complete
modern computer-controlled studio. The two synthesis methods make a good
pair; subtractive synthesizers are often used for dense, rich, layered sounds and
strings,
while FM excels in metallic and bell-like sounds. Sean Bolton has given us
three capable synthesizers; now it's up to us to show off their
capabilities. Try them all — they're all free and open-source Linux
software licensed under the GPL — and do let us know if you make some
joyful noises with them.
Comments (3 posted)
Brief items
I find it fascinating that DVCS aficionados haven't noticed that GitHub takes the D out of DVCS very effectively, thereby making git actually useful for most normal people.
—
Branko Čibej (virtual hat-tip to Markus Schaber)
I'm pretty sure that HURD stands for
"Hurd Users Relish Deviance" so I would expect Hurd folks to actually
appreciate these test failures.
—
Matt Mackall
Comments (11 posted)
Version 2.0.7 of the GNU Guile language is out. It adds an implementation
of "
curly infix
expressions," per-port reader options, a number of extension loading
improvements, and something known as "nested futures": "
Futures may now be nested: a future can itself spawn and then `touch'
other futures. In addition, any thread that touches a future that has
not completed now processes other futures while waiting for the touched
future to complete."
Full Story (comments: 104)
Mozilla's Josh Aas announced on his blog that the browser maker was joining the Internet Society (ISOC) as a "Silver" member, in order to support its Internet Engineering Task Force (IETF) work on core protocols beyond HTTP. "Today we’re heavily involved in IETF working groups relating to key Internet technologies such as TLS, HTTP and HTTP/2, RTCWeb, WebSockets, and others." ISOC, of course, is the parent organization of IETF and an alphabet soup of other Internet standards bodies.
Comments (none posted)
Version
4.0 of the Ekiga telephony application is out. It features a new
user interface, some new codecs, auto-answer functionality, a number of
improvements in SIP support, and more.
Comments (7 posted)
Matthew Garrett has
announced the
availability of the first "usable" version of the "shim" UEFI secure
bootloader. "
If you want, you're then free to impose any level of
additional signing restrictions - it's entirely possible to use this
signing as the basis of a complete chain of trust, including kernel
lockdowns and signed module loading. However, since the end-user has
explicitly indicated that they trust your code, you're under no obligation
to do so. You should make it clear to your users what level of trust
they'll be able to place in their system after installing your key, if only
to allow them to make an informed decision about whether they want to or
not."
Comments (none posted)
Version 6.25 of the Nmap network scanner is out; it contains a lot of new
stuff. "
Nmap 6.25 contains
hundreds of improvements, including 85 new NSE scripts, nearly 1,000 new OS
and service detection fingerprints, performance enhancements such as the
new kqueue and poll I/O engines, better IPv6 traceroute support, Windows 8
improvements, and much more."
Full Story (comments: 2)
Version 2012.11 of the buildroot embedded Linux system-creation tool has been released. This stable release adds initial support for Aarch64, a large number of new packages, and updates to several core components, including: "Binutils 2.23.1, GCC 4.7.2. We're now defaulting to GCC 4.6. Newer Codebench and Linaro external toolchains. Libtirpc support for modern Glibc variants"
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
Owen Taylor has written a detailed update about his ongoing work on compositor frame timing (to which we provided an introduction back in August). This time, the issue at hand is coping with video playback or other sources that need a fixed frame rate not equal to the display's refresh rate. "I’m pretty happy with how this algorithm works out in testing, and it may be as good as we can get for X. The main downside I know of is that it only individually solves the two problems – handling clients that need all the rendering resources of the system and handling clients that want minimum jitter for displayed frames, it doesn’t solve the combination."
Comments (31 posted)
The
second issue of Gimp
Magazine has been released as a 100-page, 65MB PDF file. Covered
topics include a graphic novel tutorial, oil painting, using graphic
tablets, a number of artist interviews, and more.
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
MariaDB developers Michael 'Monty' Widenius, David Axmark, and Allan
Larsson have
announced
the MariaDB Foundation. "
In its mission statement, the MariaDB
Foundation exists to improve database technology, including standards
implementation, interoperability with other databases, and building bridges
to other types of database such as transactional and NoSQL. To deliver this
the Foundation provides technical work in reviewing, merging, testing, and
releasing the MariaDB product suite. The Foundation also provides
infrastructure for the MariaDB project and the user and developer
communities." (Thanks to Dan Shearer)
Comments (1 posted)
Some videos from the 2012 LinuxCon in Barcelona
have been
posted. (Thanks to Scott Dowdle)
Comments (1 posted)
Articles of interest
Free Software Foundation's monthly newsletter is out, with a look at Cyber
Monday, Amazon books, MediaGoblin, software patents, VLC licensed, and much
more.
Full Story (comments: none)
The Free Software Foundation Europe's monthly newsletter covers Free
Software in the UK, secure boot, Free Software in Germany, and several
other topics.
Full Story (comments: none)
Calls for Presentations
The European Lisp Symposium will take place June 1-4, 2013 in Madrid,
Spain. The call for papers deadline is March 1. "
The purpose of the
European Lisp Symposium is to provide a forum for the discussion and
dissemination of all aspects of design, implementation and application of
any of the Lisp and Lisp-inspired dialects, including Common Lisp, Scheme,
Emacs Lisp, AutoLisp, ISLISP, Dylan, Clojure, ACL2, ECMAScript, Racket,
SKILL, Hop and so on. We encourage everyone interested in Lisp to
participate."
Full Story (comments: none)
Upcoming Events
linux.conf.au has announced that Bdale Garbee will be a keynote speaker at
the 2013 conference next January. "
Bdale Garbee is best known for his pioneering work with Debian, and for open source community-building efforts with the Linux Foundation, Freedombox, and Software in the Public Interest (SPI). He is a regular presence at linux.conf.au, wowing many recent conference-goers with his rocketry exploits and other hobby activities turned into open source projects."
Full Story (comments: none)
Events: December 6, 2012 to February 4, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| December 5–December 7 | Qt Developers Days 2012 North America | Santa Clara, CA, USA |
| December 5–December 7 | Open Source Developers Conference Sydney 2012 | Sydney, Australia |
| December 7–December 9 | CISSE 12 | Everywhere, Internet |
| December 9–December 14 | 26th Large Installation System Administration Conference | San Diego, CA, USA |
| December 27–December 29 | SciPy India 2012 | IIT Bombay, India |
| December 27–December 30 | 29th Chaos Communication Congress | Hamburg, Germany |
| December 28–December 30 | Exceptionally Hard & Soft Meeting 2012 | Berlin, Germany |
| January 18–January 19 | Columbus Python Workshop | Columbus, OH, USA |
| January 18–January 20 | FUDCon:Lawrence 2013 | Lawrence, Kansas, USA |
| January 20 | Berlin Open Source Meetup | Berlin, Germany |
| January 28–February 2 | Linux.conf.au 2013 | Canberra, Australia |
| February 2–February 3 | Free and Open Source software Developers' European Meeting | Brussels, Belgium |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol