LWN.net Weekly Edition for September 20, 2012
LinuxCon: Dragons and penguins in space
One of the best things about large conferences like LinuxCon is that the extensive program draws in speakers from outside the core Linux and free software ecosystem. Such was the case at the North American edition of LinuxCon 2012 in August, which featured a presentation from an aerospace engineer at Space Exploration Technologies (SpaceX). The company is evidently using Linux extensively on the ground and in its launches, although it provided frustratingly little detail.
Beyond the cloud (literally)
The speaker, Jim Gruen, is a relatively new hire at SpaceX, working on flight software. He started off by explaining exactly what the company does and how it operates. Its long-term mission is to make space flight as routine as air travel; for the near-term that means competing for contracts from NASA for the space agency's private space flight programs, such as the Commercial Orbital Transportation Services (COTS) program and the Commercial Crew Development (CCDev) program. Each program incorporates several rounds of proposals and milestones overseen by NASA, ultimately ending in a flight mission with specific objectives. SpaceX has flown two successful NASA contract missions. COTS Demo Flight 1 (C1), in 2010, sent the company's unmanned "Dragon" capsule through two Earth orbits then splashed it down into the ocean. COTS Demo Flight 2/3 (C2+) followed in May 2012, which combined the COTS 2 objective of rendezvousing with the International Space Station (ISS) and the COTS 3 objective of docking with ISS.
Although that slate of projects would certainly qualify as interesting stuff in just about anyone's book, Gruen's explanation of SpaceX's operations was intriguing as well. The company believes strongly in vertical integration as a cost-cutting measure, to the point where it manufactures in-house 80% of what it puts into space. It buys raw metal and manufactures the parts for its rockets and capsules, and it designs, prototypes, and produces its own computers, circuit boards, and even chips. The goal of this approach, he said, is to have everyone working in the same building and to enable them to try new experiments very quickly.
With that background material out of the way, he explained how the company uses Linux. For starters, space flight generates an enormous amount of data, including flight telemetry, traffic between ground stations, media feeds from the spacecraft, and so on. Streaming, storage, and analysis of this data is done on Linux — though it is not a task set unique to SpaceX or to space flight, he admitted.
Gruen's team works on avionics, the navigation and control systems on the company's spacecraft. The team is responsible for the complete life cycle and operation of the equipment, he said: board bring-up, bootloading, hardware initialization, straight on up through the user-space software. The company's C1 flight was a proof-of-concept run for its Dragon capsule design, and on that mission it did not run Linux. However, the C2+ model and subsequent revisions do run Linux. This does not mean that Linux is merely running on an auxiliary computer, he emphasized: Dragon's core systems are Linux, a custom in-house distribution using the uboot bootloader with C++ code running on top. Linux handles the triply-redundant avionics system, the thrusters, and even the pyrotechnics (which in space-faring lingo refers to the explosive charges used to do things like deploy parachutes for re-entry). He also showed images from the C2+ mission's ISS docking procedure, which used computer vision software running on Linux to locate the docking port and align the spacecraft with the station.
Event horizon
Gruen's overview of the Dragon vehicle and its use of Linux was interesting, to be sure. Unfortunately, the overview was more or less the full extent of the detail available. He was accompanied by representatives of SpaceX who sat in the front row and who would not allow him to go into any specifics about the hardware or software of the system, nor to take questions from the audience. The room was packed to overflowing, and the session let out with plenty of time still on the clock.
Gruen attributed the restrictions on his subject matter to the US State Department, which he said classified SpaceX's business as "building dangerous weapons". Consequently, he expressed his excitement to be giving the talk, but added that he was "going to present as much as I can without breaking any laws and going to jail". That is certainly an unenviable bind to be in, but the upshot was that the audience learned little about SpaceX's Linux systems — and about the challenges or discoveries its developers have encountered along the way.
What makes that limitation puzzling is that so many Linux developers were in the audience for the session — Gruen commented more than once that there was code running on Dragon that had been written by people there in the room. In fact, Linux is so widespread in the scientific community that it would have been a surprise to hear that Linux was not the platform of choice. After all, Linux has been trustworthy enough to run nuclear weapons simulations for the US Department of Energy for years, and reliable enough to run medical devices; it is not a big stretch to hear that it runs on an orbital capsule as well.
It was unclear how much of SpaceX's taciturnity was due to government regulation and how much was by choice. SpaceX is in a highly competitive business, to be sure, and has the right to work in private, but it seems a bit implausible to argue that how the company uses upstream code like Linux constitutes a trade secret. Is there any credible chance that a competitor such as Orbital Sciences is running Windows on its spacecraft and has something substantial to gain from hearing that SpaceX sees better performance from Linux's scheduler, or which GRUB limitations made uboot the bootloader of choice?
SpaceX's reluctance to discuss details stands out, because attendees heard several other talks about Linux in high-security and scientific environments just days earlier. For example, Kipp Cannon of the Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration spoke at GStreamer Conference about his group's use of Linux to capture and analyze live laser interferometry data from gravitational-wave detectors. Cannon's group uses the largest GStreamer pipelines ever made, on massive machine clusters, to process LIGO signals fast enough to recognize events in time for astronomers to aim telescopes at them before the events end. Certainly getting to and docking with ISS is a tremendous technical challenge, but it is not a drastically bigger challenge than the real-time detection of gravitational waves from black hole collisions a galaxy or more away. LIGO is a collaborative effort, but it too has fierce competition from other experiments, both for funding and for results.
As for the security factor, the implication was that SpaceX's work is regulated by the US Government, although it is not clear why that is the State Department's purview. But the GStreamer Conference also had a presentation from researchers at the US Department of Defense's Night Vision and Electronic Sensors Directorate (NVESD), which uses Linux and open source software to calibrate and test night-vision equipment and create new algorithms for combining multiple sensors' media streams into usable displays. They made it quite clear that the algorithms they develop are classified, while still explaining how they used GStreamer and other open source software, and even contributed code back upstream. Like NVESD, SpaceX's core projects might be confidential, but the software engineering problems that constitute the daily grind are likely familiar to developers everywhere.
That is probably the main point. I am not particularly interested in spacecraft avionics or infrared sensor algorithms, but it would have made for a more interesting LinuxCon session if SpaceX had talked about some of the challenges or design decisions it has faced in its software program, and how it overcame them. For example, Gruen mentioned that the company uses the kernel's soft real-time support. It would be interesting to hear why Dragon does not use hard real-time — which seems at first glance like a plausible requirement. It would even be worthwhile to hear the story if the solution was to ditch a standard Linux component and write an in-house replacement. Consider the space capsule's storage system, which surely has high reliability and fail-over requirements. There are plenty of computing environments with demanding specifications; hearing how various Linux filesystems fared — even if those that do well in other high-performance applications were not up to snuff on Dragon — would have been informative.
Upstreaming
But in the long run there are more important factors than a single interesting talk. Any company can choose to isolate its internal code from the upstream projects on which it relies; the downside is that doing so will increase its own development costs over time. It will either have to expend resources maintaining internal forks of the software that it branches (and back-port important features and bug fixes from newer releases), or periodically rebase and re-apply its own patch sets. Both options become more costly, in both time and complexity, the longer a company commits to them.
Google has walked this path in years past. As we covered in 2009, historically the company maintained its own internal kernel code, rebasing every 17 months. The resulting maintenance effort included merging and debugging its own feature patches, plus backporting hundreds of features from newer upstream releases. Google had its own reasons for not upstreaming its kernel work, including reluctance to share what it regarded as patches of no use to others, but eventually it found the maintenance headaches too painful and modified its kernel development process.
Interestingly enough, the NVESD speakers commented that the DOD greatly prefers its developers to send their patches back to upstream projects — including, in this case, GStreamer — rather than to start their own forks and repositories (and subsequently maintain them). The SpaceX talk mentioned that the Dragon missions generate an enormous amount of video data, but did not go into detail about the software the company uses to stream or analyze it. If it uses GStreamer for the task (which is certainly a possibility), consider how much it stands to gain by interacting in the open with other industrial-sized GStreamer users like NVESD and LIGO — and vice versa.
Perhaps the State Department is simply more secretive than the DOD, but my suspicion is that SpaceX plays close to the vest largely due to the natural tendency for companies to keep their work private (particularly in a company that places a high value on vertical integration). Almost every company experiences some reluctance when first dipping its toes in open source waters. Indeed, coming to LinuxCon was a good first step for SpaceX. Perhaps it will take a page from its clients at NASA and open up more, particularly where upstream projects like Linux are involved. After all, Gruen's talk was informative and entertaining, and it was nice to hear that Linux has proven itself to be a valuable component in the nascent space flight industry. One merely hopes that next year the company will come back to LinuxCon and engage a little more with the rest of the free software community.
Throwing one away
One of the (many) classic essays in The Mythical Man-Month by Frederick Brooks is titled "Plan to throw one away." Our first solution to a complex software development problem, he says, is not going to be fit for the intended purpose. So we will end up dumping it and starting over. The free software development process is, as a whole, pretty good at the "throwing it away" task; some would argue that we're too good at it. But there are times when throwing one away is hard; the current discussion around control groups in the kernel shows how this situation can come about.
One could argue that free software development has taken this advice to heart. In most projects of any significant size, proposed changes are subjected to multiple rounds of review, testing, and improvement. Often, a significant patch set will go through enough fundamental changes that it bears little resemblance to its initial version. In cases like this, the new subsystem has, in a sense, been thrown away and redesigned.
In some cases it's even more explicit. The 2.2 kernel, initially, lacked support for an up-and-coming new bus called USB. Quite a bit of work had gone into the development of a candidate USB subsystem which, most people assumed, would be merged sometime soon. Instead, in May 1999, Linus looked at the code and decided to start over; the 2.2.7 kernel included a shiny new USB subsystem that nobody had ever seen before. That code incorporated lessons learned from the earlier attempts and was a better solution — but even that version was eventually thrown away and replaced.
Brooks talks about the need for "pilot plant" implementations to turn up the problems in the initial implementation. Arguably we have those in the form of testing releases, development trees, and, perhaps most usefully, early patches shipped by distributors. As our ability to test for performance regressions grows, we should be able to do much of our throwing-away before problems in early implementations are inflicted upon users. For example, the 3.6 kernel was able to avoid a 20% regression in PostgreSQL performance thanks to pre-release testing.
But there are times when the problem is so large and so poorly understood that the only way to gain successful "pilot plant" experience is to ship the best implementation we can come up with and hope that things can be fixed up later. As long as the problems are internal, this fixing can often be done without creating trouble for users. Indeed, the history of most software projects (free and otherwise) can be seen as an exercise in shipping inferior code, then reimplementing things to be slightly less inferior and starting over again. The Linux systems we run today, in many ways, look like those of ten years or so ago, but a great deal of code was replaced in the time between when those systems were shipped.
But what happens when the API design is part of the problem? User interfaces are hard to design and, when they turn out to be wrong, they can be hard to fix. It turns out that users don't like it when things change on them; they like it even less if their programs and scripts break in the process. As a result, developers at all levels of the stack work hard to avoid the creation of incompatible changes at the user-visible levels. It is usually better to live with one's mistakes than to push the cost of fixing them onto the user community.
Sometimes, though, those mistakes are an impediment to the creation of a proper solution. As an example, consider the control groups (cgroups) mechanism within the kernel. Control groups were first added to the 2.6.24 kernel (January, 2008) as a piece of the solution to the "containers" problem; indeed, they were initially called "process containers." They have since become one of the most deeply maligned parts of the kernel, to the point that some developers routinely threaten to rip them out when nobody is looking. But the functionality provided by control groups is useful and increasingly necessary, so it's not surprising that developers are trying to identify and fix the problems that have been exposed in the current ("pilot") control group implementation.
As can be seen in this cgroup TODO list posted by Tejun Heo, a lot of those problems are internal in nature. Fixing them will require a lot of changes to kernel code, but users should not notice that anything has changed at all. But there are some issues that cannot be hidden from users. In particular: (1) the cgroup design allows for multiple hierarchies, with different controllers (modules that apply policies to groups) working with possibly different views of the process tree, and (2) the implementation of process hierarchies is inconsistent from one controller to the next.
Multiple hierarchies seemed like an interesting feature at the outset; why should the CPU usage controller be forced to work with the same view of the process tree as, say, the memory usage controller? But the result is a more complicated implementation that makes it nearly impossible for controllers to coordinate with each other. The block I/O bandwidth controller and the memory usage controller really need to share a view of which control group "owns" each page in the system, but that cannot be done if those two controllers are working with separate trees of control groups. The hierarchy implementation issues also make coordination difficult while greatly complicating the lives of system administrators who need to try to figure out what behavior is actually implemented by each controller. It is a mess that leads to inefficient implementations and administrative hassles.
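The split between controllers is visible on any running system in /proc/cgroups, where each controller reports the numeric ID of the hierarchy it is attached to. As a minimal sketch (the sample file contents below are made up for illustration; on a real system one would read /proc/cgroups directly), the controllers can be grouped by hierarchy to see which ones share a view of the process tree:

```python
# Sketch: how multiple cgroup hierarchies show up in /proc/cgroups.
# The sample text is illustrative, not captured from a real system.
SAMPLE = """\
#subsys_name\thierarchy\tnum_cgroups\tenabled
cpu\t2\t14\t1
memory\t4\t9\t1
blkio\t5\t9\t1
cpuset\t1\t3\t1
"""

def hierarchies(text):
    """Map hierarchy ID -> list of controllers attached to it."""
    result = {}
    for line in text.splitlines():
        if line.startswith("#"):
            continue
        name, hier, _num_cgroups, _enabled = line.split("\t")
        result.setdefault(int(hier), []).append(name)
    return result

h = hierarchies(SAMPLE)
# Here cpu and memory sit in different hierarchies, so those two
# controllers see different trees of control groups and cannot
# easily coordinate with each other.
print(h)
```

With the single-hierarchy plan, every controller would report the same hierarchy ID, and a grouping like this would collapse to one entry.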
How does one fix a problem like this? The obvious answer is to force the use of a single control group hierarchy and to fix the controllers to implement their policies over hierarchies in a consistent manner. But both of those are significant, user-visible API and behavioral changes. And, once again, a user whose system has just broken tends to be less than appreciative of how much better the implementation is.
In the past, operating system vendors have often had to face issues like this. They have responded by saving up all the big changes for a major system update; users learned to expect things to break over such updates. Perhaps the definitive example was the transition from "Solaris 1" (usually known as SunOS 4 in those days) to Solaris 2, which switched the entire system from a BSD-derived base to one derived from AT&T Unix. Needless to say, lots of things broke in the process. In the Linux world, this kind of transition still happens with enterprise distributions; RHEL7 will have a great many incompatible changes from RHEL6. But community distributions tend not to work that way.
More to the point, the components that make up a distribution are typically not managed that way. Nobody in the kernel community wants to go back to the pre-2.6 days when major features only got to users after a multi-year delay. So, if problems like those described above are going to be fixed in the kernel, the kernel developers will have to figure out a way to do it in the regular, approximately 80-day development cycle.
In this case, the plan seems to be to prod users with warnings of upcoming changes while trying to determine if anybody really has difficulties with them. So, systems where multiple cgroup hierarchies are in use will emit warnings to the effect that the feature is deprecated, inviting email from anybody who objects. Similar warnings will be put into specific controllers whose behavior is expected to change. Consider the memory controller; as Tejun put it: "memcg asked itself the existential question of to be hierarchical or not and then got confused and decided to become both". The plan is to get distributors to carry a patch warning users of the non-hierarchical mode and asking them to make their needs known if the change will truly be a problem for them. In a sense, the distributors are being asked to run a pilot for the new cgroup API.
It is possible that the community got lucky this time around; the features that need to be removed or changed are not likely to be heavily used. In other cases, there is simply no alternative to retaining the older, mistaken design; the telldir() function, which imposes heavy implementation costs on filesystems, is a good example. We can never preserve our ability to "throw one away" in all situations. But, as a whole, the free software community has managed to incorporate Brooks's advice nicely. We throw away huge quantities of code all the time, and we are better off for it.
The accounting quest: PostBooks
The quest for a free-software accounting system suitable for a business like LWN continues; readers who have not been following this story so far may want to look at the previous installments: the problem statement and a look at Ledger. This time around, your editor will be evaluating PostBooks, a system that differs from Ledger in almost every way. PostBooks, as it turns out, is not without its problems, but it might just be a viable solution to the problem.

PostBooks has been around as a commercial project since 2000 or so; it made the shift to a free software project in 2007. It is, however, a classic example of a corporate-controlled project, with the corporation in this case being a company called xTuple. The license is the "badgeware" Common Public Attribution License (CPAL), which requires the acknowledgment of the "original contributor" on splash screens and similar places. The CPAL is recognized as an open source license by the Open Source Initiative, but its attribution requirements are not popular with all users. The CPAL has not taken the world by storm; it has shown up in a few business-oriented projects like PostBooks, though.
Additionally, PostBooks is a project in the "open core" model: the core software is open source, but certain types of functionality are reserved for proprietary enhanced versions requiring payment and annual support fees. See the xTuple ERP editions comparison page for an overview of which features can be found in which versions. One need not look long on the net to find users complaining that one must buy a proprietary version to reach the necessary level of functionality, but your editor's impression is that the free edition should be sufficient for a wide range of companies.
At a first glance, the PostBooks development "community" reinforces the
impression of a corporate-controlled project. There are no development
mailing lists, for example. The source repository lives on SourceForge;
a look at the revision history shows a slow (but steady) trickle of changes
from a handful of developers. The developer documentation says that
"the majority of features added to the core are added as a result of
sponsorship", but also suggests that outside developers could be
given direct commit access to the repository. One has to assume that
attempts to add features found only in the proprietary versions would not
be welcomed.
The code
PostBooks is written in C++ with the Qt toolkit used for the graphical
interface. One result of this choice is that the code is quite portable;
the client can run on Linux, Windows, and Mac OS systems. All data
lives in a PostgreSQL database; among other things, that allows clients
distributed across a network to access a single database server. PostBooks
is an inherently multi-user system.
As far as your editor can tell, no even remotely mainstream Linux
distribution packages PostBooks, so users are on their own. Building the
tool from source is not a task for the faint of heart; the code itself
comes with no build instructions at all. Those instructions can be found
on the xtuple.org
web site; among other things, they recommend not using the versions of
Qt and PostgreSQL supplied by the distributor. Your editor's attempts to
build the system (ignoring that advice) did not get far and were not
pursued for all that long. One need not look for long to find similar
stories on the net.
What this means is that most users are likely to be stuck installing the
binary versions (32-bit only) shipped by xTuple itself. Downloading a
massive script
from a company's web site and feeding it to a root shell is always a great
way to build confidence before entrusting one's accounting data to a new
application. The script offers to try to work with an existing PostgreSQL
installation, but your editor ran into trouble getting that to work and
ended up letting it install its own version. There are license acceptance
and registration screens to be gotten through; as a whole, it feels much
like installing a proprietary application.
One nice feature is the provision of some sample databases, allowing easy
experimentation with the software without having to enter a bunch of data
first.
PostBooks has a number of features that may be of interest to certain types
of corporations — relationship management and materials tracking, for
example. For the purposes of this review, though, the main area of
interest is accounting. As would be expected, PostBooks implements
double-entry bookkeeping with various layers on top to support a set of
"standard" business processes. For users coming from a tool like
QuickBooks, the processes built into PostBooks may look bureaucratic and
arcane indeed. Tasks that seem like they should be simple can require a
long series of steps and screens to get through.
For
a small operation where a single person likely does everything, it can seem
like a lot of useless extra work.
The good news is that much of it can be bypassed if one knows how. A
"miscellaneous check" can be entered without going through the purchase
order mechanism at all; there are also "miscellaneous vouchers" for those
who want less hassle, but still want to require that two people are
involved in the process of spending the company's money. The sales side is
similar; one can go through a whole process of defining prospects,
generating sales orders, putting together bills of materials, invoicing,
and so on. Or one can just enter a payment by digging deeply enough in the
menus.
Interestingly, the report generation subsystem would appear to be used for
related tasks like form and check printing. One of the many aspects of the xTuple
revenue model is xtupleforms.com,
where various types of forms, including checks, can be purchased for use
with PostBooks. Happily for an organization like LWN, the available forms
include tax forms, and the dreaded 1099 in particular. The selection is
small and US-centric, but, for some businesses, that's all that is needed.
In the case of checks, there is only one alternative: a
single-check-per-page format. Unlike Intuit, xTuple would not allow LWN to
put its penguin logo on its checks — a major shortcoming, in your editor's
estimation. It doesn't seem like multiple checks per page is a
possibility, which may explain why nobody has put together a format
description for checks from Intuit. As a whole, support for check printing
is minimal, but sufficient, especially in a world where (even in the US),
the use of paper checks is in decline.
As an aside, there is a surprising lack of resources or help for users
wanting to transition from systems like QuickBooks. One would think that
would be a promising source of new users; certainly there is no shortage of
disgruntled QuickBooks users in search of a different system. But tools to
extract data from QuickBooks and import it into PostBooks are not really to
be found. What little information on the subject
exists on the xTuple site dates from 2007. Evidently QuickBooks is such a
data trap that extracting vital information is not a job for the meek.
Data
Speaking of data, one of the key problems for a business like LWN is
getting transaction data into the system. Our transactions tend to be
small, but we have a fair number of them (never enough, mind you); entering
them by hand is not really an option, even in a system with fewer steps
than PostBooks requires. That is, quite simply, the kind of job that makes
us willing to tolerate having computers around. So some way to get data
into the accounting system automatically is required.
Hypothetically, since PostBooks uses PostgreSQL for its data storage,
feeding new data should really just be a matter of writing a bit of SQL.
In practice, the PostBooks database schema has about 550 tables in it, with
all kinds of interactions between them. A person with enough interest and
ability could certainly figure out this schema and find a way to put more
information into the database without corrupting the works.
This is the point where your editor feels the need to remind you that LWN's
staff is dominated by kernel-oriented people. Charging such people with
that task could lead to some amusing results indeed, but our accountant is
not quite so easily amused.
The folks at xTuple seem to have recognized this problem, so they have put
together a somewhat simpler
means for programmatic access to the database. They call it their API,
but it really is a set of PostgreSQL functions and views that provides a
simplified version of the database. Programmers can write SQL to access a
view in a way that closely matches the windows in the interactive client,
and the functions behind those views will take care of the task of keeping
the database consistent. Your editor has not yet tried actually
programming to this "API," but it looks like it should be adequate to get
the job done.
In conclusion
Readers who have made it all the way through this article will have noticed
that the first impressions from PostBooks were not all that great. And,
indeed, it still has the look of a mostly proprietary piece of software
that happens to have the source available. But, once one looks beyond the
first impressions, PostBooks looks like it might well be able to get the
job done.
What this program needs, arguably, is a fork in the go-oo style. This fork
would do its best to get all of its changes upstream, but would put effort
into making the system easier to build and package so that distributions
might start carrying it. In this way, the project might gain a bit more of
a free software feel while staying reasonably close to its corporate
overlords. But, of course, such a project requires sufficiently motivated
developers, and it's amazing how few free software developers find that
accounting systems are the itch they need to scratch.
Whether LWN will move over to PostBooks is not an answerable question at
this point. Further investigation — of both PostBooks and the alternatives
— is called for. But this first phase of research has not ruled it out.
PostBooks is not a perfect fit for what LWN needs, but that perfect fit
does not seem to exist anywhere. In this case, it may just be possible to
use PostBooks to get the job done. Stay tuned.
Using PostBooks
The initial PostBooks screen (shown on right) reinforces the "proprietary
software" feeling; it consists mostly of advertisements for xTuple products
and services. From there, though, it's one click to the top-level features
of the program, divided into relationship management, sales, purchasing,
accounting, and manufacturing. Finding one's way around the program takes
some time; there is a lot of functionality hidden away in various corners
and the correct way to get there is not always obvious. The tutorials provided by xTuple
(free online, but payment required for the PDF version) can be a good place
to start, but reading through them sequentially is a good idea. Important
details tend to be hidden in surprising places in a way that can frustrate
attempts to skip directly to the interesting parts.
For example, a purchase in QuickBooks is most likely handled, after the
fact, by simply entering the bill from the supplier, then perhaps printing a
check. The PostBooks purchasing window (right) has rather more steps:
check for the desired item in inventory, enter a purchase request, generate
a purchase order, release the purchase order to the world, enter a bill,
generate a voucher for the bill, let the voucher age (a process well known
to — and detested by — anybody who has tried to get a large company to pay
a bill), enter a payment, set up a check run, and actually print a check.
All these steps exist because larger companies actually do things that way,
usually with different people handling different phases of the process.
Indeed, PostBooks has an elaborate roles mechanism that can be used to
limit users to specific steps in the chain.
PostBooks has what appears to be a reasonably flexible report generation
subsystem, but, in the free edition at least, rather little use is made
of it. The set of reports available within the application is relatively
basic; it should cover what many companies need, but not much more.
PostBooks is not an application for those who cannot function without pie
charts.
Data
In conclusion
Security
LSS: DNSSEC
Paul Wouters gave a presentation on DNSSEC (Domain Name System Security Extensions) on the first day of the 2012 Linux Security Summit (LSS). It was a "pretty generic" look at DNSSEC, giving an overview of what it is and does. He also talked about some of the problems that can occur when using DNSSEC, particularly with hotspots or VPNs, along with some interesting applications that a secure data distribution mechanism will allow.
As the subtitle of Wouters's talk described it, DNSSEC is a "cryptographically secured globally distributed database". But not many people are actually running DNSSEC, at least not yet. He would like to see that change, because DNSSEC has uses beyond just securing DNS: it can be used to safely distribute content that cannot be modified without detection.
Digging for DNSSEC information
Wouters stepped through some examples of using the dig program to query for DNSSEC information. The first thing to note when doing DNSSEC queries is the Authenticated Data (ad) flag that is reported in the results. It indicates that the results were validated by the recursive name server. So, an A (address) record returned from the name server with the ad flag is simply a claim by that server that it did the necessary verification. In order to be sure of the authenticity, though, the local name server needs to do its own verification.
One can retrieve the signature that corresponds to a given A record, using a command like:
$ dig +dnssec fedoraproject.org
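The header flags and answer section are the parts to inspect; a sketch of what to look for (record values and TTLs below are illustrative, not real query output):

```shell
# Ask a validating resolver for the A record plus DNSSEC data.
dig +dnssec fedoraproject.org A

# Interesting parts of the response, roughly:
#
#   ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, ...
#                     ^^-- "ad": the recursive server validated the answer
#
#   ;; ANSWER SECTION:
#   fedoraproject.org.  300  IN  A      192.0.2.10
#   fedoraproject.org.  300  IN  RRSIG  A 5 2 300 ( ...signature data... )
```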
To verify the mapping from name to IP address, DNSSEC resolvers use the signatures returned in the RRSIG (DNSSEC signature) record. In order to reduce the processing burden, DNSSEC servers typically do not do any cryptographic operations on the fly, and rely, instead, on pre-generated signatures. The signatures for each host in the domain are generated elsewhere, then installed on the DNSSEC server. Signatures have a duration associated with them, to stop replay attacks, but that means the signatures need to be periodically regenerated.
But, the name servers cannot know in advance the queries that might be made. For example, querying doesnotexist.fedoraproject.org is a perfectly valid request, but a server can't have canned, signed responses for each invalid host. So, instead, it returns signed responses for the host names alphabetically on either side of the requested name as NSEC (next secure) records. Those essentially describe a range of names that do not exist for the domain. Multiple queries could be used to map the full namespace of the domain, which is one of the criticisms of DNSSEC. One could avoid that by having the signing key on the server to sign the "does not exist" response, but that is generally considered to be insecure.
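A query for a name that does not exist shows this in action; the NSEC record below is a hedged sketch (the neighboring names are hypothetical):

```shell
# Query a nonexistent host; a signed zone answers NXDOMAIN plus
# signed NSEC records proving the gap.
dig +dnssec doesnotexist.fedoraproject.org A

# Authority section, roughly (names hypothetical):
#   admin.fedoraproject.org. IN NSEC download.fedoraproject.org. A RRSIG NSEC
# i.e. no names exist between "admin" and "download", a range that
# covers the name that was queried.
```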
In order to verify the signature on query results, the public key corresponding to the private key that signed the entry must be retrieved. It is published in the DNSKEY record type for the domain. There will often be more than one key defined for a domain, to make it easier to roll over to a new key, Wouters said.
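One way to look at those keys from the command line; the output shape is a sketch (key data abbreviated):

```shell
# Fetch the zone's public keys; two keys are common during a rollover.
dig +dnssec fedoraproject.org DNSKEY

# The flags field distinguishes the key roles:
#   ... IN DNSKEY 257 3 5 AwEAA...   257 = key-signing key (KSK, SEP bit set)
#   ... IN DNSKEY 256 3 5 AwEAA...   256 = zone-signing key (ZSK)
```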
A chain of trust needs to be established in order to verify that the keys being used are actually those used by the domain. In order to do that, the parent domain (i.e. org for fedoraproject.org) will have a hash of the valid keys in use by the child. That information is reported in DS (delegation signer) records from the parent. That trust chain can be followed all the way back to the root zone, whose keys are widely available and are typically statically stored on the system. These trust chains can be examined using the drill utility.
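Both ends of the chain can be queried directly; the root.key file name below is an assumption about where the root trust anchor has been stored locally:

```shell
# The parent zone (org) publishes a DS record: a hash of the child's KSK.
dig +dnssec fedoraproject.org DS

# drill can chase the chain: -S follows signatures from the answer
# back up to a trust anchor supplied with -k.
drill -S -k root.key fedoraproject.org A
```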
When using straight DNS, all answers are considered to be insecure. But, with DNSSEC, there are several different states that a query answer could be in. The answer could be verified from a known trust anchor (such as a root zone key), and thus be "secure". Or it could be proven that there is no trust anchor that can be used to verify the answer, so it is "insecure" (as with straight DNS).
But, there are two other states that an answer can be in, Wouters said. The first is the "bogus" state, which means that the cryptographic verification failed. That typically results in a SERVFAIL error from the resolver. There is also the "indeterminate" state, which indicates that the answers needed to verify the result were missing or incomplete. The indeterminate state causes lots of problems, he said. It could be mapped to the bogus state, but that makes it harder for browsers and other applications to handle "soft failures".
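The unbound-host utility, a thin wrapper around the same libunbound validation logic, reports these states directly; a sketch (the trust-anchor path and second/third host names are hypothetical):

```shell
# -v prints the validation state with each answer; -f reads the
# trust anchor (root key) from a file.
unbound-host -v -f /var/lib/unbound/root.key fedoraproject.org

# Typical output forms:
#   fedoraproject.org has address 192.0.2.10 (secure)
#   unsigned.example.test has address 192.0.2.20 (insecure)
#   badsig.example.test has address 192.0.2.30 (BOGUS ...)
```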
DNSSEC and Linux
Linux distributions have a variety of DNSSEC tools available. For resolvers, BIND or Unbound are available, with the latter being preferred for on-the-fly changes. For DNS servers, one should use BIND, NSD, or PowerDNS, any of which are modern DNS servers that should work fine, he said. To sign zones, OpenDNSSEC is the most widely used tool, but there are others, including dnssec-signzone which is part of BIND. Lastly, there are a number of utilities for querying and debugging DNSSEC installations (e.g. dig and drill) that can be found by doing a package manager search for "dnssec".
Using DNSSEC for name resolution on Fedora or RHEL is very simple, Wouters said. Installing BIND or Unbound using yum, then doing:
echo "nameserver 127.0.0.1" >/etc/resolv.conf
is all that's needed. DNSSEC has been in the default configuration since
Fedora 15. But, he cautioned, you probably shouldn't do that, at least on a
laptop, because you depend on spoofed DNS all the time.
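On the resolver side, the configuration that makes the default validating is small; an unbound.conf fragment (file paths vary by distribution):

```
# /etc/unbound/unbound.conf (fragment)
server:
    # Root-zone trust anchor, kept up to date automatically (RFC 5011):
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # Treat validation failures as hard failures rather than warnings:
    val-permissive-mode: no
```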
In some ways, DNSSEC is too good because it blocks operations like hotspot authentication. The system needs to "know when to accept lies" in DNS answers, he said. There are also problems with VPNs when there are two different views of the DNS namespace (i.e. a VPN-internal view and the external view). Additionally, problems occur because so many different applications "mess with resolv.conf". Those problems all need to be addressed at once.
Some recent changes to NetworkManager have been made to help with the hotspot problem. The new dnssec-triggerd daemon can be used to detect hotspots and inform NetworkManager, which will ask the user if they are logging into a hotspot. If so, NetworkManager will change resolv.conf to temporarily bypass the local DNSSEC-enabled resolver, allow the user to log in, then re-enable DNSSEC resolution.
Unfortunately, many hotspots also hijack the traffic they carry. That means that the DNSSEC resolver may not get clean responses even after the authentication dance. For example, some will filter out signatures, while others block the DNS port (53) to force users to their (generally non-DNSSEC) name servers. There are workarounds that dnssec-triggerd will try (e.g. running DNS over port 80 to suitably configured servers), but there will be situations where getting DNSSEC responses is not possible. In that case, NetworkManager asks the user if they want to disable DNS or run in insecure mode. Dan Walsh pointed out that some organizations may want to remove that choice, so that users cannot run with regular DNS. Wouters said that can be configured for those who need it.
VPNs are another problem because there is a need to use internal IP addresses. To handle that, VPN clients often rewrite the resolv.conf file, but Fedora has removed that ability from the Openswan client. Instead, Openswan informs the resolver, which adds entries for the internal DNS server. Once the VPN link is shut down, the resolver removes that information.
For signing your own DNSSEC zones, he recommends the OpenDNSSEC package. It has tools to automatically re-sign zones when they are about to expire and there are systemd unit files available to control the signing daemon. Overall, OpenDNSSEC is "working pretty well" for handling all of the signing chores, he said.
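At its simplest, bringing a zone under OpenDNSSEC's control looks something like this; the zone name and paths are hypothetical, and exact invocations differ between OpenDNSSEC versions:

```shell
# Register the zone with the key-and-signing-policy enforcer...
ods-ksmutil zone add --zone example.org \
    --input /var/lib/zones/example.org \
    --output /var/lib/signed/example.org

# ...then ask the signer daemon to produce the signed zone file.
ods-signer sign example.org
```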
Converting applications
The standard gethostbyname() call that is used to look up host names from programs does not return DNSSEC information (e.g. whether the answer was secure or not), so programs that are going to use DNSSEC need to be changed. Wouters did that conversion for Openswan using libunbound.
The changes are fairly straightforward, and he showed much of the code in his slides [PDF]. A DNSSEC cache context is established at program start and populated with information from /etc/hosts and /etc/resolv.conf, as well as with keys for the root zone. The program then makes synchronous calls to ub_resolve() whenever a host name lookup is needed, returning an error if the reply is not secure.
The conversion was "not rocket science", he said. There are other alternatives, but he liked the libunbound interface. It supports both callbacks and threads for asynchronous resolution. In addition, it has good tutorials.
There are several RFCs and draft RFCs for publishing data using the DNSSEC infrastructure. For example, HTTPS certificates, SSH known_hosts keys, IPsec public keys, PGP email keys, and others are all possibilities for being distributed via DNSSEC. He had some other thoughts, including file hashes (for file integrity checking), SELinux policies, and even a secure Twitter-like text publishing scheme. There are many different options, Wouters said, and once DNSSEC becomes pervasive, there is lots of fun to be had.
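The SSH case already works end to end: with a signed SSHFP record published for a host, OpenSSH can consult DNS instead of known_hosts. A sketch (the host name is hypothetical):

```shell
# Look up the host key fingerprint stored in DNS (SSHFP record).
dig +dnssec host.example.org SSHFP

# Tell ssh to accept the key if DNS confirms it; this requires a
# validating resolver so the "ad" flag can be trusted.
ssh -o VerifyHostKeyDNS=yes host.example.org
```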
Brief items
Security quote of the week
CRIME Attack Uses Compression Ratio of TLS Requests as Side Channel to Hijack Secure Sessions (threatpost)
Threatpost is reporting on a browser vulnerability that affects secure cookies when TLS or SPDY compression is supported. Researchers Juliano Rizzo and Thai Duong, who also discovered the BEAST flaw, have called the new vulnerability "Compression Ratio Info-leak Made Easy" or CRIME. "Rizzo said that browsers that implement either TLS or SPDY compression are known to be vulnerable. That includes Google Chrome and Mozilla Firefox, as well as Amazon Silk. But the attack also works against several popular Web services, such as Gmail, Twitter, Dropbox and Yahoo Mail. SPDY is an open standard developed by Google to speed up Web-page load times and often uses TLS encryption. [...] Google and Mozilla have developed patches to defend against the CRIME attack, Rizzo said, and the latest versions of Chrome and Firefox are protected. The researchers will present their results at Ekoparty next week." Some speculation on the details can be found at Stack Exchange.
New vulnerabilities
asterisk: remote command execution
Package(s): asterisk
CVE #(s): CVE-2012-2186
Created: September 18, 2012
Updated: September 19, 2012
Description: From the CVE entry:
Incomplete blacklist vulnerability in main/manager.c in Asterisk Open Source 1.8.x before 1.8.15.1 and 10.x before 10.7.1, Certified Asterisk 1.8.11 before 1.8.11-cert6, Asterisk Digiumphones 10.x.x-digiumphones before 10.7.1-digiumphones, and Asterisk Business Edition C.3.x before C.3.7.6 allows remote authenticated users to execute arbitrary commands by leveraging originate privileges and providing an ExternalIVR value in an AMI Originate action.
asterisk: ignores ACL rules
Package(s): asterisk
CVE #(s): CVE-2012-4737
Created: September 18, 2012
Updated: September 19, 2012
Description: From the Asterisk advisory:
When an IAX2 call is made using the credentials of a peer defined in a dynamic Asterisk Realtime Architecture (ARA) backend, the ACL rules for that peer are not applied to the call attempt. This allows for a remote attacker who is aware of a peer's credentials to bypass the ACL rules set for that peer.
bind9: denial of service
Package(s): bind9
CVE #(s): CVE-2012-4244
Created: September 13, 2012
Updated: October 15, 2012
Description: From the Debian advisory:
It was discovered that BIND, a DNS server, does not handle DNS records properly which approach size limits inherent to the DNS protocol. An attacker could use crafted DNS records to crash the BIND server process, leading to a denial of service.
blender: insecure temporary files
Package(s): blender
CVE #(s): CVE-2012-4410
Created: September 18, 2012
Updated: September 19, 2012
Description: From the Red Hat bugzilla:
An insecure temporary file use flaw was found in the way the 'undo save quit' routine of the Blender kernel of Blender, a 3D modeling, animation, rendering and post-production software solution, performed management of the 'quit.blend' temporary file, used for session recovery purposes. A local attacker could use this flaw to conduct symbolic link attacks, leading to the ability to overwrite arbitrary system files, accessible with the privileges of the user running the blender executable.
chromium: multiple vulnerabilities
Package(s): chromium
CVE #(s): CVE-2012-2865 CVE-2012-2866 CVE-2012-2867 CVE-2012-2868 CVE-2012-2869 CVE-2012-2872
Created: September 19, 2012
Updated: September 19, 2012
Description: From the CVE entries:
Google Chrome before 21.0.1180.89 does not properly perform line breaking, which allows remote attackers to cause a denial of service (out-of-bounds read) via a crafted document. (CVE-2012-2865)
Google Chrome before 21.0.1180.89 does not properly perform a cast of an unspecified variable during handling of run-in elements, which allows remote attackers to cause a denial of service or possibly have unknown other impact via a crafted document. (CVE-2012-2866)
The SPDY implementation in Google Chrome before 21.0.1180.89 allows remote attackers to cause a denial of service (application crash) via unspecified vectors. (CVE-2012-2867)
Race condition in Google Chrome before 21.0.1180.89 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors involving improper interaction between worker processes and an XMLHttpRequest (aka XHR) object. (CVE-2012-2868)
Google Chrome before 21.0.1180.89 does not properly load URLs, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger a "stale buffer." (CVE-2012-2869)
Cross-site scripting (XSS) vulnerability in an SSL interstitial page in Google Chrome before 21.0.1180.89 allows remote attackers to inject arbitrary web script or HTML via unspecified vectors. (CVE-2012-2872)
dbus: root code execution
Package(s): dbus-1
CVE #(s): CVE-2012-3524
Created: September 13, 2012
Updated: January 23, 2015
Description: From the SUSE advisory:
This update fixes a vulnerability in the DBUS auto-launching feature that allowed local users to execute arbitrary programs as root.
devscripts: multiple vulnerabilities
Package(s): devscripts
CVE #(s): CVE-2012-2240 CVE-2012-2241 CVE-2012-2242
Created: September 17, 2012
Updated: September 19, 2012
Description: From the Debian advisory:
CVE-2012-2240: Raphael Geissert discovered that dscverify does not perform sufficient validation and does not properly escape arguments to external commands, allowing a remote attacker (as when dscverify is used by dget) to execute arbitrary code.
CVE-2012-2241: Raphael Geissert discovered that dget allows an attacker to delete arbitrary files when processing a specially-crafted .dsc or .changes file, due to insufficient input validation.
CVE-2012-2242: Raphael Geissert discovered that dget does not properly escape arguments to external commands when processing .dsc and .changes files, allowing an attacker to execute arbitrary code. This issue is limited by the fix for CVE-2012-2241, and had already been fixed in version 2.10.73 due to changes to the code, without considering its security implications.
dhcp: denial of service
Package(s): dhcp
CVE #(s): CVE-2012-3955
Created: September 14, 2012
Updated: March 11, 2013
Description: From the Mageia advisory:
In the ISC DHCP server, prior to 4.2.4-P2, reducing the expiration time for an active IPv6 lease may cause the server to crash (CVE-2012-3955).
gnupg: key spoofing
Package(s): gnupg, gnupg2
CVE #(s): (none)
Created: September 17, 2012
Updated: September 21, 2012
Description: From the Ubuntu advisory:
It was discovered that GnuPG used a short ID when downloading keys from a keyserver, even if a long ID was requested. An attacker could possibly use this to return a different key with a duplicate short key id.
horizon: cross-site scripting
Package(s): horizon
CVE #(s): CVE-2012-3540
Created: September 13, 2012
Updated: October 24, 2012
Description: From the Ubuntu advisory:
Thomas Biege discovered that the Horizon authentication mechanism did not validate the next parameter. An attacker could use this to construct a link to a legitimate OpenStack web dashboard that redirected the user to a malicious website after authentication.
kernel: denial of service
Package(s): linux
CVE #(s): CVE-2012-3412 CVE-2012-3511
Created: September 17, 2012
Updated: November 7, 2012
Description: From the Ubuntu advisory:
Ben Hutchings reported a flaw in the Linux kernel with some network drivers that support TSO (TCP segment offload). A local or peer user could exploit this flaw to cause a denial of service. (CVE-2012-3412)
A flaw was discovered in the madvise feature of the Linux kernel's memory subsystem. An unprivileged local user could exploit the flaw to cause a denial of service (crash the system). (CVE-2012-3511)
keystone: privilege escalation
Package(s): keystone
CVE #(s): CVE-2012-4413
Created: September 13, 2012
Updated: September 19, 2012
Description: From the Ubuntu advisory:
Dolph Mathews discovered that when roles are granted and revoked to users in Keystone, pre-existing tokens were not updated or invalidated to take the new roles into account. An attacker could use this to continue to access resources that have been revoked.
libxslt: denial of service
Package(s): libxslt
CVE #(s): CVE-2012-2870 CVE-2012-2871
Created: September 14, 2012
Updated: October 4, 2012
Description: From the Red Hat advisory:
libxslt 1.1.26 and earlier, as used in Google Chrome before 21.0.1180.89, does not properly manage memory, which might allow remote attackers to cause a denial of service (application crash) via a crafted XSLT expression that is not properly identified during XPath navigation, related to (1) the xsltCompileLocationPathPattern function in libxslt/pattern.c and (2) the xsltGenerateIdFunction function in libxslt/functions.c. (CVE-2012-2870)
libxml2 2.9.0-rc1 and earlier, as used in Google Chrome before 21.0.1180.89, does not properly support a cast of an unspecified variable during handling of XSL transforms, which allows remote attackers to cause a denial of service or possibly have unknown other impact via a crafted document, related to the _xmlNs data structure in include/libxml/tree.h. (CVE-2012-2871)
mcrypt: code execution
Package(s): mcrypt
CVE #(s): CVE-2012-4409
Created: September 19, 2012
Updated: October 17, 2012
Description: From the Red Hat bugzilla:
A buffer overflow was reported in mcrypt version 2.6.8 and earlier due to a boundary error in the processing of an encrypted file (via the check_file_head() function in src/extra.c). If a user were tricked into attempting to decrypt a specially-crafted .nc encrypted file, this flaw would cause a stack-based buffer overflow that could potentially lead to arbitrary code execution.
openjpeg: code execution
Package(s): openjpeg
CVE #(s): CVE-2012-3535
Created: September 17, 2012
Updated: November 2, 2012
Description: From the Red Hat advisory:
It was found that OpenJPEG failed to sanity-check an image header field before using it. A remote attacker could provide a specially-crafted image file that could cause an application linked against OpenJPEG to crash or, possibly, execute arbitrary code.
otrs: cross-site scripting
Package(s): otrs
CVE #(s): CVE-2012-4600
Created: September 19, 2012
Updated: September 19, 2012
Description: From the CVE entry:
Cross-site scripting (XSS) vulnerability in Open Ticket Request System (OTRS) Help Desk 2.4.x before 2.4.14, 3.0.x before 3.0.16, and 3.1.x before 3.1.10, when Firefox or Opera is used, allows remote attackers to inject arbitrary web script or HTML via an e-mail message body with nested HTML tags.
php: header injection
Package(s): PHP5
CVE #(s): CVE-2011-1398 CVE-2011-4388
Created: September 13, 2012
Updated: February 28, 2013
Description: From the Ubuntu advisory:
It was discovered that PHP incorrectly handled certain character sequences when applying HTTP response-splitting protection. A remote attacker could create a specially-crafted URL and inject arbitrary headers. (CVE-2011-1398, CVE-2012-4388)
php5: header injection
Package(s): php5
CVE #(s): CVE-2012-4388
Created: September 17, 2012
Updated: September 19, 2012
Description: From the CVE entry:
The sapi_header_op function in main/SAPI.c in PHP 5.4.0RC2 through 5.4.0 does not properly determine a pointer during checks for %0D sequences (aka carriage return characters), which allows remote attackers to bypass an HTTP response-splitting protection mechanism via a crafted URL, related to improper interaction between the PHP header function and certain browsers, as demonstrated by Internet Explorer and Google Chrome. NOTE: this vulnerability exists because of an incorrect fix for CVE-2011-1398.
spice-gtk: privilege escalation
Package(s): spice-gtk
CVE #(s): CVE-2012-4425
Created: September 18, 2012
Updated: June 27, 2014
Description: From the Red Hat advisory:
It was discovered that the spice-gtk setuid helper application, spice-client-glib-usb-acl-helper, did not clear the environment variables read by the libraries it uses. A local attacker could possibly use this flaw to escalate their privileges by setting specific environment variables before running the helper application.
tor: denial of service
Package(s): tor
CVE #(s): CVE-2012-4419
Created: September 14, 2012
Updated: February 4, 2013
Description: From the Debian advisory:
By providing specially crafted date strings to a victim tor instance, an attacker can cause it to run into an assertion and shut down.
wordpress: largely unspecified
Package(s): wordpress
CVE #(s): (none)
Created: September 13, 2012
Updated: April 11, 2013
Description: From the WordPress release announcement:
Version 3.4.2 also fixes a few security issues and contains some security hardening. The vulnerabilities included potential privilege escalation and a bug that affects multisite installs with untrusted users. These issues were discovered and fixed by the WordPress security team.
ypserv: memory leaks
Package(s): ypserv
CVE #(s): (none)
Created: September 19, 2012
Updated: September 19, 2012
Description: ypserv 2.29 fixes some memory leaks.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.6-rc6, released on September 16. "Fairly normal statistics: two thirds drivers, with the remaining third being a mix of architecture updates, filesystems (gfs2 and nfs) and random core stuff (scheduler, workqueue, stuff like that)." Linus is hoping to pull the final 3.6 release together before too much longer.
Stable updates: 3.0.43, 3.4.11, and 3.5.4 were released on September 14 with the usual set of important fixes.
Quotes of the week
Kernel development news
LSS: Integrity for directories and special files
Over the last few years, the Linux kernel has added features to measure the integrity of files on disk to protect against offline attacks. The integrity measurement architecture (IMA) was added in the 2.6.30 kernel, and other pieces have followed, but the job is not done. Dmitry Kasatkin gave a presentation at the 2012 Linux Security Summit (LSS) on an extension to the integrity subsystem to handle the contents of directories as well as various special files.
Integrity protection is needed to prevent attackers from altering the contents of a filesystem without the kernel's awareness, by removing the disk or booting into an alternative operating system. Runtime integrity is already handled by the existing access control mechanisms, Kasatkin said. Those include discretionary access control (DAC) mechanisms like the traditional Unix file permissions or mandatory access control (MAC) schemes such as those provided by SELinux or Smack. But those mechanisms rely on trusting the access control metadata (e.g. permissions bits or security extended attributes), which can be tampered with in an offline attack.
IMA measures the integrity of files by calculating a cryptographic hash over the file contents, which is stored in the security.ima extended attribute (xattr). IMA can also be used in conjunction with a Trusted Platform Module (TPM) to remotely attest to the integrity of the running system.
The extended verification module (EVM) was added in 3.2 to protect the inode metadata of files against offline attacks. That metadata includes the security xattrs (including those for SELinux and Smack along with security.ima), mode (permissions), owner, inode number, etc. Once again, a hash of the values is used, and EVM stores that as the security.evm xattr on the file.
The digital signature extension was added in the 3.3 kernel to allow the IMA and EVM xattrs to be signed. In addition to storing a hash value in the xattrs, a digital signature of the hash value can also be stored and verified.
The IMA-appraisal feature, which Kasatkin said is being targeted for 3.7, will inhibit access to files whose IMA hash does not match the contents (i.e. the file has been changed offline). There were some locking problems that prevented IMA-appraisal from being merged earlier, but those have been resolved.
But, all of those pieces don't add up to everything needed for real integrity protection, Kasatkin said. While EVM protects the inode metadata and IMA protects the contents of regular files, there is a missing piece: file names. In Linux, the inode does not contain the file name, as it lives in the directory entries, and the association between a file name and an inode is not protected.
The result is that files can be deleted, renamed, or moved in an offline attack without being detected by the integrity subsystem. In addition, symbolic links and device nodes are currently unprotected, which means that those files can be added, modified, or removed offline without detection. Various attacks are possible via changing directory entries, he said. One could delete a file required for booting, or restore a backup version (and associated security xattrs) of a program with known vulnerabilities.
Using two virtual machines, Kasatkin simulated an offline attack by creating files in one VM, then mounting the disk in the other VM and changing some of the files. With the existing integrity code (including IMA-appraisal), he was unable to access files with changed contents in the original VM, but had no problems accessing files that had been renamed or moved (nor were deleted files detected).
That problem leads to the directory and special file integrity protection that he has proposed. For directories, two new hooks, ima_dir_check() and ima_dir_update(), would be added. The former would be called during path lookup (from may_lookup()) and would deny access if any directory entries in the path had been unexpectedly altered. When directories are updated in the running system, ima_dir_update() would be called to update the integrity measurement to reflect those changes.
The implementation of the verification starts from the root inode during a path lookup. Nothing happens when the filesystem is mounted; the verification is done lazily during file name lookup. Whenever a dentry (directory cache entry) is allocated for a directory, a call is made to ima_dir_check() to verify it. This proposed callback does not break RCU path walk, so it should not cause scalability problems on larger machines. The integrity measurement is calculated with a hash over the list of entries in the directory, using the inode number, name, type, and offset values for each entry, and storing the result in security.ima on the directory (which is then protected with EVM).
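The directory measurement can be illustrated with a small user-space sketch. This is not the kernel code; it merely mimics the idea behind ima_dir_check() by hashing the inode number, name, and type of each directory entry (the proposed hook also folds in the entry offset, and stores the result in the security.ima extended attribute rather than returning it):

```python
import hashlib
import os
import stat

def dir_measurement(path):
    """Hash the (inode, name, type) tuples of a directory's entries."""
    h = hashlib.sha1()
    # Sort for a deterministic ordering in this user-space approximation
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        st = entry.stat(follow_symlinks=False)
        h.update(st.st_ino.to_bytes(8, "little"))   # inode number
        h.update(entry.name.encode())               # entry name
        h.update(stat.S_IFMT(st.st_mode).to_bytes(4, "little"))  # type
    return h.hexdigest()
```

Any offline rename, deletion, or addition in the directory changes the resulting hash, which is how that kind of tampering would be detected at the next lookup.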
For special files, like symbolic links and device nodes, there is one new hook that has been added: ima_link_check(). It is called during path lookup (follow_link()) and for the readlink() system call. The measurement is a hash of the target path for symbolic links or the major and minor device numbers for device nodes. Once again, those values are stored in security.ima and are verified before access.
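Again purely as an illustration of the scheme, a user-space approximation of what ima_link_check() measures might look like the following; the function name and byte layout are assumptions for the sketch, not the kernel implementation:

```python
import hashlib
import os
import stat

def special_measurement(path):
    """Hash a symlink's target path, or a device node's major/minor numbers."""
    st = os.lstat(path)
    h = hashlib.sha1()
    if stat.S_ISLNK(st.st_mode):
        # Symbolic link: measure the target path
        h.update(os.readlink(path).encode())
    elif stat.S_ISCHR(st.st_mode) or stat.S_ISBLK(st.st_mode):
        # Device node: measure the major and minor device numbers
        h.update(os.major(st.st_rdev).to_bytes(4, "little"))
        h.update(os.minor(st.st_rdev).to_bytes(4, "little"))
    return h.hexdigest()
```

Changing a symbolic link's target offline changes this measurement, so a mismatch against the value stored in security.ima would be caught before the link is followed.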
The user-space tools used to set the integrity measurements for image creation also need updating to support the new features. The evmctl command (part of the ima-evm-utils package) has added the ability to set the reference hashes for directories and special files.
Kasatkin then demonstrated the integrity protections of the new code. If a file is moved or removed, the directory holding the file can no longer be accessed, so commands like ls or cd fail with an EPERM. He also presented performance numbers that showed relatively modest decreases compared to IMA/EVM without the directory and special file handling code, but more substantial declines when compared to not having IMA/EVM enabled at all. Interestingly, though, both flavors of IMA/EVM performed better on a file copy test than did a disk encrypted using dm-crypt. Disk encryption is another way to thwart offline attacks, of course.
It would seem that the kernel integrity subsystem is approaching "completion". The final pieces of the puzzle are now available; Kasatkin and others are hopeful they will be acceptable upstream soon, though he did note that the VFS developers had not yet reviewed the most recent patch set. For those that need this kind of protection for Linux, though, the wait may nearly be over.
KS2012: The memcg/mm minisummit
Day two (28 August) of the 2012 Kernel Summit included a day-long minisummit entitled "memcg minisummit" chaired by Ying Han and Johannes Weiner. Ying noted that the original minisummit title was something of a misnomer, since it had grown in scope to cover both memory control groups (memcg) and memory-management (mm) topics generally.
The session began with a statement that it was assumed that everyone in the room was familiar with previous discussions on the topics to be discussed. (Some of these previous discussions took place in the April 2012 LSF/MM meeting. Coverage of that event can be found in LWN articles here and here.) Given the context of the summit, this assumption was considered reasonable by everyone, though readers without a memory-management background may find the record of the discussion a little hard to follow at times.
Except for one very brief topic, coverage of the various sessions is split out into separate articles. The topics covered were as follows:
- Improving kernel-memory accounting for memory cgroups; some users need better accounting of kernel-memory usage inside cgroups (control groups), in order to prevent poorly behaved cgroups from exhausting system memory.
- Kernel-memory shrinking; a discussion stemming from Ying Han's patches to implement a per-cgroup slab shrinker.
- Improving memory cgroups performance for non-users; how do we resolve the problem that the current memcg implementation has a performance impact even when memory cgroups are not being used?
- Memory-management performance topics; short discussions of various performance and scalability topics.
- Hierarchical reclaim for memory cgroups; what is the best way to reclaim memory from soft-limited trees of memory cgroups when the system is under memory pressure?
- Reclaiming mapped pages; toward improving reclaim of mapped pages to handle a wider variety of workloads.
- Volatile ranges; looking at various ideas on improving the implementation of this proposed kernel feature.
- Memory-management patches work: Michal Hocko briefly discussed the origin of the memcg-devel tree. This tree has evolved into being a general memory-management development tree that is not rebased like linux-next, but instead takes a mainline release from Linus Torvalds's tree and applies Andrew Morton's patches against it. This gives memory-management developers a common, relatively stable base to implement against. The tree already has a few users and they seem to be happy so far. (Since the meeting, the tree has been moved to kernel.org and renamed from memcg-devel to mm.)
- Moving zcache toward the mainline; what are the barriers to getting the compressed cache feature merged?
- Dirty/writeback LRU; a discussion of Fengguang Wu's proposal to split the file LRU list into clean and dirty lists.
- Proportional I/O controller; two proposed solutions to improve its performance for cgroup workloads.
- Shared-memory accounting in memory cgroups; dealing with some scenarios where memory cgroups are unfairly charged for memory usage.
- NUMA scheduling; a discussion of competing patch sets that implement this feature.
By and large, this was considered a successful meeting by the memory-management developers in attendance. Ying Han kept everyone on track and the meeting on schedule, and each of the topics was discussed in detail; good progress was made on many issues, and the participants gained insights into several issues that will affect an increasing number of users in the future. Hopefully, some of the remaining issues will now be more easily resolved on mailing lists.
[Michael Kerrisk would like to thank Fengguang Wu, Glauber Costa, Johannes Weiner, Michal Hocko, and especially Mel Gorman for assistance with the write-up of the minisummit.]
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
Twin Peaks v. Red Hat
Red Hat is no stranger to lawsuits, having grappled with the Firestar patent case in 2008 and dealt successfully with patent troll IP Innovations in 2010. But the company is now bringing a GPL compliance suit to court for the first time in its history. Red Hat filed the complaint as a countersuit to a patent infringement case launched against it in early 2012. If it goes to trial, it could put several GPL-interpretation questions to the test.
The original litigant in the case is Twin Peaks Software (TPS), which makes proprietary network backup software. TPS sued Red Hat and Red Hat's recently-acquired subsidiary Gluster on February 23, 2012. TPS charges that the GlusterFS software violates TPS's US patent 7,418,439, which covers TPS's "Mirror File System" (MFS). GlusterFS is a network filesystem that aggregates multiple storage shares into a single volume. TPS's products include TPS Replication Plus, which automatically mirrors changes between two NFS filesystems over the network, and TPS Clustering Plus, which extends a similar feature set to larger clusters.
Red Hat initially responded to the patent infringement suit on August 2, both denying the infringement and asserting that the patent itself is invalid.
Had things stopped there, the case might have proceeded as a standard software patent infringement lawsuit. Red Hat's answer to the initial claim invoked numerous other counterarguments, such as denying that TPS has the right to ask for an injunction against the allegedly infringing Red Hat products, but it stuck to denying the claims of the initial suit. But Red Hat then followed up with a September 13 countersuit that brings a copyright infringement claim against TPS and asks for an injunction against the infringing products. The products in question are TPS Replication Plus and TPS My Mirror, a freeware edition of Replication Plus. Red Hat claims that both products incorporate code from mount — specifically the 2.12a version from the util-linux package for which Red Hat is the registered copyright holder — and that TPS is in violation of the terms of the license by not providing or offering the corresponding source code.
At Groklaw, Mark Webbink argues that this action ups the stakes considerably, because even if TPS's suit against Red Hat were successful, Red Hat would experience only a small impact on its bottom line, due to the relatively minor role GlusterFS plays in Red Hat's core business. If Red Hat's countersuit were successful, however, TPS would lose the sales of 50% of its products — a hit few businesses could survive.
The countersuit is in most respects a standard GPL-violation charge, much like those brought against other proprietary software vendors by other enforcement entities. But it also brings to light some peculiarities of the free software licensing realm. Red Hat alleges that the mount code in question is under GPL version 2, specifically. Failure to comply with GPLv2's source code provisions automatically terminates the violator's rights to distribute the code (section 4). The most common interpretation of this section of GPLv2 is that only the copyright holder can reinstate the violator's right to distribute the copied software. In that case, if TPS is found to have copied mount code, Red Hat could effectively force TPS to rewrite its products by refusing to reinstate its rights under the GPLv2. But not everyone agrees with that interpretation; uncertainty over the meaning of that section was also one reason why GPLv3 added provisions for a violator to regain its right to distribute by coming into compliance with the license.
Another wrinkle to the copyright-violation issue is the possibility that there are portions of other GPL-licensed works inside TPS's products. The countersuit does not address this possibility, but it cannot rule it out, either. The difference between copying from one GPL-licensed work and copying from several could be great. In the event that there are multiple GPL violations of different copyrights, even if Red Hat agreed to reinstate TPS's right to distribute mount and all other Red Hat-copyrighted code, it cannot reinstate TPS's right to distribute software written by others. That problem is academic at the moment, but it may not remain so: Eben Moglen wrote on the Software Freedom Law Center (SFLC) blog that he is investigating whether TPS's products contain software that has been copied from SFLC clients.
Moglen also says that if a violation is proven in the TPS case, it would be "a particularly severe offense" because TPS has chosen to sue a member of the free software community. Consequently, it would be profiting from the work of free software developers while simultaneously suing them. In contrast, most GPL violations are reported to be unintentional; Bradley Kuhn estimated in 2011 that 98% of the violation incidents he had worked on were cases of negligence and not malice. "Malice" might be hard to pin down, but the fact that TPS actively initiated this legal battle certainly increases the chances that Red Hat will choose to fight it out rather than settle.
If Red Hat does pursue the suit, this will also be the first GPL violation case brought by a commercial Linux distribution. Many of the high-profile GPL compliance cases in years past have been fought by independent projects like BusyBox or non-commercial groups like gpl-violations.org. Fighting out the GPL violation charge also has a different feel in this case because most other GPL enforcement actions are taken in order to bring the offending party into compliance. That is not the goal here: Red Hat is using the charge to wage an injunction-versus-injunction battle. The highest-grossing Linux distributor pursuing a GPL violation charge may not have the David-versus-Goliath feel of the other cases, but it could still be an important day in court — both for Red Hat and for anyone else who builds a business on free software.
Brief items
Distribution quotes of the week
Announcing the release of Fedora 18 Alpha
The alpha release of Fedora 18 "Spherical Cow" is available for testing. "Already mooo-tivated to give F18 Alpha a try? Great! We still hope that you'll read onwards; there are fabulous features in this release you may want to know about, as well as important information regarding specific, common F18 Alpha installation issues and bugs, all of which are detailed in this release announcement."
Mandriva Linux 2012 Alpha
Mandriva Linux 2012 Alpha (Tenacious Underdog) has been released. The new alpha features Linaro's gcc 4.7 branch, installer improvements, LXDE shipped by default, KDE 4.9.0, and more.
Ubuntu 11.04 (Natty Narwhal) EOL
Ubuntu 11.04 has been out for nearly 18 months, and will reach its end-of-life on October 28, 2012. There will be no updates, including security updates, for Natty after that date.
Distribution News
Other distributions
CentOS-6.3 / x86_64 UEFI Installer Released
CentOS has released a new installer for 6.3 x86_64 that will work on UEFI enabled machines. "I'm trying to make sure that we do enough testing and have enough resources for UEFI testing to ensure that the next and subsequent releases do not have a problem in this environment. In the mean time, the installer buildsystem for CentOS-6 has been updated to also build and test the UEFI requirements in sync with the rest of the installer build process."
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (September 17)
- DistroWatch Weekly, Issue 474 (September 17)
- Maemo Weekly News (September 17)
- Ubuntu Weekly Newsletter, Issue 283 (September 16)
Vajna: Frugalware history
Miklos Vajna, founder of Frugalware, looks at the history of this general purpose distribution. "Looking back, it was all quite lame. :-) I used a mail address called "mamajom" (English translation could be "momonkey"), tied to an ISP, with a lengthy signature at the end of every mail I sent and was using my IRC nick instead of my real one everywhere… OTOH, I made some decisions I’m happy about even today. The first four developers (Ádám Zlehovszky, Krisztián Vasas, Zsolt Szalai and me) were all Hungarian and despite of this, I forced every code, test and documentation to be in English, to possibly turn the project into an international one in the future. And that proved to very, very useful."
First alpha of Mandriva Linux 2012 now available (The H)
The H takes a look at the first alpha for Mandriva Linux 2012. "Nearly two months later than originally planned, the first alpha for Mandriva Linux 2012, code-named "Tenacious Underdog", has been released for testing. The new development release upgrades the KDE desktop to version 4.9.0 from August and brings improvements to the distribution's installer, which is now said to be smaller and faster; the installer's text mode is also noted to be working again. Other changes include the complete removal of the HAL (hardware abstraction layer) and the switch to Linaro's GCC 4.7 branch, as well as various package updates and bug fixes."
Page editor: Rebecca Sobol
Development
Keeping Up With Kdenlive
Kdenlive is a non-linear editor and digital video workstation, similar in design to a digital audio workstation such as Ardour or Qtractor. In an audio workstation, segments of recorded sound are arranged and edited in a layout of individual tracks. Kdenlive works in the same way with video data. You can record directly into the program, transfer video data from an external device, and load video files from disk or over a network. Basic editing operations include cut/copy/paste, trim, crop, and so forth. Internal processing functions and external plugins can be invoked to apply corrective functions and special effects, and the final rendering stage offers a variety of target formats. Other features include titling functions, image sequencing, audio manipulation, and DVD production utilities.
This article presents Kdenlive from my perspective as a casual but serious user; that is, I compile the program myself, I work with it frequently, but my demands on its capabilities are relatively light. I use Kdenlive to make simple video productions from clips from my shows and from experimental audio/visual works created with Jean-Pierre Lemoine's AVSynthesis. As I'll detail later, I've also been using Kdenlive as a stand-alone creative environment. My productions aren't especially fancy, but they do illustrate the basic actions described here.
Prerequisites
Like other Linux video editors, Kdenlive relies on a foundation of prior work. Its primary dependency is the Media Lovin' Toolkit (MLT), a production framework for streamlining the task of programming multimedia applications. MLT itself depends on the FFmpeg software for many of its video services. For the complete Kdenlive experience you'll also need the Xine media playback system, the RecordMyDesktop software, and the QImage module from Qt4. The whole list is an impressive suite of libraries and utilities. Combine that suite with a clearly organized Qt4 GUI, and behold, you have Kdenlive.
Incidentally, Kdenlive programmer Dan Dennedy is involved with developing and maintaining some of those other projects. Dan is a member of the Kdenlive and MLT development crews; his involvement with both projects helps to ensure their compatibility and to keep them up to date. I'm sure it also gives him an edge as the chief Kdenlive bug fixer (his job description on the Contributors page).
Retrieve, compile, install
Kdenlive can be found in all mainstream Linux distribution software repositories, so check with your package manager to see if a version is available for your system. However, even if your distribution provides Kdenlive it may be an out-of-date version — the current public release is version 0.9.2 — or it may exhibit problematic behavior due to the changing nature of some of its dependencies. If you choose to compile Kdenlive you have some options to consider first. You can download and build from a source tarball, or you can clone and compile from Kdenlive's Git source repository. However, in either case you are responsible for the installation of Kdenlive's dependencies, a non-trivial matter (see the Installing Required Libraries page for more information).
Fortunately, developers Mads Dydensborg and Dan Dennedy have written a shell script (build-kdenlive.sh) that will compile the current Kdenlive source code along with the source code for its more mercurial dependencies (i.e. FFmpeg and friends). You can read about the details of the script and its use in a thread on compiling Kdenlive for AVLinux. The script appears to be system-agnostic — I use it to build Kdenlive on machines running Debian Squeeze, Ubuntu 10.04, and 64-bit Arch — and it can be customized easily if needed. I must emphasize that the script builds what it finds on-line. If a dependency's current codebase is buggy then your Kdenlive binary is likely to be buggy too. The script takes a while to run, and, if it stops before completion, it will issue an error message, typically regarding a missing external dependency. Install the missing package, re-start the script, be patient, and eventually you'll have an up-to-date and stable Kdenlive.
The whole environment is "sandboxed", i.e. its binaries and libraries are not installed. You run your newly-minted version with the start-kdenlive command in the date-stamped directory at ~/kdenlive. Here's how I do it:
cd /home/dave/kdenlive/20120915
./start-kdenlive
The start-up script knows where to find everything it needs to successfully invoke your new Kdenlive.
I've been using the script for a few months, and I've had only two notable issues with its builds. Unfortunately both issues were show-stoppers. I reported them to the Kdenlive Mantis bug tracker, and within twenty-four hours the problems had been resolved by Kdenlive's main developer Jean-Baptiste Mardelle. I always keep a previously working version, so I lost no productivity, but I was impressed by the quick resolutions. Advice to all Kdenlive users: report problems to the bug tracker, don't just mention them on the forums. Kdenlive will leave a crash report that can help its programmers locate the source of a problem, and running the program with the --gdb debugging option generates a full back-trace easily copied to a Mantis report.
In this article, my descriptions and screenshots apply only to versions of Kdenlive compiled from the build script. The script pulls the Kdenlive source code from the current source repository and may include fixes and features not yet available in the public releases. As of September 15 it builds MLT version 0.8.3 and Kdenlive version 0.9.3.
Learning Kdenlive
The Kdenlive user's manual is a work in progress and thus a bit uneven. Some sections are quite detailed, while others lack any entry at all. It definitely provides some assistance to the beginner, but once you've moved on to the next level you'll want more knowledge about more aspects of the program. Fortunately Kdenlive's users have been busy creating their own documentation projects, posting tutorials, demonstrations, and creative works on sites such as YouTube and Vimeo. And of course the forums at Kdenlive.org are always open for discussion of anything about Kdenlive.
In my opinion the best way to learn about Kdenlive is to dive into it and start playing. Create a test project, import some files, and start working with the tools. Spend some time learning the environment before starting any serious work. Import and arrange some clips in the track display, test-drive a few video effects processors, check out the view and zoom options. Record a clip from your webcam. Get to know the place.
Earlier I mentioned that I've been using Kdenlive as a stand-alone creative environment. A very cool "online resources" feature opens connections to the free media databases at Freesound (for audio files), the Internet Archive (for video footage), and OpenClipArt.org (for images). If you're at a loss for something to do in Kdenlive, right-click anywhere in the Project Tree panel, open the Online Resources dialog (shown on the right), and search those sites with whatever descriptors you can imagine. This feature makes Kdenlive more than a well-stocked toolkit for video production - it invites and inspires creativity, always a welcome feature in software designed for the artistic spirit.
Regarding cameras and codecs
The Kdenlive web site maintains a list of compatible cameras and other video devices. The program will try to identify a default input device, but you may need to make further adjustments for a best fit. For example, Kdenlive recognized my laptop's integrated camera, but its default settings prevented successful video capture. I discovered that FFmpeg doesn't like the device's default frame rate, so I edited that value (in Kdenlive) to one I knew was valid for my camera and FFmpeg. Voila, the camera now records without complaint from Kdenlive.
I also have an old Samsung miniDV camera that is supported by Kdenlive. I can record directly into the program from the camera, or I can transfer previously recorded material. Support is also available for cameras capable of recording in the HDV and AVCHD high-definition formats, though AVCHD compatibility is not yet guaranteed. See the Supported Audio & Video Formats page on the Kdenlive Web site for more information on supported cameras and camcorders.
A video codec is a piece of software that compresses and decompresses a stream of video data. The various video formats require specific codecs. Because there are many formats, there are many codecs, and Kdenlive wants its share. Fortunately, it's not a huge share — twelve are required and should be included in your distribution's package repositories. My work requires no esoteric or proprietary codecs, and so far I've had no trouble loading and saving videos in AVI, MPEG, FLV, and other common formats.
By the way, Kdenlive also supports a number of common audio file types, including the MP3, Ogg Vorbis, and WAV formats. When recording you can choose to include an audio track, but users should note that results may vary with the recording device. Audio is badly aligned when I record from the webcam, while I have no such problem when recording from the miniDV camera. Fortunately, Kdenlive includes a few tools for audio correction; you should be able to realign audio and video tracks with only a little effort.
Titling, text, and other random tips and tricks
Kdenlive's titling module is a helpful and flexible tool. The standard controls are present — font selection, text sizing and colors, outlining, placement, etc. — along with control over the title text rotation and the clip's duration. By default, a title clip includes a composite transition that allows transparency of the titles when placed over another clip. Blur and typewriter effects are available in the title clip editor, and further processing can be obtained from the effect list. A title clip can be saved as a template or rendered as a video clip for later use. For example, I made a virtual finale reel from a title clip with text, an added soundtrack, scratchlines and other "old movie" effects, and a brief video quote at the end. You can view the results at The Fin on my YouTube channel.
The effect list dialog includes a "dynamic text" item in the "misc" submenu. I assumed that it placed user-specified text at keyframe breakpoints, but it appears not to work in that manner. I'll continue to investigate its operation. The "overlay" transition provides another text effect for the transparent combination of text over video clips and image sequences. The transition is easy to apply; there are no parameters except the selected target track, and it works well.
It's easy to create a video from a sequence of still images. Select an image, place it in a video track, then use the clip's length handle to stretch it to the desired time. You can also add video effects and transitions to your image sequences, including the popular Ken Burns effect used to create a sense of animation in an image sequence.
Kdenlive supports the frei0r video plugins and the LADSPA audio processors, thus greatly expanding the program's power. Effects are chosen from the effect list and edited, muted, reordered, or deleted in the "Effect Stack" panel (left). Many effects are keyframeable, meaning that you can create a control curve to automate an effect parameter, just like it's done in a digital audio workstation. Kdenlive's keyframe implementation includes excellent tools to add precisely measured values at precise locations in a clip. To the right, you can see the control curve described by the parameters in the panel on the left. By the way, the order of effects is important, so if you think one is behaving oddly, move it up or down in the stack.
Audio support in Kdenlive is basic but definitely useful. Alas, there's no support for the JACK audio server. It is possible to run the program with JACK but the configuration is a bit tricky. In my opinion, until Kdenlive offers direct support for JACK you're better off with PulseAudio or plain ALSA for the duration of your session. Kdenlive also provides a call-out to a user-specified audio editor, Audacity by default. Select the "Edit Clip" item from the "Project" menu to invoke the editor. The call-out works fine, but as far as I could tell Kdenlive does not auto-update an altered file, even after I selected Reload Clip. I'm probably missing something obvious and shall await enlightenment.
Sending it forward
When you're satisfied with your work it's time to save your session and select a render target. Click on the big red "Render" button to open the render dialog for choosing your work's destination format and its output parameters. Kdenlive provides a good batch of presets, but none of the parameter values are set in stone, so feel free to change them as needed. The length of time required for rendering will vary according to the format selected — a lossless format will take considerably more time than a base-quality YouTube target, and of course the file size will reflect its quality. Given a possibly lengthy render time, you might want to render draft-quality work in a lower-quality format and reserve the higher resolution for your later revisions.
Outro
I hope you've enjoyed this rather random ramble through Kdenlive. Of course there's much more to the program than I've been able to describe here. My use case is basic, and I've yet to explore many of Kdenlive's other interesting features. However, for my purposes Kdenlive meets and exceeds my expectations. Call it a non-linear editor or a digital video workstation; either way, in my opinion Kdenlive is an excellent environment for Linux desktop video production.
Brief items
Quote of the week
Cinnamon 1.6 released
Version 1.6 of the Cinnamon desktop has been released. It includes a number of workspace improvements, a new window "quick-list," better notification tracking, an improved sound applet, and more. "Cinnamon will eventually handle all visible layers of the Gnome desktop and provide an integrated experience, not only in terms of window and workspace management, but also in terms of file browsing, configuration and desktop presentation. Cinnamon 1.6 comes with tight integration for Nemo and a brand new backgrounds selection screen."
FFADO 2.1.0 released
FFADO is a set of user-space drivers for FireWire-attached audio devices. The newly-announced 2.1.0 release adds better support for the current Linux FireWire stack, better JACK integration, udev support, and, of course, support for a lot of new devices.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (September 18)
- What's cooking in git.git (September 14)
- What's cooking in git.git (September 17)
- Mozilla Hacks Weekly (September 13)
- OpenStack Community Newsletter (September 14)
- Perl Weekly (September 17)
- PostgreSQL Weekly News (September 16)
- Ruby Weekly (September 13)
Experimental animation and video techniques in Linux (Linux Journal)
The Linux Journal reviews various interesting video animation tools. "So, why tell you about a bizarre-looking application that hasn't been updated in years when there are plenty of other video editors for Linux? Well, for all ZS4's graphical quirks, it can accomplish some very interesting compositing effects."
An update on the KDE-powered Vivaldi tablet
Aaron Seigo posted an update on the Vivaldi tablet project, with good news on the long search for a hardware provider that will comply with the GPL, despite several setbacks. "I dug more and the combination of seeing that the tablet space is still very much open (with all sorts of speculation about Android's position in it, Apple's ability to keep a death grip on the space, the rise of Amazons and others, etc..) and the reaffirmation that if we don't make open devices who will, I feel it is more important than ever to keep going."
Pau Garcia i Quiles has a different take on the news, asking why KDE needs to base the product on an alternative Linux stack at all — when it could be adapted to run directly on Android instead.
Refactoring KWin's OpenGL support
Martin Graesslin has posted a detailed account of his recent work on KWin's OpenGL compositor, including separating OpenGL 1.0 and OpenGL 2.0 support, and a refactor that can use EGL as the backend on X, rather than GLX. "The change also nicely grouped all the OpenGL 1 code into one area which will be easier to remove once we decide that the End of Life is reached. Removing code is not as easy as one might expect and that is actually quite a project."
Page editor: Nathan Willis
Announcements
Brief items
The Linux Foundation's Automotive Grade Linux workgroup
The Linux Foundation has announced a workgroup to support the development of Linux-based automotive solutions; initial members include Jaguar Land Rover, Nissan, and Toyota, along with a number of tech companies. "The Automotive Grade Linux Workgroup will work with the Tizen project as the reference distribution optimized for a broad set of automotive applications ranging from Instrumentation Cluster to In-Vehicle-Infotainment (IVI) and more. The Linux Foundation will host this effort, providing a neutral environment for collaboration among the Linux kernel community, other open source software communities and the automotive industry."
LibreOffice Localization Program in Saudi Arabia
The Document Foundation and the National Program for Free and Open Source Software Technologies (Motah) at King Abdulaziz City for Science and Technology (KACST) in Saudi Arabia are working together to enhance the Arabic language support in LibreOffice. "Motah LibreOffice Project is one of the activities of Motah program at KACST, where several software products in various fields are studied to explore the extent of Arabic support and their suitability to the needs of Arab users. Thereafter, Motah team will work at improving the selected software products to meet those needs and requirements. LibreOffice was selected to be the first localization project because of its importance as an office suite whose functions are needed by all computer users."
LPC 2012 slides, notes, and videos
The Linux Plumbers Conference organizers have made slides, notes, and videos from the 2012 event available.
An Open Letter To The OpenStack Community
Rackspace has announced that it is handing over the project and all its assets to the OpenStack Foundation. "When we launched OpenStack, the goal was to build a broad community of developers, users, companies and other organizations that together would drive the vision of an open and ubiquitous platform for public and private clouds. We’ve accomplished that and could not be more excited about the work we have all done together. In just two years the OpenStack community has flourished beyond our expectations – it now comprises almost 6,000 individual contributors from 88 countries and nearly 200 companies, including many of the giants of enterprise IT. The community has generated more than half a million lines of code and the OpenStack software has been downloaded more than 300,000 times from the central code repositories alone. OpenStack now powers some incredible cloud environments, including our own public cloud here at Rackspace as well as our private cloud software offering."
OpenStreetMap license change completes
The license change for the OpenStreetMap database has been a long affair — LWN first covered the discussion in 2008. As of September 12, though, the transition has been made; the OpenStreetMap database is now covered by the Open Database License. See the project's Legal FAQ for more information.
Rackspace sued for hosting GitHub
Personalweb Technologies and Level 3 Communications have filed a lawsuit [PDF] against Rackspace, alleging that Rackspace's hosting of GitHub infringes upon a long list of software patents.
Southampton engineers a Raspberry Pi Supercomputer
The University of Southampton (UK) has put out a press release about a 64-node supercomputer recently built out of Raspberry Pi systems. In addition, the racks were built out of Lego. "Professor [Simon] Cox adds: 'The first test we ran – well obviously we calculated Pi on the Raspberry Pi using MPI, which is a well-known first test for any new supercomputer.' [...] 'The team wants to see this low-cost system as a starting point to inspire and enable students to apply high-performance computing and data handling to tackle complex engineering and scientific challenges as part of our on-going outreach activities.'" There is a guide to building your own, as well as a page for pictures of the supercomputer.
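The "calculate Pi" first test Professor Cox mentions is a nod to the classic MPI example program (cpi from the MPICH distribution), which numerically integrates 4/(1+x²) over [0,1] — an integral that equals π. As a rough illustration only, here is a serial Python sketch of that integration; in the MPI version, each rank would sum every size-th slice and the partial sums would be combined with a reduce operation:

```python
# Serial sketch of the classic "compute pi" MPI first-test program:
# midpoint-rule integration of 4/(1+x^2) over [0,1], which equals pi.
# In an MPI port, rank r of `size` ranks would handle slices
# i = r, r+size, r+2*size, ... and the totals would be combined
# with MPI_Reduce; here the whole loop runs in one process.

def estimate_pi(n=100_000):
    """Approximate pi with n midpoint-rule slices."""
    h = 1.0 / n                       # width of each slice
    total = 0.0
    for i in range(n):
        x = h * (i + 0.5)             # midpoint of slice i
        total += 4.0 / (1.0 + x * x)  # integrand 4/(1+x^2)
    return h * total

print(round(estimate_pi(), 6))        # prints 3.141593
```

With 100,000 slices the midpoint rule is accurate to far better than six decimal places, which is why the same tiny program makes a handy smoke test for a new cluster.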
Articles of interest
The SFLC's guide on managing copyright information
The Software Freedom Law Center has released a guide to managing copyright information in a free software project. "Nearly every free software project includes two types of legal information with its source code: the license(s) of the code and the copyright notices of contributing authors. As important as it is to maintain this information, it’s not always easy in a collaborative development process. With multiple developers contributing regularly to the code, some information can be left out or buried, and other information can be retained after it is no longer accurate. We wrote this guide to help projects minimize the problems and improve the usefulness of their legal information. It explains the purpose of license and copyright notices, the legal requirements behind them, and the current best practices for maintaining this information in a free software project’s source distribution."
Intel declares Clover Trail Atom processor a "no Linux" zone (ars technica)
Ars technica reports that the upcoming Intel "Clover Trail" processor will be for Windows only. "To achieve that, Intel worked closely with Microsoft to instrument the chip to allow Windows 8 to control Clover Trail's advanced power management features, which support what [Intel VP David] Perlmutter called 'always-on' functionality. It's that special sauce in Clover Trail that won't be supported for other operating systems, including Linux, likely in part because of Intel’s desire to keep those features close to the vest—and because of contractual obligations to Microsoft."
Intel's new Clover Trail chip will support Android and Linux (ZDNet)
ZDNet reports that Intel is working on a Clover Trail Atom processor for Linux. "We now know that Intel will officially support the popular open-source operating systems on the Clover Trail family as well. In an e-mail from an Intel spokesperson, Intel said, "Intel has plans for another version of this platform directed at Linux/Android; however we are not commenting on the platform specifics or market segments at this time. Stay tuned.”"
Upcoming Events
Schedule for Bootstrapping Awesome
Bootstrapping Awesome combines the Gentoo Miniconf, LinuxDays, and the openSUSE Conference together in Prague, Czech Republic in October 2012. The schedule is available.
PyCon Finland 2012 registration is open
Registration is open for PyCon Finland, which takes place October 22-23 in Otaniemi, Finland. "This year we have Python talks ranging from hardcore Python development, visualization and data processing to choosing Python as the language of education and extending Python to support static typing."
PyCon Argentina 2012
PyConAR will take place November 12-17, 2012 in Buenos Aires, Argentina. The schedule is available and registration is open. "Sessions will include two days of talks from local and international renowned experts, with inaugural Science Track and “extreme” talks, preceded of one day of tutorial/workshops and three days of sprints meetings. Exhibition hall will include posters and community booths (SOLAR free software civil association, FACTTIC federation of IT cooperatives, Mozilla-Ar, Ubuntu-Ar, PostgreSQL, Hacklabs and “Programming with robots” project from LINTI UNLP)."
LCA2013 Programming Miniconfs Announced
More linux.conf.au (LCA) miniconfs have been announced. The programming miniconfs will look at Open Programming, Developer Automation and Continuous Integration, and the Browser. LCA will take place January 28-February 2, 2013 in Canberra, Australia.
LCA2013 Announces Conference Schedule
The schedule is available for linux.conf.au (LCA) which takes place January 28-February 2, 2013 in Canberra, Australia. "The conference will feature six streams of talks across five days. The first two days will be for miniconferences, with the rest of the conference dedicated to eighty-four talks and six tutorials, on topics ranging from software engineering to systems administration. This year, there is a heavy focus on deep technical content, including many talks on the Linux kernel, and various hardware platforms. The conference also boasts four keynotes from pivotal industry figures, which will be announced in the next few months."
Events: September 20, 2012 to November 19, 2012
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 14-21 | Debian FTPMaster sprint | Fulda, Germany |
| September 17-20 | SNIA Storage Developers' Conference | Santa Clara, CA, USA |
| September 18-21 | SUSECon | Orlando, FL, USA |
| September 19-20 | Automotive Linux Summit 2012 | Gaydon/Warwickshire, UK |
| September 19-21 | 2012 X.Org Developer Conference | Nürnberg, Germany |
| September 21 | Kernel Recipes | Paris, France |
| September 21-23 | openSUSE Summit | Orlando, FL, USA |
| September 24-25 | OpenCms Days | Cologne, Germany |
| September 24-27 | GNU Radio Conference | Atlanta, GA, USA |
| September 27-29 | YAPC::Asia | Tokyo, Japan |
| September 27-28 | PuppetConf | San Francisco, CA, USA |
| September 28-30 | Ohio LinuxFest 2012 | Columbus, OH, USA |
| September 28-30 | PyCon India 2012 | Bengaluru, India |
| September 28-October 1 | PyCon UK 2012 | Coventry, West Midlands, UK |
| September 28 | LPI Forum | Warsaw, Poland |
| October 2-4 | Velocity Europe | London, England |
| October 4-5 | PyCon South Africa 2012 | Cape Town, South Africa |
| October 5-6 | T3CON12 | Stuttgart, Germany |
| October 6-8 | GNOME Boston Summit 2012 | Cambridge, MA, USA |
| October 11-12 | Korea Linux Forum 2012 | Seoul, South Korea |
| October 12-13 | Open Source Developer's Conference / France | Paris, France |
| October 13-14 | Debian BSP in Alcester | Alcester, Warwickshire, UK |
| October 13-14 | PyCon Ireland 2012 | Dublin, Ireland |
| October 13-15 | FUDCon: Paris 2012 | Paris, France |
| October 13 | 2012 Columbus Code Camp | Columbus, OH, USA |
| October 13-14 | Debian Bug Squashing Party in Utrecht | Utrecht, Netherlands |
| October 15-18 | OpenStack Summit | San Diego, CA, USA |
| October 15-18 | Linux Driver Verification Workshop | Amirandes, Heraklion, Crete |
| October 17-19 | LibreOffice Conference | Berlin, Germany |
| October 17-19 | MonkeySpace | Boston, MA, USA |
| October 18-20 | 14th Real Time Linux Workshop | Chapel Hill, NC, USA |
| October 20-21 | PyCon Ukraine 2012 | Kyiv, Ukraine |
| October 20-21 | Gentoo miniconf | Prague, Czech Republic |
| October 20-21 | PyCarolinas 2012 | Chapel Hill, NC, USA |
| October 20-23 | openSUSE Conference 2012 | Prague, Czech Republic |
| October 20-21 | LinuxDays | Prague, Czech Republic |
| October 22-23 | PyCon Finland 2012 | Espoo, Finland |
| October 23-25 | Hack.lu | Dommeldange, Luxembourg |
| October 23-26 | PostgreSQL Conference Europe | Prague, Czech Republic |
| October 25-26 | Droidcon London | London, UK |
| October 26-27 | Firebird Conference 2012 | Luxembourg, Luxembourg |
| October 26-28 | PyData NYC 2012 | New York City, NY, USA |
| October 27 | Central PA Open Source Conference | Harrisburg, PA, USA |
| October 27-28 | Technical Dutch Open Source Event | Eindhoven, Netherlands |
| October 27 | pyArkansas 2012 | Conway, AR, USA |
| October 27 | Linux Day 2012 | Hundreds of cities, Italy |
| October 29-November 3 | PyCon DE 2012 | Leipzig, Germany |
| October 29-November 2 | Linaro Connect | Copenhagen, Denmark |
| October 29-November 1 | Ubuntu Developer Summit - R | Copenhagen, Denmark |
| October 30 | Ubuntu Enterprise Summit | Copenhagen, Denmark |
| November 3-4 | OpenFest 2012 | Sofia, Bulgaria |
| November 3-4 | MeetBSD California 2012 | Sunnyvale, CA, USA |
| November 5-7 | Embedded Linux Conference Europe | Barcelona, Spain |
| November 5-7 | LinuxCon Europe | Barcelona, Spain |
| November 5-9 | Apache OpenOffice Conference-Within-a-Conference | Sinsheim, Germany |
| November 5-8 | ApacheCon Europe 2012 | Sinsheim, Germany |
| November 7-9 | KVM Forum and oVirt Workshop Europe 2012 | Barcelona, Spain |
| November 7-8 | LLVM Developers' Meeting | San Jose, CA, USA |
| November 8 | NLUUG Fall Conference 2012 | ReeHorst in Ede, Netherlands |
| November 9-11 | Free Society Conference and Nordic Summit | Göteborg, Sweden |
| November 9-11 | Mozilla Festival | London, England |
| November 9-11 | Python Conference - Canada | Toronto, ON, Canada |
| November 10-16 | SC12 | Salt Lake City, UT, USA |
| November 12-16 | 19th Annual Tcl/Tk Conference | Chicago, IL, USA |
| November 12-17 | PyCon Argentina 2012 | Buenos Aires, Argentina |
| November 12-14 | Qt Developers Days | Berlin, Germany |
| November 16-19 | Linux Color Management Hackfest 2012 | Brno, Czech Republic |
| November 16 | PyHPC 2012 | Salt Lake City, UT, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
