One of the best things about large conferences like LinuxCon is that
the extensive program draws in speakers from outside the core Linux
and free software ecosystem. Such was the case at the North American
edition of LinuxCon 2012 in August, which featured a presentation from
an aerospace engineer at Space Exploration Technologies (SpaceX). The
company is evidently using Linux extensively on the ground and in its
launches, although it provided frustratingly little detail.
Beyond the cloud (literally)
The speaker, Jim Gruen, is a relatively new hire at SpaceX, working on
flight software. He started off by explaining exactly what the
company does and how it operates. Its long-term mission is to make
space flight as routine as air travel; for the near-term that means
competing for contracts from NASA for the space agency's private space
flight programs, such as the Commercial Orbital Transportation
Services (COTS) program and the Commercial Crew Development (CCDev)
program. Each program incorporates several rounds of proposals and
milestones overseen by NASA, ultimately ending in a flight mission
with specific objectives. SpaceX has flown two successful NASA
contract missions. COTS Demo Flight 1 (C1), in 2010, sent the
company's unmanned "Dragon" capsule through two Earth orbits then
splashed it down into the ocean. COTS Demo Flight 2/3 (C2+), which followed
in May 2012, combined the COTS 2 objective of rendezvousing with the
International Space Station (ISS) with the COTS 3 objective of docking
with the ISS.
Although that slate of projects would certainly qualify as
interesting stuff in just about anyone's book, Gruen's explanation of
SpaceX's operations was intriguing as well. The company believes
strongly in vertical integration as a cost-cutting measure, to the
point where it manufactures in-house 80% of what it puts into space.
It buys raw metal and manufactures the parts for its rockets and
capsules, and it designs, prototypes, and produces its own computers,
circuit boards, and even chips. The goal of this approach, he said,
is to have everyone working in the same building, and enable them to
try new experiments very quickly.
With that background material out of the way, he explained how the
company uses Linux. For starters, space flight generates an enormous
amount of data, including flight telemetry, traffic between ground
stations, media feeds from the spacecraft, and so on. Streaming,
storage, and analysis of this data is done on Linux — though it is not a
task set unique to SpaceX or to space flight, he admitted.
Gruen's team works on avionics, the navigation and control systems on
the company's spacecraft. The team is responsible for the complete
life cycle and operation of the equipment, he said: board bring-up,
bootloading, hardware initialization, straight on up through the
user-space software. The company's C1 flight was a proof-of-concept
run for its Dragon capsule design, and on that mission it did not run
Linux. However, the C2+ model and subsequent revisions do run Linux.
This does not mean that Linux is merely running on an auxiliary computer, he
emphasized: Dragon's core systems are Linux, a custom in-house
distribution using the U-Boot bootloader with C++ code running on
top. Linux handles the triply-redundant avionics system, the
thrusters, and even the pyrotechnics (which in space-faring lingo
refers to the explosive charges used to do things like deploy parachutes
for re-entry). He also showed images from the C2+ mission's ISS
docking procedure, which used computer vision software running on Linux to
locate the docking port and align the spacecraft with the station.
Gruen's overview of the Dragon vehicle and its use of Linux was
interesting, to be sure. Unfortunately, the overview was more or less
the full extent of the detail available. He was accompanied by
representatives of SpaceX who sat in the front row and who would not
allow him to go into any specifics about the hardware or software of
the system, nor to take questions from the audience. The room was
packed to overflowing, and the session let out with plenty of time
still on the clock.
Gruen attributed the restrictions on his subject matter to the US
State Department, which he said classified SpaceX's business as
"building dangerous weapons." Consequently, he expressed
his excitement to be giving the talk, but added that he was
"going to present as much as I can without breaking any laws and
going to jail." That is certainly an unenviable bind to be in,
but the upshot was that the audience learned little
about SpaceX's Linux systems — and about the challenges or
discoveries its developers have encountered along the way.
What makes that limitation puzzling is that so many Linux developers
were in the audience for the session — Gruen commented
more than once that there was code running on Dragon that had been
written by people there in the room. In fact, Linux is so widespread
in the scientific community that it would have been a surprise to hear
that Linux was not the platform of choice. After all, Linux
has been trustworthy enough to run nuclear weapons simulations for the
US Department of Energy for years, and reliable enough to run medical
devices; it is not a big stretch to hear that it runs on an orbital
capsule as well.
It was unclear how much of SpaceX's taciturnity was due to
government regulation and how much was by choice. SpaceX is in a
highly competitive business, to be sure, and has the right to work in
private, but it seems a bit implausible to argue that how the company
uses upstream code like Linux constitutes a trade secret. Is there any
credible chance that a competitor such as Orbital Sciences is running
Windows on its spacecraft and has something substantial to gain from
hearing that SpaceX sees better performance from Linux's scheduler, or
which GRUB limitations made U-Boot the bootloader of choice?
SpaceX's reluctance to discuss details stands out, because
attendees heard several other talks about Linux in high-security
and scientific environments just days earlier. For example, Kipp Cannon of
the Laser Interferometer Gravitational-Wave Observatory (LIGO)
collaboration spoke at GStreamer Conference about his group's
use of Linux to capture and analyze live laser interferometry data from
gravitational-wave detectors. Cannon's group uses some of the largest
GStreamer pipelines ever built, running on massive machine clusters, to
process LIGO signals fast enough to recognize candidate events in time for
astronomers to aim telescopes at them before the events end. Certainly getting to and
docking with ISS is a tremendous technical challenge, but it is not
a drastically bigger challenge than the real-time
detection of gravitational waves from black hole collisions in distant
galaxies. LIGO is a collaborative effort, but it
too has fierce competition from other experiments, both for funding
and for results.
As for the security factor, the implication was that SpaceX's work
is regulated by the US Government, although it is not clear why that is the
State Department's purview. But the GStreamer Conference also had a
presentation from researchers at the US Department of Defense's Night
Vision and Electronic Sensors Directorate (NVESD), which uses Linux and open
source software to calibrate and test night-vision equipment and
create new algorithms for combining multiple sensors' media streams
into usable displays. They made it quite clear that the algorithms
they develop are classified, while still explaining how they used
GStreamer and other open source software, and even contributed code
back upstream. Like NVESD, SpaceX's core projects might be confidential,
but the software engineering problems that constitute the
daily grind are likely familiar to developers everywhere.
That is probably the main point. I am not particularly interested in
spacecraft avionics or infrared sensor algorithms, but it would have
made for a more interesting LinuxCon session if SpaceX had talked
about some of the challenges or design decisions it has faced in its
software program, and how it overcame them. For example, Gruen
mentioned that the company uses the kernel's soft real-time support.
It would be interesting to hear why Dragon does not use hard
real-time — which seems at first glance like a plausible
requirement. It would even be worthwhile to hear the story if the
solution was to ditch a standard Linux component and write an in-house
replacement. Consider the space capsule's storage system, which
surely has high reliability and fail-over requirements. There are
plenty of computing environments with demanding specifications;
hearing how various Linux filesystems fared — even if those that do well
in other high-performance applications were not up to snuff on Dragon — would have been instructive.
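The soft real-time support Gruen mentioned is reachable from ordinary user
space through the POSIX scheduling classes. The sketch below (Python; pure
speculation about the kind of facility involved, since nothing is known
about Dragon's actual software) shows a process requesting SCHED_FIFO
scheduling, which grants strict priority over normal tasks but, on a stock
kernel, still no hard latency guarantee:

```python
import os

# Speculative sketch: requesting soft real-time scheduling on stock Linux.
# Nothing here is known about Dragon's actual software.
def request_soft_realtime(priority=50):
    """Put the current process in the SCHED_FIFO class if possible."""
    if not hasattr(os, "sched_setscheduler"):
        return False  # not running on Linux
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        # Strict priority over SCHED_OTHER tasks from here on, but the
        # mainline kernel still offers no hard worst-case latency bound.
        return True
    except PermissionError:
        return False  # requires root or CAP_SYS_NICE

result = request_soft_realtime()
assert isinstance(result, bool)
```

A hard real-time guarantee would require something beyond this, such as the
PREEMPT_RT patch set or a dedicated RTOS, which is what makes the "why not
hard real-time" question an interesting one.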
But in the long run there are more important factors than a single
interesting talk. Any company can choose to isolate its internal code
from the upstream projects on which it relies; the downside is that doing
so increases its own development costs over time. It must either expend
resources maintaining internal forks of the software it branches
(back-porting important features and bug fixes from newer releases), or
periodically rebase and re-apply its own patch sets. Both options grow
more expensive — in time required and complexity involved — the longer a
company commits to them.
Google has walked this path in years past. As we covered in 2009, historically the company
maintained its own internal kernel code, rebasing every 17 months.
The resulting maintenance effort included merging and debugging its
own feature patches, plus backporting hundreds of features from newer
upstream releases. Google had its own reasons for not upstreaming its kernel
work, including reluctance to share what it regarded as patches of no
use to others, but eventually it found the maintenance headaches too
painful and modified its kernel development process.
Interestingly enough, the NVESD speakers commented that the DOD
greatly prefers its developers to send their patches back to upstream
projects — including, in this case, GStreamer — rather
than to start their own forks and repositories (and subsequently
maintain them). The SpaceX talk mentioned that the Dragon missions
generate an enormous amount of video data, but did not go into detail about the
software the company uses to stream or analyze it. If it uses
GStreamer for the task (which is certainly a possibility), consider
how much it stands to gain by interacting in the open with other
industrial-sized GStreamer users like NVESD and LIGO — and vice versa.
Perhaps the State Department is simply more secretive than the DOD,
but my suspicion is that SpaceX plays it close to the vest largely out of
the natural tendency of companies to keep their work private (particularly
companies that place a high value on vertical integration).
Almost every company experiences some reluctance when first dipping its
toes in open source waters. Indeed, coming to LinuxCon was a good
first step for SpaceX. Perhaps it will take a page from
its clients at NASA and open up more,
particularly where upstream projects like Linux are involved. After
all, Gruen's talk was informative and entertaining, and it was nice to
hear that Linux has proven itself to be a valuable component in the nascent space
flight industry. One merely hopes that next year the company will
come back to LinuxCon and engage a little more with the rest of the
free software community.
Comments (35 posted)
One of the (many) classic essays in Frederick Brooks's The Mythical
Man-Month is titled "Plan to throw one away." Our first solution to
a complex software development problem, he says, is not going to be fit for
purpose. So we will end up dumping it and starting over. The free
software development process is, as a whole, pretty good at the "throwing
it away" task; some would argue that we're too
good at it. But
there are times when throwing one away is hard; the current discussion
around control groups in the kernel shows how that situation can come about.
What Brooks actually said (in the original edition) was:
In most projects, the first system built is barely usable. It may
be too slow, too big, awkward to use, or all three. There is no
alternative but to start again, smarting but smarter, and build a
redesigned version in which these problems are solved. The discard
and redesign may be done in one lump, or it may be done
piece-by-piece. But all large-system experience shows that it will be done.
One could argue that free software development has taken this advice to
heart. In most projects of any significant size, proposed changes are subjected
to multiple rounds of review, testing, and improvement. Often, a
significant patch set will go through enough fundamental changes that
it bears little resemblance to its initial version. In cases like this,
the new subsystem has, in a sense, been thrown away and redesigned.
In some cases it's even more explicit. The 2.2 kernel, initially, lacked
support for an up-and-coming new bus called USB. Quite a bit of work had
gone into the development of a candidate USB subsystem which, most people
assumed, would be merged sometime soon. Instead, in May 1999, Linus
looked at the code and decided to start over; the 2.2.7 kernel included a
shiny new USB subsystem that nobody had ever seen before. That code
incorporated lessons learned from the earlier attempts and was a better
solution — but even that version was eventually thrown away and replaced.
Brooks talks about the need for "pilot plant" implementations to turn up
the problems in the initial implementation. Arguably we have those in the
form of testing releases, development trees, and, perhaps most usefully,
early patches shipped by distributors. As our ability to test for
performance regressions grows, we should be able to do much of our
throwing-away before problems in early implementations are inflicted upon
users. For example, the 3.6 kernel was able to avoid a 20% regression in
PostgreSQL performance thanks to pre-release testing.
But there are times when the problem is so large and so poorly understood
that the only way to gain successful "pilot plant" experience is to ship
the best implementation we can come up with and hope that things can be
fixed up later. As long as the problems are internal, this fixing can
often be done without creating trouble for users. Indeed, the history of
most software projects (free and otherwise) can be seen as an exercise in
shipping inferior code, then reimplementing things to be slightly less
inferior and starting over again. The Linux systems we run today, in many
ways, look like those of ten years or so ago, but a great deal of code was
replaced in the time between when those systems were shipped.
But what happens when the API design is part of the problem? User
interfaces are hard to design and, when they turn out to be wrong, they can
be hard to fix. It turns out that users don't like it when things change
on them; they like it even less if their programs and scripts break in
the process. As a result, developers at all levels of the stack work hard to
avoid the creation of incompatible changes at the user-visible levels. It
is usually better to live with one's mistakes than to push the cost of
fixing them onto the user community.
Sometimes, though, those mistakes are an impediment to the creation of a
proper solution. As an example, consider the control groups (cgroups) mechanism
within the kernel. Control groups were first added to the 2.6.24 kernel
(January, 2008) as a piece of
the solution to the "containers" problem; indeed, they were initially
called "process containers." They have since become one of the most deeply
maligned parts of the kernel, to the point that some developers routinely
threaten to rip them out when nobody is looking. But the functionality
provided by control groups is useful and increasingly necessary, so it's
not surprising that developers are trying to identify and fix the problems
that have been exposed in the current ("pilot") control group implementation.
As can be seen in this cgroup TODO list
posted by Tejun Heo, a lot of those problems are internal in nature. Fixing
them will require a lot of changes to kernel code, but users should not
notice that anything has changed at all. But there are some issues that
cannot be hidden from users. In particular: (1) the cgroup design
allows for multiple hierarchies, with different controllers (modules that
apply policies to groups) working with possibly different views of the
process tree, and (2) the implementation of process hierarchies is
inconsistent from one controller to the next.
Multiple hierarchies seemed like an interesting feature at the outset; why
should the CPU usage controller be forced to work with the same view of the
process tree as, say, the memory usage controller? But the result is a
more complicated implementation that makes it nearly impossible for
controllers to coordinate with each other. The block I/O bandwidth
controller and the memory usage controller really need to share a view of
which control group "owns" each page in the system, but that cannot be done
if those two controllers are working with separate trees of control
groups. The hierarchy implementation issues also make coordination
difficult while greatly complicating the lives of system administrators
who need to try to figure out what behavior is actually implemented by each
controller. It is a mess that leads to inefficient implementations and
administrative confusion.
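The ownership problem is easy to model. In this toy Python sketch
(illustrative only, not kernel code; the process and group names are
invented), the memory and block I/O controllers each keep their own
independent mapping from processes to control groups, so there is no single
answer to which group should be charged for a page:

```python
# Toy model of cgroup multiple hierarchies (illustrative only; all
# process and group names are invented).  Each controller keeps its own,
# independent tree of control groups.
memcg_groups = {"httpd": "/web", "backup": "/batch"}          # memory view
blkio_groups = {"httpd": "/frontend", "backup": "/frontend"}  # block I/O view

def owners_of_page(process):
    """Ask both controllers which group owns a page dirtied by a process."""
    return memcg_groups[process], blkio_groups[process]

mem_group, io_group = owners_of_page("httpd")
# memcg would charge /web for the page while blkio throttles /frontend:
# with separate trees there is no shared notion of ownership, which is
# exactly the coordination a single unified hierarchy would provide.
assert (mem_group, io_group) == ("/web", "/frontend")
```

With a single hierarchy, both lookups would go through one tree and agree
by construction; that is the user-visible change being contemplated.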
How does one fix a problem like this? The obvious answer is to force the
use of a single control group hierarchy and to fix the controllers to
implement their policies over hierarchies in a consistent manner. But both
of those are significant, user-visible API and behavioral changes. And,
once again, a user whose system has just broken tends to be less than
appreciative of how much better the implementation is.
In the past, operating system vendors have often had to face issues like
this. They have responded by saving up all the big changes for a major
system update; users learned to expect things to break over such updates.
Perhaps the definitive example was the transition from "Solaris 1"
(usually known as SunOS 4 in those days) to Solaris 2, which switched
the entire system from a BSD-derived base to one from AT&T Unix. Needless
to say, lots of things broke in the process. In the
Linux world, this kind of transition still happens with enterprise
distributions; RHEL7 will have a great many incompatible changes from
RHEL6. But community distributions tend not to work that way.
More to the point, the components that make up a distribution are typically
not managed that way. Nobody in the kernel community wants to go back to
the pre-2.6 days when major features only got to users after a multi-year
delay. So, if problems like those described above are going to be fixed in
the kernel, the kernel developers will have to figure out a way to do it in
the regular, approximately 80-day development cycle.
In this case, the plan seems to be to prod users with warnings of upcoming
changes while trying to determine if anybody really has difficulties with
them. So, systems where multiple cgroup hierarchies are in use will emit
warnings to the effect that the feature is deprecated and inviting email
from anybody who objects. Similar warnings will be put into specific
controllers whose behavior is expected to change. Consider the memory
controller; as Tejun put it: "memcg asked itself the existential
question of to be hierarchical or not and then got confused and decided to
become both." The plan is to get distributors to carry a patch warning users of the non-hierarchical
mode and asking them to make their needs known if the change will truly be
a problem for them. In a sense, the distributors are being asked to run a
pilot for the new cgroup API.
It is possible that the community got lucky this time around; the features
that need to be removed or changed are not likely to be heavily used. In
other cases, there is simply no alternative to retaining the older,
mistaken design; the telldir() interface, which imposes heavy
implementation costs on filesystems, is a good example. We can never
preserve our ability to "throw one away" in all situations. But, as a
whole, the free software community has managed to incorporate Brooks's
advice nicely. We throw away huge quantities of code all the time, and we
are better off for it.
Comments (30 posted)
The quest for a free-software accounting system suitable for a business
like LWN continues; readers who have not been following this story so far
may want to look at the previous installments: the problem statement
and a look at Ledger. This time around, your editor will be evaluating
PostBooks, a system that differs from Ledger in almost every way.
PostBooks, as it turns out, is not without its problems, but it might just
be a viable solution to the problem.
PostBooks has been around as a commercial project since 2000 or so; it made
the shift to a free software project in 2007. It is, however, a classic
example of a corporate-controlled project, with the corporation in this
case being a company called xTuple. The license is the "badgeware"
Common Public Attribution License
(CPAL), which requires the acknowledgment of the "original contributor" on
splash screens and similar places. The CPAL is recognized as an open source
license by the Open Source Initiative, but its attribution requirements are
not popular with all users. The CPAL has not taken the world by storm; it
has shown up in a few business-oriented projects like PostBooks, though.
Additionally, PostBooks is a project in the "open core" model: the core
software is open source, but certain types of functionality are reserved
for proprietary enhanced versions requiring payment and annual support
fees. See the xTuple ERP
editions comparison page for an overview of which features can be found
in which versions. One need not look long on the net to find users
complaining that one must buy a proprietary version to reach the necessary
level of functionality, but your editor's impression is that the free
edition should be sufficient for a wide range of companies.
At a first glance, the PostBooks development "community" reinforces the
impression of a corporate-controlled project. There are no development
mailing lists, for example. The source repository lives on SourceForge;
a look at the revision history shows a slow (but steady) trickle of changes
from a handful of developers. The developer documentation says that
"The majority of features added to the core are added as a result of
sponsorship," but also suggests that outside developers could be
given direct commit access to the repository. One has to assume that
attempts to add features found only in the proprietary versions would not
be welcome.
PostBooks is written in C++ with the Qt toolkit used for the graphical
interface. One result of this choice is that the code is quite portable;
the client can run on Linux, Windows, and Mac OS systems. All data
lives in a PostgreSQL database; among other things, that allows clients
distributed across a network to access a single database server. PostBooks
is an inherently multi-user system.
As far as your editor can tell, no even remotely mainstream Linux
distribution packages PostBooks, so users are on their own. Building the
tool from source is not a task for the faint of heart; the code itself
comes with no build instructions at all. Those instructions can be found
on the xtuple.org
web site; among other things, they recommend not using the versions of
Qt and PostgreSQL supplied by the distributor. Your editor's attempts to
build the system (ignoring that advice) did not get far and were not
pursued for all that long. One need not look for long to find similar
stories on the net.
What this means is that most users are likely to be stuck installing the
binary versions (32-bit only) shipped by xTuple itself. Downloading a script
from a company's web site and feeding it to a root shell is always a great
way to build confidence before entrusting one's accounting data to a new
application. The script offers to try to work with an existing PostgreSQL
installation, but your editor ran into trouble getting that to work and
ended up letting it install its own version. There are license acceptance
and registration screens to be gotten through; as a whole, it feels much
like installing a proprietary application.
One nice feature is the provision of some sample databases, allowing easy
experimentation with the software without having to enter a bunch of data first.
The initial PostBooks screen (shown on right) reinforces the "proprietary
software" feeling; it consists mostly of advertisements for xTuple products
and services. From there, though, it's one click to the top-level features
of the program, divided into relationship management, sales, purchasing,
accounting, and manufacturing. Finding one's way around the program takes
some time; there is a lot of functionality hidden away in various corners
and the correct way to get there is not always obvious. The tutorials provided by xTuple
(free online, but payment required for the PDF version) can be a good place
to start, though reading through them sequentially is a good idea. Important
details tend to be hidden in surprising places in a way that can frustrate
attempts to skip directly to the interesting parts.
Your editor will appreciate it if readers resist the urge to question the
concept of an accounting tutorial having interesting parts.
PostBooks has a number of features that may be of interest to certain types
of corporations — relationship management and materials tracking, for
example. For the purposes of this review, though, the main area of
interest is accounting. As would be expected, PostBooks implements
double-entry bookkeeping with various layers on top to support a set of
"standard" business processes. For users coming from a tool like
QuickBooks, the processes built into PostBooks may look bureaucratic and
arcane indeed. Tasks that seem like they should be simple can require a
long series of steps and screens to get through.
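Underneath all those layers, the double-entry core is a simple invariant:
every transaction is a set of postings that must sum to zero. A minimal
sketch of that invariant (Python, purely illustrative; this is not how
PostBooks actually stores its data):

```python
from collections import defaultdict

# Minimal double-entry sketch (illustrative, not PostBooks code): every
# transaction is a list of (account, amount) postings summing to zero.
class Ledger:
    def __init__(self):
        self.balances = defaultdict(int)  # amounts in cents

    def post(self, postings):
        """Apply a balanced transaction, or refuse it outright."""
        if sum(amount for _, amount in postings) != 0:
            raise ValueError("postings must balance")
        for account, amount in postings:
            self.balances[account] += amount

books = Ledger()
# Buying $25.00 of supplies with cash: debit one account, credit another.
books.post([("Expenses:Supplies", 2500), ("Assets:Cash", -2500)])
assert books.balances["Assets:Cash"] == -2500
# The books as a whole always sum to zero.
assert sum(books.balances.values()) == 0
```

Everything else — purchase orders, vouchers, check runs — is workflow
layered on top of transactions shaped like this.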
For example, a purchase in QuickBooks is most likely handled, after the
fact, by simply entering the bill from the supplier, then perhaps printing a
check. The PostBooks purchasing window (right) has rather more steps:
check for the desired item in inventory, enter a purchase request, generate
a purchase order, release the purchase order to the world, enter a bill,
generate a voucher for the bill, let the voucher age (a process well known
to — and detested by — anybody who has tried to get a large company to pay
a bill), enter a payment, set up a check run, and actually print a check.
All these steps exist because larger companies actually do things that way,
usually with different people handling different phases of the process.
Indeed, PostBooks has an elaborate roles mechanism that can be used to
limit users to specific steps in the chain.
In a small operation where a single person likely does everything, it can seem
like a lot of useless extra work.
The good news is that much of it can be bypassed if one knows how. A
"miscellaneous check" can be entered without going through the purchase
order mechanism at all; there are also "miscellaneous vouchers" for those
who want less hassle, but still want to require that two people are
involved in the process of spending the company's money. The sales side is
similar; one can go through a whole process of defining prospects,
generating sales orders, putting together bills of materials, invoicing,
and so on. Or one can just enter a payment by digging deeply enough in the
menus.
PostBooks has what appears to be a reasonably flexible report generation
subsystem, but, in the free edition at least, rather little use is made
of it. The set of reports available within the application is relatively
basic; it should cover what many companies need, but not much more.
PostBooks is not an application for those who cannot function without pie
charts.
Interestingly, the report generation subsystem would appear to be used for
related tasks like form and check printing. One of the many aspects of the xTuple
revenue model is xtupleforms.com,
where various types of forms, including checks, can be purchased for use
with PostBooks. Happily for an organization like LWN, the available forms
include tax forms, and the dreaded 1099 in particular. The selection is
small and US-centric, but, for some businesses, that's all that is needed.
In the case of checks, there is only one alternative: a
single-check-per-page format. Unlike Intuit, xTuple would not allow LWN to
put its penguin logo on its checks — a major shortcoming, in your editor's
estimation. It doesn't seem like multiple checks per page is a
possibility, which may explain why nobody has put together a format
description for checks from Intuit. As a whole, support for check printing
is minimal, but sufficient, especially in a world where (even in the US),
the use of paper checks is in decline.
As an aside, there is a surprising lack of resources or help for users
wanting to transition from systems like QuickBooks. One would think that
would be a promising source of new users; certainly there is no shortage of
disgruntled QuickBooks users in search of a different system. But tools to
extract data from QuickBooks and import it into PostBooks are not really to
be found. What little information on the subject
exists on the xTuple site dates from 2007. Evidently QuickBooks is such a
data trap that extracting vital information is not a job for the meek.
Speaking of data, one of the key problems for a business like LWN is
getting transaction data into the system. Our transactions tend to be
small, but we have a fair number of them (never enough, mind you); entering
them by hand is not really an option, even in a system with fewer steps
than PostBooks requires. That is, quite simply, the kind of job that makes
us willing to tolerate having computers around. So some way to get data
into the accounting system automatically is required.
Hypothetically, since PostBooks uses PostgreSQL for its data storage,
feeding new data should really just be a matter of writing a bit of SQL.
In practice, the PostBooks database schema has about 550 tables in it, with
all kinds of interactions between them. A person with enough interest and
ability could certainly figure out this schema and find a way to put more
information into the database without corrupting the works.
This is the point where your editor feels the need to remind you that LWN's
staff is dominated by kernel-oriented people. Charging such people with
that task could lead to some amusing results indeed, but our accountant is
not quite so easily amused.
The folks at xTuple seem to have recognized this problem, so they have put
together a somewhat simpler
means for programmatic access to the database. They call it their API,
but it really is a set of PostgreSQL functions and views that provides a
simplified version of the database. Programmers can write SQL to access a
view in a way that closely matches the windows in the interactive client,
and the functions behind those views will take care of the task of keeping
the database consistent. Your editor has not yet tried actually
programming to this "API," but it looks like it should be adequate to get
the job done.
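The idea is easy to demonstrate with SQLite standing in for PostgreSQL
(every table, view, and column name below is invented for illustration; the
real PostBooks schema is far larger): clients insert into a simplified
view, and a trigger behind the view keeps the underlying tables consistent,
much as the PostgreSQL functions behind xTuple's views do.

```python
import sqlite3

# Sketch of the "views as an API" idea, with SQLite standing in for
# PostgreSQL.  All names here are hypothetical, not the PostBooks schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE cust (cust_id INTEGER PRIMARY KEY, cust_name TEXT);
CREATE TABLE cust_addr (cust_id INTEGER, city TEXT);

-- The simplified, client-facing view.
CREATE VIEW api_customer AS
  SELECT c.cust_name AS name, a.city AS city
  FROM cust c JOIN cust_addr a USING (cust_id);

-- The logic that keeps the real tables consistent, analogous to the
-- functions behind xTuple's PostgreSQL views.
CREATE TRIGGER api_customer_ins INSTEAD OF INSERT ON api_customer
BEGIN
  INSERT INTO cust (cust_name) VALUES (NEW.name);
  INSERT INTO cust_addr (cust_id, city)
    VALUES (last_insert_rowid(), NEW.city);
END;
""")
# A client only ever touches the view; two tables get updated underneath.
db.execute("INSERT INTO api_customer (name, city) VALUES ('LWN', 'Boulder')")
row = db.execute("SELECT name, city FROM api_customer").fetchone()
assert row == ("LWN", "Boulder")
```

An import script for transaction data would work the same way: plain SQL
against a handful of views, with the consistency of the hundreds of
underlying tables handled on the server side.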
Readers who have made it all the way through this article will have noticed
that the first impressions from PostBooks were not all that great. And,
indeed, it still has the look of a mostly proprietary piece of software
that happens to have the source available. But, once one looks beyond the
first impressions, PostBooks looks like it might well be able to get the
job done.
What this program needs, arguably, is a fork in the go-oo style. This fork
would do its best to get all of its changes upstream, but would put effort
into making the system easier to build and package so that distributions
might start carrying it. In this way, the project might gain a bit more of
a free software feel while staying reasonably close to its corporate
overlords. But, of course, such a project requires sufficiently motivated
developers, and it's amazing how few free software developers find that
accounting systems are the itch they need to scratch.
Whether LWN will move over to PostBooks is not an answerable question at
this point. Further investigation — of both PostBooks and the alternatives
— is called for. But this first phase of research has not ruled it out.
PostBooks is not a perfect fit for what LWN needs, but that perfect fit
does not seem to exist anywhere. In this case, it may just be possible to
use PostBooks to get the job done. Stay tuned.
Comments (21 posted)
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: LSS: DNSSEC; New vulnerabilities in asterisk, chromium, dbus, kernel, ...
- Kernel: Integrity for directories and special files; Extensive memory management minisummit coverage.
- Distributions: Twin Peaks v. Red Hat; Fedora, Frugalware, Mandriva, ...
- Development: Keeping up with Kdenlive; Cinnamon 1.6; Vivaldi; KWin; ...
- Announcements: Automotive Grade Linux workgroup, LibreOffice Localization, LPC videos, OpenStack, OpenStreetMap, Raspberry Pi Supercomputer, ...