By Jake Edge
May 31, 2013
Linux Foundation executive director Jim Zemlin kicked off the Spring (at
least in the northern hemisphere) edition of the Automotive
Linux Summit with some advice for the automotive industry executives
and managers in the audience. There is a battle going on for developers and
other open-source-knowledgeable employees, and it is an important battle to
win. There is so much open source development going on
that talented people are being snapped up by the competition—not just
competitors in the automotive world, but mobile, web, cloud, and other
areas as well.
He started off by noting that a typical car now has more than 100 million
lines of code in it. Customer expectations are changing all aspects of
computing and automotive is just part of that trend. What people see in
their mobile phones, tablets, and other consumer electronic devices is
creating a demand that leads to a "huge increase in the amount of software
in vehicles". That's
why the industry is using open source: because it is an essential tool to
help meet those expectations.
But just using open source is not enough. In addition, the
automotive industry needs more software developers. More software means
more developers, he said. In Silicon Valley right now, the demand for
developers in
the consumer electronics, web, mobile, and cloud areas is huge; there is a
war for talent going on. In fact, top developers now have agents, much
like Hollywood actors.
Zemlin has learned some lessons about hiring the best programmers in the
world. The reason he knows something about it is that he is Linus
Torvalds's boss.
Torvalds is an incredible programmer who shares some traits with someone
else that Zemlin is the boss of: his daughter. Both are adorable, geniuses, and,
most importantly, neither of them "listen to anything I have to say", he
said to a round of laughter.
More seriously, in working with Torvalds he has learned some things about
how to work with the best programmers. There is a big difference between
the best programmer and an average one; Zemlin likened it to the difference
between a house painter and Picasso. The best programmers can do the work
of 100 average programmers, he said.
Five lessons
There are five lessons he has learned about hiring great programmers. The
first is to hire developers who play at work; programmers "who goof off". That
may sound crazy, but if you look back at Torvalds's original
email about Linux it says (paraphrased) "not doing anything big, just
something for fun". Torvalds created Linux because it was fun and he still
does it today because it continues to be fun. All of the best programmers
do their work because they find it fun.
Zemlin mentioned a study from a book called Drive (by Daniel Pink)
that looked at what motivates creative people. Since software is a
creative profession, it gives insights into our industry, he said. In the
study,
people were divided into three groups and paid at three different levels (a
lot, an average amount, and very little) for doing various tasks. For
manual tasks, like factory work, those who were paid the best did the best
work. But for creative tasks it was reversed: those who were paid the most
did the
worst. The study was originally run with students at MIT, but in case the
students were atypical, they ran the study again in India: same result.
The "best creative people are not motivated by money alone", Zemlin said.
Money is important, but it's not the only thing. What motivates great
programmers is to be in an environment where they have the opportunity to
"master their craft". Employers should be looking for people who are
driven to master the skill of software development, he said.
Hire people who want to give their software away was lesson number two. If
you really love what you do, you don't mind giving it away, he said. That
leads to an obvious question: how do you make money if you give your
software away? There are companies who have figured out how to make money
in new ways, which is different from the old way of "keeping everything
inside and
selling it". He put up a comparative stock chart from 2008 to 2012 for
three companies: Red Hat, IBM, and Microsoft. In that time, Red Hat has
doubled, IBM is up 85% and Microsoft is flat, so the one who barely
gives away any of its software is the one that has seen no gain in its share
price. Automotive companies are a lot like IBM, Zemlin said, they don't
need to make
money on the software, they can make it on products, services, and so on.
Lesson three is to hire developers who don't stick to a plan. He asked: is planning
important to the automotive industry? It is, of course, so he is not
saying to have no overall plan, but companies shouldn't try to centrally
plan everything,
he said. Torvalds controls what goes into Linux, but he doesn't have a plan
for what comes next. Without a plan, though, Linux seems to do well
enough, with
1.3 million Linux-based phones activated daily, 92% of the high performance
computing
market running Linux, 700,000 Linux-based televisions sold daily, nearly
all of the major stock exchanges and commodities markets running on Linux,
and on
and on.
In software development, it is good to let organic ideas grow, rather than
to try to plan everything out in advance. For example, an organic
community formed that cared about Linux battery life. That community
worked on fixing the power performance of Linux, which turned out to help
the high performance computing (HPC) community because most of the cost of
HPC is
power. So, without any kind of central planning, a problem was fixed that
helped more than just those originally interested in it.
"Hire jerks" is Zemlin's fourth lesson. Linux developers "can be kind of
difficult", he said, they will engage in flame wars over the code that is
submitted to the linux-kernel mailing list. That public criticism was a
problem for Japanese and other Asian developers when they first started
getting involved. But public criticism actually helps create better ideas,
he said.
A 2003 study done at the University of California, Berkeley looked at how
people create the best ideas. The participants were split into two groups,
and one was told to brainstorm about ideas. That meant that all ideas were
considered good ideas and that criticism was not allowed because it might
stop the flow of ideas. The other group was told to be critical, to debate
and argue about the ideas as they were raised. The group that used
criticism was eight times better than the other group; it created more
ideas, better ideas, and was far more successful than the brainstorming
group. That means "it's OK to be a jerk", Zemlin said, but don't go
overboard. The conclusion is that it is important to be critical of
others' ideas.
Zemlin's last lesson is that automotive companies should hire a person to
manage their
external research and development. They should borrow an idea from the
consumer electronics companies, Intel, IBM,
Red Hat, and others to have someone who helps determine their open source
strategy. That person would help decide which projects to participate in,
what efforts to fund, and so on. It would be a person who is familiar with
open source licenses, who knows how to hire open source developers, and is
knowledgeable about how open source works.
"Talent is what is going to make the difference", Zemlin said. Open source
is going to "help you compete", but automotive companies have to hire the
best software developers. The good news for the automotive industry is
that "everyone loves cars". The industry has a reputation for being "sexy"
and "exotic", so auto companies can "leverage that position to hire cool
developers". He concluded with an admonishment: software development is a
talent war; hire the best and you
will succeed, but if you don't, your competition certainly will.
[ I would like to thank the Linux Foundation for travel assistance so that I
could attend
the Automotive Linux Summit Spring and LinuxCon Japan. ]
Comments (16 posted)
Linus Torvalds and Dirk Hohndel sat down at LinuxCon Japan
2013 for a "fireside chat" (sans fire), ostensibly to discuss where
Linux is going. While they touched on that subject, the conversation was
wide-ranging over both Linux and non-Linux topics, from privacy to
diversity and from educational systems to how operating systems will look in
20-30 years. Some rather interesting questions—seemingly different
from those
that might be asked at a US or European conference—were asked along the
way.
Hohndel is the CTO of the Intel Open Source Technology Center, and Torvalds
"needs no introduction" as Linux Foundation executive director Jim Zemlin
said at the outset. Given Zemlin's comment, though, Hohndel asked how
Torvalds introduces himself: does he mention that he is a "benevolent
dictator", for example? But Torvalds said that in normal life, he doesn't
try to pull Linux into things, he just introduces himself as "Linus"
(interestingly, he pronounced it as lie-nus) and leaves it at that. He
has, he says, a regular life and doesn't get recognized in the
streets—something he seemed quite happy about.
Releases and merging
The 3.10-rc3 release was made just before
they left Portland for Tokyo, so Hohndel asked about where things are
heading and what can be expected in coming releases.
Torvalds said that the kernel release cycle has been "very stable" over the
last few years and that there is never a plan for which new features will
appear in a given release. Instead, he releases what is ready at the time the
cycle starts. People know that if they miss a particular release with
their new feature, ten weeks down the line there will be another
merge window for it to get added.
Most of the changes that go in these days are for new hardware, as the core
has stabilized, Torvalds said. The changes for new hardware come both in
drivers and in
support for new CPUs, particularly in the ARM world. Over the last few
years, there has been a lot of work to clean up the ARM tree, which was a
mess but has gotten "much better". These days, ARM and x86 are the two
architectures that get the most attention, but Linux supports twenty or so
architectures.
Noting that Torvalds had seemed a little more unhappy than usual recently,
Hohndel asked if that was caused by the number of patches he was merging;
the 3.10 merge window was the largest ever, he said, and the -rc3
announcement showed some displeasure with how things were going. Torvalds
said that the size of the merge window was not really a problem, unless the
code merged is "untested and flaky". It is a problem when there are a lot
of fixes
to the newly merged features that come in during the -rc releases. "I want
code to be ready" when its merge is requested. Given the ten-week release
schedule, there are only six or seven weeks to get everything to work, so
he is unhappy when people ask him to merge code that makes it harder to
converge on the final release. When that happens, it results in "a lot of
cursing", he said.
If he gets "too annoyed" at some subsystem or area of the kernel, Torvalds
sometimes resorts to refusing to pull code from the developer(s) in
question. It is "the only thing I can do", when things get to that point.
It is an indication that "you need to clean up your process because I don't
want the pain you are causing". Normally that happens in private with a
rejection of a pull request in a "try again next time" message, but
sometimes he does it publicly. His job is to integrate changes, so he
wants to say "yes", which makes it painful for both sides when he gets too
frustrated and has to say "no".
Diversity
Hohndel noted that Kernel Summit pictures tend to contain only white males, but
he thinks we are making some progress on making the kernel community more
representative of the world we live in; "is it improving?", he asked.
Torvalds said that he thinks it is improving, but that the Kernel Summit is
the "worst possible example" because it mostly represents those who have
been involved for 10-15 years. In the early days, Linux was mostly
developed in western Europe and the US, which makes the diversity at the
summit rather low.
Beyond geographic diversity, the number of women in the open source
software world is low, though Torvalds is not clear on why that is. It is
getting better through efforts by companies like Intel and organizations
like the Linux Foundation to help women get more involved so that the
community won't be so "one-sided". He noted that there were few Japanese
kernel developers when the first Japan Linux Symposiums started, but that
has now changed. Japan (and Asia in general) are much better represented
these days.
The first-time contributors to the kernel are more diverse than they were a
few years ago, Hohndel said, which is a step in the right direction. There
is a problem using that as a measure, though, Torvalds said, because it is
fairly easy to do one patch or a few trivial patches. Going from one patch
to ten patches is a big jump. There are a lot of people who do something
small, once, but taking the next step is hard. Something
like half of all the kernel contributors have only ever contributed one
patch. That is probably healthy overall, but looking at first-time
contributors may not be an indicator of the makeup of the actual
development community.
An audience member pointed out that in addition to the low numbers of women at
the conference, there was also a lack of college and high school
students. Torvalds said that he didn't find that too surprising, as even
those using or developing open source at school probably wouldn't attend a
conference like LinuxCon. There is definitely a need for more women
participating, though, so hopefully the outreach programs will help there.
Hohndel mentioned the Outreach
Program for Women, which has kernel
internships funded by the Linux Foundation. Sarah Sharp of Intel has
been overseeing the program, which has been "extremely successful" in
getting applicants. Torvalds said that it had brought in over a hundred
patches to the kernel from the applicants.
Education
Another audience member mentioned a university in Switzerland that uses
open source software as part
of its curriculum, guiding the students to the culture of open source, IRC
channels, and the like. Torvalds said that there are other universities
with similar programs, which is good. He pointed out that it is "not just
about the kernel", which is a hard project to enter, but that other open
source projects are good steppingstones to kernel development. Often,
a project needs help from the kernel, so that's how its participants
start to get involved with the kernel.
In answer to a question about the differences in educational systems and
whether there are specific advantages in learning technology,
Torvalds noted that he had first-hand experience with Finland's system and
second-hand with the US system through his kids. Finland makes an
excellent example, he said, because there is a lot of technology that comes
out of a fairly small country "in the middle of nowhere". It has a
population of five million and there are cities in Japan that are bigger.
In Finland, education is free, so that students don't have to worry about
how to pay for it. That means that they can "play around" some rather than
just focusing on school work.
Torvalds spent eight and a half years at his university in Finland, and
only came away with a master's degree. In some sense, that's not very
efficient, he said, but he worked on Linux during that time. Finland's
system gives people the freedom to take risks while they are attending
school, which is important. Some will take that freedom to just drink and
party, but some will do something risky and worthwhile. In the US, he
can't imagine someone going to a good university for that long, because
they can't afford to do so. Finland is not necessarily so strict about
getting people to graduate in four years, which gives it a "huge advantage"
because of that openness and freedom.
Contributing
A non-developer in the audience asked about how he could make a
contribution to the
kernel. Torvalds was emphatic that you don't need to be a developer to
contribute. Part of the beauty of open source is that people can work on whatever it
is they are interested in. When he started working on Linux, he had no
interest in doing the things needed to turn it into a product. There is
documentation, marketing, support, and so on that need to be done to make a
product, but he only wanted to write code. He left it to others to do the
work needed for making it a product.
Depending on one's interests there are countless things that can be done to
contribute. Translating documentation or modifying a distribution so that
it works better for Japanese people are two possibilities that he
mentioned. Beyond that, Hohndel said testing is a huge part of the
process. Running kernels during the development cycle and reporting bugs
is a crucial part of making Linux better. But it is not just about the
kernel, he said. When Torvalds is on stage the conversation naturally
drifts in that direction, but Hohndel said that there are tens of thousands
of open source projects out there. People can document, translate, and
test those programs. There are a "ton of opportunities" to contribute.
For example, Torvalds and Hohndel work on Subsurface, which is a graphical
dive log tool. Torvalds said that it involves lots of graphical user
interface (GUI) work that he has "no interest in at all". A GUI designer,
even one who can't write code, would be welcome. Creating mockups of the
interface that someone else could write the code for would be very
useful. Of course, "real men write kernel code", he said. Hohndel chided
him by noting that statements like that might be part of the reason why
there aren't more women involved. A somewhat chagrined Torvalds agreed:
"real men and women write kernel code".
The future
Another question concerned non-volatile main memory and how Torvalds
thought that would change computers. First off, there has been talk about
non-volatile main memory for a long time and it's always just a few years
off, Torvalds said, so he is skeptical that we will see it any time soon.
But if it is finally truly coming, he thinks its biggest impact will be on
filesystems. For many years, we have been making filesystems based
on the block layout of disks, but non-volatile memory would mean that we
get byte addressability for storage. That would allow us to get away from
block-based organization for filesystems.
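As a rough sketch of what byte addressability could mean in practice, persistent
storage could simply be mapped into a process's address space and updated in
place, with no block-sized read-modify-write cycle in between. In the C sketch
below, the device path and record layout are invented for illustration, and a
real persistent-memory design would need stronger ordering and durability
guarantees than a plain msync() call provides:

    /* Hypothetical sketch: updating persistent, byte-addressable storage
     * in place through a memory mapping.  "/dev/pmem-example" and struct
     * record are invented for this illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct record {
        char name[32];
        long count;
    };

    int main(void)
    {
        int fd = open("/dev/pmem-example", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct record *r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (r == MAP_FAILED) { perror("mmap"); return 1; }

        r->count++;                     /* a byte-granular, in-place update */
        strncpy(r->name, "example", sizeof(r->name) - 1);

        msync(r, sizeof(*r), MS_SYNC);  /* push the update to the medium */
        munmap(r, sizeof(*r));
        close(fd);
        return 0;
    }

The point of the sketch is that the update touches only the bytes that
changed; a block-based filesystem would instead read, modify, and write back
at least one full block.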
For main memory, even if it is all non-volatile, he thinks working memory
will still be treated as RAM is today. Non-volatile memory will make
things like suspending
much easier, but there won't be a lot of externally visible changes. There
will still be a notion of long-term storage in files and databases.
But the internals of the kernel will change. It will take a long time
before we see any changes, even if the hardware is available soon, Torvalds
said. It takes a good bit of time before any new technology becomes
widespread and
common. There is "huge inertia" with existing hardware; people tend to
think technology moves faster than it actually does.
What did he think operating systems would look like in 20-30 years was
another audience question. "Not much different" was Torvalds's answer.
Today's operating systems are fairly similar to those of 40 years ago, at least
conceptually. They are much bigger and more complicated, but the basic
concepts are the same. If you looked at an operating systems book from the
1970s, today's operating systems would be recognizable in it. The outward
appearance has changed, GUIs are much different than the interfaces on
those older systems, but much of the rest is the same. "Files are still
files", we can just store more of them. There won't be that much change
because "most of what we do, we do for good reasons", so throwing that away
makes little sense.
Another recent trend is wearable computing, as typified by Google Glass,
Hohndel said. What did Torvalds think of that? The idea of a small screen
that is always there is attractive to him, Torvalds said, but the problem
is not on the output side, it is the input part that is difficult. He
hates writing email on his cell phone, so he would love to see some kind of
"Google Glassy
thing" for input. He hates voice recognition; perhaps it would work for
writing a letter or something, but you can't edit source code that way. "Up
up up up up" just doesn't work. Maybe someone in the audience will figure
out a better way, he said.
The privacy implications of Google Glass don't really bother Torvalds. He
said that others (like Hohndel) are much more protective of their privacy
than he is. "My life is not that interesting", Torvalds said, and doesn't
need to be that private. "All of the interesting things I do, I want to
put out there", so there are just a few things (like his bank password)
that he cares about protecting. While people are unhappy with Google Glass
because it can record anything the person sees, it is something people
could already do without Glass, so it's "not a big issue" to him.
Young and stupid
"I was young, I was stupid, and I did not know how much work it would be".
That's how Torvalds started his answer to a question about his inspiration
to write Linux. He wanted an operating system, and was even willing to buy
one, but there was nothing affordable available that he wanted to run.
Some are religious about open source, but he is not, and would have bought
something if it was available. So he started writing Linux and made it
open source because he wanted to "show off" and didn't want to sell it.
Torvalds knew that he was a good programmer, but he also knew he was not a
good businessman. If Linux hadn't attracted other people right away, the
project probably would have died within six months. But the involvement of
others was a "huge motivating factor" for him to continue, because it "made
it much more fun". Some people ask him if he regrets making Linux open
source, because he could be as rich as Bill Gates today if he hadn't.
It's a kind of nonsensical question, because Linux wouldn't be where it is
today if he hadn't opened it up, so he wouldn't have reached Gates-level
wealth even if he had kept it closed. But there is "no question in my
mind" that making Linux open source was the right thing to do.
He has never had a plan for Linux; it already did what he wanted it to do in
1991. But many others in the Linux community did have plans for
Linux and those plans were quite different. Some were interested in small
devices and cell phones, others in putting it into gas pumps, some wanted
to use it in space, still others for rendering computer graphics. All of
those different ideas helped shape Linux into what it is today. It is a
better way to develop a stable operating system, Torvalds said. When
a single company has a plan for its operating system, and that plan changes
with some regularity, it destabilizes the system. Linux, on the other hand, has many
companies who know where they want it to go, which has a tendency to keep
Linux stable.
[I would like to thank the Linux Foundation for travel assistance to attend
the Automotive Linux Summit Spring and LinuxCon Japan.]
Comments (100 posted)
By Nathan Willis
May 31, 2013
The SIL Open Font License (OFL) is the dominant software license in
the open font community, for a variety of reasons—including the
fact that it was written specifically to meet the needs of type
designers, as opposed to being an adaptation of another license. But
one of its most controversial clauses has long been the Reserved Font
Name (RFN) clause, an option that allows the licensor to require any
derivatives of the licensed font be renamed. To some, RFNs are an
essential tool necessary to cope with the peculiarities of digital
fonts, but to others, they are a non-free relic that causes multiple
practical problems. Recently, the OFL's authors asked for comments on
an update to the official license FAQ, which spawned the
latest debate on whether RFNs are ultimately beneficial or harmful in
the context of free software, and, if they are harmful, how best to
resolve the problem.
The OFL was written by Nicolas Spalinger and Victor Gaultney at SIL
International, and specifically attempts to adhere to the Free
Software Foundation (FSF)'s definition of
free software, the Open Source
Definition (OSD), and the Debian Free
Software Guidelines. In general, it grants the licensee the right
to use, copy, modify, and redistribute the licensed font, with the
expected requirements on preserving the copyright notice and not
changing the license terms when redistributing.
But it does contain some clauses that are
atypical among free software licenses. For one, it requires that the
font not be sold as a standalone offering. The FSF highlights
this as "unusual," but harmless, since even the inclusion
of a simple "Hello world" script satisfies the requirement. The RFN
requirement is more strict, requiring a new name to be assigned for
any modification, not just for formal or ongoing forks:
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
Any RFNs claimed by the designer must be specified as such in the
copyright statement placed at the beginning of the license. SIL's FAQ
goes into additional detail about RFNs, noting that transforming the
font into a different format normally does constitute creating a
modified version, and that giving a modified version a name that
incorporates the RFN (such as Foo Sans or Foo Remade, in reference to an
RFN Foo) is not permitted. It even notes that rebuilding the
font from source counts as creating a modified version (thus
triggering the need for a rename) if the rebuild produces a final
version that is not identical to the font as released by the designer.
Nevertheless, a type designer is not required to specify any RFNs
when releasing a font under the OFL. SIL encourages type designers to
use RFNs, however, citing four reasons in a paper entitled "Web Fonts and
Reserved Font Names." First, the paper says, RFNs help to avoid
confusing name collisions when a user has both the original and a
modified version of a font installed (particularly in application
"Font" menus, which offer limited space for displaying font names).
Second, they help "protect" the designer from dilution concerns—i.e.,
broken or inferior derivatives of the font being presented to users
with the same name. The third reason is a corollary: sparing the
designer of the original font from responding to support requests that
actually stem from bugs in a modified version. Finally, an RFN
requirement "encourages derivatives" by forcing modifiers
to consciously choose a new, distinct name for their derivative fonts.
This paper is a draft, currently posted for public comment. Gaultney emailed
the Open Font Library mailing list in May to ask for feedback on the
paper, and on a new
revision to the OFL FAQ, which adds new questions about deploying
OFL fonts in mobile or embedded devices and about bundling OFL fonts
in web templates.
Reservations
The historical justification for the RFN clause is that the font
name is one of type designers' few ways to distinguish their work from
that of the competition (after all, however much metadata may exist
in the actual file, an application's "Font" menu still presents the
name alone). Requiring derivative works to use a different name is
not unheard of in free software (the OSD permits it), but it is rare; as Khaled Hosny pointed
out, the LaTeX
license also requires renaming. But some in the open font world
say that the RFN clause poses an unreasonable burden in practice,
especially when seen in light of HTTP-delivered web fonts.
After all, the clause requires explicit written permission from the
original designer in order to release a modification that reuses the
RFN, and the designer may be difficult or impossible to contact (e.g.,
a designer who has subsequently passed away). Moreover, the
standard for what constitutes a modified version of a
font is extremely low; as SIL's draft paper explains in detail, even
common practices like subsetting (stripping out unused characters) or
optimizing the font for delivery are considered modifications. This
standard is certainly lower than the clauses found in most other free
software licenses; it is hard to imagine any license requirement being
triggered by rebuilding the software from source.
In fact, since optimizing a font for delivery in a web page is
considered creating a modified version, every web server that
hosts an OFL font with RFNs must rename its copy of the font or else
get prior written permission from the original designer to serve it
as-is. Neither choice is particularly appealing; as Dave Crossland
said in the thread "it's unreasonable to expect every person
publishing a blog who makes their own subset to contact every
copyright holder every time they want to use a new OFL-RFN web
font." For a popular font, that could quickly add up to tens
of thousands of requests. Alternatively, those tens of thousands of
servers will deliver the font under a score of other names, and that
might not hurt the designer's reputation, but it does little
to improve it.
Plan B
To some open font designers, specifying an RFN simply is not worth
it, and they attach no RFNs to their fonts. Barry Schwartz argued
that it is too much trouble, and the benefits too small; the majority
of the free software world managed to cope with the potential of name
collisions and misdirected support requests just fine:
It makes the OFL look complicated and
frightening, which is the opposite of what should be the goal. Plus,
if someone intends to give a font a different name, they don’t need to
be told to do it; and, if they do not intend to, they are not going to
corrupt society to the core. The worst that will happen is you’ll have
to be careful where you got the font.
The rest of the software community has managed to get along for
decades without having everyone give their version of ‘ls’ a different
name. It creates problems, big ones, but the alternative is worse.
But other designers are interested in maintaining the "brand"
established by their OFL fonts. Without assigning an RFN, the
question remains whether such brand protection is possible. In the
email cited above, Crossland
advocated using trademarks to protect font names. Trademarks
offer similar protections of the font name, and it is a common
practice in free software to publish trademark usage guidelines that
spell out acceptable uses without requiring prior permission. He
pointed to the Subversion project's guidelines,
although there are plenty of examples.
Of course, a trademark approach has its problems. How trademarks
work varies from jurisdiction to jurisdiction (and may even involve
registration fees). It is also common for a trademark infringement
claim to be weakened or denied if the trademark holder demonstrably
allows other infringement. Vernon Adams proposed an
alternative solution: writing a "preemptive permission" statement to
accompany the OFL, which grants permission to modify an RFN font in
specific ways—in particular, an "engineering exception" listing
the common modifications made when deploying a web font.
Crossland replied that SIL is unlikely to compose such a
boilerplate exception, since it would dilute the OFL. Gaultney
concurred, adding that writing such an
exception would also involve "the basic conceptual difficulty of
defining and evaluating what changes would be allowed." Adams,
however, disagreed about the
notion of diluting the OFL:
Surely it would not be 'diluting' the OFL to reshape it to bring more
clarity to the licensing of this whole 'minor modification' space that
webfont services are opening up? IMO the OFL needs to be ever so
slightly tweaked, but only to better protect the freedom of OFL'd
fonts. That's not a dilution, that's a re-concentration.
On the other hand, expecting designers to rely on an external
triggers such as 'trademarks' to plug this issue, does seem to dilute
the license.
It could be a while before there is any resolution to the debate
over RFNs. SIL has certainly not expressed an interest in revising
the license, which it sees as meeting the desired goals. Both an
RFN "engineering exception" permission grant and a trademark usage
policy would require careful thinking and writing before deployment.
Expecting each type designer to write their own policy is unlikely to
bear fruit, but as this debate illustrates, the open font community
clearly has many more issues to discuss before it could produce any
general consensus on a suite of options. In the meantime, the rest
of the free software community might find the discussion informative;
as we saw in XBMC's case, trademarks
and name collisions can affect software of any stripe.
Comments (12 posted)
Page editor: Jonathan Corbet
Security
By Jonathan Corbet
May 27, 2013
Certain projects are known for disclosing a large number of vulnerabilities
at once; such behavior is especially common in company-owned projects where
fixes are released in batches. Even those projects, though, rarely turn up with 30
new CVE numbers in a single day. But, on May 23, the X.org project
did exactly that when it
disclosed a large
number of security vulnerabilities in various X client libraries — some of
which could be more than two decades old.
The vulnerabilities
The X Window System has a classic client/server architecture, with the X
server providing display and input services for a range of client
applications. The two sides communicate via a well-defined (if much extended)
protocol that, in theory, provides for network-transparent operation. In
any protocol implementation, developers must take into account the
possibility that one of the participants is compromised or overtly
hostile. In short, that is what did not happen in the X client
libraries.
In particular, the client libraries contained many assumptions about the
trustworthiness of the data coming from the X server. Keymap indexes were
not checked to verify that they fell in the range of known keys. Very
large buffer size values from the server could
cause integer overflows on the client side; that, in turn, could lead
to the allocation of undersized buffers that could subsequently be
overflowed. File-processing code could be forced
into unbounded recursion by hostile input. And so on. The bottom line
is that an attacker who controls an X server has a long list of possible
ways to compromise the clients connected to that server.
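To make the buffer-size problem concrete, here is a minimal sketch of the bug
class in C; this is not actual Xlib code, and the structure and function names
are invented for the example. A 32-bit count taken from the wire can wrap the
size calculation on a 32-bit system, yielding an undersized allocation that is
then overflowed when the items are read in:

    /* Illustrative only -- not actual Xlib code.  'count' arrives from a
     * potentially hostile X server. */
    #include <stdint.h>
    #include <stdlib.h>

    struct item { uint32_t a, b; };

    struct item *alloc_items_unsafe(uint32_t count)
    {
        /* On a 32-bit system, count * sizeof(struct item) can wrap
         * around, so malloc() returns a buffer far smaller than the
         * caller then fills with 'count' items, overflowing it. */
        return malloc(count * sizeof(struct item));
    }

    struct item *alloc_items_safe(uint32_t count)
    {
        if (count > SIZE_MAX / sizeof(struct item))  /* reject wrapping sizes */
            return NULL;
        return malloc(count * sizeof(struct item));
    }

Using calloc(count, sizeof(struct item)), which performs an equivalent
overflow check internally in modern C libraries, is another common fix; the
unchecked keymap indexes mentioned above call for the same kind of explicit
bounds test before use.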
Despite the seemingly scary nature of most of these vulnerabilities, the impact on
most users should be minimal. Most of the time, the user is in control of
the server, which is usually running in a privileged mode. Any remote
attacker who can compromise such a server will not need to be concerned
with client library exploits; the game will have already been lost. The
biggest threat, arguably, is attacks against setuid programs by a local
user. If the user can control the server (perhaps by using one of the
virtual X server applications), it may be possible to subvert a privileged
program, enabling privilege escalation on the local machine. For this
reason, applying the updates makes sense in many situations, but it may not
be a matter of immediate urgency.
Many of these vulnerabilities have been around for a long time; the
advisory states that "X.Org believes all prior versions of these
libraries contain these flaws, dating back to their introduction."
That introduction, for the bulk of the libraries involved, was in the
1990s. That is a long time for some (in retrospect) fairly obvious errors
to go undetected in code that is this widely used.
Some thoughts
One can certainly make excuses for the developers who implemented those
libraries 20 years or so ago. The net was not so hostile — or so pervasive
— and it hadn't yet occurred to developers that their code might have to
interact with overtly hostile peers. A lot of code written in those days
has needed refurbishing since.
It is a bit more interesting to ponder why that refurbishing took so long
to happen
in this case. X has long inspired fears of security issues, after all.
But, traditionally, those fears have been centered around the server, since
that is where the privilege lies. If you operate under the assumption that
the server is the line of defense, there is little reason to be concerned
about the prospect of the server attacking its clients. It undoubtedly
seemed better to focus on reinforcing the server itself.
Even so, one might think that somebody would have gotten around to looking
at the X library code before Ilja van Sprundel took on the task in 2013.
After all, if vulnerable code exists, somebody, somewhere will figure out a
way to exploit it, and attackers have no qualms about looking for problems
in ancient code. The X libraries are widely used and, for better or worse,
they do often get linked into privileged programs that, arguably, should
not be mixing interface and privilege in this way. It seems fairly likely
that at least some of these vulnerabilities have been known to attackers
for some time.
Speaking of review
As Al Viro has pointed out, the security
updates caused some problems of their own due to bugs that would have been
caught in a proper review process. Given the age and impact of the
vulnerabilities, it arguably would have been better to skip the embargo
process and post the fixes publicly before shipping them. After all, as Al
notes, unreviewed "security" fixes could be a way to slip new
vulnerabilities into a system.
In the free software community, we tend to take pride in our review
processes which, we hope, keep bugs out of our code and vulnerabilities out
of our system. In this case, though, it is now clear that some of our most
widely used library code has not seen a serious review pass for a long
time. Recent kernel vulnerabilities, too, have shown that our code is not
as well reviewed as we might like to think. Often, it seems, the
black hats are scrutinizing our code more closely than our developers and
maintainers are.
Fixing this will not be easy. Deep code review has always been in short
supply in our community, and for easily understandable reasons: the work is
tedious, painstaking, and often unrewarding. Developers with the skill to
perform this kind of review tend to be happier when they are writing code
of their own. Getting these developers to volunteer more of their time for
code review is always going to be an uphill battle.
The various companies working in this area could help the situation by
paying for more security review work. There are some signs that more of
this is happening than in the past, but this, too, has tended to be a hard
sell. Most companies sponsor development work to help ensure that their
own needs are adequately met by the project(s) in question. General
security work does not add features or enable more hardware; the rewards
from doing this work may seem nebulous at best. So most companies,
especially if they do not feel threatened by the current level of security
in our code, feel that security work is something they can leave to others.
So we will probably continue to muddle along with code that contains a
variety of vulnerabilities, both old and new. Most of the time, it works
well enough — at least, as far as we know. And on that cheery note, your
editor has to run; there's a whole set of new security updates to apply.
Comments (66 posted)
Brief items
With a guarantee of secure Internet access points, opposition groups would be able to link their terrestrial and wireless networks with those of like-minded groups. This would enable them to reach deeper into the country, giving broad sections of the Syrian populace Internet access. And because the United States would be able to monitor those networks, we could make sure that moderate opposition elements would be the primary beneficiaries.
—
The
New York Times puts out a call for a "cyberattack" for Syria
You can trade a little security for a bit of convenience. Then sacrifice some more security for some extra convenience. Then buy even more convenience at expense of security. There’s nothing particularly bad in this tradeoff in non-mission critical applications, but where should it stop? Apparently, Apple decided to maintain its image as being more of a “user-friendly” rather than “secure” company.
In its current implementation, Apple’s two-factor authentication does not prevent anyone from restoring an iOS backup onto a new (not trusted) device. In addition, and this is much more of an issue, Apple’s implementation does not apply to iCloud backups, allowing anyone and everyone knowing the user’s Apple ID and password to download and access information stored in the iCloud.
—
Vladimir Katalov of ElcomSoft finds some dubious Apple
security decisions
For any given politician, the implications of these four reasons are
straightforward. Overestimating the threat is better than underestimating
it. Doing something about the threat is better than doing nothing. Doing
something that is explicitly reactive is better than being proactive. (If
you're proactive and you're wrong, you've wasted money. If you're proactive
and you're right but no longer in power, whoever is in power is going to
get the credit for what you did.) Visible is better than
invisible. Creating something new is better than fixing something old.
Those last two maxims are why it's better for a politician to fund a
terrorist fusion center than to pay for more Arabic translators for the
National Security Agency. No one's going to see the additional
appropriation in the NSA's secret budget. On the other hand, a high-tech
computerized fusion center is going to make front page news, even if it
doesn't actually do anything useful.
—
Bruce
Schneier
Comments (none posted)
X.Org has disclosed a long list of vulnerabilities that have been fixed in
the X Window System client libraries; most of them expose clients to
attacks by a hostile server. "
Most of the time X clients & servers
are run by the same user, with the server more privileged than the clients,
so this is not a problem, but there are scenarios in which a privileged
client can be connected to an unprivileged server, for instance, connecting
a setuid X client (such as a screen lock program) to a virtual X server
(such as Xvfb or Xephyr) which the user has modified to return invalid
data, potentially allowing the user to escalate their privileges."
There are 30 CVE numbers assigned to these vulnerabilities; expect the
distributor updates to start flowing shortly.
Full Story (comments: 55)
A security issue has been identified in the tool used by the Fedora Project
to create cloud images. "
Images generated by this tool, including
Fedora Project “official” AMIs (Amazon Machine Images), AMIs whose heritage
can be traced to official Fedora AMIs, as well as some images using the AMI
format in non-Amazon clouds, are affected, as described below." The
flaw has been assigned
CVE-2013-2069.
Full Story (comments: none)
The H
reports
increasing attempts to compromise servers via a security hole in Ruby
on Rails. "
On his blog, security expert Jeff Jarmoc reports
that the criminals are trying to exploit one of the vulnerabilities
described by CVE identifier 2013-0156. Although the holes were closed
back in January, more than enough servers on the net are probably still
running an obsolete version of Ruby." The current versions of Ruby on Rails are 3.2.13, 3.1.12 and 2.3.18.
Comments (none posted)
Google has
announced
that it will be disclosing information on actively-exploited
vulnerabilities after seven days. "
Seven days is an aggressive
timeline and may be too short for some vendors to update their products,
but it should be enough time to publish advice about possible mitigations,
such as temporarily disabling a service, restricting access, or contacting
the vendor for more information. As a result, after 7 days have elapsed
without a patch or advisory, we will support researchers making details
available so that users can take steps to protect themselves."
Comments (2 posted)
New vulnerabilities
chromium: multiple vulnerabilities
Package(s): chromium-browser
CVE #(s): CVE-2013-2837, CVE-2013-2838, CVE-2013-2839, CVE-2013-2840,
CVE-2013-2841, CVE-2013-2842, CVE-2013-2843, CVE-2013-2844, CVE-2013-2845,
CVE-2013-2846, CVE-2013-2847, CVE-2013-2848, CVE-2013-2849
Created: May 29, 2013
Updated: July 15, 2013
Description:
From the Debian advisory:
CVE-2013-2837:
Use-after-free vulnerability in the SVG implementation allows remote
attackers to cause a denial of service or possibly have unspecified
other impact via unknown vectors.
CVE-2013-2838:
Google V8, as used in Chromium before 27.0.1453.93, allows
remote attackers to cause a denial of service (out-of-bounds read)
via unspecified vectors.
CVE-2013-2839:
Chromium before 27.0.1453.93 does not properly perform a cast
of an unspecified variable during handling of clipboard data, which
allows remote attackers to cause a denial of service or possibly
have other impact via unknown vectors.
CVE-2013-2840:
Use-after-free vulnerability in the media loader in Chromium
before 27.0.1453.93 allows remote attackers to cause a denial of
service or possibly have unspecified other impact via unknown
vectors, a different vulnerability than CVE-2013-2846.
CVE-2013-2841:
Use-after-free vulnerability in Chromium before 27.0.1453.93
allows remote attackers to cause a denial of service or possibly
have unspecified other impact via vectors related to the handling of
Pepper resources.
CVE-2013-2842:
Use-after-free vulnerability in Chromium before 27.0.1453.93
allows remote attackers to cause a denial of service or possibly
have unspecified other impact via vectors related to the handling of
widgets.
CVE-2013-2843:
Use-after-free vulnerability in Chromium before 27.0.1453.93
allows remote attackers to cause a denial of service or possibly
have unspecified other impact via vectors related to the handling of
speech data.
CVE-2013-2844:
Use-after-free vulnerability in the Cascading Style Sheets (CSS)
implementation in Chromium before 27.0.1453.93 allows remote
attackers to cause a denial of service or possibly have unspecified
other impact via vectors related to style resolution.
CVE-2013-2845:
The Web Audio implementation in Google Chrome before 27.0.1453.93
allows remote attackers to cause a denial of service (memory
corruption) or possibly have unspecified other impact via unknown
vectors.
CVE-2013-2846:
Use-after-free vulnerability in the media loader in Google Chrome
before 27.0.1453.93 allows remote attackers to cause a denial of
service or possibly have unspecified other impact via unknown
vectors, a different vulnerability than CVE-2013-2840.
CVE-2013-2847:
Race condition in the workers implementation in Google Chrome before
27.0.1453.93 allows remote attackers to cause a denial of service
(use-after-free and application crash) or possibly have unspecified
other impact via unknown vectors.
CVE-2013-2848:
The XSS Auditor in Google Chrome before 27.0.1453.93 might allow
remote attackers to obtain sensitive information via unspecified
vectors.
CVE-2013-2849:
Multiple cross-site scripting (XSS) vulnerabilities in Google Chrome
before 27.0.1453.93 allow user-assisted remote attackers to inject
arbitrary web script or HTML via vectors involving a (1)
drag-and-drop or (2) copy-and-paste operation.
Comments (none posted)
FlightGear: code execution
Package(s): FlightGear
CVE #(s): (none)
Created: May 29, 2013
Updated: June 7, 2013
Description:
From the FlightGear blog:
FlightGear contains a remote format string vulnerability that could crash the application or potentially execute arbitrary code under certain conditions.
Comments (none posted)
gnutls: denial of service
Package(s): gnutls26
CVE #(s): CVE-2013-2116
Created: May 30, 2013
Updated: July 5, 2013
Description:
From the Debian advisory:
It was discovered that a malicious client could crash a GNUTLS server
and vice versa, by sending TLS records encrypted with a block cipher
which contain invalid padding.
Comments (none posted)
kernel: information leak
Package(s): linux
CVE #(s): CVE-2013-3226
Created: May 24, 2013
Updated: May 30, 2013
Description:
From the CVE entry:
The sco_sock_recvmsg function in net/bluetooth/sco.c in the Linux kernel before 3.9-rc7 does not initialize a certain length variable, which allows local users to obtain sensitive information from kernel stack memory via a crafted recvmsg or recvfrom system call.
Comments (none posted)
kvm guest image: no root password
Package(s): kvm guest image
CVE #(s): CVE-2013-2069
Created: May 24, 2013
Updated: June 11, 2013
Description:
From the Red Hat advisory:
It was discovered that when no 'rootpw' command was specified in a
Kickstart file, the image creator tools gave the root user an empty
password rather than leaving the password locked, which could allow a local
user to gain access to the root account.
Comments (2 posted)
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): CVE-2013-2079, CVE-2013-2080, CVE-2013-2081, CVE-2013-2082,
CVE-2013-2083
Created: May 29, 2013
Updated: June 7, 2013
Description:
From the CVE entries:
mod/assign/locallib.php in the assignment module in Moodle 2.3.x before 2.3.7 and 2.4.x before 2.4.4 does not consider capability requirements during the processing of ZIP assignment-archive download (aka downloadall) requests, which allows remote authenticated users to read other users' assignments by leveraging the student role. (CVE-2013-2079)
The core_grade component in Moodle through 2.2.10, 2.3.x before 2.3.7, and 2.4.x before 2.4.4 does not properly consider the existence of hidden grades, which allows remote authenticated users to obtain sensitive information by leveraging the student role and reading the Gradebook Overview report. (CVE-2013-2080)
Moodle through 2.1.10, 2.2.x before 2.2.10, 2.3.x before 2.3.7, and 2.4.x before 2.4.4 does not consider "don't send" attributes during hub registration, which allows remote hubs to obtain sensitive site information by reading form data. (CVE-2013-2081)
Moodle through 2.1.10, 2.2.x before 2.2.10, 2.3.x before 2.3.7, and 2.4.x before 2.4.4 does not enforce capability requirements for reading blog comments, which allows remote attackers to obtain sensitive information via a crafted request. (CVE-2013-2082)
The MoodleQuickForm class in lib/formslib.php in Moodle through 2.1.10, 2.2.x before 2.2.10, 2.3.x before 2.3.7, and 2.4.x before 2.4.4 does not properly handle a certain array-element syntax, which allows remote attackers to bypass intended form-data filtering via a crafted request. (CVE-2013-2083)
Comments (none posted)
nginx: denial of service and information disclosure
Package(s): nginx
CVE #(s): CVE-2013-2070
Created: May 23, 2013
Updated: July 8, 2013
Description:
The nginx web server suffers from a vulnerability that can lead to denial of service or information disclosure problems when the proxy_pass option is used with an untrusted upstream server. See this advisory for more information.
Comments (none posted)
otrs2: privilege escalation
Package(s): otrs2
CVE #(s): CVE-2013-3551
Created: May 29, 2013
Updated: May 30, 2013
Description:
From the Debian advisory:
A vulnerability has been discovered in the Open Ticket Request System,
which can be exploited by malicious users to disclose potentially
sensitive information.
An attacker with a valid agent login could manipulate URLs in the ticket
split mechanism to see the contents of tickets that they are not permitted
to see.
Comments (none posted)
owncloud: multiple vulnerabilities
Package(s): owncloud
CVE #(s): CVE-2013-2045, CVE-2013-2046, CVE-2013-2039, CVE-2013-2085,
CVE-2013-2040, CVE-2013-2041, CVE-2013-2042, CVE-2013-2044, CVE-2013-2047,
CVE-2013-2043, CVE-2013-2048, CVE-2013-2089, CVE-2013-2086, CVE-2013-2049
Created: May 28, 2013
Updated: June 24, 2013
Description:
From the Mageia advisory:
ownCloud before 5.0.6 does not neutralize special elements that are
passed to the SQL query in lib/db.php which therefore allows an
authenticated attacker to execute arbitrary SQL commands (CVE-2013-2045).
ownCloud before 5.0.6 and 4.5.11 does not neutralize special elements
that are passed to the SQL query in lib/bookmarks.php which therefore
allows an authenticated attacker to execute arbitrary SQL commands
(CVE-2013-2046).
Multiple directory traversal vulnerabilities in (1)
apps/files_trashbin/index.php via the "dir" GET parameter and (2)
lib/files/view.php via undefined vectors in all ownCloud versions
prior to 5.0.6 and other versions before 4.0.15, allow authenticated
remote attackers to get access to arbitrary local files (CVE-2013-2039,
CVE-2013-2085).
Cross-site scripting (XSS) vulnerabilities in multiple files inside
the media application via multiple unspecified vectors in all ownCloud
versions prior to 5.0.6 and other versions before 4.0.15 allows
authenticated remote attackers to inject arbitrary web script or HTML
(CVE-2013-2040).
Cross-site scripting (XSS) vulnerabilities in (1)
apps/bookmarks/ajax/editBookmark.php via the "tag" GET parameter
(CVE-2013-2041) and in (2) apps/files/js/files.js via the "dir" GET
parameter to apps/files/ajax/newfile.php in ownCloud 5.0.x before 5.0.6
allows authenticated remote attackers to inject arbitrary web script or
HTML (CVE-2013-2041).
Cross-site scripting (XSS) vulnerabilities in (1)
apps/bookmarks/ajax/addBookmark.php via the "url" GET parameter and in
(2) apps/bookmarks/ajax/editBookmark.php via the "url" POST parameter
in ownCloud 5.0.x before 5.0.6 allows authenticated remote attackers
to inject arbitrary web script or HTML (CVE-2013-2042).
Open redirect vulnerability in index.php (aka the Login Page) in
ownCloud before 5.0.6 allows remote attackers to redirect users to
arbitrary web sites and conduct phishing attacks via a URL in the
redirect_url parameter (CVE-2013-2044).
Index.php (aka the login page) contains a form that does not disable
the autocomplete setting for the password parameter, which makes it
easier for local users or physically proximate attackers to obtain the
password from web browsers that support autocomplete (CVE-2013-2047).
Due to not properly checking the ownership of a calendar, an
authenticated attacker is able to download calendars of other users
via the "calendar_id" GET parameter to /apps/calendar/ajax/events.php.
Note: Successful exploitation of this privilege escalation requires
the "calendar" app to be enabled (enabled by default) (CVE-2013-2043).
Due to an insufficient permission check, an authenticated attacker is
able to execute API commands as administrator. Additionally, an
unauthenticated attacker could abuse this flaw as a cross-site request
forgery vulnerability (CVE-2013-2048).
Incomplete blacklist vulnerability in ownCloud before 5.0.6 allows
authenticated remote attackers to execute arbitrary PHP code by
uploading a crafted file and accessing an uploaded PHP file.
Note: Successful exploitation requires that the /data/ directory is
stored inside the webroot and a webserver that interprets .htaccess
files (e.g. Apache) (CVE-2013-2089).
The configuration loader in ownCloud 5.0.x before 5.0.6 includes
private data such as CSRF tokens in a JavaScript file, which allows
remote attackers to obtain sensitive information (CVE-2013-2086).
Comments (1 posted)
pmount: should be built with PIE flags
Package(s): pmount
CVE #(s): (none)
Created: May 30, 2013
Updated: May 30, 2013
Description:
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package has suid binaries...".
However, currently pmount is not being built with PIE flags. This is a
clear violation of the packaging guidelines.
Comments (none posted)
python-backports-ssl_match_hostname: denial of service
Package(s): python-backports-ssl_match_hostname
CVE #(s): CVE-2013-2098
Created: May 30, 2013
Updated: May 30, 2013
Description:
From the Red Hat bugzilla:
A denial of service flaw was found in the way python-backports-ssl_match_hostname, an implementation that brings the ssl.match_hostname() function from Python 3.2 to users of earlier versions of Python, performed matching of the certificate's name in the case it contained many '*' wildcard characters. A remote attacker, able to obtain a valid certificate whose name contains a lot of '*' wildcard characters, could use this flaw to cause a denial of service (excessive CPU time consumption) by issuing a request to validate that certificate in an application using the python-backports-ssl_match_hostname functionality.
See the upstream bug report for additional information. |
| Alerts: |
|
Comments (none posted)
request-tracker: multiple vulnerabilities
Comments (none posted)
socat: denial of service
| Package(s): | socat |
| CVE #(s): | CVE-2013-3571 |
| Created: | May 29, 2013 |
| Updated: | June 11, 2013 |
| Description: |
From the Mandriva advisory:
Under certain circumstances an FD leak occurs and can be misused
for denial of service attacks against socat running in server mode. |
| Alerts: |
|
Comments (none posted)
spip: privilege escalation
| Package(s): | spip |
| CVE #(s): | |
| Created: | May 28, 2013 |
| Updated: | May 30, 2013 |
| Description: |
From the Debian advisory:
A privilege escalation vulnerability has been found in SPIP, a website
engine for publishing, which allows anyone to take control of the
website. |
| Alerts: |
|
Comments (none posted)
spnavcfg: should be built with PIE flags
| Package(s): | spnavcfg |
| CVE #(s): | |
| Created: | May 30, 2013 |
| Updated: | May 30, 2013 |
| Description: |
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package has suid binaries...".
However, currently spnavcfg is not being built with PIE flags. This is a
clear violation of the packaging guidelines. |
| Alerts: |
|
Comments (none posted)
SUSE Manager: authentication checking problem
| Package(s): | SUSE Manager |
| CVE #(s): | CVE-2013-2056 |
| Created: | May 30, 2013 |
| Updated: | May 30, 2013 |
| Description: |
From the SUSE advisory:
spacewalk-backend has been updated to fix an authentication
checking problem. (bnc#819365, CVE-2013-2056) |
| Alerts: |
|
Comments (none posted)
tomcat: multiple vulnerabilities
| Package(s): | tomcat6 |
| CVE #(s): | CVE-2013-1976, CVE-2013-2051 |
| Created: | May 29, 2013 |
| Updated: | May 30, 2013 |
| Description: |
From the Red Hat advisory:
A flaw was found in the way the tomcat6 init script handled the
tomcat6-initd.log log file. A malicious web application deployed on Tomcat
could use this flaw to perform a symbolic link attack to change the
ownership of an arbitrary system file to that of the tomcat user, allowing
them to escalate their privileges to root. (CVE-2013-1976)
Note: With this update, tomcat6-initd.log has been moved from
/var/log/tomcat6/ to the /var/log/ directory.
It was found that the RHSA-2013:0623 update did not correctly fix
CVE-2012-5887, a weakness in the Tomcat DIGEST authentication
implementation. A remote attacker could use this flaw to perform replay
attacks in some circumstances. Additionally, this problem also prevented
users from being able to authenticate using DIGEST authentication.
(CVE-2013-2051) |
| Alerts: |
|
Comments (none posted)
tomcat: multiple vulnerabilities
| Package(s): | tomcat6, tomcat7 |
| CVE #(s): | CVE-2012-3544, CVE-2013-2067 |
| Created: | May 29, 2013 |
| Updated: | August 7, 2013 |
| Description: |
From the Ubuntu advisory:
It was discovered that Tomcat incorrectly handled certain requests
submitted using chunked transfer encoding. A remote attacker could use this
flaw to cause the Tomcat server to stop responding, resulting in a denial
of service. This issue only affected Ubuntu 10.04 LTS and Ubuntu 12.04 LTS.
(CVE-2012-3544)
It was discovered that Tomcat incorrectly handled certain authentication
requests. A remote attacker could possibly use this flaw to inject a
request that would get executed with a victim's credentials. This issue
only affected Ubuntu 10.04 LTS, Ubuntu 12.04 LTS, and Ubuntu 12.10.
(CVE-2013-2067) |
| Alerts: |
|
Comments (none posted)
varnish: should be built with PIE flags
| Package(s): | varnish |
| CVE #(s): | |
| Created: | May 29, 2013 |
| Updated: | June 27, 2013 |
| Description: |
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package is long running ...".
However, currently varnish is not being built with PIE flags. This is a
clear violation of the packaging guidelines. |
| Alerts: |
|
Comments (none posted)
X.Org: many, many vulnerabilities
Comments (none posted)
xen: possible privilege escalation
| Package(s): | xen |
| CVE #(s): | CVE-2013-2072 |
| Created: | May 28, 2013 |
| Updated: | May 30, 2013 |
| Description: |
From the Red Hat bugzilla:
The Python bindings for the xc_vcpu_setaffinity call do not properly check
their inputs. Systems which allow untrusted administrators to configure
guest vcpu affinity may be exploited to trigger a buffer overrun and
corrupt memory.
An attacker who is able to configure a specific vcpu affinity via a
toolstack which uses the Python bindings is able to exploit this issue.
Exploiting this issue leads to memory corruption which may result in a DoS
against the system by crashing the toolstack. The possibility of code
execution (privilege escalation) has not been ruled out.
The xend toolstack passes a cpumap to this function without sanitization.
xend allows the cpumap to be configured via the guest configuration file or
the SXP/XenAPI interface. Normally these interfaces are not considered safe
to expose to non-trusted parties. However systems which attempt to allow
guest administrator control of VCPU affinity in a safe way via xend may
expose this issue. |
| Alerts: |
|
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.10-rc3,
released by Linus Torvalds on May 26 with some grumbles
about how it contains more changes than he would have liked. "
I can
pretty much guarantee that -rc4 is going to be smaller, because (a) I'm
going to be grumpy if people try to push as much to rc4 as happened to rc3,
and (b) I'm going to be traveling for most of next week (and part of the
week after). I'll have internet, but I really really hope and expect that
things should be calmer coming up. Right? RIGHT GUYS?"
Stable updates: 3.9.4, 3.4.47,
and 3.0.80 were released on May 24.
Comments (none posted)
Sarah Sharp
reports
on the response to the availability of a set of Outreach Program for
Women internships working on the Linux kernel. "
As coordinator for
the Linux kernel OPW project, I was really worried about whether applicants
would be able to get patches into the kernel. Everyone knows that kernel
maintainers are the pickiest bastards^Wperfectionists about coding style,
getting the proper Signed-off-by, sending plain text email, etc. I thought
a couple applicants would be able to complete maybe one or two patches,
tops. Boy was I wrong!" In the end, there were 41 applicants;
18 of them submitted a total of 374 patches to the kernel,
137 of which were accepted.
Comments (2 posted)
Kernel development news
By Jake Edge
May 30, 2013
As part of the developer track at this year's Automotive
Linux Summit Spring, Greg
Kroah-Hartman talked about interprocess communication (IPC) in the kernel with
an eye toward the motivations behind kdbus. The work on kdbus is
progressing well and Kroah-Hartman expressed optimism that it would be
merged before the end of the year. Beyond just providing a faster D-Bus
(which could be accomplished without moving it into the kernel, he said),
it is his hope that kdbus can eventually replace Android's binder IPC
mechanism.
Survey of IPC
There are a lot of different ways to communicate between processes
available in Linux (and, for many of the mechanisms, more widely in Unix).
Kroah-Hartman
strongly recommended Michael Kerrisk's book, The Linux Programming Interface, as
a reference to these IPC mechanisms (and most other things in the Linux
API). Several of his slides
[PDF] were taken directly from the book.
All of the different IPC mechanisms fall into one of three categories, he
said: signals, synchronization, or communication. He used diagrams from Kerrisk's book (page 878) to show the
categories and their members.
There are two types of
signals in the kernel, standard and realtime, though the latter doesn't see much
use, he said.
Synchronization methods are numerous, including futexes and
eventfd(), which
are both relatively new. Semaphores are also available, both as the
"old style" System V semaphores and as "fixed up" by
POSIX. The latter come in both named and unnamed varieties. There is also
file locking, which has two flavors: record locks to lock a
portion of a file and file locks to prevent access to the whole
file. However, the code that implements file locking is "scary", he said.
Threads have four separate types of synchronization methods (mutex,
condition variables, barriers, and read/write locks) available as well.
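For the curious, here is a minimal sketch of one of the newer mechanisms
mentioned above: eventfd()-based signaling from a forked child to its
parent (error handling is abbreviated for brevity):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int efd = eventfd(0, 0);          /* counter starts at zero */
        if (efd < 0) {
            perror("eventfd");
            return 1;
        }

        if (fork() == 0) {                /* child: post one event */
            uint64_t one = 1;
            write(efd, &one, sizeof(one));
            _exit(0);
        }

        uint64_t count;
        read(efd, &count, sizeof(count)); /* parent blocks until posted */
        printf("woken with count %llu\n", (unsigned long long)count);
        wait(NULL);
        return 0;
    }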
For communication, there are many different kernel services available too.
For data transfer, one can use pseudo-terminals. For byte-stream-oriented
data, there are pipes, FIFOs, and stream sockets. For communicating via
messages, there are
both POSIX and System V flavored message queues. Lastly, there is
shared memory, which also comes in POSIX and System V varieties, along with
mmap() for anonymous and file-backed mappings. Anonymous mappings with mmap()
were not something Kroah-Hartman knew about until recently; they ended up
using them in kdbus.
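That anonymous-mapping trick is worth a quick illustration: memory mapped
with MAP_SHARED | MAP_ANONYMOUS before a fork() is shared between parent
and child with no backing file or System V segment involved. A minimal
sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        if (fork() == 0) {            /* child writes into the mapping */
            strcpy(shared, "hello from the child");
            _exit(0);
        }
        wait(NULL);                   /* child has exited; data is there */
        printf("parent sees: %s\n", shared);
        return 0;
    }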
Android IPC
"That is everything we have today, except for Android", Kroah-Hartman said.
All of the existing IPC mechanisms were "not enough for Android", so that
project
added ashmem, pmem, and binder. Ashmem is "POSIX shared memory for the
lazy" in his estimation. The Android developers decided to write kernel
code rather than user-space code, he said. Ashmem uses virtual memory and
can discard memory segments when the system is under memory pressure.
Currently, ashmem lives in the staging tree, but he thinks that Google is
moving to other methods, so it may get deleted from the tree soon.
Pmem is a mechanism to share physical memory. It was used to talk to GPUs.
Newer versions of Android don't use pmem, so it may also go away. Instead,
Android is using the ION memory allocator now.
Binder is "weird", Kroah-Hartman said. It came from BeOS and its
developers were from academia. It was developed and used on systems
without the System V IPC APIs available and, via Palm and Danger, came
to Android. It is "kind of like D-Bus", and some (including him) would
argue that Android should have used D-Bus, but it didn't. It has a large
user-space library that must be used to perform IPC with binder.
Binder has a number of serious security issues when used outside of an
Android environment, he said, so he stressed that it should never be
used by other Linux-based systems.
In Android, binder is used for intents and app
separation; it is good for passing around small messages, not pictures or
streams of data. You can also use it to pass file descriptors to other
processes. It is not particularly efficient, as sending a message
makes lots of hops through the library. A presentation
[YouTube] at this year's Android
Builders Summit showed that one message required eight kernel-to-user-space
transitions.
More IPC
A lot of developers in the automotive world have used QNX, which has a nice
message-passing model. You can send a message and pass control to another
process, which is good for realtime and single processor systems,
Kroah-Hartman said.
Large automotive companies have built huge systems on top of QNX messages,
creating large libraries used by their applications. They would like to be
able to use those libraries on Linux, but often don't know that there is a
way to get the QNX message API for Linux. It is called SIMPL and it works well.
Another solution, though it is not merged into the kernel, is KBUS, which was created by some
students in England. It provides simple message passing through the
kernel, but cannot pass file descriptors. Its implementation involves
multiple data copies, but for 99% of use cases, that's just fine, he said.
Multiple copies are still fast on today's fast processors. The KBUS
developers never asked for it to be merged, as far as he knows, but if they
did, there is "no
reason not to take it".
D-Bus is a user-space messaging solution with strong typing and process
lifecycle handling. Applications subscribe to messages or message types
they are interested in. They can also create an application bus to listen
for messages sent to them. It is widely used on Linux desktops and
servers, is well-tested, and well-documented too. It uses the operating system
IPC services and can run on Unix-like systems as well as Windows.
The D-Bus developers have always said that it is not optimized for speed.
The original developer, Havoc Pennington, created a list of ideas on how to
speed it up if that was of interest, but speed was not the motivation
behind its development. In the automotive industry, there have been
numerous efforts to speed D-Bus up.
One of those efforts was the AF_BUS address
family, which came about because in-vehicle infotainment (IVI) systems
needed better D-Bus performance. Collabora was sponsored by GENIVI to come
up with a solution and AF_BUS was the result. Instead of the four
system calls required for a D-Bus message, AF_BUS reduced that to
two, which made it "much faster". But that solution was rejected by the
kernel network maintainers.
The systemd project rewrote libdbus in an effort to simplify the
code, but it turned out to significantly increase the performance of D-Bus
as well. In preliminary benchmarks, BMW found
[PPT] that the systemd D-Bus library increased performance by 360%.
That was unexpected, but the rewrite did take some shortcuts and listened
to what Pennington had said about D-Bus performance. Kroah-Hartman's
conclusion is that "if you want a faster D-Bus, rewrite the daemon, don't
mess with the kernel". For example, there is a Go implementation of D-Bus
that is "really fast". The Linux kernel IPC mechanisms are faster than any
other operating system, he said, though it may "fight" with some of the
BSDs for performance supremacy on some IPC types.
kdbus
In the GNOME project, there is a plan for something called "portals" that
will containerize GNOME applications. That would allow running
applications from multiple versions of GNOME at the same time while also
providing application separation so that misbehaving or malicious
applications could not affect others. Eventually, something like Android's
intents will also be part of portals, but the feature is still a long way
out, he said. Portals provide one of the main motivations behind kdbus.
So there is a need for an enhanced D-Bus that has some additional
features. At a recent GNOME hackfest, Kay Sievers, Lennart Poettering,
Kroah-Hartman,
and some other GNOME developers sat down to discuss a new messaging
scheme, which is what kdbus is. It will support multicast and
single-endpoint messages, without any extra wakeups from the kernel, he said.
There will be no blocking calls to kdbus, unlike binder which can sleep,
as the API for kdbus is completely asynchronous.
Instead of
doing the message filtering in user space, kdbus will do it in the kernel
using Bloom
filters, which will allow the kernel to only wake up the destination
process, unlike D-Bus. Bloom filters have been publicized by Google
engineers recently,
and they are an "all math" scheme that uses hashes to make searching very
fast. There are hash collisions, so there is still some searching that
needs to be done, but the vast majority of the non-matches are eliminated
immediately.
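A toy version of the idea looks something like the sketch below; the hash
mixing is invented for the example and is certainly not what kdbus uses:

    #include <stdbool.h>
    #include <stdint.h>

    #define FILTER_BITS 1024
    #define NUM_HASHES 4

    /* One simple string hash, salted per round. */
    static uint32_t mix(const char *s, uint32_t seed)
    {
        uint32_t h = seed;
        while (*s)
            h = h * 31 + (unsigned char)*s++;
        return h % FILTER_BITS;
    }

    /* Adding a key sets NUM_HASHES bits in the filter. */
    static void bloom_add(uint8_t *filter, const char *key)
    {
        for (uint32_t i = 0; i < NUM_HASHES; i++) {
            uint32_t bit = mix(key, i);
            filter[bit / 8] |= 1 << (bit % 8);
        }
    }

    /* If any bit is clear, the key is definitely absent; if all are
     * set, it is only probably present (hash collisions are possible),
     * so a real lookup is still needed for the few survivors. */
    static bool bloom_may_contain(const uint8_t *filter, const char *key)
    {
        for (uint32_t i = 0; i < NUM_HASHES; i++) {
            uint32_t bit = mix(key, i);
            if (!(filter[bit / 8] & (1 << (bit % 8))))
                return false;
        }
        return true;
    }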
Kdbus ended up with a naming database in the kernel to track the message
types and bus names, which "scared the heck out of me", Kroah-Hartman
said. But it turned out to be "tiny" and worked quite well. In some ways, it
is similar to DNS, he said.
Kdbus will provide reliable order guarantees, so that messages will be
received in the order they were sent. Only the kernel can make that
guarantee, he said, and the current D-Bus does a lot of extra work to try to
ensure the ordering. The guarantee only applies to messages sent from a
single process; the order of "simultaneous" messages from multiple
processes is not guaranteed.
Passing file descriptors over kdbus will be supported. There is also a
one-copy message passing mechanism that Tejun Heo and Sievers came up
with. Heo actually got zero-copy working, but it was "even scarier", so
they decided against using it. Effectively, with one-copy, the kernel
copies the message from
user space directly into the receive buffer for the destination process.
Kdbus might be fast enough to handle data streams as well as messages, but
Kroah-Hartman does not know if that will be implemented.
Because it is in the kernel, kdbus gets a number of attributes almost for
free. It is namespace aware, which was easy to add because the namespace
developers have made it straightforward to do so. It is also integrated with
the audit subsystem,
which is important to the enterprise distributions. For D-Bus,
getting SELinux support was a lot of work, but kdbus is Linux Security
Module (LSM) aware, so it got SELinux (Smack, TOMOYO, AppArmor, ...)
support for free.
Current kdbus status
As a way to test kdbus, the systemd team has replaced D-Bus in systemd with
kdbus. The code is available in the systemd tree, but it is still a work
in progress. The kdbus developers are not even looking at speed yet, but
some rudimentary tests suggest that it is "very fast". Kdbus will require
a recent kernel as it uses control groups (cgroups); it also requires some
patches that were only merged into 3.10-rc kernels.
The plan is to merge kdbus when it is "ready", which he hopes will be
before the end of the year. His goal, though it is not a general project
goal, is to replace Android's binder with kdbus. He has talked to the
binder people at Google and they are amenable to that, as it would allow
them to delete a bunch of code they are currently carrying in their trees.
Kdbus will not "scale to the cloud", Kroah-Hartman said in answer to a
question from the audience, because it only sends
messages on a single system. There are already inter-system messaging
protocols that can be used for that use case. In addition, the network
maintainers placed a restriction on kdbus: don't touch the networking
code. That makes sense because it is an IPC mechanism, and that is where
AF_BUS ran aground.
The automotive industry will be particularly interested because it is used
to using the QNX message passing, which it mapped to libdbus. It chose
D-Bus because it is well-documented, well-understood, and is as easy to use
as QNX. But, it doesn't just want a faster D-Bus (which could be achieved
by rewriting it), it wants more: namespace support, audit support, SELinux,
application separation, and so on.
Finally, someone asked whether Linus Torvalds was "on board" with kdbus.
Kroah-Hartman said that he didn't know, but that kdbus is self-contained,
so he doesn't think Torvalds will block it. Marcel Holtmann said that
Torvalds was "fine with it" six years ago when another, similar idea had
been proposed. Kroah-Hartman noted that getting it past Al Viro might be
more difficult than getting it past Torvalds, but binder is "hairy code"
and Viro is the one who found the security problems there.
Right now, they are working on getting the system to boot with systemd
using kdbus. There are some tests for kdbus, but booting with systemd will
give them a lot of confidence in the feature. The kernel side of the code
is done, he thinks, but they thought that earlier and then Heo came up with
zero and one-copy. He would be happy if it is merged by the end of the
year, but if it isn't, it shouldn't stretch much past that, and he
encouraged people to start looking at kdbus for their messaging needs in
the future.
[ I would like to thank the Linux Foundation for travel assistance so that I
could attend
the Automotive Linux Summit Spring and LinuxCon Japan. ]
Comments (12 posted)
According to Btrfs developer Chris Mason, tuning Linux filesystems to work
well on solid-state storage devices is a lot like working on an old,
clunky car. Lots of work goes into just trying to make the thing run with
decent performance. Old cars may have mainly hardware-related problems,
but, with Linux,
the bottleneck is almost always to be found in the software. It is, he
said, hard to give a customer a high-performance device and expect them to
actually see that performance in their application. Fixing this problem
will require work in a lot of areas. One of those areas, supporting and
using atomic I/O operations, shows particular potential.
The problem
To demonstrate the kind of problem that filesystem developers are grappling
with, Chris started with a plot from a problematic customer workload on an
ext4 filesystem; it showed alternating periods of high and low I/O
throughput rates. The source of the problem, in this case, was a
combination of (1) overwriting an existing file and (2) a
filesystem that had been
mounted in the data=ordered mode. That combination causes data
blocks to be put into a special list that must get flushed to disk every
time that the filesystem commits a transaction. Since the system in
question had a fair amount of memory, the normal asynchronous writeback
mechanism didn't kick in, so dirty blocks were not being written steadily;
instead, they all had to be flushed when the transaction commit happened.
The periods of low throughput corresponded to the transaction commits;
everything just stopped while the filesystem caught up with its pending work.
In general, a filesystem commit operation involves a number of steps, the
first of which is to write all of the relevant file data and wait for that
write to complete. Then the critical metadata can be written to the log;
once again, the filesystem must wait until that write is done. Finally,
the commit block can be written — inevitably followed by a wait. All of
those waits are critical for filesystem integrity, but they can be quite
hard on performance.
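The pattern is easy to see in a user-space sketch, where each fdatasync()
call stands in for one of those waits; the file names and "log" layout
below are purely illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void write_and_wait(int fd, const void *buf, size_t len)
    {
        if (write(fd, buf, len) != (ssize_t)len || fdatasync(fd) != 0) {
            perror("write_and_wait");
            exit(1);
        }
    }

    int main(void)
    {
        int data_fd = open("datafile", O_WRONLY | O_CREAT, 0644);
        int log_fd = open("journal", O_WRONLY | O_CREAT, 0644);
        if (data_fd < 0 || log_fd < 0) {
            perror("open");
            return 1;
        }

        const char data[] = "file contents";
        const char metadata[] = "logged metadata";
        const char commit[] = "commit record";

        write_and_wait(data_fd, data, sizeof(data));        /* wait #1 */
        write_and_wait(log_fd, metadata, sizeof(metadata)); /* wait #2 */
        write_and_wait(log_fd, commit, sizeof(commit));     /* wait #3 */
        return 0;
    }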
Quite a few workloads — including a lot of database workloads — are
especially sensitive to the latency imposed by waits in the filesystem.
If the number of
waits could be somehow reduced, latency would improve. Fewer waits would
also make it possible to send larger I/O operations to the device, with a
couple of significant benefits: performance would improve, and, since large
chunks are friendlier to a flash-based device's garbage-collection
subsystem, the lifetime of the device would also improve. So reducing the
number of wait operations executed in a filesystem transaction commit is an
important prerequisite for getting the best performance out of contemporary
drives.
Atomic I/O operations
One way to reduce waits is with atomic I/O operations — operations that are
guaranteed (by the hardware) to either succeed or fail as a whole. If the
system performs an atomic write of four blocks to the device, either all
four blocks will be successfully written, or none of them will be. In
many cases, hardware that supports atomic operations can provide the same
integrity guarantees that are provided by waits now, making those waits
unnecessary. The T10 (SCSI) standard committee has approved a simple
specification for atomic operations; it only supports contiguous I/O
operations, so it is "not very exciting." Work is proceeding on vectored
atomic operations that would handle writes to multiple discontiguous areas
on the disk, but that has not yet been finalized.
As an example of how atomic I/O operations can help performance, Chris
looked at the special log used by Btrfs to implement the fsync()
system call. The filesystem will respond to an fsync() by writing
the important data to a new log block. In the current code, each commit
only has to wait twice, thanks to some recent work by Josef Bacik: once
for the write of the data and metadata, and once for the superblock write.
That work brought a big performance boost, but atomic I/O can push things
even further. By using atomic operations to eliminate one more wait,
Chris was able to improve performance by 10-15%;
he said he thought the improvement should be better than that, but even
that level of improvement is the kind of thing database vendors send out
press releases for. Getting a 15% improvement without even trying that
hard, he said, was a nice thing.
At Fusion-io, work has been done to enable atomic I/O operations in the MariaDB
and Percona database management systems. Currently, these operations are
only enabled with the
Fusion-io software development kit and its "DirectFS" filesystem. Atomic
I/O operations allowed the elimination of the MySQL-derived
double-buffering mode, resulting in 43% more transactions per second and
half the wear on the storage device. Both improvements matter: if you have
made a large investment in flash storage, getting twice the life is worth a
lot of money.
Getting there
So it's one thing to hack some atomic operations into a database
application; making atomic I/O operations more generally available is a
larger problem. Chris has developed a set of API changes that will allow
user-space programs to make use of atomic I/O operations, but there are
some significant limitations, starting with the fact that only direct I/O
is supported. With buffered I/O, it just is not possible for the kernel to
track the various pieces through the stack and guarantee atomicity. There
will also need to be some limitations on the maximum size of any given I/O
operation.
An application will request atomic I/O with the new O_ATOMIC flag
to the open() system call. That is all that is required; many
direct I/O applications, Chris said, can benefit from atomic I/O operations
nearly for free. Even at this level, there are benefits. Oracle's
database, he said, pretends it has atomic I/O when it doesn't; the result
can be "fractured blocks" where a system crash interrupts the writing of a
data block that has been scattered across a fragmented filesystem, leading to
database corruption. With atomic I/O operations, those fragmented blocks
will be a thing of the past.
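In rough terms, usage under the proposal might look like the sketch below.
Note that O_ATOMIC is not a mainline flag, so the value defined here is a
placeholder for illustration, not working code against a stock kernel:

    #define _GNU_SOURCE               /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #ifndef O_ATOMIC
    #define O_ATOMIC 040000000        /* placeholder, illustration only */
    #endif

    int main(void)
    {
        /* Atomic writes are proposed for direct I/O only. */
        int fd = open("dbfile", O_RDWR | O_DIRECT | O_ATOMIC);
        if (fd < 0)
            return 1;

        void *buf;
        if (posix_memalign(&buf, 4096, 16384))  /* direct I/O alignment */
            return 1;
        memset(buf, 0, 16384);

        /* Under the proposal, these four blocks reach the media
         * all-or-nothing: no "fractured block" after a crash. */
        pwrite(fd, buf, 16384, 0);
        close(fd);
        return 0;
    }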
Atomic I/O operation support can be taken a step further, though, by adding
asynchronous I/O (AIO) support. The nice thing about the Linux AIO
interface (which is not generally acknowledged to have many nice aspects)
is that it allows an application to enqueue multiple I/O operations with a
single system call. With atomic support, those multiple operations — which
need not all involve the same file — can all be done as a single atomic
unit. That allows multi-file atomic operations, a feature which can be
used to simplify database transaction engines and improve performance.
Once this functionality is in place, Chris hopes, the (currently small)
number of users of the kernel's direct I/O and AIO capabilities will
increase.
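The batching side can be seen with the existing libaio interface; in the
sketch below, the idea is that one io_submit() call would make the whole
batch durable as a unit. That all-or-nothing semantic is the proposed
extension, not something current kernels provide:

    #include <libaio.h>
    #include <string.h>

    /* Queue two writes, possibly to two different files, with a single
     * system call.  Reaping completions with io_getevents() is omitted. */
    int submit_pair(int fd1, int fd2, void *buf1, void *buf2, size_t len)
    {
        io_context_t ctx;
        struct iocb cb1, cb2;
        struct iocb *list[2] = { &cb1, &cb2 };

        memset(&ctx, 0, sizeof(ctx));
        if (io_setup(2, &ctx) < 0)
            return -1;

        io_prep_pwrite(&cb1, fd1, buf1, len, 0);
        io_prep_pwrite(&cb2, fd2, buf2, len, 0);

        return io_submit(ctx, 2, list);
    }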
Some block-layer changes will clearly be needed to make this all work, of
course. Low-level drivers will need to advertise the maximum number of
atomic segments any given device will support. The block layer's plugging
infrastructure, which temporarily stops the issuing of I/O requests to
allow them to accumulate and be coalesced, will need to be extended.
Currently, a plugged queue is automatically unplugged when the current
kernel thread schedules out of the processor; there will need to be a means
to require an explicit unplug operation instead. This, Chris noted, was
how plugging used to work, and it caused a lot of problems with lost unplug
operations. Explicit unplugging was removed for a reason; it would have to
be re-added carefully and used "with restraint." Once that feature is
there, the AIO and direct I/O code will need to be modified to hold queue
plugs for the creation of atomic writes.
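For reference, this is roughly how in-kernel code uses the existing
plugging interface; the explicit-unplug variant described above is a
proposed extension with no mainline API to show yet:

    #include <linux/blkdev.h>

    static void issue_batched_io(void)
    {
        struct blk_plug plug;

        blk_start_plug(&plug);  /* requests now accumulate per-task */
        /* ... submit a series of bios; they can be coalesced ... */
        blk_finish_plug(&plug); /* flush the batch to the device queue */
    }
    /* Today the batch is also flushed implicitly if the task schedules
     * out; atomic writes would need that implicit unplug suppressed. */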
The hard part, though, is, as usual, the error handling. The filesystem
must stay consistent even if an atomic operation grows too large to
complete. There are a number of tricky cases where this can come about.
There are also challenges with deadlocks while waiting for plugged I/O.
The hardest problem, though, may be related to the simple fact that the
proper functioning of atomic I/O operations will only be tested when things
go wrong — a system crash, for example. It is hard to know that
rarely-tested code works well. So there needs to be a comprehensive test
suite that can verify that the hardware's atomic I/O operations are working
properly. Otherwise, it will be hard to have full confidence in the
integrity guarantees provided by atomic operations.
Status and future work
The Fusion-io driver has had atomic I/O operation support for some time,
but Chris would like to make this support widely available so that
developers can count on its presence.
Extending it to NVM Express is in progress now; SCSI support will probably
wait until the specification for vectored atomic operations is complete.
Btrfs can use (vectored) atomic I/O operations in its transaction commits;
work on other filesystems is progressing. The changes to the plugging code
are done with the small exception of the deadlock handler; that gap needs
to be filled before the patches can go out, Chris said.
From here, it will be necessary to finalize the proposed changes to the
kernel API and submit them for review. The review process itself could
take some time; the AIO and direct I/O interfaces tend to be contentious,
with lots of developers arguing about them but few being willing to
actually work on that code. So a few iterations on the API can be expected
there. The FIO benchmark
needs to be extended to test atomic I/O. Then
there is the large task of enabling atomic I/O operations in
applications.
For the foreseeable future, a number of limitations will apply to atomic
I/O operations. The most annoying is likely to be the small maximum I/O
size: 64KB for the time being. Someday, hopefully, that maximum will be
increased significantly, but for now it applies. The complexity of the AIO
and direct I/O code will challenge filesystem implementations; the code is
far more complex than one might expect, and each filesystem interfaces with
that code in a slightly different way. There are worries about performance
variations between vendors; Fusion-io devices can implement atomic I/O
operations at very low cost, but that may not be the case for all hardware.
Atomic I/O operations also cannot work across multiple devices; that means
that the kernel's RAID implementations will require work to be able to
support atomic I/O. This work, Chris said, will not be in the initial
patches.
There are alternatives to atomic I/O operations, including explicit I/O
ordering (used in the kernel for years) or I/O checksumming (to detect
incomplete I/O operations after a crash). But, for many situations, atomic
I/O operations look like a good way to let the hardware help the software
get better performance from solid-state drives. Once this functionality
finds its way into the mainline, taking advantage of fast drives might just
feel a bit less like coaxing another trip out of that old jalopy.
[Your editor would like to thank the Linux Foundation for supporting his
travel to LinuxCon Japan.]
Comments (20 posted)
May 30, 2013
This article was contributed by Andrew Shewmaker
The Linux kernel manages sparse sets of index ranges for various subsystems. For
instance, the I/O memory management unit (IOMMU) keeps track of the
addresses of outstanding device memory mappings for each
PCIe device's domain and tracks holes for new allocations. File systems cache
pending operations, extent state, free extents, and more. A simple linked
list fails to provide good performance for search, insertion, and
deletion operations as it grows, so more advanced abstract data types like
red-black trees (rbtrees) were implemented. The
kernel's rbtrees generally provide good performance as long as they are only
being accessed by one reader/writer, but they suffer when multiple readers and
writers are contending for locks.
There are variants of rbtrees that allow efficient
read-copy-update (RCU) based reads, but not fine-grained
concurrent writes. So, Chris Mason of Btrfs fame is developing a
skiplist implementation that will allow file systems like Btrfs
and XFS to perform many concurrent updates to their extent indexes.
This article will describe the basic skiplist and what makes this skiplist variant
cache-friendly. In part two, I'll describe the skiplist API and compare the
performance of skiplists to rbtrees.
Basic skiplist
William Pugh first described skiplists in 1990 as a probabilistic alternative
to balanced trees (such as rbtrees) that is simpler, faster, and more
space-efficient. A skiplist is composed of a hierarchy of ordered linked lists, where
each higher level contains a sparser subset of the list below it. The size of
the skiplist decreases at a constant rate for each higher level. For instance,
if level zero has 32 elements and a probability p=0.5 that an element
appears in the next higher level, then level one has 16 elements, level two has eight,
etc. The subset selected for the next higher level is random, but the quantity
of items in the subset isn't random.
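The promotion rule in a classic skiplist is a simple coin flip per level; a
sketch with p=0.5, in the style of most textbook implementations:

    #include <stdlib.h>

    #define MAX_LEVEL 32

    /* Each inserted element starts at level zero and is promoted to
     * each successive level with probability 0.5, so each level holds
     * about half as many elements as the one below it. */
    static int random_level(void)
    {
        int level = 0;

        while (level < MAX_LEVEL - 1 && (rand() & 1))
            level++;
        return level;
    }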
By way of analogy, consider an old-fashioned printed dictionary with multiple
search aids built into it. This diagram shows how the different aids are similar
to levels of a skiplist:
At the highest level there are grooves on the edge
of the book that mark sections for each letter, A-Z. The search can continue by
looking at word prefixes centered at the top of each page. Next, guide
words at the top left and right of
each page show the span of words for each page. At the lowest level, a page is
scanned word by word.
A skiplist replicates something like this index structure in a digital form.
History of kernel skiplists
Josh MacDonald investigated
cache-friendly skiplists
on Linux 2.4 more than a decade ago. His skiplists are more space-efficient
than rbtrees beyond 100 keys, his searches are faster beyond 1,000 keys, and
his deletions and insertions are faster beyond 10,000 keys. He also performed
various concurrency
experiments. When Josh posted an
RFC to
the kernel mailing list he presented them as a "solution waiting for
the right problem to come along". At the time, no discussion ensued, but
Chris later became aware of Josh's work during his research.
Unfortunately, Josh's implementation focused on reader/writer locks, which only
allow one writer at a time, and Chris told me that he wanted to use RCU.
Also, Josh's coding style differs from the kernel's and his macros make his code
a little difficult to work with.
Other well-known Linux kernel developers have experimented with skiplists
as well. Con
Kolivas initially thought they would be useful for his
BFS CPU scheduler. Ultimately,
BFS
experienced worse behavior in general when using skiplists,
so Con returned to his previous list structure.
This was due in part to the BFS scheduler's common case of having few tasks to
schedule, so it didn't benefit from theoretically faster lookups for larger
numbers of items in the list. Also, the scheduler process wasn't parallel
itself, so it didn't benefit from a skiplist's more easily exploited concurrency.
Andi Kleen recalled the discouraging results from his own
skiplist
investigation to Con. He found that the variable number of pointers needed
to link different levels of a skiplist made it difficult to achieve efficient
use of memory and cache without limiting the size of the skiplist.
In 2011, Chris asked Liu Bo to try replacing Btrfs's rbtree-based
extent_map code, which maps logical file offsets to physical disk
offsets, with a skiplist-based implementation. The mappings are
read-mostly, until a random write workload triggers
a copy-on-write operation. Liu created an
initial skiplist
patch for Btrfs beginning with Con's implementation and adding support for
concurrency with RCU. Results were mixed, and Chris's user-space experimentation
led him to start work to make Liu's skiplists more cache-friendly.
You may be saying to yourself, "I thought the whole point of Btrfs was to use
B-trees for everything." Btrfs does use its copy-on-write B-trees for any
structure read from or written to the disk. However, it also uses rbtrees for
in-memory caches. Some rbtrees batch pending inode and extent operations.
Other caches are: the extent state cache — tracking whether extents are
locked, damaged, dirty, etc.; the free space cache — remembering free
extents for quicker allocations; and the previously mentioned extent_map.
In addition to the rbtrees, a radix tree manages the extent buffer cache,
which is used like the page cache, but for blocks of metadata that might be larger
than a page.
All of these data structures have multiple threads contending for access to them
and might benefit from skiplists, though the delayed operation trees and free
space cache have the most contention. However, Chris's real target for this
skiplist is some pending RAID5/6 parity logging code. It needs to enable
"fast concurrent reads and writes into an exception store with new
locations for some extents." Ideally, Chris hopes to make his skiplist
general purpose enough for others to use. If skiplists can provide lockless
lookups and be used for both buffer cache indexes and extent trees, then
Dave Chinner would consider using them in XFS.
Cache-friendly skiplist
When reading the following description of Chris Mason's skiplist implementation, keep in mind
that it is a skiplist for range indexes, or extents. The extents being managed
are not identified by a single number. Each has an index/key pointing to its
beginning and a range/size. Furthermore, each element of the skiplist is
composed of multiple extents, which will be referred to as slots.
This new implementation of a cache-friendly skiplist is a bit more
complicated than the picture of the basic skiplist may suggest; it is best
examined in pieces. The first of those pieces is described by this
diagram (a subset of the full data structure
diagram):
This figure shows a skiplist anchored by an sl_list structure
pointing to the initial entry (represented by struct sl_node) in
the list. That sl_node structure has an array of pointers (called
ptrs), indexed by level, to the head sl_node_ptr
structures of each skiplist level. The sl_node_ptr structure
functions like a typical kernel list_head structure, except that
each level's head sl_node_ptr->prev is used, possibly
confusingly, to point to the item with the greatest key at that level of
the skiplist. All locking is done at the sl_node level and will be
described in part two of this article.
The skiplist grows from the head sl_node_ptr array on the right of
the diagram above
into an array of linked lists as shown below:
Note that none of the structures listed so far contain any keys or
data, unlike traditional skiplists. That's because Chris's skiplist items
are associated with more than one key.
Another difference is that previous skiplist implementations lack prev
pointers. Pugh's original skiplist didn't need something like prev
pointers because it didn't support concurrency. MacDonald's skiplist does
support concurrency, but its protocol uses context structures that combine
parent/child pointer information with lock states. Mason's skiplists use a
different concurrency protocol that uses prev pointers.
A superficial difference between this skiplist and others is its apparent lack
of down pointers (allowing movement from one level of index to the
next), although they exist as the ptrs[] array
in sl_node. While the pointer array name is unimportant, its position
is, because different level nodes are created with different sizes of arrays.
See the ptrs array at the end of sl_node and sl_node
at the end of struct sl_leaf (which holds the actual leaf data):
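Roughly, the declarations have the shape sketched below; any fields beyond
ptrs, keys, and node are assumptions based on the description, not Chris's
actual code:

    #define SKIP_KEYS_PER_NODE 32

    struct sl_slot;                 /* described further below */

    struct sl_node_ptr {
        struct sl_node_ptr *prev;
        struct sl_node_ptr *next;
    };

    struct sl_node {
        int level;                  /* number of levels this node links */
        struct sl_node_ptr ptrs[];  /* flexible array, one entry per
                                       level, hence its position at the
                                       end of the structure */
    };

    struct sl_leaf {
        unsigned long keys[SKIP_KEYS_PER_NODE]; /* cached slot keys */
        struct sl_slot *slots[SKIP_KEYS_PER_NODE];
        int nr;                     /* slots currently in use */
        struct sl_node node;        /* must come last: ptrs[] extends
                                       past the end of the leaf */
    };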
The variable size of nodes could
cause memory fragmentation if they were allocated from the same pool of memory,
so Chris designed his skiplist with one slab cache for each level. This
effectively addresses Andi Kleen's concerns mentioned earlier regarding wasting
memory while keeping good performance.
The way in which keys are handled is cache-friendly in a couple of ways. First,
keys are not associated directly with individual items in this skiplist, but each
leaf manages a set of keys. SKIP_KEYS_PER_NODE is currently set to 32
as a balance between increasing cache hits and providing more chances for
concurrency. Second, an sl_leaf contains an array of keys.
Technically a keys array is unnecessary, since the keys are part of
the slots linked to a leaf. However, a search only needs the keys and
not the rest of the sl_slot structures until the end of the search. So,
the array helps skiplist operations avoid thrashing the cache.
Each sl_slot is an extent defined by its key and size
(index range). A developer can adapt the skiplist for their own use by embedding
sl_slot into their own data structure. The job of reference counting
belongs to the container of sl_slot.
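A sketch of that embedding pattern follows; my_extent and its reference
count are hypothetical user code, not part of the skiplist itself:

    #include <linux/atomic.h>
    #include <linux/kernel.h>       /* container_of() */

    struct sl_slot {
        unsigned long key;          /* start of the index range */
        unsigned long size;         /* length of the range */
    };

    struct my_extent {
        atomic_t refs;              /* refcounting is the container's job */
        /* ... user data ... */
        struct sl_slot slot;        /* what the skiplist actually links */
    };

    static struct my_extent *extent_from_slot(struct sl_slot *slot)
    {
        return container_of(slot, struct my_extent, slot);
    }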
See the full diagram for a picture of how
all the pieces described above fit together.
With 32 keys per item, a maximum level of 32, and half the
number of items in each higher level, Chris's skiplist should be able to
efficiently manage around 137 billion extents (index ranges): 32 keys per
leaf times the roughly 2^32 leaves that 32 levels of halving can support
comes to about 1.4x10^11.
The statistically balanced nature of the skiplist allows it to handle sparse
sets of indexes in such a way that modifications avoid the need for an rbtree's
rebalancing operations, and so they lend themselves to simpler concurrency
protocols. Chris also designed his skiplist to be cache-friendly by
having each element represent a set of keys/slots, keeping duplicates of
the slot's keys in an array, and creating a slab cache for each level of
the skiplist.
Part two of this series will offer a description of the skiplist API and a
discussion of the performance of skiplists vs. rbtrees.
Comments (4 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
May 30, 2013
At the 2013 Tizen Developer Conference (TDC) in San Francisco, Jaguar
Land Rover's (JLR) Matt Jones demonstrated the most fully-realized
in-vehicle infotainment (IVI) system yet released on the Tizen
platform. Part proof-of-concept, part development tool, the "download
and go" image (as Jones described it) also integrates components from
several of the highest-profile projects in the Linux IVI space (which
are independent of one another, but frequently have overlapping areas
of concern). That integrated build should help users get a clearer picture of the current state of
development, a helpful aid considering the multi-year product
development cycle of the typical automobile.
The phone with wheels
Jones started out with a quick overview of the automakers' desire
to build a standardized platform for IVI. Carmakers, he said, tend to
sell a lot of different cars in a lot of different countries, so even
"simple" tasks like networking can be convoluted: different cellular
networks, different carriers to make deals with, and different
regulations all complicate the process. And in the end, all the
consumer really cares about is being able to get things accomplished.
JLR has conducted market research on what exactly it is that car
buyers want from their IVI system, and the somewhat surprising answer
is that most expect it to work just like a big mobile phone. That is,
consumers have come to expect certain application-level features from
their smartphones, such as instant access to the music they have paid
for, regardless of where they are. So they expect the same thing from
the computer in their car's dashboard. But when consumers were asked
what new applications they expected from cars a few years down the
road, they had a different answer: they had no idea at all.
The lesson JLR took from this research is that the IVI platform needs to
offer the ease-of-use of smartphones, but be as flexible as the PC, so
that it can adapt to whatever new applications developers come up with
in the future. Obviously JLR and other companies have decided that
Linux is the best choice on which to build the operating system layer,
and in recent years there have been a lot of efforts to develop the
other parts of the stack, such as the middleware and application APIs.
Nevertheless, Jones said, it was still pretty difficult for
application developers to get started writing code for the Linux-based
platform that was coalescing. One could go download a lot of separate
pieces, such as the GENIVI Alliance
projects or various offerings on GitHub, but those components are
generally pretty deep in the stack, and putting them all together is
not simple.
GENIVI (where Jones is a vice president) then conducted
its own survey of engineers, and they overwhelmingly responded that
they wanted a downloadable system image with which to develop and test
code. That resonated with the feedback given to the Linux
Foundation's Automotive Grade
Linux (AGL) workgroup, which found that developers wanted an SDK of
some sort, rather than having to build test systems from scratch with
Open Build Service, as they have had to in the past.
And they're off ....
With the votes from so many quarters pointing toward the same
thing, Jones said, it was clearly time to develop a tangible Linux IVI
system. The result is AGL
Demonstrator, a runnable IVI system available both as a VMware
image and as an installable image for x86 hardware. The AGL Demonstrator is built
on top of Tizen 1.0, integrating GENIVI components, and sports a
custom HTML5 GUI. Both downloads are provided as binary ISO images;
presumably the work as a whole inherits Tizen's licensing (which, as
with most Linux distributions, incorporates components under a mixture of different
open source licenses).
The target hardware is a Nexcom
NDiS 166, a Celeron-based embedded Linux box. Jones said that the
"what price would you pay" question seemed to split answers into two
distinct camps: the "professional" camp that expected a roughly $1000 box (with a nice
display included), and the DIY camp that wanted something under $200, with a
more bring-your-own-peripherals expectation. Although he did not go
into pricing details, he suggested that the Nexcom system is
in the former category; he described a modest set-top box setup as an
easy-to-acquire low-end alternative. However, Jones also said that many
other contemporary x86 systems should work; the Nexcom was simply the
test hardware.
The tests that the AGL Demonstrator was subjected to were
real-world engineering problems.
Jones said that The Linux Foundation's Rudolf Streif led an effort
(purely for
experimental purposes) to rip out the heating-and-air-conditioning
controls from a test vehicle, and hook them up to software controls in
the Demonstrator system. Three people (working off and on) were able to do it in about
two and a half weeks.
In April, AGL held a user-experience (UX) design contest based on Demonstrator,
asking developers to contribute HTML5-based interface designs. The
demo UX included in the image is designed to "look cool" and look
different from the competition, Jones said, but they wanted to
challenge the community to take part as well. At the time of the
talk, the winners of the contest had not yet been announced, but Jones
did point out one important accomplishment: an entry ported Ford's
AppLink
(a smartphone connection tool) to the Demonstrator platform in less
than a week. At the Automotive Linux Summit in Tokyo the week
following the Tizen event, Streif announced
the winners, Ford's among them.
The road map
Despite its newness, the AGL Demonstrator has been a success, and
Jones indicated that the plan is to keep it going. The first order of
business is to update it to the more recent Tizen 2.1 release.
Following that, the plan is to integrate Wayland support (which is not
slated to arrive in Tizen proper until the 3.0 release, sometime in
late 2013). Next, Jones said, there are several existing open source
components that need to be integrated, including the Navit navigation
system, BlueZ and oFono for hands-free telephony, Near-Field
Communications (NFC) support, and GStreamer
media playback with Digital
Living Network Alliance (DLNA) support.
Those components are primarily application-level pieces of the
puzzle, but Jones indicated that there are also car-adaptation pieces
still needing work, such as Tizen's Automotive
Message Broker, a still-under-development framework to exchange
sensor data and automotive component messages (e.g., steering status
or which seat belts are engaged). Most of the automotive signals and
messages have some usefulness for application developers, but not all
of them have standardized APIs yet.
AGL Demonstrator clearly fills a high-priority gap in the Linux IVI
story. Not only does it allow independent application developers to
write and test code on IVI systems, but it offers would-be
contributors the chance to take part in platform development. As
Jones's talk illustrated, even though there are multiple groups
tackling IVI work at one level of the stack or another (GENIVI, AGL,
Tizen, etc.), simply putting the pieces together in one place makes
them far more useful. Of course, the gaps in AGL Demonstrator's
platform support also illustrate how much work remains to be
done—but at least with the Demonstrator as an option, motivated
members of the Linux community don't have to wait for someone else to
cross the next bug off the list.
[The author wishes to thank the Linux Foundation for travel
assistance to Tizen Dev Con.]
Comments (none posted)
Brief items
If Linux distros were Jedi, Debian would be Obi-Wan Kenobi. old & wise, with excellent foresight. He does what needs to be done.
Ubuntu is Anakin Skywalker - incredibly powerful but you never really know whether he's good or evil.
Linux Mint is Luke Skywalker - the one who will restore balance to the force.
Yoda is Slackware.
--
4Sci
(Thanks to Mert Dirik)
I'm firmly of the opinion that there are benefits to Secure Boot. I'm also
in favour of setups like Fast Boot. But I don't believe that anyone should
be forced to agree to a EULA purely in order to be able to boot their own
choice of OS on a system that they've already purchased.
--
Matthew Garrett
There is a social element to this bug report as well, of course. It served for many as a sort of declaration of intent. But it's better for us to focus our intent on excellence in our own right, rather than our impact on someone else's product.
--
Mark
Shuttleworth (closes Ubuntu bug #1)
Comments (3 posted)
The Qt Blog
introduces
"Boot to Qt", which is "
a light-weight UI stack for embedded
linux, based on the Qt Framework - Boot to Qt is built on an Android
kernel/baselayer and offers an elegant means of developing beautiful and
performant embedded devices." Access is invitation-only currently;
a release is forecast for sometime around the end of the year.
Comments (17 posted)
Fedora 19 beta has been released. "
The Beta release is the last
important milestone before the release of Fedora 19. Only critical bug
fixes will be pushed as updates, leading up to the general release of
Fedora 19. Join us in making Fedora 19 a solid release by downloading,
testing, and providing your valuable feedback." Take a look at some
of the
known
bugs and test away.
Full Story (comments: none)
The Fedora ARM team has announced the Fedora 19 Beta for ARM release.
"
This marks the last significant milestone before reaching the final
release of Fedora 19 for ARM, with only critical bug fixes being added as updates to make this our most solid release to date.
This marks the first time the Fedora ARM team will be releasing the F19 Beta
alongside Primary Architectures." Additional information can be
found on the
Fedora 19
Beta for ARM page.
Full Story (comments: none)
Ubuntu's
bug #1 has
served as a sort of rallying point for the project. Mark Shuttleworth has
now
closed
that bug, saying that it is time to move on. "
Android may not be
my or your first choice of Linux, but it is without doubt an open source
platform that offers both practical and economic benefits to users and
industry. So we have both competition, and good representation for open
source, in personal computing. Even though we have only played a small
part in that shift, I think it's important for us to recognize that the
shift has taken place. So from Ubuntu's perspective, this bug is now
closed."
Comments (87 posted)
Distribution News
Debian GNU/Linux
Long-time Debian developer Ray Dassen died on May 18. "
The Debian Project honours Ray's great work and his strong dedication to
Debian and Free Software. His technical knowledge and his ability to
share that knowledge with others will be missed. His contributions will
not be forgotten, and the high standards of his work will continue to
serve as an inspiration to others."
Full Story (comments: none)
Fedora
John Rose, aka inode0, has been reappointed to the Fedora Board. "
His insight and knowledge about Fedora's culture and history, his ongoing
participation in the Ambassadors' group, and his belief in preserving
freedom within the project are, I believe, important facets to the Board's
collective knowledge, and I'm pleased that he is willing to stay with us
another year."
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
The H
reviews
the recently released Linux Mint 15. "
Linux Mint's package management applications have always been excellent.
The distribution introduced a style of grouping applications and showing
user ratings several releases before Ubuntu switched to their better known
version in Ubuntu's Software Centre. Mint's update manager has long been
regarded by many as one of the most informative of any Linux distribution.
Linux Mint 15 continues this trend by replacing Ubuntu's "Software
Sources" tool with its own utility called MintSources (which is also
labelled "Software Sources" in the user interface for simplicity's sake).
MintSources adds a number of useful features such as an improved interface
for managing Personal Package Archives (PPAs) and easier ways to install
and uninstall additional repositories and their corresponding signing
keys. While these features will, most likely, be of little interest to
newcomers to Linux distributions or Linux Mint, they do however simplify
life for more advanced users who routinely add cutting edge software that
is not available in the Ubuntu or Linux Mint repositories yet."
Comments (none posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
May 30, 2013
One of the oft-cited tensions in the Tizen project
(as in its predecessor, MeeGo) is that the end goal is to offer a
common application platform for some Linux systems that are strikingly
different under the hood. That is, application developers are
understandably interested in offering their social networking apps and
games on phones, smart TVs, and in-vehicle entertainment
units—but the hardware and environments defining these products
can be radically different. Phones have battery life and tiny screens to
consider; smart TVs have the "ten foot UI" and users' aversion to
keyboards. But perhaps the clearest challenge in this space has
always been in-vehicle infotainment (IVI) systems, where audio routing
is the canonical illustration of how different the car is from a
desktop Linux system. Media playback, hands-free phone calls,
navigation, and alert messages all contend for the right to deliver
audio to the driver and passengers, without distracting the driver.
Intel's Jaska Uimonen presented the solution in the works for Tizen
IVI at the 2013 Tizen Developer Conference in San Francisco. The
audio management problem, he explained, comes down to three core
issues: routing, policy-controlled volume, and policy-controlled
preemption. The routing issue involves multiplexing the multiple
audio input and output points of a vehicle (such as speaker sets for
front and rear seats, headphone jacks, hands-free microphones,
traditional audio dash-units, and hot-pluggable Bluetooth
devices—a category that can include headsets and mobile phones).
The users' expectation is that audio will always be routed to the
correct zone, but a wide array of combinations is possible, and configuration
can change on the fly. For example, an incoming hands-free phone call
may need to target the driver alone, while the radio continues to play
to the other seats (perhaps including the front passenger seat). This
differs from mobile device audio management, Uimonen said, which
typically makes a "select and route" decision; IVI audio requires a
full switching matrix.
But there is still more. In addition to simply connecting each
audio stream to the appropriate port, Uimonen said, the audio manager
must also mix the volumes of the audio streams in a useful and safe
manner. Turn-by-turn navigation directions are more important to hear
than the radio, and thus the radio volume should dip momentarily for
each navigation instruction—and hopefully do so smoothly. Then
again, there are also situations when one audio stream may need to
preempt others entirely, such as a safety alarm. Exactly which
preemption and volume scenarios are appropriate is a system policy
decision, of course, which will likely vary from manufacturer to
manufacturer. Thus Tizen IVI's audio management framework needs to be
flexible enough to support configurable policies.
Sound off
The audio management solution developed for Tizen IVI uses
PulseAudio for routing, which is not surprising, although it does
introduce some new code to support audio "zones" (driver, passenger,
rear-seat entertainment unit, and so on), a notion not found in
PulseAudio on the desktop. It implements the necessary
policy-based decision making with Murphy, a relatively new "resource
routing" engine started in 2012.
Murphy maintains an internal database
of system state for whichever resources it is responsible for routing
(audio, video, and network streams are the example uses). Policies for
resource control are described in Lua statements, and separate plugins
can implement control for each resource Murphy monitors.
The Murphy daemon will listen for events from different sources (such
as udev or Websockets), and can execute scripts or C programs to
re-route resources in response.
For example, in the IVI audio scenario, a Bluetooth headset
appearing or disappearing would trigger a D-Bus event from BlueZ,
while a speed-sensitive volume adjustment might read speedometer data
from the vehicle's CAN bus or Automotive
Message Broker. The Murphy-driven audio manager runs as a plugin
to PulseAudio; thus when a Murphy policy rule dictates routing a
particular input to a particular output, the plugin should connect the
associated PulseAudio sources and sinks.
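As a rough sketch of that control flow (the names below are invented
for illustration; Murphy's real policies are written in Lua and run
inside the daemon), the event-to-routing dispatch might look something
like this:

    # Toy model of event-driven re-routing; none of these names come from
    # Murphy or PulseAudio.  Each rule maps an (event, stream class) pair
    # to a routing action.
    POLICY_RULES = [
        ("bluetooth-headset-added", "phone", "route-to-headset"),
        ("bluetooth-headset-removed", "phone", "route-to-driver-zone"),
    ]

    def handle_event(event, streams):
        """Apply every matching rule to the currently active streams."""
        for rule_event, stream_class, action in POLICY_RULES:
            if rule_event != event:
                continue
            for stream in streams:
                if stream["class"] == stream_class:
                    print(f"{action}: {stream['name']}")

    # A real daemon would receive such events from udev, D-Bus, and so on.
    handle_event("bluetooth-headset-added",
                 [{"name": "call-1", "class": "phone"}])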
But the actual implementation is a bit more complicated, because
PulseAudio does not natively support certain useful options, such as
combining sinks and sources or combining multiple sinks.
Thus, the Tizen IVI audio manager adds the concept of
"nodes": an abstraction layer in which each node represents a
"routing endpoint" that might have a source (e.g., a microphone) and a
sink (e.g., a speaker). A telephony application needs both to function,
so the routing policy treats them as a single object. Playing audio
over all of the speaker sets in the vehicle might actually involve
routing the same audio stream to multiple independent speaker
clusters, so the routing policy offers a "combined" node that serves
them all.
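A minimal sketch of the node idea, with hypothetical names (the actual
implementation is C code inside a PulseAudio plugin), might look like:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        """A routing endpoint: at most one source plus at most one sink."""
        name: str
        source: Optional[str] = None   # e.g., a microphone
        sink: Optional[str] = None     # e.g., a speaker cluster

    @dataclass
    class CombinedNode:
        """One logical endpoint that fans out to several sinks."""
        name: str
        sinks: List[str] = field(default_factory=list)

    # Telephony needs a microphone and a speaker treated as one object.
    handsfree = Node("handsfree", source="hf-mic", sink="driver-speakers")

    # "Whole car" playback routes one stream to every speaker cluster.
    whole_car = CombinedNode("whole-car", ["front-speakers", "rear-speakers"])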
The audio policies are based around tagging each audio stream with
a class; for example, "music," "navigation," or "ui event." The
system integrator can then write routing, volume, and preemption
rules based on the class of the stream and the various nodes in the
car. Obviously not every possible node will be present when the car
leaves the factory (Bluetooth headsets and portable music players
being the most obvious examples), but there are generally a fixed
number of speaker and microphone devices, and only a handful of
well-understood hot-pluggable device types to worry about.
When an application creates a new audio stream, the system
automatically creates a default route for it based on its class (which
might specify that "music" streams are routed to the whole-car speaker
system, but "navigation" streams are routed to the front-seat speakers
only). Users can request an explicit route to override the default,
assuming the application in question supports doing so.
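In outline, and with invented class and zone names rather than
anything from the Tizen sources, the default-route logic reduces to a
lookup keyed on the stream class:

    # Illustrative default routes, keyed on the stream's class tag.
    DEFAULT_ROUTES = {
        "music": "whole-car",         # all speaker clusters
        "navigation": "front-seats",  # keep prompts up front
    }

    def route_for(stream_class, explicit=None):
        """An explicit application request wins; otherwise use the class default."""
        return explicit or DEFAULT_ROUTES.get(stream_class, "driver-zone")

    print(route_for("music"))                        # whole-car
    print(route_for("music", explicit="rear-seats")) # rear-seats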
This explicit routing feature is designed to work with GENIVI's audio
management system, although GENIVI's system is not currently built for
Tizen images. However, the GENIVI system does define the client-side
interface through which applications can request specific audio
routing, and since GENIVI is an industry-wide effort, the Tizen system
supports the interface. As for application support within the existing
Tizen application ecosystem, Uimonen noted that Tizen's WebKit runtime
has been patched to support tagging streams with classes; HTML5 applications can tag
their <audio> and <video> elements.
There is no word yet on progress of the feature in Tizen's native
application APIs.
The answer is simple. Volume.
The volume and preemption rules for audio streams prioritize
certain classes over others, Uimonen explained, but the audio manager's
decision to lower the volume of a stream is hidden from the
application itself. Permitting applications to know when their volume
level was being lowered in the mix might have some unwanted effects,
he said—misbehaving applications might fight it by raising the
volume of the stream, which is dangerous on its own but could also
potentially lead to race conditions between competing applications.
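As an illustration of that one-way flow (the priorities and numbers
here are made up), a class-based ducking step might be modeled like
this; note that nothing is reported back to the stream's owner:

    # Hypothetical class priorities; higher outranks lower.
    PRIORITY = {"alarm": 3, "navigation": 2, "phone": 2, "music": 1}

    def duck(active):
        """Return per-class volumes: top-priority classes play at full
        volume, everything else is ducked but still audible."""
        top = max(PRIORITY[c] for c in active)
        return {c: (1.0 if PRIORITY[c] == top else 0.2) for c in active}

    # Music dips while a navigation prompt plays; the music application
    # is never told that its volume was lowered.
    print(duck(["music", "navigation"]))  # {'music': 0.2, 'navigation': 1.0}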
The audio manager also adds one more feature to PulseAudio: the
ability to smoothly ramp up or ramp down the volume of a stream. The
plan is to send that patch upstream, Uimonen said, although it has not
been sent yet. The other code is available in
the Tizen repositories under the "IVI" profile. All of the code to
implement the audio manager will be open source; the actual policies
and configurations, however, are hardware-specific, and, as with other
such system configuration points, many auto manufacturers will likely
choose not to release their audio routing configuration files or scripts.
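The ramping behavior itself is straightforward; as a sketch of the
effect (not the actual PulseAudio patch), a linear fade could be
produced as follows:

    def ramp(start, end, steps):
        """Yield evenly spaced volume levels from start to end, inclusive."""
        for i in range(steps + 1):
            yield start + (end - start) * i / steps

    # Dip from full volume to 20% in five steps, e.g. for a navigation prompt.
    print([round(v, 2) for v in ramp(1.0, 0.2, 5)])
    # [1.0, 0.84, 0.68, 0.52, 0.36, 0.2]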
Uimonen commented that the volume ramping work and the other
additions to PulseAudio are hopefully going to be merged with the
project upstream, because there are use cases outside of the IVI
context that would benefit from the ability to perform more complex
audio routing. That list might include home theater systems or
whole-house audio setups, but even the basic desktop configuration
these days is likely to incorporate multiple sound devices (Bluetooth,
USB audio adapters, and so on). For the moment, then, the unusual
demands of the IVI industry may be pushing development, but ultimately
everyone stands to gain.
[The author wishes to thank the Linux Foundation for travel assistance to Tizen Dev Con.]
Comments (12 posted)
Brief items
I put the Web APIs at the end of the outline. With any luck, we'll
run out of time before we get to them.
— Bob Spencer, presenting at Tizen Dev Con's hands-on development lab.
Any advanced postfix configuration (even from the official documentation)
looks like MacGyver was out of duct tape but had to build a nuclear reactor
from kitchen parts with only the transparent tape for office use.
— Bernhard R. Link
Comments (2 posted)
The Document Foundation has
announced
the first Beta release of LibreOffice 4.1. "
The upcoming 4.1 will be
our sixth major release in two and a half years, and comes with a nice set
of new features." See the
list of
known bugs before you start testing.
Comments (59 posted)
Git version 1.8.3 is out. This release incorporates quite a few small changes over 1.8.2; users are encouraged to read the full release notes for details. A backward-compatibility note for Git 2.0 is also included.
Full Story (comments: none)
Jorgen Schäfer has released version 1.0 of Elpy, the Python development environment for Emacs. In addition to standard IDE features like code completion and auto-indentation, Elpy supports inline documentation, on-the-fly checks with flymake, snippet expansion, and code refactoring.
Comments (none posted)
Imaging Resource covers the availability of source code for Samsung's Android-based NX300 and NX2000 digital cameras. In contrast, Canon cameras, which have active third-party replacement-firmware projects, require reverse engineering; in theory, the availability of source should make writing CHDK or Magic Lantern–like firmware for the Samsung devices a simpler affair. (Thanks to Ted Clark)
Comments (3 posted)
MIT's Game Lab has released a new beta of its relativistic physics simulation "game" A Slower Speed of Light; this edition runs on Linux in addition to other platforms. More interestingly, the Game Lab has also released the underlying physics engine, OpenRelativity, as an open source library—under the MIT License, which in this case is a relatively predictable choice.
Comments (none posted)
Newsletters and articles
The list of accepted projects for Google Summer of Code (GSoC) 2013 has been posted online. Many mentoring organizations have written their own blog announcements, of course, but the full catalog is visible—and sortable—on the official page.
Comments (none posted)
Groklaw examines recent comments from the Software Freedom Law Center (SFLC) over the recently posted patent cross-license draft for Google's VP8 codec. The story quotes SFLC's Aaron Williamson on the subject, who notes that the agreement is "not perfect, but no other modern web video format provides nearly the same degree of protection for FOSS implementations."
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
The Linux Foundation has announced that AllGo Embedded Systems, Suntec
Software, and Wargaming have joined the Foundation. "
The demand for devices to
become more intelligent and connected in the gaming and automotive
industries is driving more demand for interactive entertainment and
embedded software in the Linux market. The newest Linux Foundation members
are expanding investment in Linux in order to advance software in-vehicle
systems and online gaming and leverage the collaborative development
model."
Full Story (comments: none)
Calls for Presentations
PGConf.DE 2013 will take place November 8 in Oberhausen, Germany. The call
for papers closes September 15. Talks may be in English or German.
Full Story (comments: none)
Upcoming Events
PyOhio 2013, the annual Python programming conference for Ohio and the
surrounding region, will take place July 27-28 in Columbus, Ohio. The
call for proposals ends June 1.
Full Story (comments: none)
Events: May 31, 2013 to July 30, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| May 29-May 31 | LinuxCon Japan 2013 | Tokyo, Japan |
| May 31-June 1 | Texas Linux Festival 2013 | Austin, TX, USA |
| June 1-June 2 | Debian/Ubuntu Community Conference Italia 2013 | Fermo, Italy |
| June 1-June 4 | European Lisp Symposium | Madrid, Spain |
| June 3-June 5 | Yet Another Perl Conference: North America | Austin, TX, USA |
| June 4 | Magnolia CMS Lunch & Learn | Toronto, ON, Canada |
| June 6-June 9 | Nordic Ruby | Stockholm, Sweden |
| June 7-June 8 | CloudConf | Paris, France |
| June 7-June 9 | SouthEast LinuxFest | Charlotte, NC, USA |
| June 8-June 9 | AdaCamp | San Francisco, CA, USA |
| June 9 | OpenShift Origin Community Day | Boston, MA, USA |
| June 10-June 14 | Red Hat Summit 2013 | Boston, MA, USA |
| June 13-June 15 | PyCon Singapore 2013 | Singapore, Republic of Singapore |
| June 17-June 18 | Droidcon Paris | Paris, France |
| June 18-June 20 | Velocity Conference | Santa Clara, CA, USA |
| June 18-June 21 | Open Source Bridge: The conference for open source citizens | Portland, OR, USA |
| June 20-June 21 | 7th Conferenza Italiana sul Software Libero | Como, Italy |
| June 22-June 23 | RubyConf India | Pune, India |
| June 26-June 28 | USENIX Annual Technical Conference | San Jose, CA, USA |
| June 27-June 30 | Linux Vacation / Eastern Europe 2013 | Grodno, Belarus |
| June 29-July 3 | Workshop on Essential Abstractions in GCC, 2013 | Bombay, India |
| July 1-July 5 | Workshop on Dynamic Languages and Applications | Montpellier, France |
| July 1-July 7 | EuroPython 2013 | Florence, Italy |
| July 2-July 4 | OSSConf 2013 | Žilina, Slovakia |
| July 3-July 6 | FISL 14 | Porto Alegre, Brazil |
| July 5-July 7 | PyCon Australia 2013 | Hobart, Tasmania |
| July 6-July 11 | Libre Software Meeting | Brussels, Belgium |
| July 8-July 12 | Linaro Connect Europe 2013 | Dublin, Ireland |
| July 12 | PGDay UK 2013 | near Milton Keynes, England, UK |
| July 12-July 14 | 5th Encuentro Centroamerica de Software Libre | San Ignacio, Cayo, Belize |
| July 12-July 14 | GNU Tools Cauldron 2013 | Mountain View, CA, USA |
| July 13-July 19 | Akademy 2013 | Bilbao, Spain |
| July 15-July 16 | QtCS 2013 | Bilbao, Spain |
| July 18-July 22 | openSUSE Conference 2013 | Thessaloniki, Greece |
| July 22-July 26 | OSCON 2013 | Portland, OR, USA |
| July 27 | OpenShift Origin Community Day | Mountain View, CA, USA |
| July 27-July 28 | PyOhio 2013 | Columbus, OH, USA |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol